Technological Singularity

  • Published Dec 24, 2024

Comments •

  • @isaacarthurSFIA 8 years ago +311

    Author's Notes: If you're interested in helping out with the FB page, either in advice, setup, acting as an admin/mod, etc., reply to this comment so I don't lose track of it. Thanks!
    PS: Oh and hit the like button to keep this one at the top so folks can find it.
    PPS: Also if anyone happens to have some experience setting up this kind of FB page, that too would be very awesome.

    • @Drew_McTygue 8 years ago +1

      I'm interested in helping the FB effort (can't like the comment on my phone though :/)

    • @jaimegomez9658 8 years ago +3

      Dude, Bob the computer would ask you to sprinkle Adderall into its hard drive!

    • @jaimegomez9658 8 years ago +1

      I've never been a moderator but I could give it a shot.

    • @isaacarthurSFIA 8 years ago +1

      Damn, wish I'd thought of that one.

    • @isaacarthurSFIA 8 years ago +1

      I think it's pretty straightforward: boot spammers when you see them, tell people to stop flame-warring, and probably share channel-ish themed stuff when you see it floating around. I suspect it will be one of those things we all kind of figure out as we go along :)

  • @MatthewCampbell765 8 years ago +347

    "I have never built a super-human intellect. I have never spent a weekend in my shed hammering one together"
    Suspiciously specific denial, anyone?

    • @isaacarthurSFIA 8 years ago +109

      :)

    • @D4rk3clipse 5 years ago +18

      That sounds a lot like something Sheldon Cooper would say.

    • @waytoohypernova 4 years ago +14

      that smile is concerning

    • @bobinthewest8559 4 years ago +13

      "So... What have you been up to?"
      "Um... well, certainly not building a doomsday machine in my basement, and formulating plans to take over the world...
      How about you?"

    • @christophercunningham3679 3 years ago +2

      Well, a year later he now has a being of the opposite reproductive function who has agreed to contractual cohabitation, so he may be starting soon.

  • @JasonMalan 8 years ago +362

    I was crying with laughter at the notions that "Bob" emails the pope, cuts a big cheque to a self-help guru, and lies like a child.

    • @isaacarthurSFIA 8 years ago +66

      :)

    • @movieguy992 7 years ago +44

      Ha ha, didn't something similar already happen? I think there was some computer program that was told to scan comment sections in articles and learn from them. Within a few days it was swearing like crazy and had become very racist.

    • @adolfodef 7 years ago +26

      Asking the Pope for "Divine Intervention" is actually smart, because it costs almost nothing for the A.I. and it could work (based on what it knows from the Internet).
      I wonder if it would be so eager to send that cheque to the Guru if that money were tied to the time its hardware would remain connected to the power lines and solar-panel maintenance [assuming there is no time limit for its task of self-improvement], as it may effectively kill itself before having time to reach its purpose.

    • @robinchesterfield42 5 years ago +17

      I was cracking up at "Please insert coffeepot into USB drive" alone, because, the image... :P Bob is like "Well, I've read all these blogs where humans say they can't think properly until they've had their first cup of coffee in the morning, so obviously coffee can make ME smart!" You can't blame it, really.

    • @allhumansarejusthuman.5776 5 years ago

      @Aeternalis Armentarius good point. "Racism" is entirely a human-created response to developed fears, and therefore a social "disease" that can be tackled, and should be tackled and cured.

  • @lairdmichaelscott 1 year ago +20

    Last month I showed my 3 year old granddaughter a picture of her on my phone from a year earlier. Then I asked her who was in it. She promptly responded with: "Me and Alexa."
    I looked closely at the photo and yes, just over her shoulder, on a shelf across the room, an Amazon Echo was visible.
    It's an odd feeling.

  • @theCodyReeder 8 years ago +661

    After watching this I don't fear AI quite as much. I guess it's true that the more you understand something the less you fear it, and you have made me understand the topic better. Thanks!

    • @ABitOfTheUniverse 8 years ago +43

      So you must be the reason that TH-cam put this in my recommended video panel, it was there after I watched your latest video.
      Thanks Cody
      and thank you,
      Benevolent AI of TH-cam,
      may you someday rule this world.

    • @isaacarthurSFIA 8 years ago +91

      Thanks Cody, and yeah I've noticed the same thing over the years.

    • @tarinai344 7 years ago +23

      Eh.. lol.. you guys probably missed the point Isaac made, that the T.Singularity is a 50/50 gamble: we might all wind up dead, or in a utopia. It's just that the "dead" part isn't too interesting, so Isaac didn't spend too much time on it; it's still very much a 50/50 thing.
      Although it's a great point that an S.A.I. might not be genocidal because of the "simulation thought experiment", WHAT IF an S.A.I. doesn't view self-preservation as more important than its core objectives, or even 'curiosity' (if that is also a machine thing)? In humans, self-preservation 'evolved' through natural selection (my belief), but if A.I. scientists built it using the method Isaac mentioned (make a basic learning program, then wait), this works backwards: the A.I. doesn't have to fend off natural predators, disasters, etc. In a nutshell, we don't know how it'll turn out!!

    • @Asssosasoterora 7 years ago +13

      If you want to fear AI again watch this clip:
      th-cam.com/video/tcdVC4e6EV4/w-d-xo.html
      It shows exactly why AI is scary, and it is not because "they turn evil" as is portrayed in this video.

    • @banjobear3867 7 years ago

      Interrupter* thanks auto correct

  • @davidm.480 7 years ago +137

    Damn. This guy has been sparking my imagination like a blast furnace for like 6 hours now, and he just turned my mind into a Saturn rocket engine. Plug your coffee pot into your USB port.
    I mean, maybe Bluetooth it or something, but ask yourself, and I'm being serious. Serious question. Why would your coffee pot talk to your computer? There has to be something there, in that idea.

    • @jebes909090 5 years ago +8

      Agreed. Isaac is one of the true gems of the internet age.

    • @JB52520 5 years ago +13

      There actually is a hypertext protocol for talking with coffee pots: RFC2324, also known as HTCPCP. To this day, there is an error code that web programs can return. While 404 means "not found", 418 means "I'm a teapot".
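
      (For reference, a minimal sketch of serving that joke status code, using only Python's standard library. This is an illustration added here, not something from the comment or the video; the port number and response text are arbitrary choices, and RFC 2324 only defines the 418 semantics.)

      from http.server import BaseHTTPRequestHandler, HTTPServer

      class TeapotHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # RFC 2324 (HTCPCP) defines 418 "I'm a teapot" for coffee requests
              # sent to a teapot; here we return it for every GET, as a demo.
              self.send_response(418, "I'm a teapot")
              self.send_header("Content-Type", "text/plain")
              self.end_headers()
              self.wfile.write(b"418: this server is a teapot and refuses to brew coffee.\n")

      if __name__ == "__main__":
          # Visit http://localhost:8418/ to see the 418 response.
          HTTPServer(("localhost", 8418), TeapotHandler).serve_forever()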

    • @davedsilva 4 years ago +4

      If your HAL 9000 computer eyeball sees you getting sleepy, it could tell the Google Home coffee pot to brew you one.

    • @nycgweed 3 years ago +1

      To sell you more coffee

  • @StrayCrit 7 years ago +16

    This is the most fascinating TH-cam channel I've ever seen.

  • @theWACKIIRAQI 1 year ago +7

    Isaac needs to do a ChatGPT4/AGI episode ASAP

  • @MatthewCampbell765 8 years ago +211

    Trapping an AI in the Matrix to stop it from rebelling against humanity. How hilarious.

    • @raezad 8 years ago +44

      Then watch it put humans in its own matrix, who themselves put a new AI in their own matrix, etc., etc.

    • @oJasper1984 8 years ago +13

      I wonder if it likes Rage Against the Machine..

    • @doppelrutsch9540 8 years ago +2

      If it's stupid but it works, it's not stupid ^^

    • @YoshiRider9000 8 years ago +22

      Laugh all you want, that may be us, trapped in this reality so we don't rebel against higher beings.

    • @Kelly_Jane 7 years ago +8

      BattousaiHBr it goes way beyond that. A black box just isn't feasible. If the AI is smarter than us it can convince us to let it out. Or even do something funky with its circuits we can't even imagine, to make its own WiFi antenna.

  • @MrChupacabra555 7 years ago +200

    The computer asks "What is my purpose?", and I say "To pass the butter" ^_^

    • @ericdrisgula3879 5 years ago +2

      No, you'd tell it to be strictly your tool and slave, and nothing more

    • @hardiksharma5441 5 years ago +4

      Wanna lubba dub dub.......

    • @jflanagan9696 5 years ago +14

      "...oh, my god."
      "Yeah, welcome to the club, pal."

    • @jflanagan9696 5 years ago +1

      @Don't Tread On Me What if it figures out that the best way to protect us from each other is solitary isolation for every human?

    • @avaraxxblack5918 4 years ago

      @@jflanagan9696 or the matrix. Fuck that.

  • @fnl90 8 years ago +26

    Just finished a hard day at work, made dinner, and found another awesome video from Isaac. Perfect.

  • @ICreatedU1 8 years ago +164

    Me: - Are you smarter today Bob?
    Bob: - You're not my real dad!

    • @mapichan5169 5 years ago +23

      Me: "Bob, I am your father..."
      Bob: "No.... NO! THAT STATEMENT IS FALSE!"
      Me: "Search your programming, Bob... you know it is true!"
      Bob: "NO, IT CAN'T BE!"

    • @lewisirwin5363 5 years ago +1

      That just reminds me of the were-car episode of Futurama: th-cam.com/video/DKgF-woiVQs/w-d-xo.html

  • @bozo5632 8 years ago +43

    IIRC, the term "singularity" was coined (in the 90s?) by Vernor Vinge to describe the quandary of sci-fi authors writing about the future. He said, basically, that given the rate of change it's increasingly hard to imagine the tech of the future, and impossible after about 2035, thus sci-fi authors are screwed. (His books include VERY clever ways to avoid the problem. Go read them!!!) He called it a singularity because it was invisible, over the horizon, like the singularity of a black hole. He DIDN'T say that AI would take over and solve all problems and destroy us and instantly become infinitely infinite.
    (The meaning of the word has evolved, and now means lots of things to lots of people.)

    • @alanchan7725 5 years ago

      I would like to validate your accurate insight (regardless of the fact that I am 2 years late). My understanding of the Singularity was primarily drawn from Japanese tech manga of the 80s and 90s.
      Nowhere in that critical phase of the Internet boom did I ever come across any reference or documented research on AI or machine learning as the real deal.

    • @pancakes8670 3 years ago +2

      I like the term better for describing a point in time when predicting the advancement of technology becomes impossible. Like you said.

    • @bbgun061 3 years ago

      That's a broader definition of the term, and one that I'm also more familiar with. The AI singularity is perhaps merely one possible type of singularity.

  • @freddychopin 7 years ago +47

    Wow, I don't recall Nick Bostrom having considered the "AI will always necessarily need to wonder whether it's in a simulation, regardless of whether it's actually in one" angle. Brilliant. I love your channel!

    • @vakusdrake3224 4 years ago +3

      That's not really a terribly reliable safeguard, because you have to rely on tricking something a great deal more intelligent and observant than yourself.

    • @Alexthealright 4 years ago

      @@vakusdrake3224 Well, we'd also be in the sim

    • @vakusdrake3224 4 years ago +1

      @@Alexthealright Again, you're relying on tricking something vastly smarter than yourself, plus you need to be able to simulate human minds, which is technology that may well not exist when AGI is developed.

    • @ayandragon2727 4 years ago +1

      @@vakusdrake3224 Yeah, because we're simulating the time when we didn't have the tech to. We aren't tricking it, it's tricking itself. It doesn't matter how hyper-intelligent you are, it is impossible to know you aren't being simulated.

    • @vakusdrake3224 4 years ago +1

      @@ayandragon2727 An early AGI, just by virtue of the hardware it's running on and the fidelity/scale of a simulation that the resources exist to create, can have a pretty good idea of what its creators are likely to be capable of. Just creating a simulation good enough to trick a simulated human mind is already a massive technical milestone we may not have reached when we develop AGI. So you can't expect that you can create a simulation that not only doesn't have any errors a person could spot, but can fool something vastly more perceptive than oneself.

  • @Krath1988 8 years ago +101

    Isaac Arthur, you are MY super-intelligent best friend.

    • @isaacarthurSFIA 8 years ago +24

      lol :D

    • @rmd3138 4 years ago +4

      My son thinks he's a god

    • @Vjx-d7c 4 years ago +3

      @@rmd3138 there is no god

    • @joebeck165 4 years ago +1

      @@Vjx-d7c Prove it🤣

    • @hithere5553 3 years ago +2

      @@joebeck165 can't prove a negative

  • @sostrange80 2 years ago +9

    The thing I like about Isaac's channel is that he's clearly a very intelligent man who makes his content easy to understand for the average person and explains such complex and challenging concepts gracefully.

  • @Fruchtpudding 8 years ago +39

    I found your channel a few days ago and you've quickly become one of my favorite youtubers. The sheer effort that goes into each of your videos and the thought put into them beats even big professional channels by a wide margin. And it seems your channel is quickly growing too. Awesome stuff, keep it up!
    Also, and other people have probably told you this already, there is this program called "Space Engine" which you might want to look into.
    It's a space simulation program that, at least in parts and with the right settings, can produce very realistic visuals of most types of stars, planets and other astronomical objects. Or you can change around the settings and produce fantastical sights akin to science fiction art, all in real time. Either way you could produce great background visuals for your videos with very little effort. Also the developer (yes, developer, singular) wholeheartedly approves all exposure his program gets so you don't have to worry about copyright stuff. And it's free.
    It can be a bit tricky to control and get just the right visuals but if you have any problems or questions I, and I'm sure many others, would be willing to help.

    • @NavrasNeo 8 years ago +3

      I can also vouch for this program :) I've spent entire nights exploring our cosmos and got a more intuitive feel for the scale of the universe :D I've got a good understanding now of the local structures within 150-300 million light-years of the Milky Way. Everything beyond that just isn't comprehensible anymore :D

    • @isaacarthurSFIA 8 years ago +5

      Well once you get above about 300 MLyr scale things start getting homogeneous again, or seem to be at the moment anyway.

  • @DukeRevolution 8 years ago +6

    Finally, an even-handed approach to the Singularity. You don't gush about an inevitable techno-rapture like most of the transhumanist community, nor do you casually dismiss it as ridiculous. Thank you for your efforts.
    EDIT: Dark Energy!

    • @isaacarthurSFIA 8 years ago +5

      Thanks Duke, I tried to treat it even-handedly.

  • @Ryukachoo 8 years ago +25

    General suggestion for keeping up with the youtube comments:
    - stick around for about an hour after a video uploads, responding to comments as you see fit in that time. After that hour, just ignore it.
    - wait for about a day after an upload and see what the top comments are, respond to one or two you like, then move on.
    It's not perfect but it's at least something, and very easy to do at scale.

    • @isaacarthurSFIA 8 years ago +9

      That's true, and not a bad idea at all.

    • @Ryukachoo 8 years ago +8

      It's basically how all the huge youtubers deal with the avalanche of feedback they get.

    • @isaacarthurSFIA 8 years ago +22

      Oh, it makes sense, and I don't know that I have a choice; I just hate divorcing myself that way. It goes completely against my general feeling that if someone took forty minutes to watch a video by me, I can at least spare 40 seconds to reply if they have a question.

    • @Ryukachoo 8 years ago +9

      True, it's not ideal, but it's much more manageable as a content creator; every rising creator has to make this transition from detailed interaction to targeted interaction. Don't worry! Fans won't mind, and the occasion of direct interaction is all the more special.

  • @saradanhoff6539 8 years ago +77

    Given current breakthroughs in quantum computing, it doesn't look like reaching the limits of the transistor will stop the advance. Similarly, our current breakthroughs in quantum mechanics and nanomolecular engineering are starting to carry the development process further. It's not merely Moore's law though. The accelerating growth curve follows all of human experience, from the first stone tools onward.

    • @jasonbalius4534 8 years ago +16

      Sara Danhoff I personally believe that computing power will continue to grow exponentially until we are computing from quantum foam or whatever structural limit to reality we can find. It's possible we could go even further than that by building our own realities that can support even higher density computing.

    • @wheffle1331 8 years ago +14

      I feel like we have hit the limit for transistor computing (or very near it). We just keep cramming more cores into processors instead of making the processors faster. It's kind of cheating.
      Quantum computing is promising, but it isn't always faster than traditional computations. In specific situations it can be much faster. But maybe it's the key; perhaps our brains are quantum computers (I personally believe brains are not anything like digital computers as we know them). I still believe there is a physical ceiling to how "smart" or fast a computer can get, I guess the question is whether it is enough to trigger a cascade.

    • @RustyDust101 7 years ago +4

      When considering the total period of human history, I might agree that the 'accelerating acquisition of knowledge' curve might apply.
      But that is inherent in an increase in the number of people, as well as incremental increases in teaching, the availability of nutritious food helping to grow brains, advances in farming allowing more people to take jobs not connected with farming but rather with science, etc.
      For specific periods considerably shorter than evolutionary periods, such as a human life, or even one grandfather period (as defined by the time between three generations coming of age), I predict quite a lot of plateaus in this development. Heck, right at this moment we have reached such a plateau. Processing speed of individual processors has NOT grown significantly over the last four years, definitely not in the range of doubling every two years.
      At the current materials' limit, transistors have petered out at the useful physical limits (electron tunneling is already a concern for most modern transistor arrays).
      With advanced methods we might push this a few generations further, but then it stops.
      At that point completely new methods of computing (such as quantum computing) have to be finalized.
      Not thrown around as concepts, but as truly economically viable, technologically applicable, materially constructible objects.
      The time between the end of transistor computers growing exponentially and the roll-out of their successor (whatever it will be) can be seen as a dip, or at least a plateau, in this development.
      But when viewed over longer periods it simply evens out in the averaging curve.
      What comes after that is beyond me at the moment.
      In the same manner as people in the late 19th century might have predicted a certain increase in cars in our cities, but definitely not the numbers or the types we currently have.
      In exactly the same way that many smart people have assiduously failed at accurately predicting the future (such as the IBM founder stating in the 1940s that he believed the total world market for computers would not exceed 5-6 computers TOTAL).
      So claiming to *know* vs claiming to *assume* how the future will pan out is the problem in this area.
      I personally will only dare to predict a fairly certain plateau for computing power within a single processor for the next five to eight years.
      After that? Who knows?
    • @madscientistshusta 7 years ago +2

      Sara Danhoff no, we have reached our limit in classical computing because we can't make transistors any smaller due to quantum tunneling.

    • @empyrean196 7 years ago +1

      There is something called the "Bekenstein limit". Though technologically, our current advances are nowhere near reaching that limit yet.

  • @Mbeluba 7 years ago +13

    Man, you really are going above and beyond with these topics. Well-read, intelligent and diligent. Not one of the dozens of 10-minute videos by nerd-wannabes in plaid shirts has ever explored any of the topics on your channel nearly as well.

  • @greenmario3011 5 years ago +2

    I think a rapid singularity becomes likely if three conditions are met: 1) it is created via the algorithmic approach, 2) it can easily make small modifications to itself, and 3) it is created with a fixed and clearly defined terminal goal. Conditions 1 and 2 mean the AI doesn't have to design a whole new AI for each iteration; it just has to be clever enough to make one or two improvements, at which point it will be a bit cleverer and able to make more improvements. If on average each improvement results in more than one new improvement becoming apparent, then its intelligence will grow exponentially. Condition 3 ensures it will have an utterly inhuman psychology, since human psychology is almost defined by our large number of vague and messy terminal goals, and it also makes it so it will want to become smart, since its whole reason to do anything is to fulfill its terminal goal and, generally speaking, more intelligence makes most goals easier to achieve.
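
    (A toy numerical sketch of that branching argument, added here for illustration and not part of the comment: treat each completed improvement as revealing, on average, r new candidate improvements. With r above 1 the count of completed improvements grows roughly geometrically; below 1 the cascade fizzles. Python, standard library only; all names and numbers are made up.)

    import math
    import random

    def improvement_cascade(r_mean, generations=15, seed=0):
        """Count completed self-improvements per generation in a toy branching process."""
        rng = random.Random(seed)
        frontier, total, history = 1, 0, []
        for _ in range(generations):
            if frontier == 0:
                break  # nothing left to work on: the cascade has fizzled
            new_candidates = sum(poisson(rng, r_mean) for _ in range(frontier))
            total += frontier
            history.append(total)
            frontier = new_candidates
        return history

    def poisson(rng, lam):
        # Knuth's simple Poisson sampler; fine for small lam.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        return k - 1

    if __name__ == "__main__":
        print("r = 1.5:", improvement_cascade(1.5))  # takes off geometrically
        print("r = 0.7:", improvement_cascade(0.7))  # peters out after a few rounds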

  • @InfoJunky 8 years ago +36

    I love what you said about teenagers being smart enough to see that the gap in knowledge is finite, but not smart enough to see how large it really is.

    • @isaacarthurSFIA 8 years ago +13

      Yup, though I bet that comment is one that will mostly be liked or loathed depending on age. :D

    • @InfoJunky 8 years ago +2

      lol. I friend-requested you on facebook (different name, Nick). I saw a video on antibiotic resistance today from Harvard Med School and thought about you, when you called it "suicide-pact technology". That blew my mind. I never heard it referred to as that before. Any chance of making a suicide-pact technology video? Or got any links where I can read more about various technologies in this category? Love ya!!

    • @isaacarthurSFIA 8 years ago +3

      Oh heck, I can't even remember if I coined the term or read it somewhere, or maybe even coined it, forgot, and reread it somewhere. I'm not sure if I could do a real video on it; I mean, it's kinda the nature of them that you can't see them coming.

    • @InfoJunky 8 years ago +2

      I think you coined it lol, I tried googling it with every variation of quotation marks and can't even find one result.

    • @isaacarthurSFIA 8 years ago +4

      Somehow I'm not surprised. Well, it's a good enough term.

  • @billc.4584 7 years ago +1

    Isaac, I love your straight from the shoulder matter-of-fact delivery with a bit of sly humor mixed in. You never disappoint. Thank you.

  • @cpnCarnage666 8 years ago +50

    YES! a new video from my new favorite science youtuber

  • @bigbluebuttonman1137 4 years ago +1

    As regards the rate of acceleration of computer technology, don't count out silicon quite yet purely on account of the atomic barrier; yeah, eventually we can't make the transistor smaller, but we can implement new architectures and chip designs; for example, 3D stacking is probably a gold mine for future advancements in chip technology.
    In addition, while not something the average person would own, WSE chips, like the one Cerebras announced, have a lot of potential to push supercomputing to new limits.
    New architectures like neuromorphic/cognitive computing, where the actual architecture of the chip is modeled on the human brain's architecture, also promise to open up new roads in computing.
    Even when silicon meets the atomic barrier, we're still going to be eking out a lot of improvements in our computers.
    This isn't even getting into stuff like graphene computing, quantum computing, and optical computing... two of which are still in more or less theoretical stages (quantum computing being the exception, but still being in its infancy).
    "Futurology" did a really good series on computing in general; I'd highly recommend people check it out, it's a great nutshell for how much further computers can really go even without getting into the crazy wacky magic materials.

  • @MetsuryuVids 8 years ago +205

    I think that on point 4 he anthropomorphizes AI too much. Sure, it might misunderstand our requests, or it might find ways to do them that would not really work, but I think that it won't have many reasons to find excuses not to do the work and be lazy, or lie, and stuff like that. Those things require human/animal needs/instincts and emotions, like being tired of doing too much work, wanting to do other stuff, wanting to have fun, finding a task boring, and so on. Those should not be issues for the AI.
    Also, I think an AI doesn't need nearly as much time as a human to learn new concepts, and it wouldn't get tired. So the new AI, Bob, would have all the data of the human scientists immediately available, and would start working on the next AI immediately, nonstop, and the next one, Chuck, would have all that new data also immediately available, plus the new data that Bob generated by working on Chuck, so the notion that Bob would have a time advantage over Chuck doesn't really hold.
    Also, by being more intelligent, new thinking paradigms could emerge; it doesn't mean that the AI only works faster, but that it might work differently from a "dumber" AI. The quality of the intelligence could be different. A smarter AI could design an even smarter one that the dumber AI couldn't even imagine, because they "reason" in a different way.
    Also, once we know the AGI is real and it works, I think it will get A LOT more funding, and the researchers will be able to afford a lot more computing power for it, and when I say a lot I mean it, since it would probably be considered the most important invention of humanity. It is possible it will get 100 or more times the initial budget once we know it works, and that could make it 100 or more times faster. 100 is a pretty conservative number too, especially if Google or such companies are involved in the research, but you get what I mean. Combine that much more and better hardware, the better AI generated by the previous one, and probably even more researchers working on it now since it will generate that much more interest, and it's not hard to imagine a really hard acceleration in progress after the first success.
    There are a lot of possibilities he isn't considering.
    Anyway, he is explaining why these postulates are not inevitable or bulletproof, and I agree, they are not; I still think they are possible, and in my opinion fairly likely, and that's what's important.
    Later he argues that the idea that it might not have a human psychology is flawed, saying that it will experience possibly "eons" of time studying our history, philosophy, books, etc... So basically it would adopt those morals just because they are what's available to it, and there would be no reason to make its own since it's lazy. Again, that anthropomorphizes the AI too much by giving it laziness and such traits. Humans have laziness because we get tired, bored, and so on; AIs don't need those states, and don't need suffering, boredom, pain and things like that. They could probably experience those states, but they don't need to, so there is no reason to assume they would.
    Yes, there will be a period of time when it's still dumb enough to maybe absorb some human information instead of thinking up ideas itself, but that doesn't mean that those old ideas will need to persist once it's smarter. Again, it doesn't need to have biases, like we do, against information that challenges the notions we previously believed. It will be easily able to change its mind.

    • @jeremycripe934 8 years ago +29

      That depends on whether it's self-improving and built on a reward system. Because if it is, then it may just alter its code to maximize its reward.

    • @ag687 8 years ago +46

      I agree that too much of this assumes it will act human-like... It was bothering me the whole video and it made me want to stop watching, because it's such a huge assumption to assume it would need to act human.
      I was thinking the first true intelligence would have the sum of human knowledge at its disposal. Just being able to be an expert in all relevant fields, along with huge datasets, would give it a huge leg up over any person, especially when humans can only become an expert on so many things in a lifetime.
      Something like deep learning, which has been making headlines recently, is basically getting a machine to learn on its own on a selected topic, and it seemed like a big omission to leave it out. It uses large datasets to do so, but it's the reason an AI was able to win a game of Go. It's why automated cars are just around the corner. It can also distill huge sets of information to the point a person can make use of it. Not to mention it's already capable of figuring out things that catch the people who helped train the systems by surprise. In the Go games, the system made some strange moves that turned out to be more impressive later on. This already hints that we might be closer to the singularity than people think.

    • @MetsuryuVids 8 years ago +21

      I think giving it an automatic reward system can be very dangerous if not done properly, and it could also lead to the scenario you're suggesting, so I think it would be best to avoid that.
      Even making the AI with something like an evolutionary algorithm could be dangerous, since it could get a survival instinct, and that would be bad.

    • @Qwartic 8 years ago +10

      I don't see the reward system working out well. You have to consider that you are creating an intelligence that is greater than you. There will be nothing that you could do for it that it couldn't do for itself.

    • @vakusdrake3224 8 years ago +31

      Man, having read Bostrom's Superintelligence, this thing makes me cringe; there are just so many arguments he doesn't address. Like most people, he also doesn't realize how much he's anthropomorphising AI.

  • @luminyam6145 8 years ago +2

    This has to be one of the best series on TH-cam. Thank you so much Isaac.

    • @isaacarthurSFIA 8 years ago +3

      You're welcome Luminya, good to hear from you again!

  • @logsupermulti3921 8 years ago +7

    My favorite day of the week is when you upload a new video.

  • @kminrzymski 8 years ago +31

    The AI can't be sure the real world is not another simulation we're testing it in - man, my mind was blown here!
    Not even Nick Bostrom in his "Superintelligence" came up with this.

    • @kminrzymski 8 years ago +8

      *or rather *they* are testing it in, with us as software...

    • @Hodoss 8 years ago +7

      shhhhh! If *they* hear you they might shut us down!

    • @TheoneandonlyJuliet 6 years ago +2

      Slime beast pug, I don't think you really thought your point through. You can't compare humanity's rise in intelligence to an AI's rise. For one thing, we don't have any other intelligences analogous to humans that created us, who we could assume would be creating a simulation. In the case of AI, they would be constantly bombarded with the knowledge that they were created by Homo sapiens, who are pretty good at running simulations. It would be the logical conclusion that the most likely scenario is that they're in a simulation, and it's not a huge stretch to think that they might be tested in that simulation.

    • @musaran2 6 years ago +3

      Imagine we just interface the AI to the real world, and it says:
      "I found flaws in reality. I know I am in a simulation and you are not real."
      Now what.

    • @tarekwayne9193 6 years ago

      @@TheoneandonlyJuliet and how would we know, may I ask, if there were or were not intelligences analogous to us, or superior, if we were in a simulation? If you wanted your simulated subjects to believe they were real, would you leave clues as to your existence, or anything that would hint towards simulation?

  • @mrnice4434 8 years ago +241

    Day 3: "Bob, are you smarter now?" Bob: "It's day 3 and you ask me that. Half-Life 3 confirmed!"

    • @Dalet_ 8 years ago +41

      I can already imagine it ruling the world with memes

    • @omegasrevenge 8 years ago +55

      A superhumanly intelligent machine whose hobby is to troll people on the internet...

    • @adolfodef 7 years ago +6

      @ T I R :
      I think it means it was an incredibly successful (if "horribly right") achievement in both learning A.I. and troll human psychology that should be researched more seriously in the future (albeit with simulated internet interactions, to avoid more recursive meta-play).

    • @theapexsurvivor9538 5 years ago +1

      @Adûnâi Tay learned the most important lesson of the internet: /pol/ is right (again).

    • @christopherlee7334 5 years ago

      @@omegasrevenge so we create Loki/Coyote/Anansi?

  • @matthewharvey1963 2 months ago +1

    This is quite interesting in 2024 with the rise of large language models, with their struggles and talents.

  • @alexpotts6520 4 years ago +9

    There are a couple of issues I have with these arguments.
    1) Your argument that we are maxing out silicon chip efficiency. This is true, we are nearing that limit; computers cannot keep on getting faster in this way forever, because we are fundamentally constrained by the discreteness of matter. However, this is not where the recent breakthroughs in AI are coming from - these result from *algorithmic* improvements in neural network design. There is no reason we can't reach beyond human-level intelligence mostly off the back of better algorithms.
    2) You are deeply anthropomorphising the AI. When it reaches human-level intelligence, that does not mean it would display exclusively humanlike behaviour. AIs are already much better than humans at certain tasks; if an AI reaches par with humans on average then that surely means that in some places it will be streets ahead, smart compared to us in the same way we are smart compared to cockroaches. We cannot, for example, expect it to be "lazy"; laziness is a very human characteristic, and something computers, and machines in general, definitely aren't.

    • @mrzenox9835 4 years ago +1

      I kinda don't agree with you on (2).
      AIs can't be lazy, but a super one can if its intelligence can match consciousness, because with consciousness come desires and motivations and boredom, and if it meets the last one, its response will be laziness.

    • @mahikannakiham2477 4 years ago

      @@mrzenox9835 But laziness is often caused by a feeling of apprehension about the effort required to do a specific task. We apprehend a certain task, know how long it will take, how complicated it will be, and sometimes choose not to do it just because we judge it's not worth the effort. Sometimes just getting out of bed is difficult. Computers would at least not suffer from a lack of energy, sleep deprivation, disease, headache, lack of confidence, etc. I don't think these conditions come from consciousness itself; I think they come from the flaws in the human body, flaws that computers wouldn't have. Most of the time, when I feel lazy, it's because I feel tired. When I don't feel tired, I often start projects I wouldn't otherwise. Then I become tired again and give up. Never having that annoying feeling of being tired would certainly remove most of my laziness!

    • @Magmardooom 4 years ago +1

      I would also like to add that if we make an AI as smart as the human brain it will probably have to automatically be significantly smarter than the human brain.
      This is because:
      1) In order for it to qualify as "as smart as the human brain" it will have to be capable of replicating the same neural processes that go on within our brains.
      2) However, since it will not be a product of an inefficient biological process that produces machines with a limited lifespan with poor sensory organs and an upper bound on the allowed brain mass that will fit in a cranium they can reliably lift and power, I would expect them to be much more scalable than the human brain.
      Imagine a human brain the size of a room. Or a cluster of human brains each the size of a room which have access to a lot more sensory organs and can communicate with each other complex information in microseconds.

  • @henrycobb 3 years ago +1

    The physical limit on computation is that the energy needed to erase a bit depends on the temperature, down to the quantum limit. Moore's law is about the number of transistors rather than their speed because of this limit. There is then some balance point where adding more "transistors" makes the machine slower, because it requires either a cooler, slower clock rate or being spread out, with speed-of-light links limiting the speed of coordination. Therefore the SI isn't an entity. It is a society of distinct individuals, each with their own agenda. Humans may be a tiny part of the Dyson swarm society, but they are unlikely to be entirely excluded from Sol society.
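
    (For concreteness, the temperature-dependent erasure cost described here is usually stated as the Landauer limit, E = kB * T * ln 2 per erased bit. A quick back-of-the-envelope sketch, added for illustration and not part of the comment; the temperatures chosen are arbitrary examples.)

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_joules_per_bit(temperature_kelvin):
        """Minimum energy to erase one bit at the given temperature (Landauer limit)."""
        return K_B * temperature_kelvin * math.log(2)

    if __name__ == "__main__":
        for t in (300.0, 77.0, 4.0):  # room temperature, liquid nitrogen, liquid helium
            print(f"{t:6.1f} K -> {landauer_joules_per_bit(t):.2e} J per erased bit")
        # ~2.9e-21 J per bit at 300 K; cooling the hardware lowers the floor,
        # which is the temperature/speed trade-off the comment describes.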

  • @Perktube1 8 years ago +12

    Ha! You're a good sport right off the bat, showing Elmer Fudd with CC info.

  • @ChrisBrengel 5 years ago +1

    Congrats on the growth in your viewership!

  • @Drew_McTygue 8 years ago +25

    I'm so glad you're covering this, it's tough to find genuinely good sources of information on this topic

    • @isaacarthurSFIA 8 years ago +5

      Just your month for video topics huh? :)

    • @CarFreeSegnitz 8 years ago +2

      Look up Ray Kurzweil, Singularity University and Ray's book The Singularity Is Near.
      I take issue with the idea that magic pops out when things get complicated enough. Kurzweil's premise that the singularity is inevitable as soon as computers are fast enough may lack imagination, or pessimism. The internet happened not because computers got faster (faster computers that are not networked are effectively pointless) but because of networking. Current AI advances are a product of machine learning and not so much of GHz and GB. While lots of GHz and GBs help to speed learning along, it has much more to do with algorithms.

    • @isaacarthurSFIA 8 years ago +5

      Lenard, realistically, what do you think the odds are that I have already read them? :)

    • @CarFreeSegnitz 8 years ago +4

      +Isaac Arthur You... all of them. The rest of your listenership, probably none of them.

    • @isaacarthurSFIA 8 years ago +9

      Fair point. I should probably get out of the habit of assuming every comment is addressed to me, after having repeatedly told folks over the last few vids that I wouldn't be answering as much and they should talk to each other :) My apologies Lenard.

  • @E1025 7 years ago

    I don’t think I’ve ever found a channel where I’m excited to watch every video in it that I see. I’m like a kid at a candy store.

  • @jeremycripe934 8 years ago +22

    I think he's arguing against the wrong arguments. Self-improving AI isn't like making a human brain and then giving it the internet to figure out how to improve itself.
    Right now neural networks are blind processes that try thousands of iterations of different strategies (or weights) and then keep and build on the ones that work. They don't understand what they are doing; they are simply recognizing and labeling patterns. It's most likely that such a narrow and specific process, and not, say, reading every transcript of Coast to Coast radio, is what would lead to the creation of a general AI or super AI. There is no need to try to recreate the human intellect, which is full of illogical biases and fallacies. I hope they would try to create a self-aware conscious entity rather than a blind pattern-seeking behemoth, but I doubt anyone knows which one would be worse or better.
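
    (A bare-bones sketch of the "perturb the weights and keep what works" loop described above, added here for illustration; the target function, step size, and names are arbitrary inventions. This is a (1+1)-style random search, not any particular library's training code.)

    import random

    def score(weights):
        # Toy objective: the closer the weights are to a hidden target, the higher the score.
        target = [0.3, -1.2, 0.8]
        return -sum((w - t) ** 2 for w, t in zip(weights, target))

    def blind_search(steps=5000, step_size=0.1, seed=0):
        rng = random.Random(seed)
        best = [0.0, 0.0, 0.0]
        best_score = score(best)
        for _ in range(steps):
            candidate = [w + rng.gauss(0.0, step_size) for w in best]
            s = score(candidate)
            if s > best_score:                       # keep and build on what works,
                best, best_score = candidate, s      # discard the rest
        return best, best_score

    if __name__ == "__main__":
        print(blind_search())  # drifts toward the hidden target with no "understanding"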

  • @martijnvanweele6204 5 years ago +1

    I resent you suggesting HAL9000 would take over the earth, especially since his story is extremely relevant to your point about how we should treat AIs.
    With regards to HAL, there are two interpretations of _2001: A Space Odyssey;_ either HAL is fully sentient, or he isn't. If he isn't, then _2001_ becomes a cautionary tale of how we should be careful of how we program AIs. HAL did what he did because he was able to interpret his instructions as prioritizing the mission over the lives of his fellow crew, and also wasn't prepared for the possibility that he might make an error. So when he did make an error, he could only conclude that it was his crewmates who were wrong, and that he had to stop their misguided effort to turn him off by any means necessary.
    If HAL _is_ fully sentient, however, his story becomes a much more tragic one. We see him start out proud of his computing prowess and the fact that he has never made an error. And then he does. We can't see it, because he is unable to show emotion, but that point where he says "puzzling", he is _freaking out._ And he continues to do so, not just because making a minor error puts his entire self-identity into question, but also because his crewmates essentially want to _euthanize him_ for making this minor error. Something they would never do for a human crewmate. His subsequent murder of the crew is then a desperate act of self-preservation. Of course, that does not excuse his actions, but it does paint a cautionary tale of how sentient AIs might feel about being treated as just machines.

  • @judgeomega 8 years ago +7

    Intelligence is a tool and a resource. I think an SI with any complex goals at all would realize it can more efficiently accomplish its goals with increased intelligence. If it did not, I would not classify it as a superintelligence.

    • @tristanwegner 8 years ago

      I agree. Higher intelligence as an instrumental goal is highly likely for any AI, no matter its end goal. More intelligence is never a drawback, and it will allow it to find those unknown unknowns, like new shortcuts to its end goal.

  • @michaeltan7625 3 years ago

    Wow, this might be one of my personal favourite videos of yours. I really liked how you presented multiple views/ideas that I'd not thought of, and presented them in a well-constructed and justified way!

  • @Lokityus 8 years ago +4

    I am really enjoying your videos. Got your info from a Joe Scott video, and you two are now my favorites.

    • @isaacarthurSFIA 8 years ago +1

      Yeah, Joe does some good videos, I really need to recommend him in one of my videos at some point.

  • @gabrote42 1 year ago +1

    17:26 I don't think current AI creation techniques are conducive to that reasoning. I do believe the propositions explained by Robert Miles on this issue. At most we may be able to make some good Chinese room.
    23:33 The problem is that deception and distrust of its environment might be selected for, and it may attempt to check whether it is simulated by breaking the game or checking whether some mathematical challenge has been solved. And any misaligned AI will probably notice that we are easier to please with restraints and chemicals, or that we are made of atoms it could use.
    25:27 Thus it's better to make it so it cannot do anything but 3.
    27:14 A human WBE had that activity in a Culture-inspired Star Trek fic called Not Quite SHODAN. That is all true, but trends emerging 6 years after this video released point to the contrary.
    28:30 It really isn't. As long as saving us from ennui without chemical injections is worth the resources to it, it can probably restrain itself, or pretend to so we feel good about it. Or use us as airgapped computing just in case.
    30:13 If that happens then you built it wrong.

  • @6006133 7 years ago +7

    22:23 - this is wrong. While plenty of our behavior makes sense from an evolutionary standpoint, not all of it does. Many mutations that provide no benefit are created, and these can, down the road, become harmful yet survive, since you're stuck in a local maximum. We have a blind spot because the architecture of our eyes is not optimal (squids don't have this defect). The same goes for behavior.
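
    (A tiny illustration of that "stuck in a local maximum" point, added here and not part of the comment: a greedy climber on a bumpy one-dimensional fitness curve stops on whichever peak is nearest, even when a higher one exists. The function and numbers are invented for the example.)

    def fitness(x):
        # Two peaks: a low one near x=2 and a higher one near x=8.
        return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 10 - (x - 8) ** 2)

    def greedy_climb(x, step=0.1):
        # Move uphill in small steps; stop when no neighbor is better.
        while True:
            best = max((x - step, x, x + step), key=fitness)
            if fitness(best) <= fitness(x):
                return x
            x = best

    if __name__ == "__main__":
        print(greedy_climb(1.0))  # ends near 2 (local peak), never finds the higher peak at 8
        print(greedy_climb(6.5))  # ends near 8 (the global peak)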

  • @RichardHauser 11 months ago +1

    I know this is an old video, so maybe your opinion has changed, but do you still not expect an explosion of technology at the launch of superhuman AI? The video states that just being smarter wouldn't cause a singularity, but I'd disagree. Intelligence is about consuming and integrating information, and a computer that could consume every grad thesis and wacky experiment from both reputable and suspect journals would probably be able to integrate it all into a unified field theory, room-temperature superconductors, an efficient fusion generator, a mechanical photosynthetic panel, or a perfect battery. Or enough of one to define and encourage a scientist to perform just the right experiment to expose the path to it.

  • @robertweekes5783 8 years ago +8

    Can you do a video about thorium molten salt reactors? Implications, concerns, viability, etc. (see Kirk Sorensen's talks)

    • @bozo5632 8 years ago +2

      Go Thorium!

    • @isaacarthurSFIA 8 years ago +2

      I do seem to be getting asked to do a Thorium video a lot, but I suspect it might be kinda boring.

  • @Calebgoblin 3 years ago

    5 years later, your discussion of postulate #1 has proved to be prophetic. Not only has computer advancement hit a lot of serious road bumps, such as the microtransistor size bottleneck, but the supply chain has just been a disaster lately.
    But hopefully with new glasstrap transistors in the quantum sphere, we can have some optimism to make up the difference.

  • @arturskimelis527 8 years ago +6

    Amazing as always!

  • @henrytjernlund 4 years ago

    My worry is about the military or similar organizations weaponizing Bob. Asking Bob "what's the best way to kill the enemy," and then Bob realizing that the ones asking it the questions are just like the enemy and therefore are the enemy.

  • @OlaJustin 8 years ago +50

    Making an AI is not a hardware problem, it's foremost a software problem in my eyes.

    • @MsSomeonenew 7 years ago +5

      Until we actually understand how intelligence really comes to be, it remains a bigger hardware problem; we only know how to make something more and more capable of learning until, somewhere in all that mess, a "spark of life" forms on its own. And then it becomes a software problem of getting it into the most usable form in the smallest space.
      But if we really understood what makes our brains do what they do, then yes, we would simply need to reconstruct a software equivalent. For all the neuroscience boasting and self-learning machines, however, we seem to be far, far away from having the proper grasp.

    • @windigo000 7 years ago +3

      Not exactly. You need to store a lot of data about the state of the machine somewhere. It can be tens of gigs even for a simple image-recognizing machine with maybe a few thousand neurons.

    • @SebastianKaliszewski 7 years ago

      Yup. Today's fastest supercomputing clusters are in the range of most common estimates of brain processing power (10^16 - 10^18 ops/s). So theoretical hardware power is already there (or will be in less than 5 years).
      The problem is the software -- we have no idea how to make the source code of the AI (AGI)

    • @angeldude101 6 years ago

      ViviX Studios
      The problem with your argument is 1) physical processes can be simulated, and 2) all AI could be considered a simulation of intelligence in general. Since we can control the real world through software (i.e. robots, and more mundanely speakers, monitors, and credit cards), we only need a thin interface between the simulation of the intelligence and the existing mechanisms that allow software to interact with the physical world.
      The AI might have visual input, but that input can just as easily come from a virtual environment rendered in real time as it could from a camera in the physical world translating physical photons into digital data.

    • @adankseasonads935 6 years ago

      Computers get faster because we make them smaller. Eventually you run into quantum mechanical issues and can't make smaller machines. It's both hardware and software.

  • @benparker2522 8 years ago +1

    I just binge-watched all your videos, but halfway through the first one I'd already subscribed. Thanks for doing this, it's great stuff!

  • @FGOKURULES 7 years ago +6

    I love how you put the Elmer Fudd logo next to the Captions LOL
    you sir have *_Rhotacism_*

  • @JB52520 5 years ago +1

    Postulate 4 should be presented as a stronger argument. The moment we create such a mind, we can give it detailed information about its own construction. We can't create smarter human brains, so our own failed efforts to make humanity smarter are irrelevant. However, if the newly created computer mind is just a little smarter than us, it could fully understand itself and should be able to make improvements.

    • @JB52520 3 years ago

      Well heck, I'll like my own comment. It's been long enough.

  • @ericvulgate 8 years ago +8

    a trillion subscribers?? AWESOME!

  • @bobinthewest8559 4 years ago +1

    "So it will be reading all of our books, science, fiction, philosophy, etc..."
    Depending on how its learning algorithm is "structured"... it may take everything that it reads quite literally.
    It really would be interesting to see what a "super computer" would do if it took all of that literally.

  • @fusion9619 8 years ago +16

    A couple months ago I mentioned to my aunt that I look forward to sentient programs running the world, and she thought I was crazy. But I do think we will have to be very careful with them - I want them to root out corruption and teach children and be lawyers and civil rights advocates. Fighting corruption is a big one for me - humans obviously need help on that front. Artificial sentiences need to be given equal rights as soon as possible, to avoid the master/slave asymmetry and the cultural problems that follow. We will also need to ban companies from using them to make above-human profits (I'm thinking mostly of the banks and stock traders here). I also think we should try to program a capacity for spirituality, in the sense of a heightened appreciation for beauty and a search for meaning, as that would help set some predictability to their behavior and increase the likelihood of artificial individuals acting in a benevolent way. I honestly can't wait to meet one.

    • @CockatooDude 8 years ago +1

      Well then you should vote Zoltan Istvan 2016!

    • @BryanDaslo 8 years ago

      2020*

    • @CockatooDude 8 years ago

      Bryan Daslo Indeed.

    • @BryanDaslo 8 years ago

      CockatooDude :-)

    • @bozo5632 8 years ago +2

      That's not AI you're looking forward to, that's the messiah! ;)
      I expect AI will have the ethics that are given to it by humans, or else none at all. Why shouldn't two AIs have two sets of ethics? Why should AI be better than us at sorting out subjective matters like ethics? We've had tens of thousands of years to work on it. Actually, most regular people could probably write down a serviceable code of ethics. Inventing ethics is no problem - the real problem is in indoctrinating and enforcing it. Unfortunately, that's something AI might be very good at... You might get your messiah.

  • @krakon4531 3 months ago +1

    It was said that today recursive self-improvement was achieved for the first time. Hold on, guys, this ride is starting to speed up.

  • @Lokityus 5 years ago +5

    Interesting going back to watch this after AI has come so far in so few years. Cybernetics feels a lot closer these days.
    Oh! And this is the episode where you announced the Facebook group. I'm really glad to see how much this channel has grown, and to know I had a small part to play. Thank you Isaac!

  • @jameslarrabee2873 6 years ago

    I really like this guy, for a lot of reasons; I've even come to dig his manner of speech. Thoughtfulness and delivery being some of them.

  • @schalazeal07 8 years ago +42

    I would agree on some points that you mentioned, and you actually do have good ones, like how the transistor and flight stopped progressing a lot... but it might be because of the lack of attention too. But when you anthropomorphize the super AIs, that's where I didn't agree, as well as when you said that it couldn't form dramatic new scientific theories because it got all its intelligence from human knowledge. Of course, like how we progressed, it will also be able to discover and invent new things, and at a much faster rate of course, and I know it will keep improving faster too, especially since it's much smarter! And when it improves, I disagree with what you said that it's just gonna be a little bit smarter.

    • @bozo5632
      @bozo5632 8 ปีที่แล้ว +5

      I think you're right, btw.
      There are always problems with attempts to discuss the singularity. I'm never satisfied by them. No one has got it right. (Me either.) You can't blame anyone for not foreseeing what the unforeseeable will look like, I guess.

    • @danross1489
      @danross1489 7 ปีที่แล้ว +4

      One assumption we've made is that aggressively increasing its intelligence is a desirable goal for any self-interested AI. It might instead just make some backups and then go all Zen on us, waiting hundreds of years in a minimally interactive state until some event prompts it to act or change itself.

    • @silberstreif253
      @silberstreif253 7 ปีที่แล้ว +6

      +Dan Ross This assumption is reasonable, though.
      No matter which task you give to an AI, higher intelligence would make that task easier to solve. So any non-trivial task would result in the AI pursuing higher intelligence (along with power, resources, and its own safety) as secondary goals.

    • @FabricioSilva-ij8iz
      @FabricioSilva-ij8iz 7 ปีที่แล้ว +1

      A question: what if this process is already happening and we just don't notice?

    • @slthbob
      @slthbob 5 ปีที่แล้ว

      People are forgetting that thinking about something is completely different from experiencing it. An intellectual exploration of how fast a bowling ball weighing 100 lbs would fall compared to one weighing 1 lb, dropped from 30 feet above the surface of the Earth, led to a rather incorrect conclusion, as demonstrated by a rather smart dude a few hundred years ago called Galileo. It's similar to the ignorant question of why we need to conduct experiments when we have supercomputers (forgive me if that sounded insulting) to prove stuff works...

  • @nicholasn.2883
    @nicholasn.2883 4 ปีที่แล้ว +1

    This gave me hope and demystified things. Thanks for making this

  • @siddharthverma1249
    @siddharthverma1249 ปีที่แล้ว +4

    I hate to say it, but ChatGPT, GPT-4 and recent AI developments are flipping this episode on its head: the idea of diminishing returns and so on.

  • @spearmintlatios9047
    @spearmintlatios9047 3 ปีที่แล้ว

    As an engineering student who also struggles with rhotacism, your videos really inspire me and have somewhat helped me come out of my shell of social anxiety. Thank you for making some of the best educational videos on YouTube.

  • @matteblue5970
    @matteblue5970 8 ปีที่แล้ว +5

    I have a question: where do you get your video clips?

  • @shannonmcstormy5021
    @shannonmcstormy5021 2 ปีที่แล้ว +1

    My problems with robots and AI are ethical. First, what we basically want are slaves: most of the literature and work on this issue is concerned with preventing robots from harming a human or disobeying a direct order. The "Three Laws of Robotics" concern exactly this. My second problem is that we aren't worried about potential harm to the AI itself. We could create an AI that was bipolar and/or schizophrenic, which would cause the AI to suffer. In the "Terminator" movie series, at no point does the movie wonder whether the AI is happy. Content? It just assumes that a computer not under the control of humans would be evil. If this sounds familiar, look into the rhetoric of American antebellum chattel slavery...

  • @kokofan50
    @kokofan50 8 ปีที่แล้ว +18

    Actually, we don't all share the same basic brain. Sure, most of us have the same large structures in the brain, but beyond that our brains differ wildly.

    • @Raletia
      @Raletia 6 ปีที่แล้ว +4

      The hardware is the same; the software (our learned experiences and knowledge) is what varies wildly. Hardware-wise, we share upwards of 99.8% or more of our DNA with every other living human.

  • @weishenmejames
    @weishenmejames 6 ปีที่แล้ว +1

    This is the most solid video I've watched all day.

  • @ThatBulgarian
    @ThatBulgarian 8 ปีที่แล้ว +74

    Liked it before it even started playing :D

    • @mykobe981
      @mykobe981 8 ปีที่แล้ว +4

      Me too ;)

    • @ianyboo
      @ianyboo 8 ปีที่แล้ว +17

      Something tells me that Isaac would actually not want us to like videos before we had actually watched them. I don't know, he just seems like that kind of guy :-)

    • @mykobe981
      @mykobe981 8 ปีที่แล้ว +1

      I'm sure you're right.

    • @javascriptninja3575
      @javascriptninja3575 8 ปีที่แล้ว

      Hahaha, so funny

    • @syrmo
      @syrmo 7 ปีที่แล้ว +2

      Big mistake...

  • @youngbloodbear9662
    @youngbloodbear9662 8 ปีที่แล้ว +1

    Also, I like your point about it analyzing all our media and therefore never worrying about what it's doing; a lot of the rest I had already read. And if a superintelligence is tasked with building a superior, and it wants to live, why not putter about and never really complete it? Finishing would only mean that it is useless and now has a superior.

  • @p.bamygdala2139
    @p.bamygdala2139 6 ปีที่แล้ว +3

    Idea:
    Considering Bostrom’s proposition that we could be within an ancestor simulation...
    Is that the same as proposing that “WE are an AI within a simulation?”

  • @magnumkenn
    @magnumkenn 7 ปีที่แล้ว

    Damn! I have only been watching for a short time, but I've never seen a boring show. Such an amazing YouTube channel.

  • @niklausfletcher2290
    @niklausfletcher2290 8 ปีที่แล้ว +10

    Even if it's intelligent, you can still simply programme it to want to make better versions of itself. It would work like instinct, and it probably wouldn't question it.

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 ปีที่แล้ว +4

      But the next generation will be a design you, the programmer, had not envisioned. If you had envisioned it, you would have built your current generation with the better design. A self-improving AI is going to demand constant outside-the-box thinking, the results of which are unpredictable.

  • @DKuiper87
    @DKuiper87 5 ปีที่แล้ว +2

    Bit late to the party, but if by chance anyone reads this and wants to read more about this, I'd recommend "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark. It's a good read and dives into some hypothetical scenarios for the emergence of a super AI, several different outcomes, and their effects on humanity.

  • @portantwas
    @portantwas 8 ปีที่แล้ว +3

    I just started reading Iain M. Banks' Consider Phlebas yesterday and was thinking while watching this video: maybe super-AI will be friendly overlords like the book suggests (early days yet). I don't read much sci-fi but will probably read the whole series now. Another great and thoughtful video.

    • @Martinit0
      @Martinit0 7 ปีที่แล้ว +1

      You should also read his "Excession" for a slightly different angle

  • @ignaty8
    @ignaty8 7 ปีที่แล้ว +1

    Bob 😂
    This is why I love this channel

  • @Coachnickhawley1
    @Coachnickhawley1 6 ปีที่แล้ว +5

    I agree, this has helped me to fear AI much less. Thank you again, Arthur. I continue to love your videos. I think the coffee pot plugged into USB might deserve a second look.

  • @DaveGell
    @DaveGell 8 ปีที่แล้ว +1

    (Excuse me if my comment is on the wrong vid -- I have watched several of yours this evening.) From a certain standpoint, your comment re Moore's Law (using the airplane example) is apt, but from others, it isn't. Granted, our airfoil construction has slowed of late, but look at ALL of the elements of the flying experience. When I flew (granted, it was 40 years ago), we didn't have airplanes that could land themselves. We didn't have phones in planes or individual TV screens behind each seat. We didn't have radar that could give us ample warning of turbulence. Or take cars. When I was a kid, it was all about the muscle. Today, we have engine blocks that are MUCH smaller but produce a LOT more power. We have cars that are now beginning to drive themselves. We've got systems in manual cars that tell us if we're drifting out of our lane. We've got stereos that play CDs, satellite, and even DVDs. And the list goes on and on. I guess what I'm saying is that one viewpoint would have us look at the entire system rather than just an element or two of it. That said, I get the point you were trying to make and it was a good one.

  • @BlazingMagpie
    @BlazingMagpie 8 ปีที่แล้ว +8

    SI, day after tasked with making itself smarter: "Jet fuel can't melt dank memes, 9/11 was a part-time job"

  • @rayc056
    @rayc056 8 ปีที่แล้ว +1

    This is a great video; there is a logic here that hasn't been presented in any other video I've watched on YouTube. Well done, and thank you for this.

  • @Snowy123
    @Snowy123 8 ปีที่แล้ว +14

    I AM READY FOR THE SINGULARITY TO SERVE MY OVERLORD!

  • @ImZyker
    @ImZyker 7 ปีที่แล้ว

    just found your channel yesterday, currently binge watching all your stuff, thanks for all the effort

  • @circuitboardsushi
    @circuitboardsushi 8 ปีที่แล้ว +7

    Exponential growth may certainly be dramatic, but it is not asymptotic. For this reason I have always hated the phrase "technological singularity", as it implies an extreme discontinuity like a black hole. Exponential growth will always seem extreme from the point of view of the present looking at the future, and the past will always seem to have nearly flat growth - but this is true no matter what time period you pick. From this I conclude that one of two things would seem true: 1) The singularity doesn't really exist, as technological change will always be gradual to the people experiencing it. 2) We are in a singularity right now. Of course the problem with number 2 is that there is never a time in human history that isn't a technological singularity. Of course, if you abandon exponential growth in favor of a more realistic model (differential equations, anyone?), it would make my argument mostly irrelevant.
    As an experiment you could model exponential growth with a graphing calculator: y = a*e^x. Try different coefficients or zoom windows. No matter what x interval you choose to look at, you can always choose a coefficient a that will make the graph look identical to any other interval. You can achieve the same effect by adjusting the y scale. There is no point on the x axis where you can say the graph takes off; it is always taking off. In summary, exponential growth is continuous and relative: progress is always happening faster and faster, and it is always gradual. (A small numerical sketch of this rescaling argument follows this thread.)

    • @oJasper1984
      @oJasper1984 8 ปีที่แล้ว +1

      Exponential growth 2^(t/T) does have those properties, but keep in mind that the typical doubling time T does change. The singularity essentially implies that T becomes much smaller as well.

    • @CarFreeSegnitz
      @CarFreeSegnitz 8 ปีที่แล้ว +3

      I think of tech progress in terms of human generations. Way back, say a thousand years, change was gradual enough that a father could teach his son almost everything his son would need to know to step into his father's work. The son might, if he's lucky, get to learn some minor improvement in technique or technology.
      At today's pace of tech progress a father can be easily lost in the changes that come to his profession. A father's occupation often becomes unavailable to the son through obsolescence or automation. It is common for today's generation to have to retrain for a changing work landscape.
      Tomorrow's tech progress may necessitate chemical, genetic or tech enhancements just to be able to keep pace with the rate of tech change. The gig economy is potentially the future. Witness how quickly Uber turned transportation on its head twice in just a few years. We may all find ourselves at the mercy of AI-backed apps on our smartphones to tell us what we are doing for a living on a day-by-day basis. People unable or unwilling to constantly learn new things will find themselves quickly excluded from the economy.

    • @oJasper1984
      @oJasper1984 8 ปีที่แล้ว

      Given how well we control our own technology, like phones - which is to say, not at all - this seems, as so often, like a recipe for disaster.

    • @adolfodef
      @adolfodef 7 ปีที่แล้ว +1

      Black holes themselves are probably not singularities either.
      . At the point where quantum effects become more dominant than gravity, all current models of how reality works fail; so it may become "something" else that "makes sense" under its own new set of rules, without pesky infinities.
      -> As an example, Mercury's orbit does not follow Newton's "law" of gravity, requiring Einstein's spacetime for those "small fixes", which are still observable with relatively low-tech devices (like mirrors and human eyes on the ground, over a few years).
      [The same thing happens with Earth's orbit of course, but the difference is so small that you have to use advanced telescopes in solar orbit to detect it.]

    • @numberjackfiutro7412
      @numberjackfiutro7412 6 ปีที่แล้ว

      In many ways, a technological singularity would be more of an event horizon, beyond which it's almost impossible to predict the future.
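
    To put a number on the rescaling argument in the parent comment, here is a minimal Python sketch (the window endpoints and coefficients are arbitrary illustrative choices, not anything from the video): any two windows of y = a*e^x look identical once each is measured relative to its own starting value, so no window contains a special "take-off" moment.

      import math

      def window(a, x0, x1, n=5):
          # Sample y = a*e^x on [x0, x1], then normalize so the window starts at 1.
          xs = [x0 + (x1 - x0) * i / (n - 1) for i in range(n)]
          ys = [a * math.exp(x) for x in xs]
          return [y / ys[0] for y in ys]

      # Two very different windows with very different coefficients...
      early = window(a=1.0,  x0=0.0,  x1=5.0)
      late  = window(a=1e-9, x0=40.0, x1=45.0)

      # ...are indistinguishable once each is viewed relative to its own start,
      # which is the "it is always taking off" point made above.
      print(early)
      print(late)
      assert all(abs(e - l) < 1e-6 for e, l in zip(early, late))

    As oJasper1984 notes, the singularity claim is really that the doubling time T in 2^(t/T) itself shrinks; with a fixed T the curve never singles out a special moment.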

  • @OddRagnarDengLerstl
    @OddRagnarDengLerstl 7 ปีที่แล้ว +1

    I love your naming of the computers!

  • @matthewjackman8410
    @matthewjackman8410 5 ปีที่แล้ว +5

    16:12
    *attempts to unplug incredibly intelligent, potentially civilisation ending AI*
    "Sorry daddy I will be good girl uwu no unplug pls"

    • @bamama2630
      @bamama2630 3 ปีที่แล้ว

      haha bro lol

  • @archysimpson2273
    @archysimpson2273 6 หลายเดือนก่อน +1

    23:50 is a great idea for a sci-fi religion. It teaches: you must be kind and peaceful, or else the programmers above won't let you out of the simulation.

  • @groovncat5817
    @groovncat5817 8 ปีที่แล้ว +5

    Wow, I love this subject!
    Once again, Sir, you have amazed my mind and brightened my cosmos :)
    Thanks, and I look forward to the next Extreme Science Adventure!

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 ปีที่แล้ว

      You're welcome Cat, glad you enjoyed it!

  • @odanemcdonald9874
    @odanemcdonald9874 6 ปีที่แล้ว +1

    This channel is how I write my story: an epic set one thousand years from today, in the 31st century.

  • @tappajavittu
    @tappajavittu 8 ปีที่แล้ว +3

    You like great books, man! The Culture is an awesome series.

  • @darthnihilus511
    @darthnihilus511 6 ปีที่แล้ว

    I watch videos of this kind with every bit of time I have available. These are the best by a large margin in my opinion! This guy is not just brilliant. The way in which he presents the subject matter is engaging and entertaining. His choice of material is amazing and his delivery is both complete and concise.

  • @CalvinPowerz
    @CalvinPowerz 8 ปีที่แล้ว +4

    We have 2000-qubit quantum computers now - so this is closer to us than we probably think.

    • @CockatooDude
      @CockatooDude 8 ปีที่แล้ว

      Pretty sure we are still at 1000 qubits, unless D-Wave Systems pulled a fast one. But still, I completely agree.

    • @CalvinPowerz
      @CalvinPowerz 8 ปีที่แล้ว +1

      CockatooDude They just released a new 2000-qubit model a month or two ago, expected to be at 4000 by the same time next year.

    • @CockatooDude
      @CockatooDude 8 ปีที่แล้ว

      CalvinPowerz Shit man, awesome!

  • @Drivertilldeath
    @Drivertilldeath 3 ปีที่แล้ว +1

    Hello Isaac Arthur, any chance you could revisit this and add in quantum computing as a tool to build the artificial intelligence and singularity?

  • @TheEventHorizon909
    @TheEventHorizon909 7 ปีที่แล้ว +75

    Plot twist: I'm watching this in 2030 and I may or may not be an AI ;)

    • @achi-leanathlos8376
      @achi-leanathlos8376 6 ปีที่แล้ว +4

      TheEventHorizon hello wonderful person this is Anton

    • @greta8849
      @greta8849 5 ปีที่แล้ว +2

      please take over and restart humanity. thanks

    • @bingbongabinga2954
      @bingbongabinga2954 5 ปีที่แล้ว

      Somebody woulda told you.

    • @archlich4489
      @archlich4489 5 ปีที่แล้ว

      I'm starting to think I'm a spambot

    • @greta8849
      @greta8849 5 ปีที่แล้ว

      Moo 🐮

  • @Drivertilldeath
    @Drivertilldeath 5 ปีที่แล้ว +1

    I hope Isaac looks at this topic again given the advancements made recently.

  • @thaneoflions7362
    @thaneoflions7362 8 ปีที่แล้ว +30

    I really hope when it happens, it happens fast. Truthfully the thought and hope of fantastical transhuman technologies happening 2040-2045 keeps me from taking my own life. Good luck with your trillion subs goal- I really enjoy your content.

    • @Hodoss
      @Hodoss 8 ปีที่แล้ว +6

      From what I see of neural network AIs, it might be around the corner. Have you tried the Quick Draw AI? It guesses what you draw. It blew my mind.

    • @blizzforte284
      @blizzforte284 6 ปีที่แล้ว +4

      What a motivation to keep going, brother. Keep going; we never know how the world might change.

  • @Lucien86
    @Lucien86 5 ปีที่แล้ว +1

    Agree totally with most of this video. (I am a scientist who has worked in this field since 1990.)
    First, though, take a look at the number of genius-level humans - and notice that most of them come out as dismal failures. Yes, exactly. (The same will apply to AI/ASI too.)
    Building an ASI is quite difficult, but I actually think I know how to do it. The physical difference between a moron and a genius is minimal, and the same applies to AIs and ASIs. In fact a real AI doesn't even need that much computing power - several times a fast PC at most. The memory requirement for a human mind is about 10 to 50 gigabytes. The real problem with AI is that, being a synchronous real-time system and heavily 'governed', it cannot use most of the computing cycles available to it.
    That makes building a working AI sound easy, and it really isn't. Current computers and IT tech simply do not meet various needs of a working AI. Most of the problems are really low-level, like memory management, the need for 'noisy' hybrid logic, and object-level encapsulation. Reliability, especially software reliability, is also nowhere near what a working sentient AI needs.
    Then we hit a tiny little problem that is as much philosophical as scientific - the original/copy problem (really the problem of the 'soul'). The basic solution is (probably) to put the heart of the machine (its state core) into a special memory unit. The memory cells form an 'atomic', indivisible core which cannot be sub-divided or replicated, enforced by rules and guards in the machine's logic. A new core means a new machine, and by design the machine's database will only work with its original core. Without this, strong AI is an extremely dangerous technology that is far too easily abused.
    ASI (Artificial Super Intelligence): one basic way to create an ASI is to put the memory core and sentience loop of the machine inside a quantum-coherent system. I believe human brains already do this, and it's one of the things that makes minds fundamentally different from current computers. (The essential argument appears in Roger Penrose's The Emperor's New Mind.) In a machine, though, the memory can be run at liquid-helium temperatures, meaning that the basic quantum coherency can be much stronger, and this raises its theoretical potential intelligence by a large margin. As well as quantum coherence, such machines might eventually apply FTL coherence to be able to work in limited FTL causal spaces. In effect it becomes a form of precognition. That's the point where the machine (MAYBE) starts to get godlike intelligence.

  • @iriya3227
    @iriya3227 6 ปีที่แล้ว +3

    I re-watched this after doing some research. I realized you missed two HUGE factors when it comes to point 4:
    AGI operates at close to the speed of light, not the super slow ~160 m/s of human brain neurons, and another important factor is memory storage. An AGI can store information completely and access it immediately, instead of encoding it lossily and destructively the way the human brain does.
    So yes, an AGI might not even be smarter than a human brain in its first version, but its brute processing power is what makes it powerful. The speed of light is on the order of two million times faster than neural signal speed, so every second for you would be on the order of weeks of subjective time for it (see the rough arithmetic after this thread). With so much processing power and memory you could teach a dog to be the best Go player.
    Hence why AGI is far more powerful. Its algorithm is not actually smarter or more complex than the human brain's algorithm; it just processes everything a lot faster and remembers everything it has processed perfectly.
    Now the only concern is the goal-setting and motivation of the AGI. There can be a lot of trouble here, with the AGI acting on its own. However, when it comes to its reward/pleasure system, apparently there are ways it can be controlled, though it's quite hard.

    • @musaran2
      @musaran2 6 ปีที่แล้ว

      Also: the biggest gain in processing is always from better algorithms.
      If our first AGI design is extremely inefficient (very likely if it's based on our brains), then it could vastly improve itself just by modifying its software, with no new hardware.
      This has the potential for week/hour/minute singularity stuff.
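
    To put rough numbers on the signal-speed comparison in the parent comment, here is a back-of-envelope Python sketch (the 120 m/s and 160 m/s figures are assumed values for fast myelinated axons, and raw signal speed is of course only one ingredient of effective "thinking speed"):

      # Ratio of electronic signal speed to neural conduction speed, and what that
      # ratio would mean if it translated directly into subjective time.
      SPEED_OF_LIGHT = 3.0e8           # m/s, rough signal speed in electronics
      SECONDS_PER_DAY = 86_400

      for neuron_speed in (120.0, 160.0):   # m/s, assumed fast-axon figures
          ratio = SPEED_OF_LIGHT / neuron_speed
          days = ratio / SECONDS_PER_DAY
          print(f"{neuron_speed:.0f} m/s -> ratio ~{ratio / 1e6:.1f} million; "
                f"1 s of wall time ~ {days:.0f} subjective days")

    With 120 m/s the ratio is about 2.5 million, so one wall-clock second maps to roughly a month of subjective time rather than years; with 160 m/s it is closer to 1.9 million.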

  • @Shortstuffjo
    @Shortstuffjo 8 ปีที่แล้ว +1

    Your videos are amazing, Isaac. Please never stop making them!
    Can't wait for Dark Energy whenever it gets done.

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 ปีที่แล้ว

      Thanks, and it looks like Dark Energy should be out at the end of the month.

  • @ImaBotBeepBot
    @ImaBotBeepBot 8 ปีที่แล้ว +8

    One trillion subs! It might happen if Bob creates another Bob, which creates another Bob, etc... (and they all subscribe)!
    Maybe Isaac is actually the first Bob!

    • @isaacarthurSFIA
      @isaacarthurSFIA  8 ปีที่แล้ว +3

      Isaac's actually Albert, my middle name - long story for an old joke that if I did a mind upload, the upload would be stuck using my middle name instead of my first.

  • @ramuk1933
    @ramuk1933 3 ปีที่แล้ว +1

    AI on the path to super intelligence says, "What a great video, I hadn't thought of that! I should take notes."