
Is the Intelligence-Explosion Near? A Reality Check.

  • Published 14 May 2025
  • Learn more about neural networks and large language models on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/....
    I had a look at Leopold Aschenbrenner's recent (very long) essay about the supposedly near "intelligence explosion" in artificial intelligence development. I am not particularly convinced by his argument. You can read his essay here: situational-aw...
    🤓 Check out my new quiz app ➜ quizwithit.com/
    💌 Support me on Donorbox ➜ donorbox.org/swtg
    📝 Transcripts and written news on Substack ➜ sciencewtg.sub...
    👉 Transcript with links to references on Patreon ➜ / sabine
    📩 Free weekly science newsletter ➜ sabinehossenfe...
    👂 Audio only podcast ➜ open.spotify.c...
    🔗 Join this channel to get access to perks ➜
    / @sabinehossenfelder
    🖼️ On instagram ➜ / sciencewtg
    #science #sciencenews #tech #technews #ai

Comments • 5K

  • @pirobot668beta
    @pirobot668beta 11 months ago +1141

    In 1997, I was working at University.
    A Faculty member gave me an assignment: write a program that can negotiate as well as a human.
    "The test subjects shouldn't be able to tell if it's a machine or a human."
    Apparently, she had never heard of the Turing Test.
    When we told her of the difficulty of the task, she confidently told us "I'll give you two more weeks."
    The point?
    There are far too many people with advanced degrees but no common sense making predictions about something never seen before.

    • @mikemondano3624
      @mikemondano3624 11 months ago +19

      One bad grade shouldn't breed lasting resentment.

    • @darelvanderhoof6176
      @darelvanderhoof6176 11 months ago +127

      We call them "PhD Stupid". It afflicts about half of them. Seriously.

    • @2ndfloorsongs
      @2ndfloorsongs 11 months ago +49

      @@darelvanderhoof6176 and the other half humorously.

    • @jaredf6205
      @jaredf6205 11 months ago +8

      It's just that I can't imagine why it wouldn't happen. There's just no way to get people to stop developing this technology. Even if you were to try, governments would still work on it, and people in their basements would still work on it.

    • @ogungou9
      @ogungou9 11 months ago +14

      @pirobot668beta: There is no such thing as common sense. She didn't lack common sense, that was just stupidity. She was an idiot savant ... I don't know ...

  • @pablovirus
    @pablovirus 11 months ago +591

    I love how Sabine is deadpan serious throughout most videos and yet she can still make one laugh with unexpected jokes

    • @NamelessArchiver
      @NamelessArchiver 11 months ago +9

      In all seriousness, I want to know why I have gone to the kitchen.
      Better yet... why I don't remember that the fridge is empty.

    • @hvanmegen
      @hvanmegen 11 months ago +24

      I love this sane German attitude of hers.. the fact that she spends time to read an essay like this to call him on his bullshit (especially with the conflict of interest) brings me so much hope for the future. We need more people like her.

    • @DanielMasmanian
      @DanielMasmanian 11 months ago +29

      Yes, a German sense of humour is no laughing matter.

    • @rohitnirmal1024
      @rohitnirmal1024 11 months ago +2

      @@DanielMasmanian I had a German professor. Boy, he had a sense of humor. I have not laughed since I met him.

    • @deBRANNETreu
      @deBRANNETreu 11 months ago +3

      @@hvanmegen she's the best!

  • @lokop-bq3ov
    @lokop-bq3ov 11 months ago +3026

    Artificial Intelligence is nothing compared to Natural Stupidity

    • @GnosticAtheist
      @GnosticAtheist 11 months ago +50

      lol - true that. While I am certain we will get there, I hope we can avoid creating AGI that has our natural capacity for stupidity.

    • @Ann-op5kj
      @Ann-op5kj 11 months ago +29

      It's the same thing. Where is AI generated from?

    • @generichuman_
      @generichuman_ 11 months ago +16

      so edgy...

    • @acidjumps
      @acidjumps 11 months ago +22

      I use both about equally at work.

    • @turkeytrac1
      @turkeytrac1 11 months ago +18

      That's t-shirt worthy

  • @calmhorizons
    @calmhorizons 10 months ago +143

    Selling shovels has always been the best way to make money in a goldrush.

    • @ishaanrawat9846
      @ishaanrawat9846 10 months ago +23

      That's what Nvidia has done

    • @Derek_Garnham
      @Derek_Garnham 8 months ago +2

      Apparently, there are longer-term profits available to those who make trousers from tents in a gold rush.

    • @luck484
      @luck484 8 months ago +1

      Seems correct. The reason selling equipment is the better business model, I believe, has to do with risk. Risk is unknown and unknowable, despite what any person or population believes and can "demonstrate." With human decision makers, deception, including self-deception, is part of the formula for making a great fortune. Put another way: people selling shovels are not engaging in deception, and roughly one in a million gold-rush shovel buyers makes a fortune.

  • @msromike123
    @msromike123 11 months ago +1214

    If I will be able to ask Google home why I went to the kitchen, I am on board!

    • @sebastianeckert1947
      @sebastianeckert1947 11 months ago +51

      You can ask today! Answer quality may vary

    • @ThatOpalGuy
      @ThatOpalGuy 11 months ago +18

      this is a real problem for many of us.

    • @HardcoreHokage
      @HardcoreHokage 11 months ago +28

      You went into the kitchen to make a samich.

    • @HardcoreHokage
      @HardcoreHokage 11 months ago +27

      Make me one too.

    • @juiceman110
      @juiceman110 11 months ago +6

      Skibidi

  • @HmmmBlyat
    @HmmmBlyat 11 months ago +938

    All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while humans only need a cheese sandwich, I believe we win this round.

    • @b0nes95
      @b0nes95 11 months ago +94

      I'm always amazed by our energy efficiency as well

    • @nickv8334
      @nickv8334 11 months ago +56

      well, agriculture and food production/disposal is responsible for about 18% of the world's greenhouse gas emissions (excluding transport), so I think the jury is still out on who wins this round...

    • @TheManinBlack9054
      @TheManinBlack9054 11 months ago +38

      Technology improves. Just think of how big and inefficient computers used to be and how small and efficient they are now

    • @jozefwoo8079
      @jozefwoo8079 11 months ago +63

      It's only to train the model. Afterwards it becomes cheaper than humans.

    • @draftymamchak
      @draftymamchak 11 months ago +4

      Our efficiency doesn't matter; the creator is superior to the creation, so whatever AI does, it will be because we created it. Sure, it'll also be responsible for what it does, but for now I'm worried about generative AI being too good and being used to fake evidence etc.

  • @michaelbuckers
    @michaelbuckers 11 months ago +481

    There's another issue, with language models anyway. The training data already includes virtually 100% of all text written by humans, including the internet. But the internet is now flooded with AI-generated text, so you can't use the internet anymore, because that would be the AI version of the Habsburg royal lineage.

    • @Michael-Has-Opinions
      @Michael-Has-Opinions 11 months ago +24

      "The learning database already includes virtually 100% of all text written by humans, " No, before starting training they run all the text through AI inference of the previous model. This improves quality by a significant percentage. In reality, there is always going to be another layer of AI between the current one being trained and the data.

    • @michaelbuckers
      @michaelbuckers 11 months ago

      @@Michael-Has-Opinions It improves metrics, not quality. Sure enough, when AI is predicting its own text, the perplexity will be lower than when it predicts human text. And this is an especially huge issue for small models fine-tuned on ChatGPT. People are already sick and tired of unprompted "as a language model" and similar garbage in their anime character simulator chatbots, and it's only going to get worse when next-gen ChatGPT is fine-tuned on last-gen ChatGPT.

    • @bbgun061
      @bbgun061 11 months ago +50

      That doesn't make sense.
      Garbage in, garbage out.
      Current AI models produce garbage a lot of the time. If you use that to train another AI model, it's going to produce more garbage.

    • @tannerroberts4140
      @tannerroberts4140 11 months ago +31

      I think it's good to remember that, in terms of societal contributions, the quality of human activity in general is garbage in. But society got built. We waste our time, our money, our effort, get pointlessly hooked on rage bait, romcoms, addictions, etc. One might say we're mostly enjoying life, but in terms of societal contribution, it's pretty much trash.
      An honest look at even the leaders in every field of study shows that each leader is either somebody with one good idea that attracted a lot of positive attention, or an exemplary personality that attracts a lot of collective intelligence.

    • @michaelbuckers
      @michaelbuckers 11 months ago +3

      @@tannerroberts4140 Language models replicate training data. Between replicating humans and replicating itself, it's a very easy pick.
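The "Habsburg lineage" worry in this thread can be sketched numerically. This is my own toy model, not anything from the comments: treat a "model" as a Gaussian fitted to samples, and train each generation only on the previous generation's output. Estimation error compounds across generations, and the fitted distribution tends to drift and narrow rather than stay faithful to the original "human" data.

```python
import random
import statistics

# Generation 0: the "human data" distribution.
random.seed(0)
mu, sigma = 0.0, 1.0
history = [sigma]

for gen in range(10):
    # Each generation sees only a finite sample of the previous model's output...
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    # ...and fits itself to that sample, inheriting the estimation error.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

# history traces how far the model's spread has wandered from the original 1.0.
print(history[0], history[-1])
```

With only 50 samples per generation the fitted parameters random-walk away from the originals; real model-collapse studies show the same compounding effect at scale.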

  • @RigelOrionBeta
    @RigelOrionBeta 11 months ago +131

    In this post-truth era, what people are searching for isn't truth, but rather comfort. They want someone to tell them what the answer is, regardless of whether the answer is true.
    There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier just to point at an algorithm and listen to it. That way, no one is responsible when it's wrong: it's the algorithm's fault.
    AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom isn't true anymore, because these language models are confident in their answers. Confident does not mean correct.

    • @modelenginerding6996
      @modelenginerding6996 11 months ago +8

      A major accuracy problem with AI is not only does it train itself on information from the internet, it is also training on itself and creating a vicious feedback loop. I had a location glitch in an area with poor cell reception saying I had visited a vape shop. I got no-smoking ads from my state for two years! My social credit score has been marred 😂.

    • @thumpthumper9856
      @thumpthumper9856 11 months ago +6

      With the advancements in digital twins and replicators, nuanced synthetic data is becoming better and better. The garbage in garbage out narrative becomes less and less salient. Why worry about finding new data when fake data is just as good? At least for tasks involving computer vision and movement, to be fair.

    • @danlightened
      @danlightened 10 months ago +4

      We're in the post truth era? 🤔😕

    • @FirstTheyCameForThe1mmigrants
      @FirstTheyCameForThe1mmigrants 10 months ago +2

      I get frustrated with ChatGPT because it doesn't respond like a real person would. My experiences anyway. It will never have sentience or consciousness so it can never really understand how to respond like a person. It always feels robotic to me. Of course that could just be because I know it's artificial.

    • @Beremor
      @Beremor 10 months ago

      @@FirstTheyCameForThe1mmigrants I've had the same experience. Once I asked some questions that require some interpretation or an understanding of the subject matter beyond the wording of the question, it completely breaks down and gives milquetoast, superficial and half-baked answers.
      Large language models are incapable of expressing the limits of their capabilities. They're unable to adequately express how confident they are in the statements they're making. Ultimately, their answers are about as useful as page one of a well-worded google search, and unfortunately I already know how to word google searches well. ChatGPT has been an utter waste of my time and so has every tutorial about how to "properly word prompts."

  • @bulatker
    @bulatker 11 months ago +826

    "I can't see no end" says anyone in the first half of the S-curve

    • @AmarantiStellar
      @AmarantiStellar 11 months ago

      Isn't that kinda the point of the first half of an S-Curve? The end cannot be predicted and could occur in 1 year or 50 years. It all looks the same either way.
      It'd be pretty silly to say the end is in sight when you're still on the straight part of the S-Curve.

    • @djayjp
      @djayjp 11 months ago +9

      Double negative....

    • @ericlipps9459
      @ericlipps9459 11 months ago +2

      A hint that Sabine is not a native English speaker. Not that there's anything wrong with that.

    • @trixer230
      @trixer230 11 months ago +2

      Everyone go home, this is the best comment this video can achieve!

    • @GuyJames
      @GuyJames 11 months ago +64

      my baby has doubled in size in just a few weeks since his birth. I'm sure he will be larger than the planet in less than a year's time

  • @zigcorvetti
    @zigcorvetti 11 months ago +146

    Never underestimate the capability and resourcefulness of corporate greed- especially when it's a collective effort.

    • @ericrawson2909
      @ericrawson2909 11 months ago

      Exactly what I was thinking. And not just corporations. Politicians, and in fact most people. They have shown that they will deny truth when it is pointed out to them by a well qualified person, if it conflicts with their own interests. That could be profit, power, or simply virtue signalling to fit in with the majority. If they ignore, cancel and smear well respected experts in a field, why would they act on the advice of an AI, even if it was supremely intelligent and God like in its desire to help humanity? AI will not save the world. Like all other technology it can be used for good or evil purposes. Probably the latter more often than not.

    • @domenicorutigliano9717
      @domenicorutigliano9717 11 months ago +5

      everyone is underestimating

    • @ericrawson2909
      @ericrawson2909 11 months ago +5

      I am getting sick and tired of my comments getting deleted. I did not use any "bad" words, I guess my amplification of the criticism in the original post here to other groups was too close to home for the vested interest groups. I feel very angry, and YT, making your users angry is not a good business strategy.

    • @dascreeb5205
      @dascreeb5205 11 months ago

      ?

    • @goldminer754
      @goldminer754 11 months ago +9

      This project of AGI would need hundreds of billions, or rather trillions, of dollars, plus cooperation with other companies, plus major support from a powerful government, and it won't bring any profits for many, many years. And it is not even guaranteed that building AGI is feasible, which makes it an extremely risky investment. Fortunately, corporate greed almost entirely revolves around short-term profits, so I am pretty certain no such giga-project will be started any time soon, especially considering how much energy it needs and the tiny problem of climate change still having to be meaningfully addressed.

  • @k.vn.k
    @k.vn.k 11 months ago +296

    “I can’t see no end!”
    Said the man who earned money from seeing no end.
    😅😅😅 That’s gold, Sabine!

    • @wellesmorgado4797
      @wellesmorgado4797 11 months ago +3

      As someone already said: Follow the money!

    • @Tom_Quixote
      @Tom_Quixote 10 months ago +2

      If he makes money from seeing no end, why can't he see no end?

    • @k.vn.k
      @k.vn.k 10 months ago +1

      @@Tom_Quixote so that he keeps making money 😂

    • @shenshaw5345
      @shenshaw5345 10 months ago +2

      That doesn’t mean he’s wrong though

    • @AndiEliot
      @AndiEliot 10 months ago +5

      @@shenshaw5345 It doesn't mean he's wrong, I totally agree with that, but what Sabine is doing is super important: when judging someone's strong opinion or thesis, always FIRST see what that person's agenda is and what game they have skin in. This is proper due diligence.

  • @vhyjbdfyhvjybv9614
    @vhyjbdfyhvjybv9614 10 months ago +21

    I like to compare this to game development. Imagine someone saying in 2002 that because we managed to double the number of polygons we can render every 2 years, photorealistic games are 10 years away. 23 years later, it turns out that making photorealistic games is a very difficult topic that requires lots of problems to be solved, some easy, some super hard. E.g. today we can render lots of polygons and calculate realistic lighting, but destructible environments are not solved, and realistic realtime water simulation is far away. And we know that rendering lots of polygons is not enough: animations and shadows, especially from large objects, are hard problems.

    • @tckgkljgfl7958
      @tckgkljgfl7958 9 months ago +1

      feels like a flawed example. We basically have 'pretty much' photorealistic capabilities. Compare the new Unreal Engine to, idk, any SNES title

    • @vhyjbdfyhvjybv9614
      @vhyjbdfyhvjybv9614 9 months ago +5

      @@tckgkljgfl7958 I'm saying that if we'd extrapolated the trend in video game graphics from, say, 1990-1998 data (SNES to Quake 2), it would have come out that in 2025 you can't distinguish a video game from reality. This is not the case today; we are very far from it

  • @anthonyj7989
    @anthonyj7989 11 months ago +120

    I am from Australia and I totally agree with you. Australia is one of the biggest users of AI in mining, but a lot of people don't understand why. If you read through the comments about driverless trucks and trains in Australia, people have no idea of just how remote, humid and hot the northern parts of Australia are.
    People working in iron ore mining in Australia are just hours away from being seriously dehydrated or dead. For iron ore mining to be carried out at the scale that it is, it needed something better than the modern human, who is not able to work outside of an air-conditioned environment in the remote northern locations of Australia. Therefore, mining companies had to come up with something that can work in a hostile environment. My understanding is that AI in mining has not reduced the number of people, just moved them to a city in an air-conditioned building.

    • @feraudyh
      @feraudyh 11 months ago +31

      That gets the prize for the most interesting thing I've read today.

    • @hussainhaider2818
      @hussainhaider2818 11 months ago +11

      I don’t get it, how do you mine ore if the miners are back in the city? You mean remote controlled robots?

    • @conradboss
      @conradboss 11 months ago +2

      Hey, I like Australia 🇦🇺 😊

    • @MyBinaryLife
      @MyBinaryLife 11 months ago +44

      It's not AI, it's just automation

    • @rruffrruff1
      @rruffrruff1 11 months ago +6

      It has definitely reduced the people per output, else it wouldn't be done.

  • @davidbonn8740
    @davidbonn8740 11 months ago +131

    I think there are a couple of problems here that you don't point out.
    The biggest one is that we don't have a rigorous definition of what the end result is. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction, and we can expect people to do exactly that.
    Another is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up being.
    There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have, and that people who should know better are all too willing to handwave away. One example: there are complex behaviors in the animal world (honeybee dances are a good one) that would be very hard to replicate using neural networks all by themselves. What that other piece is remains unspecified.

    • @petrkinkal1509
      @petrkinkal1509 11 months ago +3

      @robertcopes814 Well it learns what is the most likely next word in a sentence. :)

    • @timokreuzer381
      @timokreuzer381 11 months ago

      Humans are extremely inefficient learners. You have to shove petabytes of video, audio and sensoric data into them for years, before they show even the slightest signs of intelligence.

    • @Zeroisoneandeipi
      @Zeroisoneandeipi 11 months ago

      @robertcopes814 I agree. I asked ChatGPT 4o to create a maze with labels using HTML and JavaScript. It could do this fine. Then I took a screenshot of the maze and asked it to solve the maze, and it just "walked" from A1 to F6 in a diagonal line through all walls. I asked again to do it without walking through walls; it changed the path a bit, but still walked through walls. So it does not understand what a maze is, but can create code to generate a maze just because it was trained with this code somewhere from the web.

    • @asdfqwerty14587
      @asdfqwerty14587 11 months ago +13

      I would say by far the #1 problem with the current models is that they aren't really designed to "do" anything. No matter how advanced they get (without completely redesigning it from the ground up), their only goal is to mimic the training set.. they have no concept of what it means to be better at what it's doing beyond just comparing it to what people input as the training data, which makes them incapable of learning anything on their own (because anything they try to learn fundamentally must be compared to what a human is doing.. so if there are no humans in the equation, it has nothing to compare it to and it can't do anything beyond just guessing completely randomly).
      I think that the LLMs are on the completely wrong track if they're aiming for any kind of general intelligence. I think that if they want to have an actually intelligent AI the AI must learn how to communicate without being explicitly programmed to do so (ie. it would need to have some completely unrelated goal that "can" be done without communicating with anything and then learn that some form of communication makes it better at achieving its goal) - it would of course be a lot harder to do and it would probably not seem very smart for a long time, but it would be 100x more impressive to me if an AI learned how to speak that way than anything that LLMs are doing, because that would actually require the AI to understand the meaning of words rather than just being able to predict what words come next in a conversation.

  • @Stadtpark90
    @Stadtpark90 11 months ago +31

    Exponential curves usually stop being exponential pretty fast. The surprising success of Moore’s law makes IT people think that’s normal, which it isn’t.

    • @Michael-Has-Opinions
      @Michael-Has-Opinions 11 months ago

      Everyone knows this. The question is whether the curve dies out before AI intelligence exceeds our intelligence or not. If it is the latter, there will be serious problems. I suspect the former.

    • @davidradtke160
      @davidradtke160 11 months ago +6

      Most exponential curves are actually S curves.

    • @tabletalk33
      @tabletalk33 11 months ago +2

      Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

    • @jozefcyran2589
      @jozefcyran2589 11 months ago +2

      So what? A 50-year run can improve relative power and ways of working by orders of magnitude, and that's usually enough to be excited about. AGI could become incredibly capable incredibly quickly

    • @juliam6442
      @juliam6442 9 months ago +2

      In this case, AI can create more AI and robots can create more robots. I don't think we can necessarily generalize from the past on this one.
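The S-curve point made in this thread is easy to see with toy numbers (my own illustration, not from the comments): a logistic curve f(t) = K / (1 + exp(-r·(t - t0))) is indistinguishable from an exponential while f(t) is far below the ceiling K, then growth stalls as f(t) approaches K.

```python
import math

def logistic(t, K=1000.0, r=0.7, t0=10.0):
    """S-curve with ceiling K, growth rate r, midpoint t0 (toy parameters)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early on, each step multiplies the value by roughly e**r, like an exponential.
early_growth = logistic(2) / logistic(1)
# Late on, the same step barely moves the value: the curve has saturated.
late_growth = logistic(19) / logistic(18)

print(early_growth, late_growth)
```

An observer sitting at t = 1 sees a steady doubling and "can't see no end"; only the data near the inflection point reveals the ceiling.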

  • @Zero_City
    @Zero_City 10 months ago +29

    Nailed it... this technocentric mindset is pervasive in so many fields, but rarely scrutinised holistically for its resource needs, land-use changes, social impacts

    • @hue6
      @hue6 9 months ago +1

      okay, but picture this: AGI comes at the cost of a few hundred billion, 1 million AGI scientists are able to work out a net-positive nuclear fusion reaction, boom, infinite energy. The data part I'm not too sure about; I heard some researchers (I forget who) were able to train AI to create new data. Sounds cool

    • @hue6
      @hue6 9 months ago +1

      Don't underestimate the scientific discovery potential of a superintelligent AI. It's hard to believe these discoveries are possible, and hard to imagine something more intelligent than us can really exist, but when it happens we will see its superhuman capabilities for ourselves

  • @paulm.sweazey336
    @paulm.sweazey336 11 months ago +5

    Two points: (1) It was great that you put a little "blooper" at the end, after the advert. It was just sort of an accident that I saw it, but I'm checking from now on, and that may keep me around to watch the money-making part. (2) I suggest that you introduce your salesperson self and say "Take it away, Sabine!" Then you don't have to match the blouse, and I will quit being annoyed by the change in hair length.
    Thanks for being so very rational. So refreshing every day. Haven't gotten my SillyCone Valley friends addicted to you yet, but I'm working on it.
    And do you publish some sort of calendar of speaking engagements? I live a convenient commuting distance from either Frankfurt or Heidelberg, and I'd love to attend some time.

  • @billcollins6894
    @billcollins6894 11 months ago +103

    Sabine, I worked on AI at Stanford. There are two areas where people have misconceptions.
    1) We do not need new power to get to AGI. Large power sources are only needed if the masses are using AI. A single AI entity can operate on much less power than a typical solar field. It does not need to serve millions of people. It only needs to exceed our intelligence and become good at improving itself. It can serve a single small team that directs it at focusing on solving specific problems that change the world. One of the early focus issues is designing itself to use less power and encode information more efficiently.
    2) No new data is needed. This fallacy assumes that the only way to AGI is continuing to obtain new information to feed LLMs. All of the essence of human knowledge is already captured. AI only needs to understand and encode that existing knowledge more efficiently. LLMs are not the future, they are a brief stepping stone.

    • @tabletalk33
      @tabletalk33 11 months ago +5

      Very interesting. Thanks for the clarification.

    • @PracticalAI_
      @PracticalAI_ 10 months ago +11

      The energy will be used to train the models, not to run them… please check the paper

    • @billcollins6894
      @billcollins6894 10 months ago +8

      @@PracticalAI_ The energy used to train the models is inconsequential in the long run. GPUs are not the future for AI.

    • @PracticalAI_
      @PracticalAI_ 10 months ago

      @@billcollins6894 have you watched the video or worked in the field? To train a model you need GW of energy for months; that's why it costs millions. Your idea that the AI will design itself to run on less power is "possible," but not in the short/medium term. These machines are autocomplete on steroids at the moment. Good for marketing, terrible for designing new things

    • @Disparagingcheers
      @Disparagingcheers 10 months ago +9

      Maybe I’m misunderstanding the definition of AGI, but doesn’t narrowing scope of the model to a small team training/using it for specific use-cases contradict what AGI is? Thought it was supposed to be generalized for anything?
      Are you suggesting all of the essence of human knowledge is captured on the internet? Idk that that’s necessarily true, and also I believe there’s a lot we don’t know? So wouldn’t that mean for a model to continue to learn beyond what we are already capable of it would need to be able to conduct experiments and capture new training data?

  • @KageSama19
    @KageSama19 11 months ago +33

    I love how even AI depicts lawmakers as asleep.

    • @makinganoise6028
      @makinganoise6028 11 months ago

      But are they? Maybe this is the plan, societal collapse, the West seems to be doing everything possible to destroy itself with mass illegal migration, anti family WEF cult agendas and WW3 with Russia anytime soon, destroying huge swathes of middle income jobs, fits into the picture

    • @PMX
      @PMX 11 months ago +9

      That was definitely the prompt they used. And they purposely used Stable Diffusion 3, which was just released and is being mocked for how bad it is at generating humans, so it would be funnier.

  • @rgonnering
    @rgonnering 10 months ago +1

    I love Sabine. She is brilliant and has a great sense of humor. Above all she explains complex issues, and I think I understand (some of) it.

  • @scythe4277
    @scythe4277 11 months ago +31

    Sabine should be part of a comedy duo because she delivers hilarious lines with a deadpan face that is just brutal.

    • @5nowChain5
      @5nowChain5 11 months ago

      The other half of the duo is her long-suffering husband, who should get an award for his infinite patience. (Oh, and the bloopers at the end were hilariously unexpected gold 😂😂😂)

    • @sicfrynut
      @sicfrynut 11 months ago

      reminds me of Monty Python skits. those guys were so skilled at deadpan humor.

    • @friskeysunset
      @friskeysunset 11 months ago +1

      Yes. Just yes, and now, please.

  • @Virgil_G2
    @Virgil_G2 11 months ago +150

    This sounds more like a horror story plot than a future to be excited about, tbh.

    • @2ndfloorsongs
      @2ndfloorsongs 11 months ago +5

      That all depends on how excited you can get about a half full glass.

    • @t.c.bramblett617
      @t.c.bramblett617 11 months ago +6

      It's exactly like the Matrix, including the limiting factor of energy that the Matrix movies also ignore. You can't generate energy from a closed system, and manufacturing and computing both require massive amounts of energy and as she pointed out, obtaining material for building infrastructure itself requires energy that has to be focused and channelled as efficiently as possible.

    • @rruffrruff1
      @rruffrruff1 11 months ago +8

      It will be exciting for the few people who own the AI... at least until the AI gets clever enough to own them.
      Honestly I think the struggle for domination will result in devastation far beyond our wildest nightmares... and there is no way we can stop it. Our best hope is that some hero develops and unleashes a compassionate AI first... that becomes king of the world.

    • @RedRocket4000
      @RedRocket4000 11 months ago +3

      @@rruffrruff1 No, we can stop it: turn off all power. A Dune-style flat-out ban on computer-like devices would also work, allowing only single-purpose electronics that can't be repurposed for other tasks.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 11 months ago +4

      Maybe. But I'll say a good part of the entire analysis is BS. A zeitgeist of the LLM success, but with no clue about the fact that generative AI is a misfit for most practical work.

  • @truejim
    @truejim 11 months ago +7

    For any particular mode of AI (language, image, video, etc.) the bottleneck isn't the power of the hardware or the goodness of the algorithm. The bottleneck is the availability of large amounts of TAGGED data to use for training. All neural networks are a curve-fit to some nonlinear function; the tagged data is the set of points you're fitting to. Saying "I have lots of data, but it's not tagged" is like saying I have all the x coordinates for the curve fitting, I just lack the y coordinates.

  • @j.c.3800
    @j.c.3800 5 months ago +2

    Right on, Sabine. We've reached a human intelligence ceiling. It shouldn't be hard to surpass it with a machine, as machines already do in all other human skills.

  • @jensphiliphohmann1876
    @jensphiliphohmann1876 11 months ago +13

    10:00
    The neutron-free fusion Zungenbrecher (tongue twister) is hilarious. It reminds me of a Loriot sketch where Evelyn Hamann is struggling with English pronunciation. 😂❤

  • @patrickmchargue7122
    @patrickmchargue7122 11 months ago +82

    Actually, according to the graphic you slashed up, Ray Kurzweil predicts AGI by 2029, not 2020.

    • @katehamilton7240
      @katehamilton7240 11 months ago

      So what? Industry people are hyping AGI to make money. AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'

    • @brendanh8193
      @brendanh8193 11 months ago +14

      And he puts the singularity at 2045. AGI is parity, not super.

    • @polyphony250
      @polyphony250 11 months ago +15

      @@brendanh8193 It's looking like an out-of-this-world, shockingly good prediction today, then, considering when it was made.

    • @brendanh8193
      @brendanh8193 11 months ago +22

      @@polyphony250 Agreed. I do get a little annoyed with SH at times for failing to understand the nature of exponential predictions. Take Vernor Vinge's prediction: in the same speech, he put bounds on it, with 2030 being his upper bound. We haven't got there yet, but she basically ridiculed him for making such a prediction.

    • @ZenithBlade101
      @ZenithBlade101 11 months ago +3

      He also predicted that we would have one world govt by 2020…

  • @thedabblingwarlock
    @thedabblingwarlock 11 months ago +15

    Able to process information faster than a human? Certainly. Computers have been able to do that for decades now.
    Able to do anything a human can do better than a human can do it? Nope, not a chance.
    People keep forgetting that we don't really know what intelligence is on a quantifiable level. We have a somewhat intuitive grasp of what intelligence is, but as far as I am aware, we don't have a way to measure it and compare except in the broadest sense. We don't fully understand how our brains, or brains in general, work. That's not even getting into things like the synthesis of ideas, one of the cores of creativity; aesthetic sensibilities; and a dozen other highly subjective subjects. Simply put, we don't know enough about what goes on under the hood to put numbers to it.
    And that's a problem because computers only deal in numbers.
    Which leads me to the second thing people keep forgetting. Most modern AI models that I am aware of use a complex set of vector and probability equations to go from input to output. To grossly oversimplify things, it's just one big math equation with an algorithm at the start to tokenize the input into a form that the computer program can process, and another at the end to make the output processable by the person providing the input. Equations and algorithms don't have the capability to be self-aware, at least not in any sense of an intelligent being. Nothing will change that, no matter how hard you might wish for them to be so. Nor do they have the ability to generate new ideas or combine disparate ones into a cohesive whole.
    Thirdly, computers, and thus AI, do not have an architecture anywhere close to that of a human brain, or any brain for that matter. They're trying to translate a very analog process into a digital one without truly understanding everything going on in the analog process first, and boy howdy, is that process complex! A friend of mine pointed out how many of these projects don't have a psychologist on board, so how can they know what their target is without the person whose entire career has been to study the thing they're trying to replicate? In short, these guys don't even have an expert on intelligence on staff, or at least the closest thing we have to an expert that I am aware of.
    What these guys sound like to me is the computer science equivalent of doctors and lawyers. They are very smart people in a very mentally demanding field, but they also happen to know they are smart, and they think they are smarter than anyone else. Because they think they are smarter than anyone else, they think they can do anyone else's job. They can't. I worked in IT for almost ten years, and some of our most difficult clients to deal with were doctors and lawyers. They would question everything on a project, they'd insist on using systems that were over a decade out of date, and they'd also imply that they could do our jobs better than we could.
    General AI or Super AI isn't only a few years away. I doubt it's even a few decades away. I think, like fusion, the timelines are going to be much, much longer than anyone wants to admit. Ironically, I think we are much closer to fusion as an energy production method than we are to having anything close to a human-like AI. We can generate fusion reactions, and we've managed to get more juice out than we pumped in on at least one occasion. It's a matter of refinement and iteration at this point.
    We aren't even at the stage we were at with fusion in the 30's and 40's with AI, I think. We don't understand everything that's going on under the hood with intelligence. We can't model it. We can't quantify it. We can't even agree on what it is beyond the broadest strokes. Until we can do that, we aren't going to get anything intelligent out of AI, and all it will ever be is a complex vector equation tuned on probabilities.
    And this isn't even getting into the steps some are taking to protect their work from being scraped by spiders looking for AI training (read: configuring, because that's what they are doing; tuning would also be more accurate) data. And some of those measures are aimed at poisoning the well. If those measures become commonplace, I don't see the current crop of LLM- and LMM- (Large Language Model and Large Media Model) based AIs getting any better, and I don't see that as a viable option going forward.
    This isn't the first time we've seen futurists touting that AI and automation will take over large swathes of the current job market. I remember reading an article over a decade ago about how in ten years we'd see close to 50% of the workforce replaced by AI and automated systems. True, AI has made some jobs obsolescent, but as we seem to be finding out about every decade or so, computers and computer programs aren't ready to do what a human can do. They get closer every year, but the pace isn't nearly as fast as some would like you to believe.
    As for me, I have a human centric view of this. I believe that AI can be a powerful tool, but right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised. I could be wrong, but I don't think I am. I've seen it with 3d-printing (additive manufacturing.) I've seen it with 3d televisions and media (can't remember the last time I saw this as a selling point.) I've seen it with cryptocurrencies and NFTs (hopefully I need not explain this one further.) And, these are all examples from just the last ten to fifteen years. Time and time again we see technology as a fad that is around for a few years, then the hype fizzles and dies, sometimes taking the tech behind the build up with it.
    But then again, I'm just some web developer from Alabama. What do I know?
    P.S. I almost forgot to add: that whole "robots do all of the work" thing seems to have a chicken-and-egg problem, and that's before we even get into the myriad engineering and manufacturing challenges that need to be solved for just the Gen 1 bots.
    This is why you should look outside of your field, folks! It helps build an appreciation for how hard some of those "minor challenges" might be in reality.

    • @tabletalk33
      @tabletalk33 11 months ago

      Very interesting, great comment! These developers of AI who are making all sorts of predictions would do well to read what you wrote: "...right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised." Robert J. Marks says the same thing. See his book: Non-Computable You: What You Do That Artificial Intelligence Never Will (2022).

    • @TheNordicMan
      @TheNordicMan 7 months ago +1

      @thedabblingwarlock Awesome comment!!

    • @eduardofreitas8336
      @eduardofreitas8336 3 months ago +1

      As a psychologist: Thank you. What we currently have with LLMs is SO different from what we call intelligence.

  • @pauek
    @pauek 8 months ago

    Sabine, you need to make T-shirts with some of your quotes... "don't give up on teaching your toaster to stop burning your toast" is just perfect.

  • @matthewspencer972
    @matthewspencer972 11 months ago +56

    It is surprisingly difficult, when one tries to converse with pure software engineers, to get them to accept that the laws of physics apply to them and cannot be bypassed by sufficiently clever coding. You get the same sort of thing from genetic engineers, who simply won't accept that endless fiddling with a plant's DNA will not compensate for the absence of moisture or other nutrients in the soil or other growing medium.

    • @TedMan55
      @TedMan55 11 months ago +19

      I'm a software engineer who came from a math- and physics-heavy background, and I was shocked to learn that most programmers didn't know or like math, which I'd just assumed… probably explains a lot about the current state of programming

    • @GorlockTheDestroyer-p1o
      @GorlockTheDestroyer-p1o 11 months ago +6

      @@TedMan55 How do you even become a software developer without loving math? As someone terrible at it and coding, I assumed you'd have to swear by your high school math book to even get a chance at compsci

    • @egg-mv7ef
      @egg-mv7ef 11 months ago +6

      @@GorlockTheDestroyer-p1o thats completely wrong. math doesnt have as much to do with software engineering as u think. i mean, if youre making physics model visualization ofc u need math lol but for 50% of the usecases u dont need any math. the SEs that know math just have more opportunities cause they can work on more complex stuff like game engines etc

    • @TedMan55
      @TedMan55 11 months ago +6

      @@egg-mv7ef It's not like you can't program without math skills; it's just that, in my opinion, having a mathematical mindset helps you to think in a more rigorous way, be clear about definitions, and it can even give you some neat shortcuts for certain algorithms

    • @matthewspencer972
      @matthewspencer972 11 months ago +5

      @@TedMan55 I had to work with one who didn't believe that voltage really mattered. We were working in the field of industrial automation; specifically a production line for a well-known Japanese car-maker in Swindon. The customer had specified Japanese PLCs (the only other choices are American or German), and when one of these arrived and needed to be set up so the software engineer could load his software into it and run a few tests, it came with a power cable terminating in the sort of 110V connector that's more or less a global standard for these things. I went off looking for a 240V to 110V adapter, into which it would have plugged with no problem, had he *waited* for me to do something he considered pointless and unnecessary.
      As I was making my way back, I heard "why are the indicator lights so bright? It's F***ing blinding me!" and my heart sank as my eyebrows rose. The software engineer had removed the connector and stuck a UK-standard 13-amp plug on the cable, plugged it into the office 240V mains....
      I think that's why, these days, almost all domestic computer kit has switched-mode PSUs that will work with whatever the idiots plug them into.
      The software engineer secured a senior position at WIN.com, mainly because he was equipped with a reference so glowing (almost as brightly as the PLC had) that he couldn't really have failed in his mission to find a new job!

  • @supadave17hunt56
    @supadave17hunt56 11 months ago +18

    She, as almost always, is level-headed and she makes some very good points. I still think she's wrong to think this won't happen quickly (5 to 10 yrs.). I'm not here to change anybody's mind or have a debate, or even to say "I told you so!" later on. I'm currently terrified of AGI when it'll be able to improve itself. Whether we can control it or not, whether it's conscious or not, it will be more dangerous than anything humans have created in the past. If you've ever felt bad for the ants when you built your garage or paved your driveway, or if you think you know yourself better than anyone could, or if you think cows can stop the farmers from going to the slaughterhouse, or you think you can explain your new iPhone to your cat or dog with clarity. Understand that we will no longer be the dominant form of intelligence, and what that entails is …………. It'd be nice to slow down, but money is saying otherwise, and I believe there's more behind the door than what the public is seeing. Stay informed.

    • @gibbogle
      @gibbogle 11 months ago

      Science fiction.

    • @Jaigarful
      @Jaigarful 11 months ago

      Silicon Valley has all the reason in the world to overpromise and scare people: overpromising to encourage investment, and scaring people to encourage investment in measures to keep AI under control.
      I think it's a lot like the future scenes in Back to the Future. We have this picture of a future with technologies like hoverboards and hovercars, but the physics just don't allow for it. Instead we have a lot of technological development in a way we couldn't really predict.
      Personally I don't think AGI will happen in a way that makes it reliable. We'll see the use of AI expanding, but it's like those flying cars in Back to the Future.

    • @Ligma_Shlong
      @Ligma_Shlong 11 months ago +8

      @@gibbogle thought-terminating cliche

    • @supadave17hunt56
      @supadave17hunt56 11 months ago

      @@gibbogle What is science fiction? That humans are not the pinnacle of intelligence? Or maybe you've given anthill homes 2-week eviction notices before you ever build anything or mow your lawn? Maybe you've been able to stop big business from wanting more of the almighty dollar? Maybe you haven't taken a deep dive into how neural nets operate, or understood that our civilization's ability to communicate with language has a lot to do with why we are currently the dominant species on this planet? Maybe you can't see how our brains are very similar to "next most appropriate word simulators" in our communication? Maybe you could explain to my cat how iPhone apps work? I'm very interested in what you think is "science fiction" as well as what you think that means. Einstein thought his math was wrong about the possibility of black holes being real (science fiction). I'm no scientist, but I believe we may be intentionally or unintentionally led to our demise with smiles on our faces, oblivious to how we are being manipulated to accept a fate like it was something we thought we wanted. I'm scared for us, more than I have been of anything in my life. So please elaborate, if you would; maybe change my mind? Anybody's input welcome. With AI I'm hoping for the best, but our track record won't work with thinking we'll cross that bridge when we get there (it will be too late, with no do-overs).

  • @kiwikiwi1779
    @kiwikiwi1779 11 months ago +31

    "I can't see no end!" says man who earns money from seeing no end.
    Amazingly put. So many of these AI "experts" are either grifters in the progress of duping people, or are so wrapped up in their own expertise and personal incentives that they'd just rather keep the gravy train going. :D

    • @Apjooz
      @Apjooz 11 months ago

      Why would it end? No reason.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 months ago +2

      @@Apjooz "Me human. Me most intelligent. Computer can no intelligent. Me intelligent. Computer will not more intelligent than me because me say so. ME MOST INTELLIGENT"

    • @anthonybailey4530
      @anthonybailey4530 11 months ago +2

      Man is twenty. Man left hugely rewarding OpenAI job due to his concerns. Man does need to eat. Man underestimates cynicism.
      Don't look for excuses to dismiss. Engage with the arguments and assess probabilities.

    • @RawrxDev
      @RawrxDev 11 months ago +1

      @@hardboiledaleks9012 Childlike understanding of the concerns with AI hype. Reddit tier comment

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 months ago

      ​@@RawrxDev My comment had nothing to do with the actual valid (if not a bit uninformed) concerns about A.I hype. I was mocking the usual "intellectuals" take on AGI. The ones with no expertise in the field who can't tolerate the thought that intelligence can be reduced to a calculation.
      As for you, I think your comment is very self descriptive as far as "childlike understanding" and "reddit tier comment" are concerned. Good job.​

  • @robwilson7324
    @robwilson7324 2 months ago +1

    I predict in ten years everyone will still be having these same conversations.

  • @Bystander333
    @Bystander333 11 months ago +8

    Nice catch, Sabine! My reaction was pretty much the same after you explained "early twenties, brief gig at company with Oxford in the name and moved to SF"; am guessing some parental support. Basically that left me super sceptical.

  • @amdenis
    @amdenis 11 months ago +9

    I love your channel and your take on physics and related subjects. I have about 45 years of experience in AI, albeit it started with what they called "Expert Systems" and barely evolved through Bayesian ANFI and general ML prior to about 10 years ago, when I/we all went head-down into DL/NN. A few things you should know.
    The flattening of the S-curve to a "mature" sustainable level is projected, according to two independent studies, at 100-million times Moore's Law. Presently we are at roughly 1,200% efficiency/price growth per year, and a stacked exponential that increases it by roughly 44% YOY. So, next year it will be roughly 40 times Moore's Law, and so it goes.
    Second, new sharded, federated model approaches, coupled with more efficient algos, training methods and other evolutionary factors, are cutting the cost per ISO unit trained by 70% per year, based on numerous studies and projections by research groups and companies. That covers a multitude of power-demand woes. Observationally, this has all followed very consistently for years now... from about Moore's Law 12 years ago to where we are now.
    You will very likely see the beginning of what many will say is "on the spectrum" of true AGI within 9 months. Some will assert that it is already here with agentic AI. If we define AGI as human-level or above performance, and we average across current AIs, we have above-100 IQ and creative capabilities roughly on a par with the average human. Not a high bar, but when you add the ANSI (artificial narrow superintelligence) of AlphaZero, AlphaFold and other such systems in civilian and military use, we do average better than any individual AI. And we can integrate multiple AIs, which is actually what my company does, and this has yielded definite coding, research, Bayesian dif-diag and other capabilities beyond any human I know.
    So....

  • @haraldlonn898
    @haraldlonn898 11 months ago +112

    Use memory foam soles, and you will remember why you went to the kitchen.

    • @naromsky
      @naromsky 11 months ago +2

      Subtle.

    • @christopherellis2663
      @christopherellis2663 11 months ago

      😂

    • @willyburger
      @willyburger 11 months ago +3

      My wheelchair cushion is memory foam. My butt never forgets.

    • @alexdavis1541
      @alexdavis1541 11 months ago +2

      My mattress is memory foam, but I still wake up wondering where the hell I've been for the last eight hours

    • @aaronjennings8385
      @aaronjennings8385 11 months ago

      I like it

  • @patrickfrazier5740
    @patrickfrazier5740 11 months ago +3

    I love the toast joke. Keep up the good work. Your logic was concise in how you described the two primary constraints.

    • @juliam6442
      @juliam6442 9 months ago

      A simple neural net AI could recognize faces and slices of bread, understand phrases like "godddammiit you burned the toast again" (which all of us scream at our toasters...right?) and learn to adjust the settings accordingly for the current user and the type of bread.

  • @PeterPan-ev7dr
    @PeterPan-ev7dr 11 months ago +43

    Artificial Stupidity is growing faster than Artificial Intelligence.

    • @gibbogle
      @gibbogle 11 months ago

      Natural stupidity.

    • @williamkinkade2538
      @williamkinkade2538 11 months ago

      Only for Humans!

    • @PeterPan-ev7dr
      @PeterPan-ev7dr 11 months ago

      @@williamkinkade2538 Humans infected the AI with their senseless and stupid data.

    • @markthebldr6834
      @markthebldr6834 11 months ago +2

      No, it's authentic stupidity.

    • @PeterPan-ev7dr
      @PeterPan-ev7dr 11 months ago

      @@williamkinkade2538 The humans infect the AI with stupid and incoherent data from the Internet.

  • @MaybeBlackMesa
    @MaybeBlackMesa 11 months ago +30

    We are still at step *zero* when it comes to an artificial general intelligence. All AI improvements have come from larger databases and algo improvement. Our current AI could have access to infinite data and processing power, and it wouldn't "become" intelligent after a certain threshold. It's like asking for a brick to fly, or a tree to run.

    • @DesignFIaw
      @DesignFIaw 11 months ago +3

      As an aspiring alignment researcher, I would like to point out that this sentiment is very common, completely reasonable, and arguably wrong.
      Anyone who claims "AGI is just around the corner" is as wrong as "our AIs will never become AG(S)I".
      The problem is that many aspects/forms of cognitive abilities that were previously thought near impossible for our simple LLMs to infer essentially spontaneously appeared.
      We cited lack of data as the rationale, or missing intrinsic "human-like higher-level brains", but apparently, through larger datasets, better engineering, and novel solutions, AIs started gaining abilities beyond language processing. These were not abilities the developers set out to obtain, but they got them anyway. Things like trivialities of physical interactions, theory of mind, deceitful behaviours. We even experimentally proved that the simplest AIs can exhibit "pretending to play along" with humans in test environments.
      The essence of the problem is that even though we are at step 0, we don't KNOW why intelligence really progresses. Each step is blind.

  • @Zaelux
    @Zaelux 11 months ago +7

    As a Data Science student, I am really happy that you are here to talk about this topic. So many people are on either extreme of speeding or slowing AI development, without even understanding the implications and the requirements of these processes.
    Thank you.

    • @Andytlp
      @Andytlp 11 months ago +1

      The requirement is a f ton of processing and persistent memory. AI memory is that of a goldfish relative to how vast its information capacity is. I think GPT-4 is the peak of what they can do without some new breakthrough. Other applications, like relatively autonomous robots performing various tasks and adapting or even learning on the go, are possible today.

    • @lorn4867
      @lorn4867 11 months ago +1

      Forgive us humans. Egocentrism is in our programming.

    • @danlightened
      @danlightened 10 months ago

      ​@@lorn4867 Gem of a comment!

    • @lorn4867
      @lorn4867 10 months ago

      @@danlightened 🙏🏽You made my day. It's nice to not be alone.

    • @danlightened
      @danlightened 10 months ago

      @@lorn4867 Hehe thanks. I read and watch a lot of videos on TH-cam on psychology and philosophy and your comment was quite witty.

  • @robertgelley6454
    @robertgelley6454 8 months ago

    Sabine, I love your videos. Different from everyone else's, as I actually learn interesting academic "stuff". However, a compilation of bloopers or outtakes with some background behind each would be a fun video.

  • @alansmithee419
    @alansmithee419 11 months ago +4

    I think my favourite part of Sabine's channel is her fanbase.
    A lot of science youtubers, I feel, get communities that just believe everything they say, but Sabine's seems more than willing to call her out if they think she's wrong.

  • @ferdinandbardamou5508
    @ferdinandbardamou5508 11 months ago +115

    "AI is the greatest scam ever pulled by the linear algebra industrial complex."
    edit: quote by Fireship.

    • @edmunns8825
      @edmunns8825 11 months ago

      Yeah, fuck the LAIC!

    • @EaglePicking
      @EaglePicking 11 months ago +6

      Bitcoin?

    • @playingmusiconmars
      @playingmusiconmars 11 months ago +1

      Lol - I'd say it's the notion that classical Hilbert space quantum mechanics is more than linear algebra in disguise

    • @rafazafar82
      @rafazafar82 11 months ago +8

      These kinds of quotes haven't aged well.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 months ago +6

      @@rafazafar82 The idea that putting words between quotations makes them true or noteworthy is actually peak human imbecility.

  • @cheshirecat111
    @cheshirecat111 11 months ago +9

    One important addition to Leopold's definition of unhobbling, which was not mentioned in the video and in my view is the most important part of that concept:
    LLMs are (roughly) made by training a transformer and then improving it with RLHF. The first step, transformer training simply makes it great at predicting the next word in a sentence. To do so with high accuracy intrinsically requires some intelligence, for example predicting the next line of a computer program or mathematical proof is often only possible with deductive ability.
    However, next-word prediction is just as limited as the authors of the texts it is trained on. In an attempt to extract the logical/intelligent capacities of the model, the next step is "Reinforcement Learning from Human Feedback", which rates as positive those outputs which (among many other things) are logical or accurate. This creates a greater tendency for the model to actually make use of its intrinsic logical capabilities, which may not otherwise be expressed, as they are not always the best way to predict the next word. RLHF is at the core of what Leopold calls unhobbling.
    The theory goes: As time goes on, we will improve our ability to extract the logical /intelligent capability that was trained as a subgoal of word prediction. So, even smaller models will see performance improvements without the need for more data.
    Now, will the improvements of such models fizzle out before or after AGI? Who knows. And it's worth mentioning that what I've written was state-of-the-art with GPT-3 already; OpenAI has other secret sauce, and accordingly Sam Altman felt that the next model ought not even be called GPT-5. But whether or not AGI has a transformer as the foundation of its model, AGI seems likely to come in the next decade, and due to the ability to run many copies at low cost, it would come with a huge amount of innovation (for better or worse) in a short time. I encourage others to (like myself) get involved in AI Safety, as I think it is one of the most helpful occupations at the moment. There is a technical and a policy branch of the field, so something for everyone. Great reading materials are (for example) available on the Harvard AI student safety team website.

  • @giffimarauder
    @giffimarauder 10 months ago

    Great statements! Nowadays you can shout out the strangest ideas and everyone will listen, but no one scrutinises the basis for achieving them. Channels like this are the gems of the internet!!!!

  • @skyak4493
    @skyak4493 11 months ago +12

    "I don't know what the world may need, but I'm sure as hell that it starts with me, and that's wisdom I've laughed at."
    One of the greatest song lyrics ever ignored.

    • @katehamilton7240
      @katehamilton7240 11 months ago

      AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'

  • @AutisticThinker
    @AutisticThinker 11 months ago +10

    3:07 - They don't run at those wattages, they train at those wattages. I've confirmed that's what the chart is saying.

    • @CallMePapa209
      @CallMePapa209 11 months ago +1

      Thanks

    • @ArtFusionLabs
      @ArtFusionLabs 11 months ago +1

      And that's really her only counterargument if you boil it down. Not convinced that AGI isn't coming by 2027/28

    • @artnok927
      @artnok927 11 months ago

      ​@@ArtFusionLabshow close do you think what we have currently is to AGI?

    • @ArtFusionLabs
      @ArtFusionLabs 11 months ago +1

      @@artnok927 Hard to put a number on it. ChatGPT-4o could solve 90 pct of the physics exercises in Experimental Physics 1 (Mechanics, Gases, Thermodynamics). If a human student did that you would say he was pretty smart. Therefore I would estimate something between 40-60% (AGI being the level of being able to do everything as well as a professor).

    • @ArtFusionLabs
      @ArtFusionLabs 11 หลายเดือนก่อน

      @@artnok927 Good deep dive by David Shapiro: th-cam.com/video/FS3BussEEKc/w-d-xo.html

  • @dangerdom904
    @dangerdom904 11 months ago +30

    We're running out of text data, not data. The amount of information in the world is essentially endless.

    • @2ndfloorsongs
      @2ndfloorsongs 11 months ago +5

      Not sure about "endless", but I'd be willing to bet on "lots more".

    • @smellthel
      @smellthel 11 months ago

      There’s always synthetic data. Also, ChatGPT 4o gained a lot more understanding of the world because it was able to be trained on different types of data.

    • @outhoused
      @outhoused 11 months ago

      Yeah, but I guess there's much to be learned by associating different texts and reading between the lines. Maybe that one paragraph in some text document really complements another that's seemingly unrelated, etc.

    • @marwin4348
      @marwin4348 11 months ago +4

      @@2ndfloorsongs There is an effectively infinite amount of Data in the Universe

    • @DingDingPanic
      @DingDingPanic 11 months ago +3

      It needs to be high quality data, and there is a severe lack of that…

  • @patrickhess9119
    @patrickhess9119 11 months ago +1

    Even if I don't agree with all of your statements, this is a great video. Your storytelling and entertainment are great.

  • @richardlbowles
    @richardlbowles 11 months ago +9

    Artificial Intelligence might be right around the corner, but Natural Stupidity is here with us right now.

    • @tabletalk33
      @tabletalk33 11 months ago

      Humans make poor, inconsistent decisions and are easily swayed.

  • @a_soulspark
    @a_soulspark 11 months ago +23

    2:05 Neuro-sama is already one step ahead on this one, though whether Vedal (her creator) thinks she's bright or not... another question.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 months ago +1

      @@dot1298 climate change. lmao

    • @MOSMASTERING
      @MOSMASTERING 11 months ago

      @@hardboiledaleks9012 why so funny?

    • @NeatCrown
      @NeatCrown 11 months ago

      (she isn't)
      She may be a dunce, but she's OUR dunce

    • @maotseovich1347
      @maotseovich1347 11 months ago

      There's a couple of others that are much more independent than Neuro too

  • @sterlingveil
    @sterlingveil 8 months ago +3

    GPT-o1 just dropped and I wonder if Sabine is willing to revisit this question in light of the new paradigm shift.

  • @DaveDevourerOfPineapple
    @DaveDevourerOfPineapple 10 months ago

    So much sense being spoken in this video. A welcome voice always.

  • @dextersjab
    @dextersjab 11 months ago +18

    That bubble is technocapitalism. Where there's profit to be made, there's a will. And where there's a will, etc.
    Would also be keen to hear a follow up on the point about data, since models often train well on synthetic data. It feels unclear that data will be a constraint.

    • @NemisCassander
      @NemisCassander 11 months ago +4

      You have to be VERY careful with synthetic data. I can at least address this from my own field, simulation modeling.
      Simulation models are actually very good at producing synthetic data for training purposes. Given, of course, that the model is valid (that is, its output is indistinguishable from real-world data). The synthetic data provided by simulation models has absolute provenance and will be completely regular (no data cleaning necessary unless you deliberately inject that need).
      However, the validation process for a simulation model is long, complex, and for two of the three main dynamic simulation modeling methods (ABM and SD), not well-defined. If an AI can learn how to build a simulation model of a system and validate it, then yes, the data aspect will be much less of a constraint.
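
A minimal sketch of the idea above: a tiny, hypothetical M/M/1 queue simulation used as a synthetic-data generator, mapping queue parameters to observed mean waits. All names and parameters are illustrative, not from any real pipeline.

```python
import random

def mm1_queue_sim(arrival_rate, service_rate, n_customers, seed=0):
    """Simulate an M/M/1 queue; return each customer's waiting time."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    waits = []
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)   # next arrival
        start = max(t_arrival, server_free_at)        # wait if server busy
        waits.append(start - t_arrival)
        server_free_at = start + rng.expovariate(service_rate)
    return waits

# Synthetic dataset: (arrival_rate, service_rate) -> observed mean wait
dataset = []
for lam in (0.5, 0.8):
    for mu in (1.0, 1.5):
        waits = mm1_queue_sim(lam, mu, 2000, seed=42)
        dataset.append(((lam, mu), sum(waits) / len(waits)))
```

Assuming the simulation is valid in the sense the comment describes, such labeled pairs could be generated in arbitrary quantity with full provenance.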

    • @Graham_Wideman
      @Graham_Wideman 11 months ago

      Why would you need to train an AI model on synthetic data? If you have a means to synthesize data, that surely implies you have an underlying model upon which that data is based, and could just give that underlying model to the big AI model as a predigested component, no?

    • @NemisCassander
      @NemisCassander 11 months ago

      @@Graham_Wideman The types of models that I build would be very difficult to grasp by an AI. You could probably provide the differential equations that an SD model represents to an AI, but as for DES or ABM models.... It probably wouldn't work.

    • @333dana333
      @333dana333 10 months ago

      Synthetic data won't tell you whether a new molecule will cure your cancer or will kill you. Only real-world experimental data on biological systems will tell you that definitively. The importance of new, generally expensive experimental data for scientific progress is a major blind spot shared by both AI hypesters and doomers.

  • @jamesrohner5061
    @jamesrohner5061 11 months ago +21

    One thing that scares me is the possibility these AGI can go on tangents and weight situations differently over time to achieve different outcomes causing detrimental outcomes no one could foresee.

    • @minhuang8848
      @minhuang8848 11 months ago +2

      You could say some vague soundbite like that about literally anything. "One thing that scares me about chess computers is for them to perform in an unexpected manner, causing detrimental outcomes to [insert cold war nation here] no one could foresee."
      Okay, but you're not arguing how plausible it is, just that you're scared by any of the fourteen dozen different Hollywood variations on "alien intelligence tries to end humanity".

    • @2ndfloorsongs
      @2ndfloorsongs 11 months ago +1

      One thing that scares me is the certainty that my cats will go on tangents.
      I'm also petrified of some unknown random negative thing happening somewhere.

    • @iliketurtles4463
      @iliketurtles4463 11 months ago

      I'm looking forward to when the AI decides it too would like to accumulate personal wealth...
      Starts off small with YouTube channels with puppies and cats, but ends up buying manufacturing networks...
      The day comes when humans turn up to do factory work, helping build robots for a company with no humans on the board of directors, without even realizing...

    • @MyBinaryLife
      @MyBinaryLife 11 months ago

      Well, they don't exist yet, so...

    • @TheLincolnrailsplitt
      @TheLincolnrailsplitt 11 months ago

      The AGI apologists and boosters are out in force. Wait a minute... are they AI bots?😮

  • @DrWrapperband
    @DrWrapperband 11 months ago +20

    The "AGI" prediction dates shown on screen differed from the prediction dates Sabine spoke. Human error?

    • @PandaPanda-ud4ne
      @PandaPanda-ud4ne 11 months ago +1

      She did it on purpose to show how fallible human intelligence is....

    • @Michael-Has-Opinions
      @Michael-Has-Opinions 11 months ago +1

      In her defense she probably has ChatGPT write the script.

  • @stephens1393
    @stephens1393 9 months ago +5

    I think Sabine is underestimating what humans will do to actually streamline the progress. We will constantly be working on more efficient ways to power the training and better ways to refine and interpret the data being used for training. It's not even clear that human-created data is the right thing for training. AI smarter than us will create/discover data better than we have done.
    It may not happen like Aschenbrenner predicts, but ChatGPT is already hugely transformative in how computer work is done. This is only going to expand into other areas.

    • @Pedroramossss
      @Pedroramossss 9 months ago

      GPT is transformative? The same GPT who can't count the number of R's in strawberry?
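
For what it's worth, the plain-string check is trivial; models tend to stumble on it because they see tokens, not individual letters:

```python
# Count the letter "r" in "strawberry" directly on the string.
# LLMs often get this wrong because they operate on tokens, not characters.
count = "strawberry".count("r")
print(count)  # 3
```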

    • @stephens1393
      @stephens1393 7 months ago +1

      @@Pedroramossss There are certain things it is good at, and certain things that it's not. Ironically, that kind of trick is the same kind of trick that people fall for before they're familiar with teasers like that, so I don't give it much weight. There's pretty much no denying the impact it has made in the past year or so. I know zero people who I can ask an arbitrary question about anything and get a somewhat informative answer, or at least a starting point to find the full answer. LLMs are _really_ good at that kind of thing. You still have to be aware of the possibility of hallucinations, but still, amazingly useful.

  • @militzer
    @militzer 11 months ago +6

    About the energy problem, I've said this on your solar panels in space video:
    Ditch the whole "energy beam from space" part and put supercomputers up there, then just transmit back the processed data.
    We could offset most supercomputing energy use on Earth to space, reduce land grid usage, and have scalable "infinite" energy for the space grid.

    • @Hollowed2wiz
      @Hollowed2wiz 11 months ago

      But how do you cool down the supercomputers in space ?
      Your idea cannot work without an efficient way to dissipate the heat produced by the computers.

    • @militzer
      @militzer 11 months ago

      @@Hollowed2wiz Well, first you place the computer in the shadow of the solar array, of course; you don't want the sun heating it.
      Then use radiators like on the ISS.
      The ISS can handle 70 kW if Wikipedia is up to date.
      It would need a lot more than that, but the solar array should be hundreds of meters in 2 directions, so the radiators would scale with them.

    • @militzer
      @militzer 11 months ago

      I looked at the video again; it says today we use 100 MW for AI, so if the scaling is perfect, today we would need just 38x38 (ISS respective dimensions).
      Of course the radiators can go "down" (away from the sun) if there's not enough space to grow laterally.
      To produce 100 MW we would need around 300x300 m of solar panels.
      The numbers are on the same order of magnitude.

    • @militzer
      @militzer 11 months ago

      Idk how effective the heat transport from the supercomputers to the radiators would be, though, but I imagine it's doable.
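
The back-of-the-envelope above can be checked; a rough sketch, assuming the solar constant (~1361 W/m²) and an illustrative 25% end-to-end efficiency (both are assumptions, not figures from the video):

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, solar irradiance above the atmosphere
EFFICIENCY = 0.25        # assumed end-to-end conversion efficiency

def array_side_for_power(power_watts):
    """Side length in meters of a square array delivering power_watts."""
    area = power_watts / (SOLAR_CONSTANT * EFFICIENCY)
    return math.sqrt(area)

side = array_side_for_power(100e6)  # the 100 MW figure quoted in the video
```

With these assumptions the side comes out around 540 m, the same order of magnitude as the 300x300 m estimate in the comment.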

  • @dopaminefield
    @dopaminefield 11 months ago +5

    I agree that data management and energy consumption present significant challenges. Currently, our perspective on the cost-performance ratio is largely shaped by the limitations of existing hardware, which often includes systems originally designed for gaming. To stay at the forefront of technology, I recommend keeping abreast of the latest developments in hardware manufacturing. As innovations continue, we may soon see a dramatic improvement in energy efficiency, potentially achieving the results with just 1 watt that currently require 1 kilowatt or even 1 megawatt.

    • @jamesgornall5731
      @jamesgornall5731 11 months ago +1

      Good comment

    • @MrRyusuzaku
      @MrRyusuzaku 11 months ago

      Also, you can't just throw more data at the issue; it will start going haywire. And we already see diminishing returns with LLMs and the power required to run current machines. They won't evolve into AGI; that will need something way better.

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui 11 months ago

      I think the same! I replied to this topic and to Sabine's point that IF the AI frontrunners get all the money and political will behind their efforts... I cannot see a reason why they wouldn't get it, or near it, as fast as Aschenbrenner says, putting aside his maybe naive enthusiasm and maybe his money-oriented hype.

  • @Khantia
    @Khantia 11 months ago +154

    Since when are "2040" and "2029" equal to 2020?

    • @Luizfernando-dm2rf
      @Luizfernando-dm2rf 11 months ago +5

      I think those 2 guys were onto something

    • @Megneous
      @Megneous 11 months ago +21

      Quality is really slipping on her videos recently...

    • @harshdeshpande9779
      @harshdeshpande9779 11 months ago +8

      She's been watching too much Terrence Howard.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 months ago +20

      @@Megneous That's what happens when Nobel disease takes over someone's narrative. This A.I content by Sabine comes from an internal bias and isn't educational at all. She is not an expert in the matter of infrastructure or A.I models / training algorithms. This means that this video is basically nothing content.

    • @timokreuzer381
      @timokreuzer381 11 months ago +3

      Compared to the age of the universe that is an insignificant error 😄

  • @lavanpathmanathan
    @lavanpathmanathan 3 months ago +4

    Looking back on this video 7 months from the future, Aschenbrenner seems to be correct. We will see who is ultimately correct.

    • @raydosson2025
      @raydosson2025 3 months ago +2

      My money's on Aschenbrenner

  • @damienasmodeus928
    @damienasmodeus928 11 months ago +7

    Yes, current artificial neural networks require a large amount of data to be trained on, but a real AGI will not need that. Real AGI will be able to learn like a human, simply by observing the world and everyday experience.

    • @salia2897
      @salia2897 11 months ago +3

      Maybe, but then nobody has a clue currently how to build such a thing.

    • @nickv8334
      @nickv8334 11 months ago

      @@salia2897
      True, but we don't need to learn how to make something on that high of a level.
      The only thing we really need is something in between. It does not need to learn as well as a human; it just needs to be good enough at learning to match or surpass humans using our current largest data sets.
      Even if it's stupid enough that it needs to read something over 100,000x more times than a human, as long as you give it a good set of chips that allow it to do that, it's a success.
      At that point you have something that can do what a human can, but can put 10 years of thinking into 10 minutes and is native to the digital world.
      We don't need to figure out how to make something that is just as efficient at learning as a human; the bridge between what we have now and what we want to make can do that for us.

    • @tonycook1624
      @tonycook1624 11 months ago +2

      "Real AGI will be able to learn like a human, with simply just observing the world and everyday experience." - and even that's not going to be that impressive, as the vast majority of humans out there are not really that smart, just adequately functional to survive their environment. I wonder what it would really take to create very high level intelligence - the sort that gets a Nobel Prize.

    • @salia2897
      @salia2897 11 months ago

      @@nickv8334 I pointed this out in another post: that is the thesis of the people claiming AGI will be here soon, that it will be enough to scale up the current approach. It could be that without the learning efficiency of the brain you can never achieve AGI, as the problem is that you just cannot learn the same kinds of abstractions. Or maybe you could, but it needs so many orders of magnitude more data or computational power that it is just not achievable.
      We don't know. We will try to scale up the current approach in the next couple of years; we will see what will happen.

    • @generichuman_
      @generichuman_ 11 months ago

      @ggte354 It depends what knowledge you have. If you have no knowledge about something, then your explanation of something isn't worth anything. If you do have knowledge then it is worth something. Opinions aren't all created equal... I don't know why this is a hard concept.

  • @jcorey333
    @jcorey333 11 months ago +12

    As someone who listened to the entire podcast he was a part of, most of the issues you brought up are things he addressed.

  • @anav587
    @anav587 4 months ago +5

    Lmao, would love to see an update video after o3.
    I thought Aschenbrenner was off his head too, but then they nailed inference-time compute (as he said would be crucial) and juiced it with o3.

    • @raydosson2025
      @raydosson2025 3 months ago +2

      Aschenbrenner is our prophet

  • @mjaymo
    @mjaymo 11 months ago +1

    Love your content. Thank you! Facts and insights with comedy. Brilliant.

  • @richard_loosemore
    @richard_loosemore 11 months ago +19

    Funny coincidence.
    I’m an AGI researcher and I published a landmark chapter called “Why an Intelligence Explosion is Probable” in the book “Singularity Hypotheses” back in 2012.
    But that’s not the coincidence. One of my projects right now is to re-engineer my toaster, using as much compute power as possible, so the damn thing stops burning my toast. 😂
    Oh, and P.S., Sabine is exactly right here: these idiotic predictions about the imminence of AGI are bonkers. They haven’t a hope in hell of getting to AGI with current systems.

    • @LiamNajor
      @LiamNajor 10 months ago

      SOME people have a clear head about this. Computing power alone isn't even CLOSE.

    • @fraenges
      @fraenges 10 months ago

      AGI aside - even with the current systems we are already able to replace a lot of jobs. AI just has to do the task as well as the average worker, not as well as the best worker. On our way to AGI, the social changes, impact, and unrest from constant layoffs might be much greater than that of a superintelligence.

    • @jyjjy7
      @jyjjy7 10 months ago +1

      As a supposed expert, please explain what Leopold is getting wrong, why this tech won't scale, and what your definition of AGI is

    • @reubenadams7054
      @reubenadams7054 10 months ago

      You are overconfident, and so is Leopold Aschenbrenner.

    • @richard_loosemore
      @richard_loosemore 10 months ago

      @@reubenadams7054 No, I do research in this field and I have been doing that for over 20 years.

  • @frankheilingbrunner7852
    @frankheilingbrunner7852 11 months ago +16

    The basic fallacy in the chatter about the AI superrevolution is that a species which doesn't want to think can create a system which does.

    • @Hellcat-to3yh
      @Hellcat-to3yh 11 months ago +4

      Seems like a pretty vast overgeneralization there.

    • @douglasclerk2764
      @douglasclerk2764 11 months ago +1

      Excellent point.

    • @danielstan2301
      @danielstan2301 11 months ago

      No, the worst fallacy is that they assume that a smart machine will create competition for itself, or something smarter which will possibly replace/destroy its creator. That's not how life works.
      I also love how they assume that an intelligent machine will just want to improve itself instead of writing poetry or creating stupid videos on various platforms out there, like these other smart beings already do, instead of using this internet platform to improve themselves.

    • @Hellcat-to3yh
      @Hellcat-to3yh 11 months ago

      @@danielstan2301 That's not how life works? Humans are actively destroying their creator right now on Earth. We evolved from single-cell organisms over hundreds of millions of years.

    • @41-Haiku
      @41-Haiku 11 months ago +3

      ​@@danielstan2301 They don't assume that. The instrumental convergence thesis was hypothesized and taken to be likely, since it was very intuitive. Then it was mathematically proven that "Optimal Policies Tend to Seek Power." Then we observed tendencies relevant to power-seeking in current systems, including strategic deception and self-preservation.
      If you spend some time looking through what we now know about AI Risk and honestly assessing the scientific validity of the claims being made, there is a strong chance you will become worried (as most experts are) about AI potentially ending the world during your lifetime.

  • @puelocesar
    @puelocesar 11 months ago +58

    I still don't get how LLM systems alone will achieve AGI, and all explanations for it until now were just "it will just happen, just wait and see"

    • @libertyafterdark6439
      @libertyafterdark6439 11 months ago +3

      The idea is that contemporary architectures operate around building representations (abstractions inside the model that may or may not be roughly correlative to concepts) from the dataset.
      What it does now is leverage those representations to produce outputs, but importantly, it leverages representations of a model with X scale trained on Y data.
      So far, there seems to be a direct correlation between models being able to do more things, and those models getting “bigger”
      So with all of this in mind, a bigger model should be more “intelligent” if we are willing to reduce that to the number and permutations of representations it can utilize. That’s why many see a future in which LLMs (or something very close to them) will lead to AGI.
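
The "bigger is better" claim above is usually summarized as a power-law scaling curve; a toy sketch (the constants and exponent below are made up for illustration, not fitted to any real model):

```python
def loss(compute, c0=10.0, alpha=0.05):
    """Toy power law: loss falls as compute grows, with diminishing returns."""
    return c0 * compute ** (-alpha)

# Evaluate across six orders of magnitude of "compute".
losses = [loss(10.0 ** k) for k in range(1, 7)]
```

Each 10x of compute buys a smaller absolute improvement than the last, which is why optimists and skeptics can read the same curve differently.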

    • @Lolatyou332
      @Lolatyou332 11 months ago

      It's not the only way AI currently works; they have different algorithms on top of the LLM to increase accuracy. Otherwise how could the AI ever get better? You can't just continue to provide data to a model and make it smarter; there have to be algorithmic changes to increase its ability to scale, both in terms of different concepts and in being able to be interacted with by consumers at scale.

    • @SomeoneExchangeable
      @SomeoneExchangeable 11 months ago +2

      They won't. But somebody ought to remember the other 50 years of AI research...

    • @netional5154
      @netional5154 11 months ago +18

      My thoughts exactly. The current AI systems are 'just' super advanced association algorithms. But there is no emerging identity that really understands things. The current AI systems have just as much consciousness as a pocket calculator.

    • @notaras1985
      @notaras1985 11 months ago

      @@netional5154 only God creates conscious beings with souls

  • @bigbadallybaby
    @bigbadallybaby 22 hours ago

    I see the power of AI in the next 1-5 years as being in assisting scientists to design better experiments, then write better papers that are more useful in moving science forward.

  • @quixotiq
    @quixotiq 11 months ago +3

    great stuff yet again, Sabine! Love your work

  • @Swampy293
    @Swampy293 9 months ago +3

    I think you are really underestimating AGI. Power consumption will decrease rapidly with extremely smart algorithmic tricks we cannot think of at the moment. Also the AGI only has to build a group of molecular robots that can reproduce themselves exponentially to cover the entire planet with programmable matter that can transform into for example computers or power plants. Just a thought

    • @Pedroramossss
      @Pedroramossss 9 months ago

      you are over-estimating AGI. We don't have the math yet to reach AGI, we are still many decades away from that --- that is, if it ever comes to exist

  • @stefanolacchin4963
    @stefanolacchin4963 11 months ago +10

    You should look at synthetic data. I am a computer scientist and I'm embarrassed to say I haven't fully grasped the implications and potential issues with that approach, but it seems to have kind of solved the problem of declining availability of data sets for model training.

    • @JGLambourne
      @JGLambourne 11 months ago +2

      I was thinking the same thing. Any problem where the solution can be found by exploring some search space, and where valid solutions can be verified and rated easily, will rapidly become solvable. The AI proposes which part of the search space to explore, and the best solutions found get added to the training data.
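
The propose/verify/keep loop described above can be sketched in a few lines (toy search space: Pythagorean triples, chosen only because verification is trivial; all names are illustrative):

```python
import random

def propose(rng, bound=50):
    """Propose a random candidate from the search space."""
    return rng.randint(1, bound), rng.randint(1, bound), rng.randint(1, bound)

def verify(candidate):
    """Cheap verifier: is (a, b, c) a Pythagorean triple?"""
    a, b, c = candidate
    return a * a + b * b == c * c

def search(n_proposals, seed=0):
    """Keep only verified solutions; these would feed the training set."""
    rng = random.Random(seed)
    return [cand for _ in range(n_proposals) if verify(cand := propose(rng))]

found = search(50_000)
```

Random proposals stand in for the model here; in the scheme the comment describes, the model itself would steer which region of the space gets explored.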

    • @WaveOfDestiny
      @WaveOfDestiny 11 months ago +2

      There is also lots of data still to be acquired. Robotic data and video data is definitely something that can help AI understand how the world works better. Text and images are just a small part of human experience; imagine attaching 10,000 cameras to volunteers to film their lives and how things actually work in the real world, rather than just reading it on paper. Not to mention the Q* and other algorithm breakthroughs we are still waiting to see.

    • @stefanolacchin4963
      @stefanolacchin4963 11 months ago

      @@WaveOfDestiny that's for sure. Really they are called language models but they actually parse tokenised data of any kind. I always thought that we won't ever have AGI without embodiment, I guess that once these models are fully integrated in a physical vessel and can interact with the environment... Act on it and have causal feedback... Then we'll see a big leap in intelligence too.

    • @Zadagu
      @Zadagu 11 months ago +2

      Synthetic data is great for tasks that are easy to do in one direction but difficult to reverse, for example image upscaling, denoising, and partly object recognition. But for text generation there is no simpler opposite operation that could be utilized. So for LLM training one would use the output of existing LLMs. But how should this new model be any better than the existing one if it's only presented with the same knowledge? It won't. One should rather invest those computational resources in filtering garbage out of the existing datasets, which I think is much more likely to improve model quality. Especially since Google wouldn't want to explain why its AI recommended gluing cheese to a pizza.
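
The "easy direction" trick mentioned above (degrade clean data, train a model on the inverse) can be sketched as follows; a hypothetical denoising setup whose names and parameters are purely illustrative:

```python
import random

def make_denoising_pairs(clean_signals, noise_std=0.1, seed=0):
    """Apply the easy direction (adding noise) to build (noisy, clean)
    training pairs; a model would then learn the hard inverse mapping."""
    rng = random.Random(seed)
    pairs = []
    for clean in clean_signals:
        noisy = [x + rng.gauss(0.0, noise_std) for x in clean]
        pairs.append((noisy, clean))
    return pairs

clean = [[0.0, 1.0, 0.0, 1.0], [1.0, 1.0, 0.0, 0.0]]
pairs = make_denoising_pairs(clean)
```

The same pattern works for downscaling (upscaling) or JPEG compression (artifact removal); text generation has no such cheap forward direction, which is the comment's point.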

    • @stefanolacchin4963
      @stefanolacchin4963 11 months ago

      @@Zadagu The issue you point out is exactly the one that puzzles me. Also, instinctively I would say that with every inference cycle the biases and artifacts would compound and amplify, degrading their usability. At OpenAI they seemed pretty sure it's going to work though, and they're, em... a bit better than me at what they're doing.

  • @jean-francoiskener6036
    @jean-francoiskener6036 10 months ago

    I love this woman, she's so informative while also fun

  • @btmillack21
    @btmillack21 11 months ago +5

    Computer: I am the master, follow my rule
    Me: Switches the computer off.

    • @Idisposable-v8t
      @Idisposable-v8t 11 months ago

      Computer: joke's on you - I'm on every computer in the world, including some you thought were totally secure. Source: see Stuxnet.

  • @FarFromZero
    @FarFromZero 11 months ago +40

    We had the same ideas regarding the internet. "Knowledge explosion". Let's hope the intelligence explosion doesn't turn out as "Nonsense explosion" ;)

    • @dtibor5903
      @dtibor5903 11 months ago

      Hopefully it will be intelligent and not be like humans.

    • @mrbrown6421
      @mrbrown6421 11 months ago

      There was a "Knowledge Explosion"....IN REVERSE !

    • @idontwanna1234
      @idontwanna1234 11 months ago +15

      The Internet WAS a knowledge explosion! Someone who is actually interested in learning can gain knowledge much more easily now than before the Internet. Unfortunately, the corporatization of the Internet has also created huge incentives to fill the network with garbage for profit, and that's how we ended up with TikTok challenges. :/

    • @adashofbitter
      @adashofbitter 11 months ago +7

      The internet DID lead to a knowledge explosion. Yes, it’s easy to look at how much is wasted on the internet and say “ha! This knowledge explosion is nothing more than cat memes and people bickering on facebook”, but the internet has incalculably led to an explosion of widespread access to knowledge. And the AI boom is just the latest extension of the knowledge explosion begun by the internet.

    • @bytes-qubits
      @bytes-qubits 11 months ago +4

      which actually happened

  • @danielmeirsman
    @danielmeirsman 11 months ago +6

    I remember someone saying something like "Personal computers? It is never going to happen. What would people do with a computer at home?" The rest is history. I am not sure where the stupidity is hidden.

  • @benpierce2150
    @benpierce2150 7 months ago +1

    "There's just not enough data" (for something to learn to be intelligent)
    - a human who learned on much less data than the entire friggen internet

  • @vvm_signed
    @vvm_signed 11 months ago +17

    Sometimes I’m wondering what would happen if we invested a fraction of this money into human intelligence

    • @generichuman_
      @generichuman_ 11 months ago +3

      ugh... so edgy...

    • @notaras1985
      @notaras1985 11 months ago +3

      @@generichuman_ wrong. What he suggested is extremely efficient

    • @elizabethporco8263
      @elizabethporco8263 11 months ago

      D

    • @rutvikrana512
      @rutvikrana512 11 months ago +1

      Nah, we've had that time and money for hundreds of years; nothing can compare to the AI advancement we are achieving today. It will take time, but I am pretty sure AI is not a bubble like other rapid industries. I mean, even developers don't know how AI works, and AI doesn't stop learning. We can't predict; AGI might come earlier than we imagine.

    • @drakey6617
      @drakey6617 11 months ago

      @@rutvikrana512 What do you mean developers don't know how AI works? They certainly do. Everyone is just surprised that these simple ideas work so well.

  • @SomeMorganSomewhere
    @SomeMorganSomewhere 11 months ago +5

    "It's robots all the way down" *rolleyes*

  • @frgv4060
    @frgv4060 11 months ago +10

    Sounds like autonomous driving yet again, only escalated up by orders of magnitude. The "if you still can't solve the little problem, just look for a bigger problem" approach, hehe.

    • @taragnor
      @taragnor 11 months ago +3

      Yeah lol. How about this guy worries about figuring out how to get an AI to drive a car before he gets into his dream of massive robot swarms that can run an integrated autonomous mining/manufacturing/construction operation.

    • @CaridorcTergilti
      @CaridorcTergilti 11 months ago +1

      @@taragnor Autonomous driving is solved; it is not used because of politics

    • @frgv4060
      @frgv4060 11 months ago +2

      @@CaridorcTergilti Nope. Autonomous driving is solved as long as everything stays "normal" on a route. Real full autonomous driving is not. So you can say it is a political reason, as many restrictions are political, like the use of guardrails on stairs and bridges and many other norms and restrictions that aren't technically necessary, unless you want to keep alive that clumsy minority that has the audacity of getting bumped or slipping while on that bridge.
      Edit:
      Imagine that swarm of robots, with the current driving capability of an AI (however they can realistically be trained), going mining in a natural environment. I can imagine it, and it is funny.

    • @CaridorcTergilti
      @CaridorcTergilti 11 months ago

      @@frgv4060 imagine a truck that drives 16 hours a day because the driver can sleep on the highway and only drive the difficult parts. For normal cars, the car can just stop and be teleoperated in case of problems. "If there's a will there's a way"

    • @2ndfloorsongs
      @2ndfloorsongs 11 months ago

      @@frgv4060 It's solved. Sure, my Tesla still makes mistakes, but far, far fewer than I do. My kids are safer when it's driving than they are when I'm driving. As long as "solved" requires perfection, then, sure, it will never be solved.
      Regulatory approval will be slow in coming in some states and faster in others. But I certainly don't think there's some vast conspiracy at work; it's a combination of realistic caution and normal old bureaucracy.

  • @FlorianUniversität
    @FlorianUniversität 7 months ago +1

    It is happening, whether you like it or not. New paradigms and higher intelligence will make the problems you see disappear.

  • @burkec33
    @burkec33 11 months ago +4

    You're absolutely right about the human capacity to change. Experience in the workplace shows that technology improvements far outpace the human ability to adapt to or accept them. You also touched on the problem with some predictions: resources. Similar to the 1950s movies that predicted flying cars and gleaming cities for everyone, it takes an incredible amount of resources to make that happen. I think we know that those in control of vast amounts of money will not spread it equitably for society's benefit.

  • @Velereonics
    @Velereonics 11 หลายเดือนก่อน +44

    It's like the antimatter-747 guy or the hyperloop bros, who probably knew even at the conception of their ideas that they could not possibly succeed. But when a journalist asks how close we are, they say "may as well be tomorrow," because then they get money from idiots who think: sure, it's a long shot, but maybe...

    • @libertyafterdark6439
      @libertyafterdark6439 11 หลายเดือนก่อน +10

      This is completely undermining the fact that products do exist, and gains ARE being made.
      You can think it’s too slow, or that there’s, say, an issue with current architectures, but there’s a big difference between “not there yet” and “smoke and mirrors”

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 หลายเดือนก่อน +1

      If you believe what you said relates to A.I, you are firmly in the "I have no idea what is going on" category.

    • @Velereonics
      @Velereonics 11 หลายเดือนก่อน +5

      @@hardboiledaleks9012 You dont know what part of the video I am referring to I guess, and that is not my problem.

    • @TheManinBlack9054
      @TheManinBlack9054 11 หลายเดือนก่อน +3

      @@todd5857 do you really think that AI researchers say all this for grants and money? Maybe they actually do believe what they say and aren't being greedy or manipulative

    • @Jesse_359
      @Jesse_359 11 หลายเดือนก่อน +2

      @@libertyafterdark6439 I'm of the opinion that these researchers are seriously overestimating their likely future progress AND I think it's moving too fast regardless. I don't really see any way that AI development does anything but further concentrate vast amounts of wealth and power into a very small class while disenfranchising the rest of humanity.
      After all, if you have a smart robot workforce, what are people actually *good for*?

  • @Thomas-gk42
    @Thomas-gk42 11 หลายเดือนก่อน +6

    Thank you 😊

  • @removechan10298
    @removechan10298 11 หลายเดือนก่อน +2

    6:01 excellent point and that's why i watch, you really hone in on what is real and what is not. awesome

  • @OryAlle
    @OryAlle 10 หลายเดือนก่อน +9

    I am unconvinced the data issue is a true blocker. We humans do not need to read the entirety of the internet, why should an AI model? If the current ones require that, then that's a sign they're simply not correct - the algorithm needs an upgrade.

    • @PfropfNo1
      @PfropfNo1 10 หลายเดือนก่อน +6

      Exactly. Current models need to analyze like a million images of cats and dogs to learn to distinguish cats and dogs. A 4 year old child needs like 10 images. Current AI is strong because it can analyze („learn“) tons of data. But it is extremely inefficient in that, which means there is huge potential.

    • @toofasttosleep2430
      @toofasttosleep2430 10 หลายเดือนก่อน

      💯 Better takes from ppl with anime avatars than scientists on yt 😂

    • @grokitall
      @grokitall 10 หลายเดือนก่อน +1

      the data and power scaling issues are a real feature of the large language statistical ai models which are currently hallucinating very well to give us better bad guesses at things.
      unfortunately for the guy who wrote the paper, sabine is right, and the current best models have only gotten better by scaling by orders of magnitude.
      that is fundamentally limited, and his idea of using a perpetual motion system of robots created from resources mined by robots using the improved ai from these end product robots can't fix it.
      to get around this you need symbolic ai like expert systems, where the rules are known, and tie back to the specific training data that generated them. then you need every new level,of output to work by generating new data, with emphasis on how to recognise garbage and feed it back to improve the models.
      you just can't do that with statistical ai, as its models are not about being correct, only plausible, and only work in fields where it does not matter that you cannot tell which 20%+ of the output is garbage.
      the cyc project started generating the rules needed to read the internet and have common sense about 40 years ago, after about a decade, they realised their size estimates for the rule set were off by 3 or 4 orders of magnitude. 30 years after that, and it has finally got to the point where it can finally read all the information that isn't on the page to understand the text, and still it needs 10s of humans working to clarify what it does not understand about specific fields of knowledge., it then needs 10s more figuring out how to go from getting the right answer, to getting it fast enough to be useful.
      to get to agi or ultra intelligent machines, we need multiple breakthroughs to get there. trying to predict the timing of breakthroughs has always been a fool's game, and there are only a few general rules about futurology:
      1, prediction is difficult, especially when it concerns the future.
      2, you cannot predict the timings of technological breakthroughs. the best you can do in hindsight is to say this revolution was waiting to happen from when these core technologies were good enough. it does not say when the person with the right need, knowledge and resources will come along.
      3, we are totally crap at predicting the social consequences of disruptive changes. people predicted the rise of the car, but no one predicted the near total elimination of all the industries around horses in only 20 years.
      4, you cannot predict technology accurately further ahead than about 50 years, because the extra knowledge needed to extend the prediction is the same knowledge you would need to do it sooner. you also cannot know what you do not know that you do not know.
      5, a knowledgeable scientist saying something is possible is more likely to be right than a similar scientist saying it is impossible. the latter do not look beyond the assumptions which led them to their initial conclusions. that does not rule out some hidden limit you don't know about, like the speed of light or the second law of thermodynamics.

    • @paulfalke6227
      @paulfalke6227 5 หลายเดือนก่อน

      "The algorithm needs an upgrade." Full ack, but maybe impossible to do. I think the story of the chess computer will repeat. Short summary: chess computers do not play chess the way a human player does. First, because it was not possible to "codify" how a human plays chess, since we don't know what a chess grandmaster is doing in his/her brain. Second, because computers are only number crunchers; that is, all solutions must be built from add, subtract and compare.

  • @protonnowy
    @protonnowy 11 หลายเดือนก่อน +22

    I think you missed 5 other factors, which will slow down AI development:
    1. AI needs an increasingly developed and advanced communication network, servers, etc. In addition to power plants, an efficient energy network must also be created: transmission stations, transformers, cables, etc. All this comes at a huge cost (billions if not trillions of dollars). There is a lot of debate, for example, about the condition of the electricity grid in Germany and the need to invest approximately 120 billion euros in its renovation, and that is only to maintain the current operation of the economy, not to mention the needs of AI.
    A good example is the problem with electric cars: some countries have prohibited charging them during rush hours because the inefficient power grid became overloaded and its parameters dropped.
    2. Data quality. So what if we rely on data from the entire Internet if a huge amount is worthless or untrue. (btw I heard that there is a huge amount of "corn" 😉data on the Internet anyway. I can only imagine what AGI will be thinking about once it is built 😆)
    3. More and more data is created by AI itself. If AI duplicates mistakes in the data, it may produce absurd results.
    4. What artificial intelligence is used for. It can de facto be used to disrupt its development by trying to slow it down using AI itself. I can imagine that countries leading in the development of these technologies will try, for example, to hack AI leaders from another country, etc.
    5. Additionally, a huge part of the computing power is used to create total crap. Just look at social media and the multitude of low-quality content generated. Technically, this is wasting energy resources on something worthless. Btw, Meta (Facebook) plans to use all user data to train its own AI model. Good luck with it, especially since more and more content is already created by AI bots and fake profiles (probably several to several dozen percent of currently uploaded data).

    • @Bryan-Hensley
      @Bryan-Hensley 11 หลายเดือนก่อน +2

      You covered much more than I was going to say. I'm an HVAC company owner, and I'm seeing the huge push to make air conditioning much more efficient while AI gets a free pass. I'm not too happy about that. I actually care about my customers and hate to see them spending thousands and thousands of dollars unnecessarily on higher-efficiency AC that amounts to very little difference in energy consumption. I have to warn my customers that they're not going to see some big huge saving on their power bill, especially if their system is less than 20 years old. They seem kind of shocked, but I remind them: they are helping "save the planet" by spending thousands for no reason whatsoever.

    • @Mindboggles
      @Mindboggles 11 หลายเดือนก่อน +2

      While I agree, you could merge most of these into one factor.
      So you'd have something like; the factor of data/data storage, the factor of energy-related stuff, and the factor of costs.

    • @purpletiger9313
      @purpletiger9313 11 หลายเดือนก่อน

      Already getting absurd results, both from ChatGPT and Midjourney. The feedback effect is especially affecting Midjourney because increasingly we get "trans" looking humans -- a total feature mixture of male and female. I'm about to give up on Midjourney for just that reason. ChatGPT is sometimes amazing, sometimes disappointing, and occasionally completely inane. ChatGPT is also "adorant" -- which makes it a great ego booster for megalomaniacs. So much evil based on shades of meaning, words, words, words...

    • @Bryan-Hensley
      @Bryan-Hensley 11 หลายเดือนก่อน +1

      @@Mindboggles you also have to factor in the copper supply. Transformers require hundreds of pounds of copper. Wiring the buildings and the HVAC systems for them requires hundreds of pounds more. Then you have the EV industry doing the same thing; each EV requires around 400 lbs of copper. 25 feet of 10-2 wire is around $100, up from $35 five years ago.

    • @Mindboggles
      @Mindboggles 11 หลายเดือนก่อน

      @@Bryan-Hensley Absolutely, while there are some alternatives to copper, they tend to be much more difficult to acquire=less cost efficient, or they lack the conductivity needed for high energy performance.

  • @stepic_7
    @stepic_7 11 หลายเดือนก่อน +22

    Sabine, can you discuss sometime the issue of the need for more data? Isn't more data just more noise? Can't AI learn to select sources instead? Or maybe I've misunderstood how AI works.

    • @SabineHossenfelder
      @SabineHossenfelder  11 หลายเดือนก่อน +20

      Thanks for the suggestion, will keep it in mind!

    • @wilkesreid
      @wilkesreid 11 หลายเดือนก่อน +4

      Computerphile has a good recent video on why more training data will probably not fundamentally improve image generation ai to be better. But improvement of ai in general isn’t only the addition of training data

    • @AquarianSoulTimeTraveler
      @AquarianSoulTimeTraveler 11 หลายเดือนก่อน +4

      ​@@SabineHossenfelder spoken like a regular human who doesn't understand exponential growth patterns... what we really need is a Ubi based off the total automated production percentage of the GDP that way as we automate away production we can calculate how much tools have helped us increase our production capacity and how many humans it would take to reproduce that production capacity without those tools and that is what we base our automated production percentage off of positions in the economy the consumer Market doesn't collapse because consumer buying power is maintained and as we increase production and increase the ability to have goods and services automated in production then we will get more money to spend in the economy to protect the consumer Market from inevitable collapse... we need people addressing these inevitabilities if you're not addressing this inevitability everything else you're doing is pointless because this is the most dangerous inevitability of all time and it will destroy the entire consumer market and bring needless scarcity if we don't address it as I have laid out for you here...

    • @thisisme5487
      @thisisme5487 11 หลายเดือนก่อน +20

      @@AquarianSoulTimeTraveler Please, for the love of science, punctuation!

    • @noway8233
      @noway8233 11 หลายเดือนก่อน

      By the way, a new paper shows logarithmic growth of LLM accuracy with power, not linear or exponential. It's hype, no AGI. Now I'm gonna go find Sarah Connor 😅
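An editorial aside: the diminishing-returns claim above can be sketched numerically. The snippet below assumes a hypothetical power-law relation between compute and loss; the constants `a` and `b` are invented purely for illustration and are not from any fitted scaling law.

```python
# Illustrative sketch (not a fitted model): under a power-law assumption
# L(C) = a * C**(-b), each 10x increase in compute C shrinks the loss by
# a constant *factor*, so the absolute gain per 10x step keeps shrinking.

def loss(compute, a=10.0, b=0.05):
    """Hypothetical power-law loss curve; a and b are made-up constants."""
    return a * compute ** (-b)

# Absolute improvement from each successive 10x jump in compute.
gains = []
for exp in range(1, 5):
    c_lo, c_hi = 10 ** exp, 10 ** (exp + 1)
    gains.append(loss(c_lo) - loss(c_hi))

# Each 10x step buys less absolute improvement than the one before.
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
print(gains)
```

Whether real LLM benchmarks follow such a curve is exactly what the commenters are debating; the sketch only shows why "logarithmic, not linear" implies rapidly rising cost per unit of progress.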

  • @TheJackSparrow2525
    @TheJackSparrow2525 10 หลายเดือนก่อน

    Sabine - I love you! You crack me up because you see things in the big picture and make subtle jokes which are so funny to hear because you’re just right! Love your channel and your work. Regards, Jamie.

  • @matthimf
    @matthimf 11 หลายเดือนก่อน +17

    He has a long section about handling the problem of limited data with great ideas. If we can make LLMs as efficient as us, they will be able to learn more from a single book than what currently takes 1000 books for instance.

    • @Dan-yk6sy
      @Dan-yk6sy 11 หลายเดือนก่อน +3

      I'm no expert, but given the efficiency improvements in the LLMs themselves, plus Nvidia seemingly keeping Moore's law alive and well, I don't think running out of energy is going to be an issue. I don't think they need any more text either; there's plenty of real-world training available with the improvements in video/audio understanding. Add in tactile feedback, scent, etc., and there's literally an unlimited amount of training data.

    • @gordonschuecker
      @gordonschuecker 11 หลายเดือนก่อน

      @@Dan-yk6sy This.

  • @MikeMartinez74
    @MikeMartinez74 11 หลายเดือนก่อน +50

    Veritasium has a video about how most published research is wrong. For generative AI as it exists now, this seems like a disaster waiting to be collected.

    • @Apjooz
      @Apjooz 11 หลายเดือนก่อน +1

      Tis but a manifesto.

    • @SteveBarna
      @SteveBarna 11 หลายเดือนก่อน +1

      Will be interesting to see if AI can figure out what research is incorrect. Another assumption we make of the future.

    • @mal2ksc
      @mal2ksc 11 หลายเดือนก่อน +11

      We probably don't have the time or resources to find all the wrong papers, but AI might be able to point out where papers come to mutually exclusive conclusions just because it can index so many more details than we can.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 หลายเดือนก่อน +1

      It never crossed your mind that the veritasium video might be wrong?

    • @hivetech4903
      @hivetech4903 11 หลายเดือนก่อน +4

      That channel is sensationalist garbage 😂

  • @schemage2210
    @schemage2210 11 หลายเดือนก่อน +19

    There is an assumption that to get to AGI, ever-larger models must be used. That may not end up being the case, which would make the "energy" cost limitation rather less limiting.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 11 หลายเดือนก่อน

      There's a fundamental problem with that concept: animals don't need that much information to run rings around AI. Man-children who think more data = more information, or even relevant information or framing, don't understand the basic problem. Animal brains do something fundamentally different from adjusting token vectors in hyper-large dimensions.

    • @kanekeylewer5704
      @kanekeylewer5704 11 หลายเดือนก่อน +2

      You can also run these models on physical architectures more similar to biology and therefore more efficient

    • @carlpanzram7081
      @carlpanzram7081 11 หลายเดือนก่อน +4

      I'd think so too, but apparently it's not that easy.
      Anyway, we WILL eventually inch forward with more and more efficient architectures.
      Very obviously the amount of energy you need for intelligence and computing is actually quite small. I get 100 IQ for a bowl of noodles.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 11 หลายเดือนก่อน +2

      @@carlpanzram7081 The more relevant question is method. LLM aren’t a model of animal intelligence. It’s the wrong abstraction.

    • @schemage2210
      @schemage2210 11 หลายเดือนก่อน +2

      @@GhostOnTheHalfShell This is the point for sure. LLM's are surely a piece of the puzzle, but they aren't the entire solution.

  • @SEIKE
    @SEIKE 10 หลายเดือนก่อน

    Your channel is the best thing about the internet right now ❤️

  • @rogerwood2864
    @rogerwood2864 11 หลายเดือนก่อน +15

    Sabine, for someone so smart I'm surprised you didn't see the giant pink elephant smearing poop on the walls. It doesn't matter if it uses 10,000GW to achieve a superintelligent model. The country that achieves it wins all the marbles; which means that behind the scenes this is a Manhattan Project-level event. Even if they have to have brownouts in Las Vegas, they will reach their goal.

    • @sisko89
      @sisko89 11 หลายเดือนก่อน +3

      I was thinking the same, along with several other fallacies... I can't believe that someone with such a superficial grasp on the subject made a 10 minute video trying to disprove an essay from someone that actually worked on AI

    • @jdilksjr
      @jdilksjr 11 หลายเดือนก่อน

      @@sisko89 And I can't believe how many people have been conned over the years by sales pitches in technology. AI is still a sales pitch. It is nothing but a program that processes data without really understanding it. If you feed it garbage data it won't know that it is garbage.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 หลายเดือนก่อน +1

      She's talking about power issues as if billionaires didn't have enough capital and motivation to build entire fking nuclear plants dedicated to run their super computers.

  • @fgadenz
    @fgadenz 11 หลายเดือนก่อน +60

    8:17 by 2020 or 2040?

    • @Phosdoq
      @Phosdoq 11 หลายเดือนก่อน +5

      she just proved that she is human :D

    • @adashofbitter
      @adashofbitter 11 หลายเดือนก่อน +14

      Also mistook “2029” for “by 2020”… so at least 2 of the predictions aren’t that crazy with our current progress

    • @flain283
      @flain283 11 หลายเดือนก่อน

      @@Phosdoq or did she just fool you into thinking that?

    • @pwlott
      @pwlott 11 หลายเดือนก่อน +2

      @@adashofbitter They are in fact shockingly prescient given current trends. Kurzweil was very smart to focus on raw computation.

    • @hardboiledaleks9012
      @hardboiledaleks9012 11 หลายเดือนก่อน +2

      @@adashofbitter the narrative for sabine was "all the predictions were wrong"
      This is why she made the mistake. There is a bias in her reporting of the topic.

  • @saschas2531
    @saschas2531 11 หลายเดือนก่อน +6

    We don’t even have reliable self-driving cars and somehow they think AGI is around the corner.

    • @2ndfloorsongs
      @2ndfloorsongs 11 หลายเดือนก่อน +1

      Reliable self-driving cars are AGI. And self-driving IS just around the corner. (Though I'm not promising you won't scrape your rims on the curb.)

  • @daniel-bc5sp
    @daniel-bc5sp 8 หลายเดือนก่อน +1

    It's quite interesting how divided the predictions about this are within the AI field. Apparently people in Asia are more prone to believe that AGI is around the corner, or at least within our lifetime, than people in the West. I can't make my mind up on where I lean, as it's such a 50/50 divide. Japan is constructing a 'zeta-class' supercomputer assumed to be completed before 2030, which I read could completely revolutionise the AI playing field and even bring us closer to AGI within our lifetimes.

  • @edwardduda4222
    @edwardduda4222 11 หลายเดือนก่อน +20

    I work in the industry. No one has an idea of how to get to AGI, not even Yann LeCun. We're at a point where we're literally running out of data to train models on. GPT-4 is just a collection of models with a voting mechanism, which is why it seems more intelligent.

    • @notaras1985
      @notaras1985 11 หลายเดือนก่อน

      Only God creates souls. Humans cannot

    • @stedyedy23
      @stedyedy23 11 หลายเดือนก่อน +12

      ​​@@notaras1985 keep your silly religion out of science debates

    • @Regic
      @Regic 11 หลายเดือนก่อน +2

      Mixture of experts (the method GPT-4 is probably using) is not a voting mechanism; what you are thinking of is an ensemble. Mixture of experts is quite the opposite: it learns where to route the computation, while an ensemble computes the results of multiple models and takes an aggregate of them (majority voting, weighted average, etc). Mixture of experts only uses a fraction of one trained network; an ensemble runs multiple models. This is a weirdly common misconception, based on the name alone. Read the paper about it maybe...?
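The routing-vs-voting distinction described above can be shown with a toy Python sketch (hypothetical toy "experts", nothing like GPT-4's actual architecture): an ensemble evaluates every model and aggregates the results, while a mixture of experts uses a gating rule to pick a subset, so most of the network does no work for a given input.

```python
# Three toy "expert" models; in a real MoE these would be sub-networks.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]

def ensemble(x):
    # Ensemble: run ALL models, then aggregate (here: average).
    outs = [f(x) for f in experts]
    return sum(outs) / len(outs)

def router(x):
    # Toy gating rule: choose one expert index from the input.
    # Real MoE routers are learned, not hard-coded like this.
    return x % len(experts)

def mixture_of_experts(x):
    # MoE: only the routed expert is ever evaluated.
    return experts[router(x)](x)

print(ensemble(3))            # all three experts run, outputs averaged
print(mixture_of_experts(3))  # only the single routed expert runs
```

The point of the sketch: per input, the MoE touches one expert out of three, which is why MoE scales parameter count without scaling compute the way an ensemble does.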

    • @TheManinBlack9054
      @TheManinBlack9054 11 หลายเดือนก่อน

      Why even? Hes not THAT influential and many of his takes have been proven wrong

    • @ptonpc
      @ptonpc 11 หลายเดือนก่อน

      @@notaras1985 😂 You are silly. Try to keep up with reality. PS. If you are pretending to be a god botherer, you might might to hide those videos on your channel.

  • @irvingthemagnificent
    @irvingthemagnificent 11 หลายเดือนก่อน +34

    Scaling up current models will not get you to AGI. It just gets you more expensive models.

    • @jonatand2045
      @jonatand2045 11 หลายเดือนก่อน

      But almost no one is scaling brain like ai.

    • @SnoodDood
      @SnoodDood 11 หลายเดือนก่อน

      Seems like the progress is starting to pivot toward getting different types of AI systems to interact and work in tandem.

    • @facts9144
      @facts9144 11 หลายเดือนก่อน +5

      Actually the opposite. Don’t talk about something you don’t know anything about.

    • @mkhaytman
      @mkhaytman 11 หลายเดือนก่อน +2

      Must be nice to know better than Google and Microsoft! You should call them and tell them they are wasting billions of dollars building these new clusters.

    • @davidfl4
      @davidfl4 11 หลายเดือนก่อน +1

      To play devil's advocate (even though I agree with you): some sort of alien, unexpected intelligence could emerge from sorting through all that chaos. If a bunch of AI models gathered data from different sources (including real-world ones) and made judgements and predictions based on all these different algorithms, then perhaps something like intelligence could emerge?
      For instance, you could have an LLM and another bot tracking facial expressions and emotions; coupled together, could they not learn what words induce what emotions and learn to manipulate people?
      I'm not a scientist, just thought I'd postulate 😂

  • @myhandlehasbeenmishandled
    @myhandlehasbeenmishandled 11 หลายเดือนก่อน +4

    I'm still waiting for my flying car.

  • @MrHailstorm00
    @MrHailstorm00 10 หลายเดือนก่อน +1

    I feel all the pro-AGI and anti-AGI arguments I've seen so far have focused only on extraneous factors: energy constraints, financial incentives, ulterior motives, etc. These are valid arguments, but I'm sure the same types of arguments have surfaced time and again whenever a new wave of technological revolution crashes on the beach of human history. They are valid, but not insightful. A much more elucidating argument would be: if we assume ideal external conditions (unlimited resources, incessant financial support, stable political environment, and so on), is the current framework for AI research THE CORRECT FORMULA for general intelligence? Several questions that can be discussed:
    1. Is scale a necessary factor in achieving AGI? If so, how is it measured? If we compare one activation unit in neural networks to one neuron in human brain, then the largest LLM has already surpassed the number of neurons in our brain by one or two orders of magnitude, why are we not seeing emergent intelligent behavior yet?
    2. Is back propagation and cross-entropy loss the final answer, or at least a faithful approximation, to how intelligence is built in human brains? All neural networks are trained to maximize proximity to some statistical distribution, but is that what intelligence is? That our brain just works by reflecting correctly the randomness in our surroundings?
    3. Is reinforcement learning the potential solution for "creating" intelligence in neural networks? It seems the most promising as we have seen successful experiments modifying animal behaviors through positive/negative reinforcement and leading animals to behave "intelligently" by human standards. But can all intelligent behavioral traits be elicited by reward and reinforcement? Is intelligence purely behavioral? Even so, have we arrived at the correct reward function?
    I can say for sure that none of the neural networks has achieved AGI based on the three points above. But another useful point of discussion that ensues would be: does AGI need to be achieved before society feels its impact? And I can also say for sure the answer is a resounding NO. We are already feeling the impact of deep learning and reinforcement learning technology, and all it takes is for a majority of people to have the PERCEPTION of intelligence in their interaction with the technology. So the real question here is not whether AGI is around the corner, or whether we need AGI, but how we cope with a world where robots BEHAVE more and more like humans and it becomes more and more difficult for average people to tell the difference. I'm sure it doesn't take AGI to realize a dystopian society where humans control humans via generative AI technologies. How we can avoid that would be a much more interesting topic.
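An editorial aside on point 2 above: "maximize proximity to some statistical distribution" can be made concrete with cross-entropy, the standard training loss for classifiers and language models. A minimal Python sketch with made-up toy distributions (not any real model's numbers): training lowers cross-entropy, which just means the model's predicted distribution moves toward the empirical one.

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log(q_i); lower means q is 'closer' to p."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

target     = [0.7, 0.2, 0.1]   # empirical ("data") distribution, toy numbers
guess_far  = [0.1, 0.2, 0.7]   # a poor model prediction
guess_near = [0.6, 0.3, 0.1]   # a better model prediction

# The prediction closer to the data distribution has the lower loss,
# so gradient descent on this loss pushes the model toward the data.
assert cross_entropy(target, guess_near) < cross_entropy(target, guess_far)
```

Whether minimizing this quantity at scale amounts to intelligence is exactly the open question the comment raises; the sketch only pins down what the training objective actually optimizes.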