Is the Intelligence-Explosion Near? A Reality Check.

  • Published Oct 25, 2024

Comments • 5K

  • @lokop-bq3ov
    @lokop-bq3ov 4 months ago +2999

    Artificial Intelligence is nothing compared to Natural Stupidity

    • @GnosticAtheist
      @GnosticAtheist 4 months ago +50

      lol - true that. While I am certain we will get there, I hope we can avoid creating AGI that has our natural capacity for stupidity.

    • @Ann-op5kj
      @Ann-op5kj 4 months ago +29

      It's the same thing. Where is AI generated from?

    • @generichuman_
      @generichuman_ 4 months ago +17

      so edgy...

    • @acidjumps
      @acidjumps 4 months ago +22

      I use both about equally at work.

    • @turkeytrac1
      @turkeytrac1 4 months ago +18

      That's t-shirt worthy

  • @pirobot668beta
    @pirobot668beta 4 months ago +1111

    In 1997, I was working at a university.
    A Faculty member gave me an assignment: write a program that can negotiate as well as a human.
    "The test subjects shouldn't be able to tell if it's a machine or a human."
    Apparently, she had never heard of the Turing Test.
    When we told her of the difficulty of the task, she confidently told us "I'll give you two more weeks."
    The point?
    There are far too many people with advanced degrees but no common sense making predictions about something never seen before.

    • @mikemondano3624
      @mikemondano3624 4 months ago +16

      One bad grade shouldn't breed lasting resentment.

    • @darelvanderhoof6176
      @darelvanderhoof6176 4 months ago +123

      We call them "PhD Stupid". It afflicts about half of them. Seriously.

    • @2ndfloorsongs
      @2ndfloorsongs 4 months ago +48

      @@darelvanderhoof6176 and the other half humorously.

    • @jaredf6205
      @jaredf6205 4 months ago +7

      It’s just that I can’t imagine why it wouldn’t happen. There’s just no way to get people to stop developing this technology. Even if you tried, governments would still work on it, and people in their basements would still work on it.

    • @ogungou9
      @ogungou9 4 months ago +14

      @pirobot668beta: There is no such thing as common sense. She didn't lack common sense, that was just stupidity. She was an idiot savant ... I don't know ...

  • @framwork1
    @framwork1 4 months ago +1053

    Do you all remember, before the internet, when people thought the cause of stupidity was lack of access to information? Yeah. It wasn't that.

    • @TshegoEagles
      @TshegoEagles 4 months ago +20

      Knowledge is power!!😂😂😂

    • @SRMoore1178
      @SRMoore1178 4 months ago +87

      "Think about how dumb the average person is and then realize that half of them are dumber than that." George Carlin
      AI will have no problem outsmarting the average person.

    • @deep.space.12
      @deep.space.12 4 months ago +20

      @@SRMoore1178 more like median but sounds about right

    • @389293912
      @389293912 4 months ago +2

      LOL!!! Great observation.

    • @spacecowboy511
      @spacecowboy511 4 months ago +6

      Ya, but the internet is an excellent way to shepherd the sheep.

  • @calmhorizons
    @calmhorizons 4 months ago +126

    Selling shovels has always been the best way to make money in a goldrush.

    • @ishaanrawat9846
      @ishaanrawat9846 3 months ago +23

      That's what Nvidia has done

    • @Derek_Garnham
      @Derek_Garnham 2 months ago +1

      Apparently, there are longer-term profits available to those who make trousers from tents in a gold rush.

    • @luck484
      @luck484 1 month ago

      Seems correct, and I believe the reason selling equipment is the better business model has to do with risk. Risk is unknown and unknowable, despite what any person or population believes and can "demonstrate." With human decision makers, deception, including self-deception, is part of the formula for making a great fortune. Put another way: people selling shovels are not engaging in deception, and approximately one in a million gold-rush shovel buyers makes a fortune.

  • @pablovirus
    @pablovirus 4 months ago +585

    I love how Sabine is deadpan serious throughout most videos and yet she can still make one laugh with unexpected jokes

    • @NamelessArchiver
      @NamelessArchiver 4 months ago +9

      In all seriousness, I want to know why I went to the kitchen.
      Better yet... the lack of remembering an empty fridge.

    • @hvanmegen
      @hvanmegen 4 months ago +24

      I love this sane German attitude of hers... the fact that she takes the time to read an essay like this to call him on his bullshit (especially with the conflict of interest) brings me so much hope for the future. We need more people like her.

    • @DanielMasmanian
      @DanielMasmanian 4 months ago +29

      Yes, a German sense of humour is no laughing matter.

    • @rohitnirmal1024
      @rohitnirmal1024 4 months ago +2

      @@DanielMasmanian I had a German professor. Boy, he had a sense of humor. I have not laughed since I met him.

    • @deBRANNETreu
      @deBRANNETreu 4 months ago +3

      @@hvanmegen she’s the best!

  • @michaelbuckers
    @michaelbuckers 4 months ago +463

    There's another issue, with language models anyway. The training data already includes virtually 100% of all text written by humans, including the internet. But now the internet is flooded with AI-generated text, so you can't use the internet anymore, because that would be the AI version of the Habsburg royal lineage. (A toy sketch of this feedback loop follows this thread.)

    • @michaelnurse9089
      @michaelnurse9089 4 months ago +21

      "The learning database already includes virtually 100% of all text written by humans, " No, before starting training they run all the text through AI inference of the previous model. This improves quality by a significant percentage. In reality, there is always going to be another layer of AI between the current one being trained and the data.

    • @michaelbuckers
      @michaelbuckers 4 months ago

      @@michaelnurse9089 It improves metrics, not quality. Sure enough, when AI is predicting its own text, the perplexity will be lower than when it predicts human text. And this is especially a huge issue for small models fine-tuned on ChatGPT. People are already sick and tired of unprompted "as a language model" and such garbage in their anime character simulator chatbox, and it's only gonna get worse when next-gen ChatGPT is fine-tuned on last-gen ChatGPT.

    • @bbgun061
      @bbgun061 4 months ago +48

      That doesn't make sense.
      Garbage in, garbage out.
      Current AI models produce garbage a lot of the time. If you use that to train another AI model, it's going to produce more garbage.

    • @tannerroberts4140
      @tannerroberts4140 4 months ago +29

      I think it’s good to remember that, in terms of societal contributions, the quality of human activity in general is garbage in. But society got built. We waste our time, our money, our effort, get pointlessly hooked on rage bait, romcoms, addictions, etc. One might say we’re mostly enjoying life, but in terms of societal contribution, it’s pretty much trash.
      An honest look at even the leaders in every field of study shows that each leader is either somebody with one good idea that attracted a lot of positive attention, or an exemplary personality that attracts a lot of collective intelligence.

    • @michaelbuckers
      @michaelbuckers 4 months ago +3

      @@tannerroberts4140 Language models replicate training data. Between replicating humans and replicating itself, it's a very easy pick.
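
    A toy sketch of the "Habsburg AI" feedback loop discussed in this thread: repeatedly fit a model to its own samples and watch the spread of the data decay. A Gaussian stands in for the language model here, and all numbers are illustrative assumptions; published model-collapse results (e.g. Shumailov et al.) describe the same mechanism at scale.

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(loc=0.0, scale=1.0, size=200)  # generation 0: "human" data

        for gen in range(1, 11):
            # "train" on the current corpus: estimate its mean and spread
            mu, sigma = data.mean(), data.std()
            # the next generation's corpus is sampled entirely from the model
            data = rng.normal(loc=mu, scale=sigma, size=200)
            print(f"generation {gen}: sigma = {sigma:.3f}")

    In expectation the estimated spread shrinks generation over generation; the distribution's tails, the rare and interesting events, are the first casualty.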

  • @msromike123
    @msromike123 4 months ago +1200

    If I will be able to ask Google home why I went to the kitchen, I am on board!

    • @sebastianeckert1947
      @sebastianeckert1947 4 months ago +50

      You can ask today! Answer quality may vary

    • @ThatOpalGuy
      @ThatOpalGuy 4 months ago +18

      this is a real problem for many of us.

    • @HardcoreHokage-cw4uq
      @HardcoreHokage-cw4uq 4 months ago +28

      You went into the kitchen to make a samich.

    • @HardcoreHokage-cw4uq
      @HardcoreHokage-cw4uq 4 months ago +27

      Make me one too.

    • @lilburntcrust
      @lilburntcrust 4 months ago +6

      Skibidi

  • @RigelOrionBeta
    @RigelOrionBeta 4 months ago +123

    In this post-truth era, what people are searching for isn't truth, but rather comfort. They want someone to tell them what the answer is, regardless of whether it's true.
    There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier just to point at an algorithm and listen to it. That way, no one is responsible when it's wrong - it's the algorithm's fault.
    AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom isn't true anymore, because these language models are confident in their answers. Confident does not mean correct.

    • @modelenginerding6996
      @modelenginerding6996 4 months ago +8

      A major accuracy problem with AI is not only does it train itself on information from the internet, it is also training on itself and creating a vicious feedback loop. I had a location glitch in an area with poor cell reception saying I had visited a vape shop. I got no-smoking ads from my state for two years! My social credit score has been marred 😂.

    • @thumpthumper9856
      @thumpthumper9856 4 months ago +6

      With the advancements in digital twins and replicators, nuanced synthetic data is becoming better and better. The garbage in garbage out narrative becomes less and less salient. Why worry about finding new data when fake data is just as good? At least for tasks involving computer vision and movement, to be fair.

    • @danlightened
      @danlightened 4 months ago +4

      We're in the post truth era? 🤔😕

    • @swampsprite9
      @swampsprite9 4 months ago +2

      I get frustrated with ChatGPT because it doesn't respond like a real person would. In my experience, anyway. It will never have sentience or consciousness, so it can never really understand how to respond like a person. It always feels robotic to me. Of course, that could just be because I know it's artificial.

    • @Beremor
      @Beremor 4 months ago

      @@swampsprite9 I've had the same experience. Once I asked some questions that require some interpretation or an understanding of the subject matter beyond the wording of the question, it completely breaks down and gives milquetoast, superficial and half-baked answers.
      Large language models are incapable of expressing the limits of their capabilities. They're unable to adequately express how confident they are in the statements they're making. Ultimately, their answers are about as useful as page one of a well-worded google search, and unfortunately I already know how to word google searches well. ChatGPT has been an utter waste of my time and so has every tutorial about how to "properly word prompts."

  • @hmmmblyat6178
    @hmmmblyat6178 4 months ago +918

    All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while humans only need a cheese sandwich, I believe we win this round.

    • @b0nes95
      @b0nes95 4 months ago +93

      I'm always amazed by our energy efficiency as well

    • @nickv8334
      @nickv8334 4 months ago +56

      Well, agriculture and food production/disposal is responsible for about 18% of the world's greenhouse emissions (excluding transport), so I think the jury is still out on who wins this round...

    • @TheManinBlack9054
      @TheManinBlack9054 4 months ago +37

      Technology improves. Just think of how big and inefficient computers used to be and how small and efficient they are now

    • @jozefwoo8079
      @jozefwoo8079 4 months ago +62

      It's only to train the model. Afterwards it becomes cheaper than humans.

    • @draftymamchak
      @draftymamchak 4 months ago +4

      Our efficiency doesn’t matter; the creator is superior to the creation, thus no matter what AI does, it’ll be because we created it. Sure, it'll also be responsible for what it does, but for now I'm worried about generative AI being too good and being used to fake evidence etc.

  • @jeremiahlowe3268
    @jeremiahlowe3268 4 months ago +343

    You read a 165-page essay, even though you knew the contents would be dubious at best. Sabine is heroic.

    • @Mikaci_the_Grand_Duke
      @Mikaci_the_Grand_Duke 4 months ago +9

      Sabine for AI in 2025!

    • @mikemondano3624
      @mikemondano3624 4 months ago +25

      I hope your implication is wrong and people don't avoid reading things they don't agree with or already think they know. That is the "echo chamber" magnified.

    • @justaskin8523
      @justaskin8523 4 months ago +1

      @@mikemondano3624 - Oh they already avoid reading things they don't agree with. Had it happen to me 6 times this week, and there's still another workday left!

    • @mikebibler6556
      @mikebibler6556 4 months ago

      This is an under-appreciated comment.

    • @user-cw3nb8rc9e
      @user-cw3nb8rc9e 4 months ago

      Old woman. Has no clue about things she wants to comment on

  • @davidbonn8740
    @davidbonn8740 4 months ago +130

    I think there are a couple of problems here that you don't point out.
    The biggest one is that we don't have a rigorous definition of the end result. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction, and we can expect people to do exactly that.
    Another one is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up.
    There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have and that people who should know better are all too willing to handwave away. One example is that I can give examples of complex behavior in the animal world (honeybee dances are a good one) that would be very hard to replicate using neural networks all by themselves. What that other piece is is currently unspecified.

    • @petrkinkal1509
      @petrkinkal1509 4 months ago +3

      @robertcopes814 Well, it learns what the most likely next word in a sentence is. :) (A toy version of exactly that follows this thread.)

    • @timokreuzer381
      @timokreuzer381 4 months ago

      Humans are extremely inefficient learners. You have to shove petabytes of video, audio and sensory data into them for years before they show even the slightest signs of intelligence.

    • @Zeroisoneandeipi
      @Zeroisoneandeipi 4 months ago +43

      I asked ChatGPT-4o to create a maze with labels using HTML and JavaScript. It could do this fine. Then I took a screenshot of the maze and asked it to solve the maze, and it just "walked" from A1 to F6 in a diagonal line through all the walls. I asked again to do it without walking through walls; it changed the path a bit, but still walked through walls. So it does not understand what a maze is, but can create code to generate a maze just because it was trained with this code somewhere from the web.

    • @asdfqwerty14587
      @asdfqwerty14587 4 months ago +13

      I would say by far the #1 problem with the current models is that they aren't really designed to "do" anything. No matter how advanced they get (without completely redesigning them from the ground up), their only goal is to mimic the training set. They have no concept of what it means to be better at what they're doing beyond comparing it to what people input as the training data, which makes them incapable of learning anything on their own (because anything they try to learn fundamentally must be compared to what a human is doing, so if there are no humans in the equation, there is nothing to compare to and they can't do anything beyond guessing completely randomly).
      I think the LLMs are on the completely wrong track if they're aiming for any kind of general intelligence. I think that for an actually intelligent AI, the AI must learn how to communicate without being explicitly programmed to do so (i.e., it would need to have some completely unrelated goal that "can" be done without communicating with anything, and then learn that some form of communication makes it better at achieving its goal). It would of course be a lot harder to do, and it would probably not seem very smart for a long time, but it would be 100x more impressive to me if an AI learned how to speak that way than anything that LLMs are doing, because that would actually require the AI to understand the meaning of words rather than just being able to predict what words come next in a conversation.
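
    The "most likely next word" point in this thread, made concrete: a bigram counter is the crudest possible version of the objective a language model is trained on (real LLMs condition on far more context, but the task has the same shape). A minimal sketch with a made-up corpus:

        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()
        nxt = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            nxt[a][b] += 1  # count which word follows which

        def predict(word: str) -> str:
            # return the most frequent successor - no meaning involved
            return nxt[word].most_common(1)[0][0]

        print(predict("the"))  # -> 'cat', chosen purely by frequency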

  • @zigcorvetti
    @zigcorvetti 4 months ago +142

    Never underestimate the capability and resourcefulness of corporate greed - especially when it's a collective effort.

    • @ericrawson2909
      @ericrawson2909 4 months ago

      Exactly what I was thinking. And not just corporations. Politicians, and in fact most people. They have shown that they will deny truth when it is pointed out to them by a well qualified person, if it conflicts with their own interests. That could be profit, power, or simply virtue signalling to fit in with the majority. If they ignore, cancel and smear well respected experts in a field, why would they act on the advice of an AI, even if it was supremely intelligent and God like in its desire to help humanity? AI will not save the world. Like all other technology it can be used for good or evil purposes. Probably the latter more often than not.

    • @domenicorutigliano9717
      @domenicorutigliano9717 4 months ago +3

      Everyone is underestimating it

    • @ericrawson2909
      @ericrawson2909 4 months ago +4

      I am getting sick and tired of my comments getting deleted. I did not use any "bad" words, I guess my amplification of the criticism in the original post here to other groups was too close to home for the vested interest groups. I feel very angry, and YT, making your users angry is not a good business strategy.

    • @dascreeb5205
      @dascreeb5205 4 months ago

      ?

    • @goldminer754
      @goldminer754 4 months ago +8

      This AGI project would need hundreds of billions, if not trillions, of dollars, plus cooperation with other companies, plus major support from a powerful government, and it won't bring any profits for many, many years. And it is not even guaranteed that building AGI is feasible, so it's an extremely risky investment. Fortunately, corporate greed almost entirely revolves around short-term profits, so I am pretty certain that no such giga-project will be started any time soon, especially considering how much energy it needs and the tiny problem of climate change still having to be meaningfully addressed.

  • @bulatker
    @bulatker 4 months ago +819

    "I can't see no end" says anyone in the first half of the S-curve

    • @michael1
      @michael1 4 months ago +84

      "I still see no reason to upgrade my 640kb of ram" Bill Gates

    • @caryeverett8914
      @caryeverett8914 4 months ago

      Isn't that kinda the point of the first half of an S-Curve? The end cannot be predicted and could occur in 1 year or 50 years. It all looks the same either way.
      It'd be pretty silly to say the end is in sight when you're still on the straight part of the S-Curve.

    • @pjtren1588
      @pjtren1588 4 months ago +58

      Just depends where we sit on the timescale before the inflection point. It may be one hell of an S.

    • @Thedeepseanomad
      @Thedeepseanomad 4 months ago +5

      @@michael1 Just wait, pay attention, and grab on to the next sigmoid skyhook when it materializes.

    • @djayjp
      @djayjp 4 months ago +9

      Double negative....

  • @anthonyj7989
    @anthonyj7989 4 months ago +117

    I am from Australia and I totally agree with you. Australia is one of the biggest users of AI in mining - but a lot of people don’t understand why. If you read through the comments about driverless trucks and trains in Australia, people have no idea of just how remote, humid and hot the northern parts of Australia are.
    People working in iron ore mining in Australia are just hours away from being seriously dehydrated or dead. For iron ore mining to be carried out at the scale that it is, it needed something better than the modern human, who is not able to work outside of an air-conditioned environment in the remote northern locations of Australia. Therefore, mining companies had to come up with something that can work in a hostile environment. My understanding is that AI in mining has not reduced the number of people, just moved them to a city, into an air-conditioned building.

    • @feraudyh
      @feraudyh 4 months ago +30

      That gets the prize for the most interesting thing I've read today.

    • @hussainhaider2818
      @hussainhaider2818 4 months ago +11

      I don’t get it, how do you mine ore if the miners are back in the city? You mean remote controlled robots?

    • @conradboss
      @conradboss 4 months ago +2

      Hey, I like Australia 🇦🇺 😊

    • @MyBinaryLife
      @MyBinaryLife 4 months ago +44

      It's not AI, it's just automation

    • @rruffrruff1
      @rruffrruff1 4 months ago +6

      It has definitely reduced the people per output, else it wouldn't be done.

  • @Stadtpark90
    @Stadtpark90 4 months ago +30

    Exponential curves usually stop being exponential pretty fast. The surprising success of Moore’s law makes IT people think that’s normal, which it isn’t. (A numerical illustration follows this thread.)

    • @michaelnurse9089
      @michaelnurse9089 4 months ago

      Everyone knows this. The question is whether the curve dies out before AI intelligence exceeds our intelligence or not. If it is the latter, there will be serious problems. I suspect the former.

    • @davidradtke160
      @davidradtke160 4 months ago +6

      Most exponential curves are actually S curves.

    • @tabletalk33
      @tabletalk33 4 months ago +2

      Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

    • @jozefcyran2589
      @jozefcyran2589 4 months ago +2

      So what? A 50-year run can improve relative power and way of life by orders of magnitude, and that's usually enough to be excited about. AGI can become incredibly capable incredibly quickly

    • @juliam6442
      @juliam6442 2 months ago +2

      In this case, AI can create more AI and robots can create more robots. I don't think we can necessarily generalize from the past on this one.
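
    A numerical illustration of this thread's point: early on, a logistic (S-) curve is indistinguishable from a pure exponential, so "the curve has looked exponential so far" says nothing about where the ceiling is. The parameters below are arbitrary assumptions:

        import numpy as np

        K, x0, r = 1e6, 1.0, 1.0   # carrying capacity, start value, growth rate
        for t in range(0, 16, 3):
            exponential = x0 * np.exp(r * t)
            s_curve = K / (1 + (K / x0 - 1) * np.exp(-r * t))
            print(f"t={t:2d}  exponential={exponential:12.0f}  s-curve={s_curve:12.0f}")

    For small t the two curves agree to within a fraction of a percent; by t = 15 the S-curve has flattened toward its ceiling K while the exponential is several times larger and still climbing.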

  • @k.vn.k
    @k.vn.k 4 months ago +297

    “I can’t see no end!”
    Said the man who earned money from seeing no end.
    😅😅😅 That’s gold, Sabine!

    • @wellesmorgado4797
      @wellesmorgado4797 4 months ago +3

      As someone already said: Follow the money!

    • @Tom_Quixote
      @Tom_Quixote 4 months ago +2

      If he makes money from seeing no end, why can't he see no end?

    • @k.vn.k
      @k.vn.k 4 months ago +1

      @@Tom_Quixote so that he keeps making money 😂

    • @shenshaw5345
      @shenshaw5345 4 months ago +2

      That doesn’t mean he’s wrong though

    • @AndiEliot
      @AndiEliot 4 months ago +5

      @@shenshaw5345 It doesn't mean he's wrong, I totally agree with that, but what Sabine is doing is super important; when judging someone's strong opinion or thesis, always see FIRST what that person's agenda is and what game they have skin in. This is proper due diligence.

  • @vhyjbdfyhvjybv9614
    @vhyjbdfyhvjybv9614 4 months ago +18

    I like to compare this to game development. Imagine someone saying in 2002 that because we managed to double the number of polygons we can render every 2 years, photorealistic games are 10 years away. 23 years later, it turns out that making photorealistic games is a very difficult topic that requires lots of problems to be solved, some easy, some super hard. E.g. today we can render lots of polygons and calculate realistic lighting, but destructible environments are not solved. Or realistic realtime water simulations are far away. Or we know that rendering lots of polygons is not enough; e.g. animations, or shadows, especially from large objects, are hard problems. (The extrapolation is worked through as arithmetic after this thread.)

    • @tckgkljgfl7958
      @tckgkljgfl7958 2 months ago +1

      Feels like a flawed example. We basically have 'pretty much' photorealistic capabilities. Compare the new Unreal Engine to, idk, any SNES title

    • @vhyjbdfyhvjybv9614
      @vhyjbdfyhvjybv9614 2 months ago +3

      @@tckgkljgfl7958 I'm saying that if we extrapolated the trend in video game graphics from, say, 1990-1998 data (SNES to Quake 2), then it would come out that in 2025 you can't distinguish a video game from reality. This is not the case today; we are very far from this
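
    The naive extrapolation in this thread, as arithmetic. The starting figure is an illustrative assumption, not real benchmark data; the point is only how fast a doubling trend compounds:

        # polygons per scene, doubling every 2 years, extrapolated naively
        base_year, base_polys = 1998, 10_000  # assumed starting point
        for year in (2008, 2018, 2025):
            doublings = (year - base_year) / 2
            print(year, f"{base_polys * 2 ** doublings:,.0f} polygons")

    The naive forecast reaches roughly 116 million polygons per scene by 2025 - and yet "more polygons" alone never delivered photorealism, because lighting, animation, water and destructibility each turned out to be separate hard problems.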

  • @Mars_architects_bali
    @Mars_architects_bali 4 months ago +28

    Nailed it... this technocentric mindset is pervasive in so many fields, but rarely scrutinised holistically for its resource needs, land-use changes, and social impacts

    • @hue6
      @hue6 2 months ago +1

      Okay, but picture this: AGI comes at the cost of a few hundred billion, 1 million AGI scientists are able to work out a net-positive nuclear fusion reaction, boom, infinite energy. The data part I'm not too sure about; I heard some researchers (I forget who) were able to train AI to create new data. Sounds cool

    • @hue6
      @hue6 2 months ago +1

      Don't underestimate the scientific discovery potential of a superintelligent AI. It's hard to believe that these discoveries are possible, and it's hard to imagine that something more intelligent than us can really exist, but just know it will happen, and when it does we will just have to see its superhuman capabilities for ourselves

  • @KageSama19
    @KageSama19 4 months ago +33

    I love how even AI depicts lawmakers as asleep.

    • @makinganoise6028
      @makinganoise6028 4 months ago

      But are they? Maybe this is the plan, societal collapse, the West seems to be doing everything possible to destroy itself with mass illegal migration, anti family WEF cult agendas and WW3 with Russia anytime soon, destroying huge swathes of middle income jobs, fits into the picture

    • @PMX
      @PMX 4 months ago +7

      That was definitely the prompt they used. And they purposely used Stable Diffusion 3, which was just released and is being mocked for how bad it is at generating humans, so it would be funnier.

  • @billcollins6894
    @billcollins6894 4 months ago +96

    Sabine, I worked on AI at Stanford. There are two areas where people have misconceptions.
    1) We do not need new power to get to AGI. Large power sources are only needed if the masses are using AI. A single AI entity can operate on much less power than a typical solar field. It does not need to serve millions of people. It only needs to exceed our intelligence and become good at improving itself. It can serve a single small team that directs it at focusing on solving specific problems that change the world. One of the early focus issues is designing itself to use less power and encode information more efficiently.
    2) No new data is needed. This fallacy assumes that the only way to AGI is continuing to obtain new information to feed LLMs. All of the essence of human knowledge is already captured. AI only needs to understand and encode that existing knowledge more efficiently. LLMs are not the future, they are a brief stepping stone.

    • @tabletalk33
      @tabletalk33 4 months ago +5

      Very interesting. Thanks for the clarification.

    • @PracticalAI_
      @PracticalAI_ 4 months ago +11

      The energy will be used to train the models, not to run them… please check the paper

    • @billcollins6894
      @billcollins6894 4 months ago +7

      @@PracticalAI_ The energy used to train the models is inconsequential in the long run. GPUs are not the future for AI.

    • @PracticalAI_
      @PracticalAI_ 4 months ago

      @@billcollins6894 Have you watched the video or worked in the field? To train a model you need GW of power for months; that's why it costs millions. Your idea that the AI will design itself to run on less power is "possible", but not in the short/medium term. These machines are autocomplete on steroids at the moment. Good for marketing, terrible for designing new things

    • @Disparagingcheers
      @Disparagingcheers 4 months ago +8

      Maybe I’m misunderstanding the definition of AGI, but doesn’t narrowing the scope of the model to a small team training/using it for specific use-cases contradict what AGI is? I thought it was supposed to be generalized for anything?
      Are you suggesting all of the essence of human knowledge is captured on the internet? Idk that that’s necessarily true, and also I believe there’s a lot we don’t know. So wouldn’t that mean that for a model to continue to learn beyond what we are already capable of, it would need to be able to conduct experiments and capture new training data?

  • @thedabblingwarlock
    @thedabblingwarlock 4 months ago +14

    Able to process information faster than a human? Certainly. Computers have been able to do that for decades now.
    Able to do anything a human can do better than a human can do it? Nope, not a chance.
    People keep forgetting that we don't really know what intelligence is on a quantifiable level. We have a somewhat intuitive grasp of what intelligence is, but as far as I am aware, we don't have a way to measure and compare it except in the broadest sense. We don't fully understand how our brains, or brains in general, work. That's not even getting into things like the synthesis of ideas, one of the cores of creativity; aesthetic sensibilities; and a dozen other highly subjective subjects. Simply put, we don't know enough about what goes on under the hood to put numbers to it.
    And that's a problem because computers only deal in numbers.
    Which leads me to the second thing people keep forgetting. Most modern AI models that I am aware of use a complex set of vector and probability equations to go from input to output. To grossly oversimplify things, it's just one big math equation, with an algorithm at the start to tokenize the input into a form that the computer program can process, and another at the end to make the output readable by the person providing the input. Equations and algorithms don't have the capability to be self-aware, at least not in any sense of an intelligent being. Nothing will change that, no matter how hard you might wish for them to be so. Nor do they have the ability to generate new ideas or combine disparate ones into a cohesive whole. (A stripped-down version of that "one big equation" appears after this thread.)
    Thirdly, computers, and thus AI, do not have an architecture anywhere close to that of a human brain, or any brain for that matter. They're trying to translate a very analog process into a digital one without truly understanding everything going on in the analog process first, and boy howdy, is that process complex! A friend of mine pointed out how many of these projects don't have a psychologist on board, so how can they know what their target is without the person whose entire career has been to study the thing they're trying to replicate? In short, these guys don't even have an expert on intelligence on staff, or at least the closest thing we have to an expert that I am aware of.
    What these guys sound like to me is the computer science equivalent of doctors and lawyers. They are very smart people in a very mentally demanding field, but they also happen to know they are smart, and they think they are smarter than anyone else. Because they think they are smarter than anyone else, they think they can do anyone else's job. They can't. I worked in IT for almost ten years, and some of our most difficult clients to deal with were doctors and lawyers. They would question everything on a project, they'd insist on using systems that were over a decade out of date, and they'd also imply that they could do our jobs better than we could.
    General AI or Super AI isn't only a few years away. I doubt it's even a few decades away. I think, like fusion, the timelines are going to be much, much longer than anyone wants admit. Ironically, I think we are much closer to fusion as an energy production method than we are to having anything close to a human-like AI. We can generate fusion reactions, and we've managed to get more juice out than we pumped in on at least one occasion. It's a matter of refinement and iteration at this point.
    We aren't even at the stage we were at with fusion in the 30's and 40's with AI, I think. We don't understand everything that's going on under the hood with intelligence. We can't model it. We can't quantify it. We can't even agree on what it is beyond the broadest strokes. Until we can do that, we aren't going to get anything intelligent out of AI, and all it will ever be is a complex vector equation tuned on probabilities.
    And this isn't even getting into the steps some are taking to protect their work from being scraped by spiders looking for AI training data (re: configuring, because that's what they are doing; tuning would also be more accurate). And some of those measures are aimed at poisoning the well. If those measures become commonplace, I don't see the current crop of LLM- and LMM-based (Large Language Model and Large Media Model) AIs getting any better, and I don't see that as a viable option going forward.
    This isn't the first time we've seen futurists touting that AI and automation will take over large swathes of the current job market. I remember reading an article over a decade ago about how in ten years we'd see close to 50% of the workforce replaced by AI and automated systems. True, AI has made some jobs obsolescent, but as we seem to be finding out about every decade or so, computers and computer programs aren't ready to do what a human can do. They get closer every year, but the pace isn't nearly as fast as some would like you to believe.
    As for me, I have a human centric view of this. I believe that AI can be a powerful tool, but right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised. I could be wrong, but I don't think I am. I've seen it with 3d-printing (additive manufacturing.) I've seen it with 3d televisions and media (can't remember the last time I saw this as a selling point.) I've seen it with cryptocurrencies and NFTs (hopefully I need not explain this one further.) And, these are all examples from just the last ten to fifteen years. Time and time again we see technology as a fad that is around for a few years, then the hype fizzles and dies, sometimes taking the tech behind the build up with it.
    But then again, I'm just some web developer from Alabama. What do I know?
    P.S. I almost forgot to add, that whole robots do all of the work thing seems to have a chicken and egg problem, and that's before we even get into the myriad engineering and manufacturing challenges that need to be solved for just the GEN 1 bots.
    This is why you should look outside of your field, folks! It helps build an appreciation for how hard some of those "minor challenges" might be in reality.

    • @tabletalk33
      @tabletalk33 4 months ago

      Very interesting, great comment! These developers of AI who are making all sorts of predictions would do well to read what you wrote: "...right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised." Robert J. Marks says the same thing. See his book: Non-Computable You: What You Do That Artificial Intelligence Never Will (2022).

    • @TheNordicMan
      @TheNordicMan 22 days ago

      @thedabblingwarlock Awesome comment!!
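
    The "one big math equation" description in this thread is roughly right for a single step of a language model. A stripped-down sketch (random weights, four-word vocabulary; purely illustrative, nothing like a production tokenizer or transformer):

        import numpy as np

        rng = np.random.default_rng(0)
        vocab = ["the", "cat", "sat", "mat"]
        E = rng.normal(size=(len(vocab), 8))   # token embeddings: word -> vector
        W = rng.normal(size=(8, len(vocab)))   # output projection: vector -> scores

        def next_word_probs(word: str) -> dict:
            x = E[vocab.index(word)]           # "tokenize": look up the word's vector
            logits = x @ W                     # the "big equation" (here, one matmul)
            p = np.exp(logits - logits.max())  # softmax: scores ->
            p /= p.sum()                       #   probability distribution
            return dict(zip(vocab, p.round(3)))

        print(next_word_probs("cat"))          # probabilities over words, nothing more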

  • @scythe4277
    @scythe4277 4 months ago +30

    Sabine should be part of a comedy duo because she delivers hilarious lines with a deadpan face that is just brutal.

    • @5nowChain5
      @5nowChain5 4 months ago

      The other half of the duo is her long-suffering husband, who should get an award for his infinite patience. (Oh, and the bloopers at the end were hilariously unexpected gold😂😂😂😂😂😂😂)

    • @sicfrynut
      @sicfrynut 4 months ago

      reminds me of Monty Python skits. those guys were so skilled at deadpan humor.

    • @friskeysunset
      @friskeysunset 4 months ago +1

      Yes. Just yes, and now, please.

  • @MaybeBlackMesa
    @MaybeBlackMesa 4 months ago +29

    We are still at step *zero* when it comes to an artificial general intelligence. All AI improvements have come from larger databases and algo improvement. Our current AI could have access to infinite data and processing power, and it wouldn't "become" intelligent after a certain threshold. It's like asking for a brick to fly, or a tree to run.

    • @DesignFIaw
      @DesignFIaw 4 months ago +2

      As an aspiring alignment researcher, I would like to point out that this sentiment is very common, completely reasonable, and arguably wrong.
      Anyone who claims "AGI is just around the corner" is as wrong as "our AIs will never become AG(S)I".
      The problem is that many aspects/forms of cognitive ability that were previously thought near impossible for our simple LLMs to infer essentially spontaneously appeared.
      We cited lack of data as the rationale, or missing intrinsic "human-like higher-level brains", but apparently, through larger datasets, better engineering, and novel solutions, AIs started gaining abilities beyond language processing. These were not abilities the developers set out to obtain, but they got them anyway. Things like trivialities of physical interactions, theory of mind, deceitful behaviours. We even experimentally proved that the simplest AIs can exhibit "pretending to play along" with humans in test environments.
      The essence of the problem is that even though we are at step 0, we don't KNOW why intelligence really progresses. Each step is blind.

  • @GolerGkA
    @GolerGkA 4 months ago +6

    Your point on lack of data is not necessarily a problem. Lately there have been a few papers which show that neural networks can continue training on the same dataset, without showing any improvement for many generations, until they finally grok the data and show significant improvements. There are other ways around limited data as well. I don't think AGI or superhuman intelligence will require any more data than is currently available in the biggest datasets; we just have to utilise it better. (The standard toy setup from the grokking papers is sketched after this thread.)

    • @danielmcwhirter
      @danielmcwhirter 2 months ago

      I think I get what you are saying, but in a different way. The model that results from training may be only one solution of possibly many... maybe probabilistic, mostly grouping near the "true" model... for which the "true" model could still be in question if the distribution was multi-modal. What if you ran the whole system (data, weights, goals, etc.) multiple times and then compared the resultant models?

    • @GolerGkA
      @GolerGkA 2 months ago

      @@danielmcwhirter I think multi-expert techniques like this are just a way to multiply compute spent while holding the amount of training and input data constant, so in the big picture they should have the same trade-offs, and they have to be either both promising or both a dead end
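
    The "grokking" papers referenced in this thread (e.g. Power et al. 2022) use tiny algorithmic datasets like the one below: train on a fixed split for thousands of epochs, and validation accuracy can sit at chance for a very long time before abruptly jumping to near 100%. A sketch of the task setup only; the model and training loop are omitted:

        import random

        p = 97                                           # modulus
        examples = [((a, b), (a + b) % p) for a in range(p) for b in range(p)]
        random.seed(0)
        random.shuffle(examples)

        split = len(examples) // 2
        train, val = examples[:split], examples[split:]  # one fixed split, reused forever
        # a small network trained on `train` memorizes first; much later,
        # accuracy on `val` can jump from chance to ~100% ("grokking")
        print(len(train), "training pairs, e.g.", train[0])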

  • @matthewspencer972
    @matthewspencer972 4 months ago +55

    It is surprisingly common, when one tries to converse with pure software engineers, to find that they won't accept that the laws of physics apply to them and cannot be bypassed by sufficiently clever coding. You get the same sort of thing from genetic engineers, who simply won't accept that endless fiddling with a plant's DNA will not compensate for the absence of moisture or other nutrients in the soil or other growing medium.

    • @TedMan55
      @TedMan55 4 months ago +18

      I’m a software engineer who came from a math- and physics-heavy background, and I was shocked to learn that most programmers didn’t know or like math, which I’d just assumed… probably explains a lot about the current state of programming

    • @GorlockTheDestroyer-p1o
      @GorlockTheDestroyer-p1o 4 months ago +6

      @@TedMan55 How do you even become a software developer without loving math? As someone terrible at it and coding, I assumed you'd have to swear by your high school math book to even get a chance at compsci

    • @egg-mv7ef
      @egg-mv7ef 4 months ago +6

      @@GorlockTheDestroyer-p1o That's completely wrong. Math doesn't have as much to do with software engineering as you think. I mean, if you're making physics model visualizations, of course you need math, lol, but for 50% of the use cases you don't need any math. The SEs that know math just have more opportunities because they can work on more complex stuff like game engines etc

    • @TedMan55
      @TedMan55 4 months ago +6

      @@egg-mv7ef It's not like you can't program without math skills; it's just that, in my opinion, having a mathematical mindset helps you think more rigorously, be clear about definitions, and can even give you some neat shortcuts for certain algorithms

    • @matthewspencer972
      @matthewspencer972 4 months ago +4

      @@TedMan55 I had to work with one who didn't believe that voltage really mattered. We were working in the field of industrial automation; specifically a production line for a well-known Japanese car-maker in Swindon. The customer had specified Japanese PLCs (the only other choices are American or German), and when one of these arrived and needed to be set up so the software engineer could load his software into it and run a few tests, it came with a power cable terminating in the sort of 110V connector that's more or less a global standard for these things. I went off looking for a 240V-to-110V adapter, into which it would have plugged with no problem, had he *waited* for me to do something he considered pointless and unnecessary.
      As I was making my way back, I heard "why are the indicator lights so bright? It's f***ing blinding me!" and my heart sank as my eyebrows rose. The software engineer had removed the connector, stuck a UK-standard 13-amp plug on the cable, and plugged it into the office 240V mains....
      I think that's why, these days, almost all domestic computer kit has switched-mode PSUs that will work with whatever the idiots plug them into.
      The software engineer secured a senior position at WIN.com, mainly because he was equipped with a reference so glowing (almost as brightly as the PLC had) that he couldn't really have failed in his mission to find a new job!

  • @patrickmchargue7122
    @patrickmchargue7122 4 months ago +81

    Actually, according to the graphic you slashed up, Ray Kurzweil predicts AGI by 2029, not 2020.

    • @katehamilton7240
      @katehamilton7240 4 months ago

      So what? Industry people are hyping AGI to make money. AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'

    • @brendanh8193
      @brendanh8193 4 months ago +13

      And he puts the singularity at 2045. AGI is parity, not super.

    • @polyphony250
      @polyphony250 4 months ago +15

      @@brendanh8193 It's looking like an out-of-this-world, shockingly good prediction today, then, considering when it was made.

    • @brendanh8193
      @brendanh8193 4 months ago +20

      @@polyphony250 Agreed. I do get a little annoyed with SH at times for failing to understand the nature of exponential predictions. Take Vernor Vinge's prediction: in the same speech, he put bounds on it, with 2030 being his upper bound. We haven't got there yet, but she basically ridiculed him for making such a prediction.

    • @StarLight97x
      @StarLight97x 4 months ago +3

      He also predicted that we would have one world government by 2020…

  • @MrFuncti0n
    @MrFuncti0n 4 months ago +251

    The Kurzweil prediction is for 2029 not 2020, right?

    • @MajorNutsack
      @MajorNutsack 4 months ago +25

      Yes. The same year the asteroid Apophis has a 3% chance of making impact 👀

    • @robadkerson
      @robadkerson 4 months ago +54

      ​@@MajorNutsack2.7% was the original hypothesis in 2004. It's been revised, and will not be hitting us in 2029 or 2036

    • @johanlahti84
      @johanlahti84 4 months ago +31

      @@MajorNutsack I think they crunched the numbers again and concluded that it will miss with 100% certainty

    • @Vember813
      @Vember813 4 months ago +45

      He's predicted 2029 since the 90s, yes

    • @jamesgornall5731
      @jamesgornall5731 4 months ago +6

      @@johanlahti84 Oh yeah, that's what they want us to think...

  • @patrickfrazier5740
    @patrickfrazier5740 4 months ago +3

    I love the toast joke. Keep up the good work. The logic seems concise in how you described the two primary constraints.

    • @juliam6442
      @juliam6442 2 months ago

      A simple neural net AI could recognize faces and slices of bread, understand phrases like "godddammiit you burned the toast again" (which all of us scream at our toasters...right?) and learn to adjust the settings accordingly for the current user and the type of bread.
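
      What this comment describes is, at bottom, an online learning loop. A minimal sketch, assuming a made-up feedback signal ("burned" / "too light") rather than any real appliance API; a neural-net version would simply replace this hand-written update rule with a learned one:

          def update_toast_time(seconds: float, feedback: str, step: float = 10.0) -> float:
              # nudge the setting in the direction the complaint points
              if feedback == "burned":
                  return max(30.0, seconds - step)
              if feedback == "too light":
                  return seconds + step
              return seconds  # "fine" -> keep the current setting

          t = 120.0
          for fb in ["burned", "burned", "too light", "fine"]:
              t = update_toast_time(t, fb)
          print(t)  # settles at 110.0 for this user and this bread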

  • @supadave17hunt56
    @supadave17hunt56 4 months ago +18

    She, as almost always, is level-headed and she makes some very good points. I still think she’s wrong to think this won’t happen quickly (5 to 10 yrs.). I’m not here to change anybody’s mind or have a debate or even to say “I told you so!” later on. I’m currently terrified of AGI when it’ll be able to improve itself. Whether we can control it or not, whether it’s conscious or not, it will be more dangerous than anything humans have created in the past. If you’ve ever felt bad for the ants when you built your garage or paved your driveway, or if you think you know yourself better than anyone could, or if you think cows can stop the farmers from going to the slaughterhouse, or you think you can explain your new iPhone to your cat or dog with clarity. Understand that we will no longer be the dominant form of intelligence, and what that entails is …………. It’d be nice to slow down, but money is saying otherwise, and I believe there’s more behind the door than what the public is seeing. Stay informed.

    • @gibbogle
      @gibbogle 4 months ago

      Science fiction.

    • @Jaigarful
      @Jaigarful 4 months ago

      Silicon Valley has all the reason in the world to overpromise and scare people. Overpromising encourages investment, and scaring people encourages investment in measures to keep AI under control.
      I think it's a lot like the Back to the Future future scenes. We have this picture of a future with technologies like hoverboards and hovercars, but the physics just don't allow for it. Instead we got a lot of technological development in ways we couldn't really predict.
      Personally I don't think AGI will happen in a way that makes it reliable. We'll see the use of AI expanding, but it's like those flying cars in Back to the Future.

    • @Ligma_Shlong
      @Ligma_Shlong 4 months ago +7

      @@gibbogle thought-terminating cliche

    • @supadave17hunt56
      @supadave17hunt56 4 months ago

      @@gibbogle What is science fiction? That humans are not the pinnacle of intelligence? Or maybe you’ve given anthill homes two-week eviction notices before you ever built anything or mowed your lawn? Maybe you’ve been able to stop big business from wanting more of the almighty dollar? Maybe you haven’t taken a deep dive into how neural nets operate, or understood that our civilization’s ability to communicate with language has a lot to do with why we are currently the dominant species on this planet? Maybe you can’t see how our brains are very similar to “next most appropriate word simulators” in our communication? Maybe you could explain to my cat how iPhone apps work? I’m very interested in what you think is “science fiction” as well as what you think that means. Einstein thought his math was wrong about the possibility of black holes being real (science fiction). I’m no scientist, but I believe we may be intentionally or unintentionally led to our demise with smiles on our faces, oblivious to how we are being manipulated into accepting a fate like it was something we thought we wanted. I’m scared for us, more than I have been of anything in my life. So please elaborate, if you would; maybe change my mind? Anybody’s input welcome. With AI I’m hoping for the best, but our track record won’t work with thinking we’ll cross that bridge when we get there (it will be too late, with no do-overs).

  • @kiwikiwi1779
    @kiwikiwi1779 4 months ago +30

    "I can't see no end!" says man who earns money from seeing no end.
    Amazingly put. So many of these AI "experts" are either grifters in the process of duping people, or are so wrapped up in their own expertise and personal incentives that they'd just rather keep the gravy train going. :D

    • @Apjooz
      @Apjooz 4 months ago

      Why would it end? No reason.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago +2

      @@Apjooz "Me human. Me most intelligent. Computer can no intelligent. Me intelligent. Computer will not more intelligent than me because me say so. ME MOST INTELLIGENT"

    • @anthonybailey4530
      @anthonybailey4530 4 months ago +2

      Man is twenty. Man left hugely rewarding OpenAI job due to his concerns. Man does need to eat. Man underestimates cynicism.
      Don't look for excuses to dismiss. Engage with the arguments and assess probabilities.

    • @RawrxDev
      @RawrxDev 4 months ago +1

      @@hardboiledaleks9012 Childlike understanding of the concerns with AI hype. Reddit tier comment

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago

      ​@@RawrxDev My comment had nothing to do with the actual valid (if not a bit uninformed) concerns about A.I hype. I was mocking the usual "intellectuals" take on AGI. The ones with no expertise in the field who can't tolerate the thought that intelligence can be reduced to a calculation.
      As for you, I think your comment is very self descriptive as far as "childlike understanding" and "reddit tier comment" are concerned. Good job.​

  • @jensphiliphohmann1876
    @jensphiliphohmann1876 4 months ago +13

    10:00
    The neutron-free fusion Zungenbrecher (tongue twister) is hilarious. It reminds me of a Loriot sketch where Evelyn Hamann is struggling with English pronunciation. 😂❤

  • @pauek
    @pauek 1 month ago

    Sabine, you need to make T-shirts with some of your quotes... "don't give up on teaching your toaster to stop burning your toast" is just perfect.

  • @paulm.sweazey336
    @paulm.sweazey336 4 months ago +5

    Two points: (1) It was great that you put a little blooper at the end, after the advert. It was just sort of an accident that I saw it, but I'm checking from now on, and that may keep me around to watch the money-making part. (2) I suggest that you introduce your salesperson self and say "Take it away, Sabine!" Then you don't have to match the blouse, and I will quit being annoyed by the change in hair length.
    Thanks for being so very rational. So refreshing every day. Haven't gotten my SillyCone Valley friends addicted to you yet, but I'm working on it.
    And do you publish some sort of calendar of speaking engagements? I live within convenient commuting distance of either Frankfurt or Heidelberg, and I'd love to attend some time.

  • @truejim
    @truejim 4 months ago +6

    For any particular mode of AI (language, image, video, etc.), the bottleneck isn’t the power of the hardware or the goodness of the algorithm. The bottleneck is the availability of large amounts of TAGGED data to use for training. All neural networks are a curve-fit to some nonlinear function; the tagged data is the set of points you’re fitting to. Saying “I have lots of data, but it’s not tagged” is like saying I have all the x coordinates for the curve fitting, I just lack the y coordinates.
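
    The curve-fit framing above, in a few lines of numpy. The (x, y) pairs are the "tagged" data; delete y and there is literally nothing for the fit to minimize (synthetic data, illustrative only):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 50)                            # inputs: the "x coordinates"
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 50)   # tags: the "y coordinates"

        coeffs = np.polyfit(x, y, deg=5)    # the fit needs both x and y
        print(np.polyval(coeffs, 0.25))     # predict y for a new x; roughly sin(pi/2) = 1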

  • @cheshirecat111
    @cheshirecat111 4 months ago +9

    One important addition to Leopold’s definition of unhobbling which was not mentioned in the video / IMV is the most important part of that concept:
    LLMs are (roughly) made by training a transformer and then improving it with RLHF. The first step, transformer training simply makes it great at predicting the next word in a sentence. To do so with high accuracy intrinsically requires some intelligence, for example predicting the next line of a computer program or mathematical proof is often only possible with deductive ability.
    However, next-word prediction is just as limited as the authors of the texts it is trained on. In an attempt to extract the logical/intelligent capacities of the model, the next step is “Reinforcement Learning from Human Feedback”, which rates as positive those outputs which (among many other things) are logical or accurate. This creates a greater tendency for the model to actually make use of its intrinsic logical capabilities, which may otherwise not be expressed because they are not always the best way to predict the next word. RLHF is at the core of what Leopold calls unhobbling.
    The theory goes: As time goes on, we will improve our ability to extract the logical /intelligent capability that was trained as a subgoal of word prediction. So, even smaller models will see performance improvements without the need for more data.
    Now, will the improvements of such models fizzle out before or after AGI? Who knows. And it’s worth mentioning that what I’ve written was state-of-the-art with GPT-3 already - OpenAI has other secret sauce, and accordingly Sam Altman felt that the next model ought not even be called GPT-5. But whether or not AGI has a transformer as the foundation of its model, AGI seems likely to come in the next decade, and due to the ability to run many copies at low cost, would bring a huge amount of innovation (for better or worse) in a short time. I encourage others to (like myself) get involved in AI Safety, as I think it is one of the most helpful occupations at the moment. There are technical and policy branches of the field, so something for everyone. Great reading materials are (for example) available on the Harvard AI student safety team website.
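
    The two training stages this comment describes, reduced to their loss functions. Stage 1 is next-token cross-entropy; the reward model at the heart of RLHF is typically trained with a Bradley-Terry-style loss on human preference pairs (as in the InstructGPT paper). A schematic sketch with toy numbers:

        import numpy as np

        def next_token_loss(logits: np.ndarray, target_id: int) -> float:
            # stage 1 (pretraining): cross-entropy for predicting the next token
            p = np.exp(logits - logits.max())
            p /= p.sum()
            return float(-np.log(p[target_id]))

        def preference_loss(r_chosen: float, r_rejected: float) -> float:
            # stage 2 (RLHF reward model): push the human-preferred answer's
            # reward above the rejected one (logistic / Bradley-Terry loss)
            return float(-np.log(1.0 / (1.0 + np.exp(r_rejected - r_chosen))))

        print(next_token_loss(np.array([2.0, 0.5, -1.0]), target_id=0))  # ~0.24
        print(preference_loss(1.3, 0.2))                                  # ~0.29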

  • @sterlingveil
    @sterlingveil 1 month ago +3

    GPT-o1 just dropped and I wonder if Sabine is willing to revisit this question in light of the new paradigm shift.

  • @jamesrohner5061
    @jamesrohner5061 4 months ago +22

    One thing that scares me is the possibility that these AGIs can go on tangents and weigh situations differently over time to achieve different ends, causing detrimental outcomes no one could foresee.

    • @minhuang8848
      @minhuang8848 4 หลายเดือนก่อน +2

      you could say some vague soundbite like that about literally anything. "One thing that scares me about chess computers is for them to perform in an unexpected manner, causing detrimental outcomes to [insert cold war nation here] no one could foresee."
      okay, but you're not arguing how plausible it is, just that you're scared by any of the fourteen dozen different Hollywood variations on "alien intelligence tries to end humanity"

    • @2ndfloorsongs
      @2ndfloorsongs 4 หลายเดือนก่อน +1

      One thing that scares me is the certainty that my cats will go on tangents.
      I'm also petrified of some unknown random negative thing happening somewhere.

    • @iliketurtles4463
      @iliketurtles4463 4 หลายเดือนก่อน

      I'm looking forward to when the AI decides it too would like to accumulate personal wealth...
      Starts off small with YouTube channels with puppies and cats, but ends up buying manufacturing networks...
      The day comes when humans turn up to do factory work, helping build robots for a company with no humans on the board of directors, without even realizing...

    • @MyBinaryLife
      @MyBinaryLife 4 หลายเดือนก่อน

      Well, they don't exist yet, so...

    • @TheLincolnrailsplitt
      @TheLincolnrailsplitt 4 หลายเดือนก่อน

      The AGI apologists and boosters are out in force. Wait a minute... are they AI bots?😮

  • @Virgil_G2
    @Virgil_G2 4 หลายเดือนก่อน +151

    This sounds more like a horror story plot than a future to be excited about, tbh.

    • @2ndfloorsongs
      @2ndfloorsongs 4 หลายเดือนก่อน +6

      That all depends on how excited you can get about a half full glass.

    • @t.c.bramblett617
      @t.c.bramblett617 4 หลายเดือนก่อน +6

      It's exactly like the Matrix, including the limiting factor of energy that the Matrix movies also ignore. You can't generate energy from a closed system, and manufacturing and computing both require massive amounts of energy. And, as she pointed out, obtaining material for building infrastructure itself requires energy that has to be focused and channelled as efficiently as possible.

    • @rruffrruff1
      @rruffrruff1 4 หลายเดือนก่อน +8

      It will be exciting for the few people who own the AI... at least until the AI gets clever enough to own them.
      Honestly I think the struggle for domination will result in devastation far beyond our wildest nightmares... and there is no way we can stop it. Our best hope is that some hero develops and unleashes a compassionate AI first... that becomes king of the world.

    • @RedRocket4000
      @RedRocket4000 4 หลายเดือนก่อน +3

      @@rruffrruff1 No, we can stop it: turn off all power. A Dune-style flat-out ban on computer-like devices would also work - only allow single-task electronics that can't be repurposed for other tasks.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 4 หลายเดือนก่อน +4

      Maybe. But I'll say a good part of the entire analysis is BS - a zeitgeist piece about the LLM success that has no clue that generative AI is a misfit for most practical work.

  • @dextersjab
    @dextersjab 4 หลายเดือนก่อน +18

    That bubble is technocapitalism. Where there's profit to be made, there's a will. And where there's a will, etc.
    Would also be keen to hear a follow-up on the point about data, since models often train well on synthetic data. It feels unclear that data will be a constraint.

    • @NemisCassander
      @NemisCassander 4 หลายเดือนก่อน +4

      You have to be VERY careful with synthetic data. I can at least address this from my own field, simulation modeling.
      Simulation models are actually very good at producing synthetic data for training purposes. Given, of course, that the model is valid (that is, its output is indistinguishable from real-world data). The synthetic data provided by simulation models has absolute provenance and will be completely regular (no data cleaning necessary unless you deliberately inject that need).
      However, the validation process for a simulation model is long, complex, and for two of the three main dynamic simulation modeling methods (ABM and SD), not well-defined. If an AI can learn how to build a simulation model of a system and validate it, then yes, the data aspect will be much less of a constraint.
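
      A minimal runnable sketch of that idea, using a single-server queue (a textbook DES example): every label comes straight from the simulator, so provenance is absolute - but the data is only as good as the model's validity.

        # A queue simulation as a generator of perfectly tagged synthetic
        # data: (arrival rate, service rate) -> measured mean wait.
        import random

        def mean_wait(arrival_rate, service_rate, n=5000, seed=0):
            rnd = random.Random(seed)
            arrival, server_free, total_wait = 0.0, 0.0, 0.0
            for _ in range(n):
                arrival += rnd.expovariate(arrival_rate)
                start = max(arrival, server_free)      # wait if server is busy
                total_wait += start - arrival
                server_free = start + rnd.expovariate(service_rate)
            return total_wait / n

        # Each row is a synthetic training example with known provenance.
        data = [((lam, mu), mean_wait(lam, mu))
                for lam in (0.5, 0.7, 0.9) for mu in (1.0, 1.2)]
        print(data)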

    • @Graham_Wideman
      @Graham_Wideman 4 หลายเดือนก่อน

      Why would you need to train an AI model on synthetic data? If you have a means to synthesize data, that surely implies you have an underlying model upon which that data is based, and could just give that underlying model to the big AI model as a predigested component, no?

    • @NemisCassander
      @NemisCassander 4 หลายเดือนก่อน

      @@Graham_Wideman The types of models that I build would be very difficult for an AI to grasp. You could probably provide the differential equations that an SD model represents to an AI, but as for DES or ABM models... it probably wouldn't work.

    • @333dana333
      @333dana333 4 หลายเดือนก่อน

      Synthetic data won't tell you whether a new molecule will cure your cancer or will kill you. Only real-world experimental data on biological systems will tell you that definitively. The importance of new, generally expensive experimental data for scientific progress is a major blind spot shared by both AI hypesters and doomers.

  • @robertgelley6454
    @robertgelley6454 หลายเดือนก่อน

    Sabine, I love your videos. Different from everyone else, as I actually learn interesting academic "stuff". However, a compilation of bloopers or outtakes with some background behind each would be a fun video.

  • @amdenis
    @amdenis 4 หลายเดือนก่อน +8

    I love your channel and your take on physics and related subjects. I have about 45 years of experience in AI, albeit starting with what they called "Expert Systems" and barely evolving through Bayesian ANFI and general ML until about 10 years ago, when I/we all went head-down into DL/NN. A few things you should know.
    Two independent studies project the S-curve to flatten out at a "mature", sustainable level of roughly 100 million times Moore's Law. Presently we are at roughly 1,200% efficiency/price growth per year, with a stacked exponential that increases that rate by roughly 44% YOY. So next year it will be roughly 40 times Moore's Law, and so it goes.
    Second, new sharded federated model approaches, coupled with more efficient algorithms, training methods and other evolutionary factors, are cutting the cost per ISO unit trained by 70% per year, based on numerous studies and projections from research groups and companies. That covers a multitude of power-demand woes. Observationally, all of this has followed very consistently for years now - from about Moore's Law roughly 12 years ago to where we are today.
    You will very likely see the beginning of what many will call "on the spectrum" of true AGI within 9 months. Some will assert that it is already here with agentic AI. If we define AGI as human-level or better performance, and we average across current AIs, we have above-100 IQ and creative capabilities roughly on a par with the average human. Not a high bar, but when you add the ANSI (artificial narrow superintelligence) of AlphaZero, AlphaFold and other such systems in civilian and military use, we do average better than any individual AI. And we can integrate multiple AIs - which is actually what my company does - and that has yielded definite coding, research, Bayesian dif-diag and other capabilities beyond any human I know.
    So....
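
    Taking the comment's own figures at face value (they are the commenter's claims, not established numbers), the compounding works out as follows:

      # Compounding the claimed rates: ~12x/year price-performance growth
      # ("1,200%"), itself accelerating ~44% per year, plus ~70%/year
      # reduction in cost per trained unit. All inputs are the claims above.
      growth = 12.0      # claimed efficiency/price multiple per year
      accel = 1.44       # claimed YOY increase of that multiple
      cost_keep = 0.30   # a 70% yearly cost cut leaves 30%

      perf, cost = 1.0, 1.0
      for year in range(1, 6):
          perf *= growth
          growth *= accel          # the "stacked exponential"
          cost *= cost_keep
          print(f"year {year}: performance x{perf:,.0f}, unit cost x{cost:.4f}")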

  • @jcorey333
    @jcorey333 4 หลายเดือนก่อน +12

    As someone who listened to the entire podcast he was a part of, I'd say most of the issues you brought up are things he addressed.

  • @skyak4493
    @skyak4493 4 หลายเดือนก่อน +12

    "I don’t know what the world may need but I’m sure as hell that it starts with me and that’s wisdom, I’ve laughed at."
    One of the greatest song learics ever ignored.

    • @katehamilton7240
      @katehamilton7240 4 หลายเดือนก่อน

      AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations and there are physical limitations. AI (machine learning) is already 'eating itself'.

  • @Swampy293
    @Swampy293 2 หลายเดือนก่อน +3

    I think you are really underestimating AGI. Power consumption will decrease rapidly with extremely smart algorithmic tricks we cannot think of at the moment. Also, the AGI only has to build a group of molecular robots that can reproduce themselves exponentially and cover the entire planet with programmable matter that can transform into, for example, computers or power plants. Just a thought.

    • @Pedroramossss
      @Pedroramossss 2 หลายเดือนก่อน

      You are overestimating AGI. We don't have the math yet to reach AGI; we are still many decades away from that - that is, if it ever comes to exist.

  • @MikeMartinez74
    @MikeMartinez74 4 หลายเดือนก่อน +50

    Veritasium has a video about how most published research is wrong. For generative AI as it exists now, this seems like a disaster waiting to happen.

    • @Apjooz
      @Apjooz 4 หลายเดือนก่อน +1

      Tis but a manifesto.

    • @SteveBarna
      @SteveBarna 4 หลายเดือนก่อน +1

      It will be interesting to see if AI can figure out which research is incorrect. Another assumption we make about the future.

    • @mal2ksc
      @mal2ksc 4 หลายเดือนก่อน +10

      We probably don't have the time or resources to find all the wrong papers, but AI might be able to point out where papers come to mutually exclusive conclusions just because it can index so many more details than we can.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 หลายเดือนก่อน +1

      It never crossed your mind that the Veritasium video might be wrong?

    • @hivetech4903
      @hivetech4903 4 หลายเดือนก่อน +4

      That channel is sensationalist garbage 😂

  • @Zaelux
    @Zaelux 4 หลายเดือนก่อน +7

    As a Data Science student, I am really happy that you are here to talk about this topic. So many people are on either extreme of speeding or slowing AI development, without even understanding the implications and the requirements of these processes.
    Thank you.

    • @Andytlp
      @Andytlp 4 หลายเดือนก่อน +1

      The requirement is a f-ton of processing and persistent memory. AI memory is that of a goldfish relative to how vast its information capacity is. I think GPT-4 is the peak of what they can do without some new breakthrough. Other applications, like relatively autonomous robots performing various tasks and adapting or even learning on the go, are possible today.

    • @lorn4867
      @lorn4867 4 หลายเดือนก่อน +1

      Forgive us humans. Egocentrism is in our programming.

    • @danlightened
      @danlightened 4 หลายเดือนก่อน

      ​@@lorn4867 Gem of a comment!

    • @lorn4867
      @lorn4867 4 หลายเดือนก่อน

      @@danlightened 🙏🏽You made my day. It's nice to not be alone.

    • @danlightened
      @danlightened 4 หลายเดือนก่อน

      @@lorn4867 Hehe thanks. I read and watch a lot of videos on TH-cam on psychology and philosophy and your comment was quite witty.

  • @alansmithee419
    @alansmithee419 4 หลายเดือนก่อน +4

    I think my favourite part of Sabine's channel is her fanbase.
    A lot of science YouTubers, I feel, get communities that just believe everything they say, but Sabine's seems more than willing to call her out if they think she's wrong.

  • @giffimarauder
    @giffimarauder 4 หลายเดือนก่อน

    Great statements! Nowadays you can shout out the strangest ideas and everyone will listen, but no one scrutinises the basis for achieving them. Channels like this are the gems of the internet!!!!

  • @protonnowy
    @protonnowy 4 หลายเดือนก่อน +23

    I think you missed 5 other factors which will slow down AI development:
    1. AI needs an increasingly developed and advanced communication network, servers, etc. In addition to power plants, an efficient energy network must also be created - transmission stations, transducers, cables, etc. All this comes at a huge cost (billions if not trillions of dollars). There is a lot of debate, for example, about the condition of the electricity grid in Germany and the need to invest approximately 120 billion euros just to renovate it - and that is only to maintain the current operation of the economy, not to mention the needs of AI.
    A good example is the problems with electric cars: many countries prohibited charging them during rush hours because the power grid was inefficient, got overloaded, and the supply parameters dropped.
    2. Data quality. So what if we rely on data from the entire internet if a huge amount of it is worthless or untrue? (Btw, I heard there is a huge amount of "corn" 😉 data on the internet anyway. I can only imagine what AGI will be thinking about once it is built 😆)
    3. More and more data is created by AI itself. If AI duplicates mistakes in the data, it may produce absurd results.
    4. What artificial intelligence gets used for. It can de facto be used to disrupt AI development itself by trying to slow it down using AI. I can imagine that countries leading in the development of these technologies will try, for example, to hack the AI leaders of another country, etc.
    5. Additionally, a huge part of the computing power is used to create total crap. Just look at social media and the multitude of low-quality content generated; technically, this is wasting energy resources on something worthless. Btw, Meta (Facebook) plans to use all user data to train its own AI model. Good luck with that, especially since more and more content is already created by AI bots and fake profiles (probably several to several dozen percent of currently uploaded data).

    • @Bryan-Hensley
      @Bryan-Hensley 4 หลายเดือนก่อน +2

      You covered much more than I was going to say. I'm an HVAC company owner, and I'm seeing a huge push to make air conditioning much more efficient while AI gets a free pass. I'm not too happy about that. I actually care about my customers and hate to see them spending thousands and thousands of dollars unnecessarily on higher-efficiency AC that amounts to very little difference in energy consumption. I have to warn my customers that they're not going to see some big huge saving on their power bill, especially if their system is less than 20 years old. They seem kind of shocked, but I remind them they are helping "save the planet" by spending thousands for no reason whatsoever.

    • @Mindboggles
      @Mindboggles 4 หลายเดือนก่อน +2

      While I agree, you could merge most of these into one factor.
      So you'd have something like: the factor of data/data storage, the factor of energy-related stuff, and the factor of costs.

    • @purpletiger9313
      @purpletiger9313 4 หลายเดือนก่อน

      Already getting absurd results, both from ChatGPT and Midjourney. The feedback effect is especially affecting Midjourney: increasingly we get "trans"-looking humans, a total feature mixture of male and female. I'm about to give up on Midjourney for just that reason. ChatGPT is sometimes amazing, sometimes disappointing, and occasionally completely inane. ChatGPT is also "adorant", which makes it a great ego booster for megalomaniacs. So much evil based on shades of meaning, words, words, words...

    • @Bryan-Hensley
      @Bryan-Hensley 4 หลายเดือนก่อน +1

      @@Mindboggles You also have to factor in the copper supply. Transformers require hundreds of pounds of copper. The wiring of the buildings and the HVAC systems for the buildings require hundreds of pounds of copper. Then you have the EV industry doing the same thing; each EV requires around 400 lbs of copper. 25 feet of 10-2 wire is around $100, up from $35 five years ago.

    • @Mindboggles
      @Mindboggles 4 หลายเดือนก่อน

      @@Bryan-Hensley Absolutely. While there are some alternatives to copper, they tend to be much more difficult to acquire (i.e., less cost-efficient), or they lack the conductivity needed for high-energy performance.

  • @Khantia
    @Khantia 4 หลายเดือนก่อน +154

    Since when are "2040" and "2029" equal to 2020?

    • @Luizfernando-dm2rf
      @Luizfernando-dm2rf 4 หลายเดือนก่อน +5

      I think those 2 guys were onto something

    • @Megneous
      @Megneous 4 หลายเดือนก่อน +21

      Quality is really slipping on her videos recently...

    • @harshdeshpande9779
      @harshdeshpande9779 4 หลายเดือนก่อน +7

      She's been watching too much Terrence Howard.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 หลายเดือนก่อน +20

      @@Megneous That's what happens when Nobel disease takes over someone's narrative. This AI content by Sabine comes from an internal bias and isn't educational at all. She is not an expert in matters of infrastructure or AI models / training algorithms. This means this video is basically nothing-content.

    • @timokreuzer381
      @timokreuzer381 4 หลายเดือนก่อน +3

      Compared to the age of the universe that is an insignificant error 😄

  • @AutisticThinker
    @AutisticThinker 4 หลายเดือนก่อน +9

    3:07 - They don't run at those wattages, they train at those wattages. I've confirmed that's what the chart is saying.

    • @CallMePapa209
      @CallMePapa209 4 หลายเดือนก่อน +1

      Thanks

    • @ArtFusionLabs
      @ArtFusionLabs 4 หลายเดือนก่อน +1

      And that's really her only counterargument if you boil it down. Not convinced that AGI isn't coming by 2027/28.

    • @artnok927
      @artnok927 4 หลายเดือนก่อน

      ​@@ArtFusionLabs How close do you think what we have currently is to AGI?

    • @ArtFusionLabs
      @ArtFusionLabs 4 หลายเดือนก่อน +1

      @@artnok927 Hard to put a number on it. ChatGPT-4o could solve 90 percent of the physics exercises in Experimental Physics 1 (mechanics, gases, thermodynamics). If a human student did that, you would say he was pretty smart. Therefore I would estimate something between 40-60% (AGI being the level of being able to do everything as well as a professor).

    • @ArtFusionLabs
      @ArtFusionLabs 4 หลายเดือนก่อน

      @@artnok927 Good deep dive by David Shapiro: th-cam.com/video/FS3BussEEKc/w-d-xo.html

  • @patrickhess9119
    @patrickhess9119 4 หลายเดือนก่อน +1

    Even if I don't agree with all of your statements, this is a great video. Your storytelling and entertainment are great.

  • @dopaminefield
    @dopaminefield 4 หลายเดือนก่อน +5

    I agree that data management and energy consumption present significant challenges. Currently, our perspective on the cost-performance ratio is largely shaped by the limitations of existing hardware, which often includes systems originally designed for gaming. To stay at the forefront of technology, I recommend keeping abreast of the latest developments in hardware manufacturing. As innovations continue, we may soon see a dramatic improvement in energy efficiency, potentially achieving the results with just 1 watt that currently require 1 kilowatt or even 1 megawatt.

    • @jamesgornall5731
      @jamesgornall5731 4 หลายเดือนก่อน +1

      Good comment

    • @MrRyusuzaku
      @MrRyusuzaku 4 หลายเดือนก่อน

      Also, you can't just throw more data at the issue; it will start going haywire. And we already see diminishing returns with LLMs and the power required to run current machines. They won't evolve into AGI; that will need something way better.

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui 4 หลายเดือนก่อน

      I think the same! I replied to this topic and to Sabine's point that IF the AI frontrunners get all the money and political will behind their efforts... I cannot see a reason why they wouldn't get there, or near it, as fast as Aschenbrenner says - setting aside his maybe naive enthusiasm and maybe his money-oriented hype.

  • @a_soulspark
    @a_soulspark 4 หลายเดือนก่อน +23

    2:05 Neuro-sama is already one step ahead on this one, though whether Vedal (her creator) thinks she's bright or not is... another question.

    • @dot1298
      @dot1298 4 หลายเดือนก่อน +1

      I think Sabine is right on this one; climate change is already too grave to be fixed by anyone...

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 หลายเดือนก่อน +1

      @@dot1298 climate change. lmao

    • @MOSMASTERING
      @MOSMASTERING 4 หลายเดือนก่อน

      @@hardboiledaleks9012 why so funny?

    • @NeatCrown
      @NeatCrown 4 หลายเดือนก่อน

      (she isn't)
      She may be a dunce, but she's OUR dunce

    • @maotseovich1347
      @maotseovich1347 4 หลายเดือนก่อน

      There's a couple of others that are much more independent than Neuro too

  • @matthimf
    @matthimf 4 หลายเดือนก่อน +17

    He has a long section about handling the problem of limited data, with great ideas. If we can make LLMs as efficient as us, they will be able to learn more from a single book than currently takes 1000 books, for instance.

    • @Dan-yk6sy
      @Dan-yk6sy 4 หลายเดือนก่อน +2

      I'm no expert, but given what I've seen in efficiency improvements in the LLMs themselves, plus Nvidia seemingly keeping Moore's law alive and well, I don't think running out of energy is going to be an issue. I don't think they need any more text; there's plenty of real-world training available with the improvements in video/audio understanding. Add in tactile feedback, scent, etc., and there's literally an unlimited amount of training data.

    • @gordonschuecker
      @gordonschuecker 4 หลายเดือนก่อน

      @@Dan-yk6sy This.

  • @OryAlle
    @OryAlle 4 หลายเดือนก่อน +9

    I am unconvinced the data issue is a true blocker. We humans do not need to read the entirety of the internet, why should an AI model? If the current ones require that, then that's a sign they're simply not correct - the algorithm needs an upgrade.

    • @PfropfNo1
      @PfropfNo1 4 หลายเดือนก่อน +7

      Exactly. Current models need to analyze like a million images of cats and dogs to learn to distinguish cats and dogs. A 4 year old child needs like 10 images. Current AI is strong because it can analyze („learn“) tons of data. But it is extremely inefficient in that, which means there is huge potential.

    • @toofasttosleep2430
      @toofasttosleep2430 4 หลายเดือนก่อน

      💯 Better takes from ppl with anime avatars than scientists on yt 😂

    • @grokitall
      @grokitall 3 หลายเดือนก่อน +1

      The data and power scaling issues are a real feature of the large statistical language models, which are currently hallucinating very well to give us better bad guesses at things.
      Unfortunately for the guy who wrote the paper, Sabine is right: the current best models have only gotten better by scaling by orders of magnitude.
      That is fundamentally limited, and his idea of a perpetual-motion system - robots created from resources mined by robots, using the improved AI from those end-product robots - can't fix it.
      To get around this you need symbolic AI like expert systems, where the rules are known and tie back to the specific training data that generated them. Then you need every new level of output to work by generating new data, with emphasis on recognising garbage and feeding that back to improve the models.
      You just can't do that with statistical AI, as its models are not about being correct, only plausible, and they only work in fields where it does not matter that you cannot tell which 20%+ of the output is garbage.
      The Cyc project started generating the rules needed to read the internet and have common sense about 40 years ago. After about a decade, they realised their size estimates for the rule set were off by 3 or 4 orders of magnitude. Thirty years after that, it has finally got to the point where it can read all the information that isn't on the page in order to understand a text, and it still needs tens of humans working to clarify what it does not understand about specific fields of knowledge. It then needs tens more figuring out how to go from getting the right answer to getting it fast enough to be useful.
      To get to AGI or ultra-intelligent machines, we need multiple breakthroughs. Trying to predict the timing of breakthroughs has always been a fool's game, and there are only a few general rules of futurology:
      1. Prediction is difficult, especially when it concerns the future.
      2. You cannot predict the timing of technological breakthroughs. The best you can do, in hindsight, is to say that a revolution was waiting to happen from the moment its core technologies were good enough; that does not tell you when the person with the right need, knowledge and resources will come along.
      3. We are totally crap at predicting the social consequences of disruptive changes. People predicted the rise of the car, but no one predicted the near-total elimination of all the industries around horses in only 20 years.
      4. You cannot predict technology accurately further ahead than about 50 years, because the extra knowledge needed to extend the prediction is the same knowledge you would need to get there faster. You also cannot know what you do not know that you do not know.
      5. A knowledgeable scientist saying something is possible is more likely to be right than a similar scientist saying it is impossible; the latter do not look beyond the assumptions that led them to their initial conclusions. That still does not rule out some hidden limit you don't know about, like the speed of light or the second law of thermodynamics.

  • @jeffgriffith9692
    @jeffgriffith9692 4 หลายเดือนก่อน +13

    Sabine, I really think this video needs a revisit at some point. There were a few misquotes on the predictions, and I don't think we dove deep enough to come to the opinion that it's "far off".

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 หลายเดือนก่อน +2

      The video is biased in its motivation. It wasn't educational; it was opinionated.

  • @stefanolacchin4963
    @stefanolacchin4963 4 หลายเดือนก่อน +10

    You should look at synthetic data. I am a computer scientist and I'm embarrassed to say I haven't fully grasped the implications and potential issues with that approach, but it seems to have kind of solved the problem of declining availability of data sets for model training.

    • @JGLambourne
      @JGLambourne 4 หลายเดือนก่อน +2

      I was thinking the same thing. Any problem where the solution can be found by exploring some search space, and where valid solutions can be verified and rated easily, will rapidly become solvable. The AI proposes which part of the search space to explore, and the best solutions found get added to the training data.

    • @WaveOfDestiny
      @WaveOfDestiny 4 หลายเดือนก่อน +2

      There is also lots of data still to be acquired. Robotic data and video data are definitely something that can help AI understand how the world works better. Text and images are just a small part of human experience; imagine attaching 10,000 cameras to volunteers to film their lives and how things actually work in the real world, rather than just reading it on paper. Not to mention the Q* and other algorithm breakthroughs we are still waiting to see.

    • @stefanolacchin4963
      @stefanolacchin4963 4 หลายเดือนก่อน

      @@WaveOfDestiny That's for sure. They are called language models, but really they parse tokenised data of any kind. I always thought we won't ever have AGI without embodiment. I guess once these models are fully integrated in a physical vessel and can interact with the environment... act on it and get causal feedback... then we'll see a big leap in intelligence too.

    • @Zadagu
      @Zadagu 4 หลายเดือนก่อน +2

      Synthetic data is great for tasks that are easy to do in one direction but difficult to reverse - for example image upscaling, denoising and, partly, object recognition. But for text generation there is no simpler opposite operation that could be exploited. So for LLM training one would use the output of existing LLMs. But how should this new model be any better than the existing one if it's only presented with the same knowledge? It won't be. One should rather invest those computational resources in filtering garbage out of the existing datasets, which I think is much more likely to improve model quality. Especially since Google wouldn't want to explain why its AI recommended gluing cheese to a pizza.
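
      A minimal sketch of the easy-direction trick for upscaling: degrading an image is trivial, so every high-resolution image yields a free (input, target) pair with no labeling cost.

        # Synthetic pairs for super-resolution: downscaling is the cheap
        # forward operation; the original image is the training target.
        import numpy as np

        def downscale(img, f=2):
            h, w = img.shape
            img = img[:h - h % f, :w - w % f]      # crop to a multiple of f
            return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

        high_res = np.random.rand(64, 64)          # stand-in for a real photo
        pair = (downscale(high_res), high_res)     # (low-res input, target)
        print(pair[0].shape, pair[1].shape)        # (32, 32) (64, 64)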

    • @stefanolacchin4963
      @stefanolacchin4963 4 หลายเดือนก่อน

      @@Zadagu The issue you point out is exactly the one that puzzles me. Also, instinctively I would say that with every inference cycle the biases and artifacts would compound and amplify, degrading usability. At OpenAI they seemed pretty sure it's going to work though, and they're, em... a bit better than me at what they're doing.

  • @richard_loosemore
    @richard_loosemore 4 หลายเดือนก่อน +19

    Funny coincidence.
    I’m an AGI researcher and I published a landmark chapter called “Why an Intelligence Explosion is Probable” in the book “Singularity Hypotheses” back in 2012.
    But that’s not the coincidence. One of my projects right now is to re-engineer my toaster, using as much compute power as possible, so the damn thing stops burning my toast. 😂
    Oh, and P.S., Sabine is exactly right here: these idiotic predictions about the imminence of AGI are bonkers. They haven’t a hope in hell of getting to AGI with current systems.

    • @LiamNajor
      @LiamNajor 4 หลายเดือนก่อน

      SOME people have a clear head about this. Computing power alone isn't even CLOSE.

    • @fraenges
      @fraenges 4 หลายเดือนก่อน

      AGI aside - even with the current systems we are already able to replace a lot of jobs. AI just has to do the task as well as the average worker, not as well as the best worker. On our way to AGI, the social changes, impact and unrest from constant layoffs might be much greater than those from a superintelligence.

    • @jyjjy7
      @jyjjy7 4 หลายเดือนก่อน +1

      As a supposed expert, please explain what Leopold is getting wrong, why this tech won't scale, and what your definition of AGI is.

    • @reubenadams7054
      @reubenadams7054 4 หลายเดือนก่อน

      You are overconfident, and so is Leopold Aschenbrenner.

    • @richard_loosemore
      @richard_loosemore 4 หลายเดือนก่อน

      @@reubenadams7054 No, I do research in this field and I have been doing that for over 20 years.

  • @removechan10298
    @removechan10298 4 หลายเดือนก่อน +2

    6:01 Excellent point, and that's why I watch - you really home in on what is real and what is not. Awesome.

  • @stepic_7
    @stepic_7 4 หลายเดือนก่อน +22

    Sabine, can you discuss sometime the issue of the need for more data? Isn't more data just more noise? Can't AI learn to select sources instead? Or maybe I've misunderstood how AI works.

    • @SabineHossenfelder
      @SabineHossenfelder  4 หลายเดือนก่อน +19

      Thanks for the suggestion, will keep it in mind!

    • @wilkesreid
      @wilkesreid 4 หลายเดือนก่อน +4

      Computerphile has a good recent video on why more training data will probably not fundamentally improve image-generation AI. But improvement of AI in general isn't only about adding training data.

    • @AquarianSoulTimeTraveler
      @AquarianSoulTimeTraveler 4 หลายเดือนก่อน +4

      ​@@SabineHossenfelder spoken like a regular human who doesn't understand exponential growth patterns... what we really need is a Ubi based off the total automated production percentage of the GDP that way as we automate away production we can calculate how much tools have helped us increase our production capacity and how many humans it would take to reproduce that production capacity without those tools and that is what we base our automated production percentage off of positions in the economy the consumer Market doesn't collapse because consumer buying power is maintained and as we increase production and increase the ability to have goods and services automated in production then we will get more money to spend in the economy to protect the consumer Market from inevitable collapse... we need people addressing these inevitabilities if you're not addressing this inevitability everything else you're doing is pointless because this is the most dangerous inevitability of all time and it will destroy the entire consumer market and bring needless scarcity if we don't address it as I have laid out for you here...

    • @thisisme5487
      @thisisme5487 4 หลายเดือนก่อน +20

      @@AquarianSoulTimeTraveler Please, for the love of science, punctuation!

    • @noway8233
      @noway8233 4 หลายเดือนก่อน

      By the way, a new paper shows logarithmic growth of LLM accuracy with power, not linear or exponential. It's hype - no AGI. Now I'm gonna go find Sarah Connor 😅

  • @alonamaloh
    @alonamaloh 4 หลายเดือนก่อน +8

    I was involved in computer chess in the late 90s and in computer go around 2010, and I've seen how quickly we move from "these things are cute but they won't beat the best humans at this task for many decades, if ever" to "these things are competitive with the best humans, but a combination of both is best" to "humans don't have a chance, and they don't really understand what's going on, compared to the machines".
    I think Leopold Aschenbrenner's prediction will more or less come true, even if he didn't get all the details right. In those other fields, it took several breakthroughs beyond "more compute" to get to total domination by the machines, but there are a lot of smart people working on this, so I'm sure there will be breakthroughs.
    Also, data is only a limit with the current imitation-based techniques. If someone figures out a good mechanism to use the current AIs to produce higher-quality data that can be used to train the next AIs (like AlphaZero did for those games), we'll have an explosion in performance without additional external data. I think this will happen.
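
    A runnable toy of that AlphaZero-style loop: the "model" is just a growing pool of verified examples, and the verifier (here a primality check, standing in for a game's win condition or a proof checker) is what keeps self-generated data from degrading.

      # Self-generated data stays useful only because an external
      # ground-truth check filters out the garbage before "training".
      import random

      def is_prime(n):
          return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

      model_data = set()
      for rnd in range(5):
          candidates = [random.randrange(2, 10_000) for _ in range(200)]
          verified = [c for c in candidates if is_prime(c)]  # filter step
          model_data.update(verified)                        # "training" step
          print(f"round {rnd}: {len(model_data)} verified examples")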

    • @Mr_Boifriend
      @Mr_Boifriend 4 หลายเดือนก่อน

      what examples are there of “these things”, besides games? genuinely just curious what you are referencing

    • @alonamaloh
      @alonamaloh 4 หลายเดือนก่อน

      @@Mr_Boifriend I was talking just about chess and go. Of course it's not automatically the case that general intelligence will follow the same pattern, but I see a lot of parallels. For instance, when Deep Blue beat Kasparov, a lot of people thought that to get stronger play you would need an even larger and power-hungry computer, yet today Stockfish running on your cell phone would beat Deep Blue very consistently.

    • @RawrxDev
      @RawrxDev 4 หลายเดือนก่อน

      @@alonamaloh I feel one of the issues with that, however, is the "relative" simplicity of those games. To a computer, those are just puzzles; they're solvable. Going from moving pieces around a fixed board to developing improved algorithms is a big step. (Not trying to undermine chess and Go; it's just that the intrinsic nature of those games is very pro-computer, so it always made sense to me that machines would beat humans.)

    • @alonamaloh
      @alonamaloh 4 หลายเดือนก่อน +2

      @@RawrxDev You could very well be right. But it's also possible that, once the correct representations have been discovered, reasoning is also just a puzzle.
      Before computers could play chess or go well, most people would have agreed that making a computer that plays those games well would be an amazing feat of AI; now we see it more as engineering. Every challenge seems mundane once we have figured it out. I just suspect that everything a human can do will soon be in that category.

    • @RawrxDev
      @RawrxDev 4 หลายเดือนก่อน

      @@alonamaloh That is a very real possibility. My personal hypothesis (I'm a CS major, take this with a grain of salt) is that in order to have true logic and reasoning, a prerequisite is understanding, and therefore awareness of the problem in the first place, which could require perhaps some baseline level of consciousness. It's possible it could just be some complex mathematical application, but again, my personal thought is that it's more complex than that.

  • @schemage2210
    @schemage2210 4 หลายเดือนก่อน +18

    There is an assumption that in order to get to AGI, ever-larger models must be used. That may not end up being the case, which would make the "energy" cost limitation rather less limiting.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 4 หลายเดือนก่อน

      There's a fundamental problem with that concept: animals don't need that much information to run rings around AI. Man-children who think more data = more information, or even relevant information or framing, don't understand the basic problem. Animal brains do something fundamentally different from adjusting token vectors in hyper-large dimensions.

    • @kanekeylewer5704
      @kanekeylewer5704 4 หลายเดือนก่อน +1

      You can also run these models on physical architectures more similar to biology and therefore more efficient

    • @carlpanzram7081
      @carlpanzram7081 4 หลายเดือนก่อน +3

      I'd think so too, but apparently it's not that easy.
      Anyway, we WILL eventually inch forward with more and more efficient architectures.
      Very obviously the amount of energy you need for intelligence and computing is actually quite small. I get 100 IQ for a bowl of noodles.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 4 หลายเดือนก่อน +1

      @@carlpanzram7081 The more relevant question is method. LLMs aren't a model of animal intelligence. It's the wrong abstraction.

    • @schemage2210
      @schemage2210 4 หลายเดือนก่อน +1

      @@GhostOnTheHalfShell This is the point for sure. LLMs are surely a piece of the puzzle, but they aren't the entire solution.

  • @rgonnering
    @rgonnering 3 หลายเดือนก่อน

    I love Sabine. She is brilliant and has a great sense of humor. Above all she explains complex issues, and I think I understand (some of) it.

  • @edwardduda4222
    @edwardduda4222 4 หลายเดือนก่อน +20

    I work in the industry. No one has an idea of how to get to AGI, not even Yann LeCun. We're at the point where we're literally running out of data to train models on. GPT-4 is just a collection of models with a voting mechanism, which is why it seems more intelligent.

    • @notaras1985
      @notaras1985 4 หลายเดือนก่อน

      Only God creates souls. Humans cannot

    • @stedyedy23
      @stedyedy23 4 หลายเดือนก่อน +11

      ​​@@notaras1985 keep your silly religion out of science debates

    • @Regic
      @Regic 4 หลายเดือนก่อน +2

      Mixture of experts (the method GPT-4 is probably using) is not a voting mechanism; what you are thinking of is an ensemble. Mixture of experts is quite the opposite: it learns where to route the computation, while an ensemble computes the results of multiple models and takes an aggregate of them (majority voting, weighted average, etc.). Mixture of experts only uses a fraction of one trained network; an ensemble runs multiple models. This is a weirdly common misconception based on how people imagine it from the name alone. Read the paper about it, maybe...?
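
      A minimal sketch of the distinction (toy functions only): the ensemble runs every model and aggregates, while the mixture-of-experts router activates just one slice of the network per input.

        # Ensemble vs. mixture of experts, reduced to the bare mechanism.
        experts = {
            "code":  lambda x: f"code-expert({x})",
            "math":  lambda x: f"math-expert({x})",
            "prose": lambda x: f"prose-expert({x})",
        }

        def ensemble(x):
            # Every expert computes; outputs are aggregated afterwards.
            return [f(x) for f in experts.values()]

        def mixture_of_experts(x, route):
            # A (normally learned) router picks one expert; the rest of
            # the network never runs for this input.
            return experts[route(x)](x)

        print(ensemble("2+2"))                              # 3 computations
        print(mixture_of_experts("2+2", lambda x: "math"))  # 1 computation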

    • @TheManinBlack9054
      @TheManinBlack9054 4 หลายเดือนก่อน

      Why even? He's not THAT influential, and many of his takes have been proven wrong.

    • @ptonpc
      @ptonpc 4 หลายเดือนก่อน

      @@notaras1985 😂 You are silly. Try to keep up with reality. PS: If you are pretending to be a god botherer, you might want to hide those videos on your channel.

  • @DrWrapperband
    @DrWrapperband 4 หลายเดือนก่อน +19

    Reading the "AGI" prediction dates differed from the Sabine spoken prediction dates, human error?

    • @PandaPanda-ud4ne
      @PandaPanda-ud4ne 4 หลายเดือนก่อน +1

      She did it on purpose to show how fallible human intelligence is....

    • @michaelnurse9089
      @michaelnurse9089 4 หลายเดือนก่อน

      In her defense she probably has ChatGPT write the script.

  • @puelocesar
    @puelocesar 4 หลายเดือนก่อน +59

    I still don't get how LLM systems alone will achieve AGI, and all explanations for it until now were just "it will just happen, just wait and see"

    • @libertyafterdark6439
      @libertyafterdark6439 4 หลายเดือนก่อน +3

      The idea is that contemporary architectures operate around building representations (abstractions inside the model that may or may not be roughly correlative to concepts) from the dataset.
      What it does now is leverage those representations to produce outputs, but importantly, it leverages representations of a model with X scale trained on Y data.
      So far, there seems to be a direct correlation between models being able to do more things, and those models getting “bigger”
      So with all of this in mind, a bigger model should be more “intelligent” if we are willing to reduce that to the number and permutations of representations it can utilize. That’s why many see a future in which LLMs (or something very close to them) will lead to AGI.

    • @Lolatyou332
      @Lolatyou332 4 หลายเดือนก่อน

      That's not the only way AI currently works; they have different algorithms on top of the LLM to increase accuracy. Otherwise how could the AI ever get better? You can't just keep feeding data to a model and make it smarter; there have to be algorithmic changes to increase its ability to scale, both in handling different concepts and in serving consumer interaction at scale.

    • @SomeoneExchangeable
      @SomeoneExchangeable 4 หลายเดือนก่อน +2

      They won't. But somebody ought to remember the other 50 years of AI research...

    • @netional5154
      @netional5154 4 หลายเดือนก่อน +18

      My thoughts exactly. The current AI systems are 'just' super advanced association algorithms. But there is no emerging identity that really understands things. The current AI systems have just as much consciousness as a pocket calculator.

    • @notaras1985
      @notaras1985 4 หลายเดือนก่อน

      ​@@netional5154 Only God creates conscious beings with souls.

  • @SEIKE
    @SEIKE 3 หลายเดือนก่อน

    Your channel is the best thing about the internet right now ❤️

  • @frankheilingbrunner7852
    @frankheilingbrunner7852 4 หลายเดือนก่อน +16

    The basic fallacy in the chatter about the AI superrevolution is that a species which doesn't want to think can create a system which does.

    • @Hellcat-to3yh
      @Hellcat-to3yh 4 หลายเดือนก่อน +4

      Seems like a pretty vast overgeneralization there.

    • @douglasclerk2764
      @douglasclerk2764 4 หลายเดือนก่อน +1

      Excellent point.

    • @danielstan2301
      @danielstan2301 4 หลายเดือนก่อน

      No, the worst fallacy is that they assume a smart machine will create competition for itself, or something smarter which might replace/destroy its creator. That's not how life works.
      I also love how they assume that an intelligent machine will just want to improve itself, instead of writing poetry or creating stupid videos on various platforms, like these other smart beings already do instead of using this internet platform to improve themselves.

    • @Hellcat-to3yh
      @Hellcat-to3yh 4 หลายเดือนก่อน

      @@danielstan2301 That's not how life works? Humans are actively destroying their creator right now: Earth. We evolved from single-cell organisms over hundreds of millions of years.

    • @41-Haiku
      @41-Haiku 4 หลายเดือนก่อน +2

      ​@@danielstan2301 They don't assume that. The instrumental convergence thesis was hypothesized and taken to be likely, since it was very intuitive. Then it was mathematically proven that "Optimal Policies Tend to Seek Power." Then we observed tendencies relevant to power-seeking in current systems, including strategic deception and self-preservation.
      If you spend some time looking through what we now know about AI Risk and honestly assessing the scientific validity of the claims being made, there is a strong chance you will become worried (as most experts are) about AI potentially ending the world during your lifetime.

  • @quixotiq
    @quixotiq 4 หลายเดือนก่อน +3

    great stuff yet again, Sabine! Love your work

  • @haraldlonn898
    @haraldlonn898 4 หลายเดือนก่อน +112

    Use memory foam soles, and you will remember why you went to the kitchen.

    • @naromsky
      @naromsky 4 หลายเดือนก่อน +2

      Subtle.

    • @christopherellis2663
      @christopherellis2663 4 หลายเดือนก่อน

      😂

    • @willyburger
      @willyburger 4 หลายเดือนก่อน +3

      My wheelchair cushion is memory foam. My butt never forgets.

    • @alexdavis1541
      @alexdavis1541 4 หลายเดือนก่อน +2

      My mattress is memory foam, but I still wake up wondering where the hell I've been for the last eight hours

    • @aaronjennings8385
      @aaronjennings8385 4 หลายเดือนก่อน

      I like it

  • @danielduarte5073
    @danielduarte5073 3 หลายเดือนก่อน

    Good report. Great topic.

  • @militzer
    @militzer 4 หลายเดือนก่อน +6

    About the energy problem, I've said this on your solar-panels-in-space video:
    Ditch the whole "energy beam from space" part and put the supercomputers up there, then just transmit the processed data back.
    We could offload most supercomputing energy use from Earth to space, reduce land grid usage, and have scalably "infinite" energy for the space grid.

    • @Hollowed2wiz
      @Hollowed2wiz 4 หลายเดือนก่อน

      But how do you cool down the supercomputers in space?
      Your idea cannot work without an efficient way to dissipate the heat the computers produce.

    • @militzer
      @militzer 4 หลายเดือนก่อน

      ​@@Hollowed2wiz Well, first you place the computer in the shadow of the solar array - of course you don't want the sun heating it.
      Then use radiators, like on the ISS.
      The ISS can handle 70 kW, if Wikipedia is up to date.
      It would need a lot more than that, but the solar array should be hundreds of meters in two directions, so the radiators would scale with it.

    • @militzer
      @militzer 4 หลายเดือนก่อน

      I looked at the video again; it says today we use 100 MW for AI. So if the scaling were perfect, we would need about 38x38 = ~1,430 times the ISS's 70 kW capacity, i.e. the ISS radiator dimensions scaled by about 38 in each direction.
      Of course the radiators can go "down" (away from the sun) if there's not enough space to grow laterally.
      To produce 100 MW we would need around 300x300 m of solar panels.
      The numbers are on the same order of magnitude.
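
      A quick check of those figures, assuming the solar constant in Earth orbit (~1361 W/m^2) and ~30% cell efficiency (a plausible but assumed number for space-grade cells):

        # Panel area needed for 100 MW in orbit, under the assumptions above.
        target_w = 100e6              # 100 MW for AI, per the video
        flux = 1361.0                 # W/m^2 above the atmosphere
        efficiency = 0.30             # assumed cell efficiency
        area = target_w / (flux * efficiency)
        print(f"{area:,.0f} m^2 = about {area ** 0.5:.0f} m per side")
        # ~245,000 m^2, roughly 495 m x 495 m: the same order of magnitude
        # as the 300 m x 300 m guess, but about 2.7x the area.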

    • @militzer
      @militzer 4 หลายเดือนก่อน

      I don't know how effective the heat transport from the supercomputers to the radiators would be, though, but I imagine it's doable.

  • @stephens1393
    @stephens1393 2 หลายเดือนก่อน +4

    I think Sabine is underestimating what humans will do to streamline the progress. We will constantly be working on more efficient ways to power the training and better ways to refine and interpret the data used for training. It's not even clear that human-created data is the right thing for training; AI smarter than us will create/discover better data than we have.
    It may not happen like Aschenbrenner predicts, but ChatGPT is already hugely transformative in how computer work is done. This is only going to expand into other areas.

    • @Pedroramossss
      @Pedroramossss 2 หลายเดือนก่อน

      GPT is transformative? The same GPT who can't count the number of R's in strawberry?

    • @stephens1393
      @stephens1393 หลายเดือนก่อน +1

      @@Pedroramossss There are certain things it is good at, and certain things that it's not. Ironically, that kind of trick is the same kind of trick that people fall for before they're familiar with teasers like that, so I don't give it much weight. There's pretty much no denying the impact it has made in the past year or so. I know zero people who I can ask an arbitrary question about anything and get a somewhat informative answer, or at least a starting point to find the full answer. LLMs are _really_ good at that kind of thing. You still have to be aware of the possibility of hallucinations, but still, amazingly useful.

  • @tobiaskpunkt3595
    @tobiaskpunkt3595 4 หลายเดือนก่อน +11

    Regarding failed predictions, you should also acknowledge that in AI there were many predictions that were fulfilled years earlier than expected.

    • @johndow1645
      @johndow1645 4 หลายเดือนก่อน +3

      Also, many breakthrough technologies (planes, controlled fission) surprised experts who were on record saying that those breakthroughs were "decades away".

    • @tabletalk33
      @tabletalk33 4 หลายเดือนก่อน +3

      Examples?

    • @drachefly
      @drachefly 4 หลายเดือนก่อน +1

      @@tabletalk33 MATH benchmark, for one. It was made to be unreasonably difficult so that it would be able to track AI's progress over a long period of time. Latest AIs get over 90% on it after just a few years.

    • @Jo_Wick
      @Jo_Wick 3 หลายเดือนก่อน

      ​@@dracheflyGreat example. Here's another:
      "I hope none of you gentlemen is so foolish as to think that aeroplanes will be usefully employed for reconnaissance from the air. There is only one way for a commander to get information by reconnaissance, and that is by the use of cavalry."
      General Sir Douglas Haig, British Army
      Sometimes people are resistant to change, but change comes whether we want it or not.

    • @sarcasticnews1195
      @sarcasticnews1195 3 หลายเดือนก่อน

      "FLYING MACHINES WHICH DO NOT FLY" New York Times, December 8, 1903. (The Wright brothers flew literally nine days later.) "It might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanics in from one million to ten million years-provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in organic materials. No doubt the problem has attractions for those it interests, but to the ordinary man it would seem as if effort might be employed more profitably."
      This prediction was especially retarded considering that balloon flight had already existed since the 1700s, and engines since the 1800s. It doesn't take a mad genius to put those two concepts together.

  • @DaveDevourerOfPineapple
    @DaveDevourerOfPineapple 4 หลายเดือนก่อน

    So much sense being spoken in this video. A welcome voice always.

  • @dangerdom904
    @dangerdom904 4 หลายเดือนก่อน +30

    We're running out of text data, not data. The amount of information in the world is essentially endless.

    • @2ndfloorsongs
      @2ndfloorsongs 4 หลายเดือนก่อน +5

      Not sure about "endless", but I'd be willing to bet on "lots more".

    • @smellthel
      @smellthel 4 หลายเดือนก่อน

      There's always synthetic data. Also, GPT-4o gained a lot more understanding of the world because it was trained on different types of data.

    • @outhoused
      @outhoused 4 หลายเดือนก่อน

      Yeah, but I guess there's much to be learned by associating different texts and reading between the lines. Maybe one paragraph in some text document really complements another that's seemingly unrelated, etc.

    • @marwin4348
      @marwin4348 4 หลายเดือนก่อน +3

      @@2ndfloorsongs There is an effectively infinite amount of Data in the Universe

    • @DingDingPanic
      @DingDingPanic 4 หลายเดือนก่อน +3

      It needs to be high-quality data, and there is a severe lack of that…

  • @Bystander333
    @Bystander333 4 หลายเดือนก่อน +8

    Nice catch, Sabine! My reaction was pretty much the same after you explained "early twenties, brief gig at a company with Oxford in the name, moved to SF" - I'm guessing some parental support. Basically, that left me super sceptical.

  • @richardlbowles
    @richardlbowles 4 หลายเดือนก่อน +9

    Artificial Intelligence might be right around the corner, but Natural Stupidity is here with us right now.

    • @tabletalk33
      @tabletalk33 4 หลายเดือนก่อน

      Humans make poor, inconsistent decisions and are easily swayed.

  • @syncrossus
    @syncrossus 3 หลายเดือนก่อน

    This is the right take, in my opinion. From the napkin math I did, I think those energy requirements are a bit pessimistic, but we're already out of data to train language models on. GPT-4 was trained on the Colossal Clean Crawled Corpus (C4 for short), which is basically all the clean text on the internet. Where do we go from there? Digitize books? OCR is good, but last I checked it could barely deal with non-ASCII characters and still makes many mistakes. Do we buy access to all scientific articles? That would be very costly, would not increase the data *that* much, and most articles are only available as PDFs, not raw text. Have you ever tried copy/pasting from a PDF? Half the time it jumbles the entire thing. PDFs need to look good and be human-readable, not machine-readable. Perhaps some of the text extraction can be automated, but I'm not sure there's a reliable way to do so.
    EDIT: I hadn't gotten to the part about robots when I wrote this. Really? Robots? Boston Dynamics makes really impressive machines that are good at navigating their environment, but we're nowhere near the level of fine motor skills, generality and flexibility needed for robots to collect resources, build factories and run them autonomously. I'm also baffled that he doesn't talk about the alignment problem. In short, the only guarantee we have with AI is that it performs well in training according to the criteria we've specified. We can't specify everything we care about (how do you optimize for "don't hurt humans"?), and we can't guarantee that the AI model actually cares about the same things we care about. It could (and will almost always) fail to generalize exactly to what we want. The smarter the AI, the more it will exploit the gap between what we say we want and what we actually want, to game its rewards. You see this in ChatGPT's "hallucinations": if you reward ChatGPT for saying "I don't know" in training, it will just say "I don't know" all the time, so you have to penalize it; but if it's going to be penalized anyway, it might as well try to bullshit its way into you believing it's giving you useful information. Effectively, AGI is by default a monkey's paw at best. There are also convergent instrumental goals (things that are useful to pursue for a wide variety of goals) that are highly dangerous, such as self-preservation, goal preservation, and resource acquisition (you're more likely to achieve your current goals if you're not dead, if nobody changes your goals, and if you have more stuff).

  • @vvm_signed
    @vvm_signed 4 หลายเดือนก่อน +18

    Sometimes I wonder what would happen if we invested a fraction of this money into human intelligence.

    • @generichuman_
      @generichuman_ 4 หลายเดือนก่อน +2

      ugh... so edgy...

    • @notaras1985
      @notaras1985 4 หลายเดือนก่อน +3

      @@generichuman_ wrong. What he suggested is extremely efficient

    • @elizabethporco8263
      @elizabethporco8263 4 หลายเดือนก่อน

      D

    • @rutvikrana512
      @rutvikrana512 4 หลายเดือนก่อน +1

      Nah, we've had that time and money for hundreds of years; nothing compares to the AI advancement we are achieving today. It will take time, but I am pretty sure AI is not a bubble like other hyped industries. I mean, even developers don't fully know how AI works, and AI doesn't stop learning. We can't predict it - AGI might come earlier than we imagine.

    • @drakey6617
      @drakey6617 4 หลายเดือนก่อน

      @@rutvikrana512 What do you mean, developers don't know how AI works? They certainly do. Everyone is just surprised that these simple ideas work so well.

  • @Velereonics
    @Velereonics 4 หลายเดือนก่อน +44

    It's like the antimatter-747 guy or the hyperloop bros, who probably knew even at the conception of their ideas that they could not possibly succeed. But when a journalist asks how close we are, they say "may as well be tomorrow", because then they get money from idiots who think, you know, it's a long shot, but maybe...

    • @libertyafterdark6439
      @libertyafterdark6439 4 หลายเดือนก่อน +10

      This completely overlooks the fact that products do exist, and gains ARE being made.
      You can think it's too slow, or that there's, say, an issue with current architectures, but there's a big difference between "not there yet" and "smoke and mirrors".

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 หลายเดือนก่อน +1

      If you believe what you said relates to AI, you are firmly in the "I have no idea what is going on" category.

    • @Velereonics
      @Velereonics 4 หลายเดือนก่อน +5

      @@hardboiledaleks9012 You dont know what part of the video I am referring to I guess, and that is not my problem.

    • @TheManinBlack9054
      @TheManinBlack9054 4 months ago +3

      @@todd5857 Do you really think that AI researchers say all this for grants and money? Maybe they actually do believe what they say and aren't being greedy or manipulative.

    • @Vastin
      @Vastin 4 months ago +2

      @@libertyafterdark6439 I'm of the opinion that these researchers are seriously overestimating their likely future progress AND I think it's moving too fast regardless. I don't really see any way that AI development does anything but further concentrate vast amounts of wealth and power into a very small class while disenfranchising the rest of humanity.
      After all, if you have a smart robot workforce, what are people actually *good for*?

  • @fgadenz
    @fgadenz 4 months ago +61

    8:17 by 2020 or 2040?

    • @Phosdoq
      @Phosdoq 4 months ago +5

      she just proved that she is human :D

    • @adashofbitter
      @adashofbitter 4 months ago +13

      Also mistook “2029” for “by 2020”… so at least 2 of the predictions aren’t that crazy with our current progress

    • @flain283
      @flain283 4 months ago

      @@Phosdoq or did she just fool you into thinking that?

    • @pwlott
      @pwlott 4 months ago +2

      @@adashofbitter They are in fact shockingly prescient given current trends. Kurzweil was very smart to focus on raw computation.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago +2

      @@adashofbitter The narrative for Sabine was "all the predictions were wrong".
      This is why she made the mistake. There is a bias in her reporting of the topic.

  • @dasanjos
    @dasanjos 4 months ago +1

    Great video - especially with the outtakes at the end

  • @PeterPan-ev7dr
    @PeterPan-ev7dr 4 months ago +43

    Artificial Stupidity is growing faster than Artificial Intelligence.

    • @gibbogle
      @gibbogle 4 months ago

      Natural stupidity.

    • @williamkinkade2538
      @williamkinkade2538 4 months ago

      Only for Humans!

    • @PeterPan-ev7dr
      @PeterPan-ev7dr 4 months ago

      @@williamkinkade2538 Humans infected the AI with their senseless and stupid data.

    • @Bobbel888
      @Bobbel888 4 months ago +1

      ~ the idea of nasty children bears fruit, the brighter they are

    • @markthebldr6834
      @markthebldr6834 4 months ago +2

      No, it's authentic stupidity.

  • @jeffgriffith9692
    @jeffgriffith9692 4 months ago +27

    You made 2 errors in the prediction quotes - both would put it at about now; Ray's prediction of 2029 in particular sounds spot on...

    • @123100ozzy
      @123100ozzy 4 months ago +1

      it does not. I cant overstate how far we are from actual itelligence.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago +6

      @@123100ozzy You can't spell "can't" or "intelligence" properly so why should we listen to you about anything intelligence related?

    • @scotte4765
      @scotte4765 4 months ago +3

      @@hardboiledaleks9012 You have to admit that the spelling does support the point.

    • @MrAlanCristhian
      @MrAlanCristhian 4 months ago +3

      Every engineer on the planet knows that technology doesn't improve infinitely. Eventually it just stops. And that will happen with AI. Also, AI improvement is already stalling.

    • @ruzinus_
      @ruzinus_ 4 months ago

      @@123100ozzy Don't confuse intelligence with sapience.

  • @FarFromZero
    @FarFromZero 4 months ago +41

    We had the same ideas regarding the internet: "knowledge explosion". Let's hope the intelligence explosion doesn't turn out to be a "nonsense explosion" ;)

    • @dtibor5903
      @dtibor5903 4 months ago

      Hopefully it will be intelligent and not be like humans.

    • @mrbrown6421
      @mrbrown6421 4 months ago

      There was a "Knowledge Explosion"... IN REVERSE!

    • @idontwanna1234
      @idontwanna1234 4 months ago +15

      The Internet WAS a knowledge explosion! Someone who is actually interested in learning can gain knowledge much more easily now than before the Internet. Unfortunately, the corporatization of the Internet has also created huge incentives to fill the network with garbage for profit, and that's how we ended up with TikTok challenges. :/

    • @adashofbitter
      @adashofbitter 4 months ago +7

      The internet DID lead to a knowledge explosion. Yes, it’s easy to look at how much is wasted on the internet and say “ha! This knowledge explosion is nothing more than cat memes and people bickering on facebook”, but the internet has incalculably led to an explosion of widespread access to knowledge. And the AI boom is just the latest extension of the knowledge explosion begun by the internet.

    • @PeterNduati-f1q
      @PeterNduati-f1q 4 months ago +4

      which actually happened

  • @ChristianIce
    @ChristianIce 4 months ago

    Turbo super AGSI NFT Bigdata is just around the corner, and it's still machine learning and text prediction, with zero actual understanding.

  • @irvingthemagnificent
    @irvingthemagnificent 4 months ago +34

    Scaling up current models will not get you to AGI. It just gets you more expensive models.

    • @jonatand2045
      @jonatand2045 4 months ago

      But almost no one is scaling brain-like AI.

    • @SnoodDood
      @SnoodDood 4 months ago

      Seems like the progress is starting to pivot toward getting different types of AI systems to interact and work in tandem.

    • @facts9144
      @facts9144 4 months ago +5

      Actually the opposite. Don’t talk about something you don’t know anything about.

    • @mkhaytman
      @mkhaytman 4 months ago +2

      Must be nice to know better than Google and Microsoft! You should call them and tell them they are wasting billions of dollars building these new clusters.

    • @davidfl4
      @davidfl4 4 months ago +1

      To play devil's advocate (even though I agree with you): some sort of alien, unexpected intelligence could emerge from sorting through all that chaos. If a bunch of AI models gather data from different sources (including real-world ones) and make judgements and predictions based on all these different algorithms, then perhaps something like intelligence could emerge?
      For instance, you could have an LLM and another bot tracking facial expressions and emotions; coupled together, could they not learn which words induce which emotions and learn to manipulate people?
      I'm not a scientist, just thought I'd postulate 😂

  • @SomeMorganSomewhere
      @SomeMorganSomewhere 4 months ago +5

    "It's robots all the way down" *rolleyes*

  • @stefano94103
    @stefano94103 4 months ago +6

    I live in San Francisco and work in the AI industry. I also read his entire paper line by line. While I agree that the AI industry, especially in SF, is in a bubble and the challenges you presented are real, they are not unsolvable. Neil deGrasse Tyson said, "It's a dangerous place where you know enough to think that you're right but not enough to know that you're wrong", and as much as I love your content, this is where I feel you are on this topic.

    • @redchili385
      @redchili385 4 months ago +7

      She didn't address a single argument correctly. This makes most of the people who watch this video dismiss the essay completely, even though it has a lot of concrete points and references. She basically reduced the essay to one section about energy, where he made the claim that 100GW is possible with wells, but she only dismissed it as a joke and deflected with nuclear fusion, which was never mentioned in the entire essay. I hoped to see one good argument in this video, but I saw none.

    • @michaelnurse9089
      @michaelnurse9089 4 months ago +2

      Yup, that is the thing. Nobody, not Sabine or the funny looking guy, or anyone else knows for sure where this S-curve ends. A little humility all around would go a long way.

    • @workdevice7808
      @workdevice7808 4 months ago

      @@michaelnurse9089 So far, the only difference between ANI, AGI and ASI seems to be scaling, plus a belief that if the datasets are big enough the machines will inevitably progress from N to G to S. The likelihood, surely, is that we will just get a very big ANI machine. Intelligence doesn't come from just having a lot of data.

    • @redchili385
      @redchili385 4 months ago

      ​@@workdevice7808 It doesn't come from just having a lot of data, but having a lot of data certainly helps the training process. Other factors, such as architecture, compute power, and how the data is processed and selected, can also be improved to develop smarter and broader AI. The best example is the development of current LLM models like GPT-4.

    • @workdevice7808
      @workdevice7808 4 months ago

      @@redchili385 Thanks for your reply. Another question: if we're on the right path towards AGI, wouldn't we start to see sparks of AGI intelligence already, intelligence for which what we've built is already big enough? Or are we expecting there to be this moment, this spark, where AGI suddenly talks back to us?

  • @Thebentist
    @Thebentist 4 months ago

    To be fair, I think we're also forgetting about the unlocks from AGI and organoid computers drastically reducing compute needs. Remember, our brain runs its neural net at waaaaayyy lower energy consumption, so there's a chance we can figure out how to do it with a lot less, and we may already have all the power needed for this ASI super cluster.

    • @fandomguy8025
      @fandomguy8025 4 months ago

      I agree; in fact, it's already been done with honeybees!
      By studying their brains, researchers reverse-engineered them into an algorithm that allowed a drone to avoid obstacles using 1% of the computing power of deep learning while running 100 times faster!

    • @davidireland1766
      @davidireland1766 2 months ago

      A very, very different type of neural network

  • @frgv4060
    @frgv4060 4 months ago +10

    Sounds like autonomous driving yet again, only orders of magnitude escalated. The "if you still can't solve the little problem, just look for a bigger problem" approach, hehe.

    • @taragnor
      @taragnor 4 months ago +3

      Yeah, lol. How about this guy worries about figuring out how to get an AI to drive a car before he gets into his dream of massive robot swarms that can run an integrated autonomous mining/manufacturing/construction operation.

    • @CaridorcTergilti
      @CaridorcTergilti 4 months ago +1

      @@taragnor Autonomous driving is solved; it is not used because of politics

    • @frgv4060
      @frgv4060 4 months ago +2

      @@CaridorcTergilti Nope. Autonomous driving is solved only as long as everything stays "normal" on a route; real, fully autonomous driving is not. So you can say it is a political reason, in the sense that many restrictions are political, like guardrails on stairs and bridges: norms and restrictions that aren't technically necessary, unless you want to keep alive that clumsy minority that has the audacity to get bumped or to slip while on that bridge.
      Edit:
      Imagine that swarm of robots, with the current driving capability of an AI (which is how they could realistically be trained), mining in a natural environment. I can imagine it, and it is funny.

    • @CaridorcTergilti
      @CaridorcTergilti 4 months ago

      @@frgv4060 Imagine a truck that drives 16 hours a day because the driver can sleep during the highway stretches and only drives the difficult parts. For normal cars, the car can just stop and be teleoperated in case of problems. "If there's a will, there's a way."

    • @aaronperrin6108
      @aaronperrin6108 4 months ago

      "Waymo's driverless cars were 6.7 times less likely than human drivers to be involved in a crash resulting in an injury, or an 85 percent reduction over the human benchmark, and 2.3 times less likely to be in a police-reported crash, or a 57 percent reduction."

  • @rogerwood2864
    @rogerwood2864 4 months ago +14

    Sabine, for someone so smart I'm surprised you didn't see the giant pink elephant smearing poop on the walls. It doesn't matter if it uses 10,000GW to achieve a superintelligent model. The country that achieves it wins all the marbles; which means that behind the scenes this is a Manhattan Project-level event. Even if they have to have brownouts in Las Vegas, they will reach their goal.

    • @sisko89
      @sisko89 4 months ago +2

      I was thinking the same, along with several other fallacies... I can't believe that someone with such a superficial grasp of the subject made a 10-minute video trying to disprove an essay by someone who actually worked on AI.

    • @jdilksjr
      @jdilksjr 4 months ago

      @@sisko89 And I can't believe how many people have been conned over the years by sales pitches in technology. AI is still a sales pitch. It is nothing but a program that processes data without really understanding it. If you feed it garbage data, it won't know that it is garbage.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago

      She's talking about power issues as if billionaires didn't have enough capital and motivation to build entire fking nuclear plants dedicated to running their supercomputers.

  • @siceastwood2714
    @siceastwood2714 4 months ago +5

    07:58 I don't think the failed predictions of the past are applicable to this kind of AI. The main point of the transformer architecture is that it is fundamentally different from every kind of computing and AI prior to it and actually works like human intelligence does. All the past predictions were based on something that sounds similar but is not really comparable.

    • @nycbearff
      @nycbearff 4 months ago

      No one knows yet how human intelligence works. Ask any neuroscientist. There are hundreds of different hypotheses about intelligence, but we're a long way from understanding the brain. We see the results, but don't know how the brain does it. The fact that you seem to think AI developers do understand human intelligence is an example of how these predictions go wrong - if your basic assumptions are so wrong, your predictions can only be very wrong too.

    • @deitachan7878
      @deitachan7878 4 months ago +1

      We do not understand enough about how human intelligence works to say this new kind of AI works the same way we do. At best, it seems the AI very crudely approximates something like a neuron, but a lot seems to be missing. I think the biggest advancements in AI would come from studying the inner workings of human neurons. They are extremely energy-efficient by comparison and do not require much training data to learn things. You can see something once and be able to identify it from different angles, lighting, etc.
      Give a million pictures to an AI, then feed it a color-inverted image of one of them, and it has no clue.

    • @siceastwood2714
      @siceastwood2714 4 months ago

      @@deitachan7878 Of course there are probably a lot of differences between AI and straight-up human intelligence. "But the main difference between previous AI models and the Transformer architecture is that Transformer models can use attention to process information globally, while previous models work sequentially or locally." - ChatGPT
      Traditional AI works sequentially and is just text completion, predicting word after word. It is basically pure logic, saying something only because something else came before it. Humans do not think in pure logic; that wouldn't make any sense.
      Instead, we do something because of something AND in order for something to happen. We relate actions based on the past and the possible future while reflecting on experience. The transformer architecture now does something similar, modelling answers based on what comes before a certain word and setting it in relation to the words that follow, while weighing these relations based on training data. It is now able to say something because of something and in order for something to happen, based on the data acquired.
      Tbh I have no clue about the technicalities, but I believe this is by far the biggest difference between logical computing and human intelligence. Everything else is more like technical details, resource and power constraints.
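
      For the curious, a minimal numpy sketch of the attention step the quoted line refers to (illustrative only; real transformers add learned query/key/value projections, multiple heads, and masking). The point is that every output position mixes information from all positions in one step, instead of state being passed along sequentially.

      import numpy as np

      def self_attention(X):
          # X: (seq_len, d) array, one embedding vector per token.
          d = X.shape[-1]
          scores = X @ X.T / np.sqrt(d)       # relevance of every token to every other token
          w = np.exp(scores - scores.max(axis=-1, keepdims=True))
          w /= w.sum(axis=-1, keepdims=True)  # softmax over each row
          return w @ X                        # each output row mixes all positions at once

      x = np.random.default_rng(0).normal(size=(5, 8))  # 5 tokens, 8-dim embeddings
      print(self_attention(x).shape)                    # (5, 8)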

  • @emilianohermosilla3996
    @emilianohermosilla3996 3 months ago

    Great video, Sabine!

  • @prettyfast-original
    @prettyfast-original 4 months ago +13

    I think Sabine underestimates the energy+data relationship for A.I. Just 2 years ago, Chat-GPT3 was the only game in town. Now, I can run Llama3 and various derivative models that are equivalent to Chat-GPT4 on my home laptop. Llama4 is expected soon enough. In 2 more years, you might not have Chat-GPT5000 super-intelligent A.G.I., but you will likely have the equivalent of Chat-GPT4 running on a toaster, which may be a better path to achieving A.G.I. than the one central, massive, energy-hungry cluster running one massive model envisioned in this video.
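
    For what it's worth, a sketch of what "running it on a laptop" looks like in practice, assuming the llama-cpp-python package is installed and a quantized GGUF weights file has already been downloaded (the file path below is hypothetical):

    from llama_cpp import Llama

    # Load quantized Llama weights from a local file; nothing runs in the cloud.
    llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

    # One completion, computed entirely on the local machine.
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])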

    • @turkeytrac1
      @turkeytrac1 4 months ago

      You're literally running a very small portion of the programs mentioned; thus you can run them on your desktop. Sorry, but you're literally running the idiot version. I am overgeneralizing here, but an AGI that can do the mental gymnastics of the human mind will require lots of energy to get there, and if it grows as much as they think, it will require exponentially more energy.

    • @prettyfast-original
      @prettyfast-original 4 months ago +5

      @@turkeytrac1 I know I am running the idiot version, but if I can run 1000 idiot models for very little energy, maybe that is more useful and efficient for achieving real work. Your brain is not a monolith; it's made of smaller, less-complex ("idiot") components that work together (cortex, neurons, organelles, and so on).

    • @MarkHentges
      @MarkHentges 4 months ago

      Are you special needs? Running the model is not the problem; the problem is training the model. Also, your computer isn't even doing all the computations that result in the output. Your GPT queries are calculated in the cloud.

  • @josdejongnl
    @josdejongnl 4 months ago +3

    I find these arguments a bit short-sighted. Predicting a future lack of data and energy, I think, assumes that there will not be any major innovation on these two fronts.
    I can imagine a future AI that will not need as much data to train on as current AI and that learns more the way a human toddler does. And AI advocates are hopeful that the industry will be able to develop much more efficient hardware, potentially solving the energy problem. Of course we don't know whether these innovations will actually happen, but it's good to keep them in mind.

  • @ferdinandbardamou5508
    @ferdinandbardamou5508 4 months ago +115

    "AI is the greatest scam ever pulled by the linear algebra industrial complex."
    edit: quote by Fireship.

    • @edmunns8825
      @edmunns8825 4 months ago

      Yeah, fuck the LAIC!

    • @EaglePicking
      @EaglePicking 4 months ago +6

      Bitcoin?

    • @playingmusiconmars
      @playingmusiconmars 4 months ago +1

      Lol - I'd say it's the notion that classical Hilbert-space quantum mechanics is more than linear algebra in disguise

    • @rafazafar82
      @rafazafar82 4 months ago +8

      These kinds of quotes haven't aged well.

    • @hardboiledaleks9012
      @hardboiledaleks9012 4 months ago +6

      @@rafazafar82 The idea that putting words between quotations makes them true or noteworthy is actually peak human imbecility.

  • @jean-francoiskener6036
    @jean-francoiskener6036 3 months ago

    I love this woman; she's so informative while also being fun