Is the Intelligence-Explosion Near? A Reality Check.

  • Published Jun 12, 2024
  • Learn more about neural networks and large language models on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.
    I had a look at Leopold Aschenbrenner's recent (very long) essay about the supposedly near "intelligence explosion" in artificial intelligence development. I am not particularly convinced by his argument. You can read his essay here: situational-awareness.ai/
    🤓 Check out my new quiz app ➜ quizwithit.com/
    💌 Support me on Donorbox ➜ donorbox.org/swtg
    📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
    👉 Transcript with links to references on Patreon ➜ / sabine
    📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
    👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
    🔗 Join this channel to get access to perks ➜
    / @sabinehossenfelder
    🖼️ On instagram ➜ / sciencewtg
    #science #sciencenews #tech #technews #ai
  • Science & Technology

Comments • 5K

  • @lokop-bq3ov
    @lokop-bq3ov 1 month ago +2821

    Artificial Intelligence is nothing compared to Natural Stupidity

    • @GnosticAtheist
      @GnosticAtheist 1 month ago +42

      lol - true that. While I am certain we will get there, I hope we can avoid creating AGI that has our natural capabilities to be stupid.

    • @Ann-op5kj
      @Ann-op5kj 1 month ago +29

      It's the same thing. Where is AI generated from?

    • @generichuman_
      @generichuman_ 1 month ago +16

      so edgy...

    • @acidjumps
      @acidjumps 1 month ago +22

      I use both about equally at work.

    • @turkeytrac1
      @turkeytrac1 1 month ago +18

      That's tshirt worthy

  • @framwork1
    @framwork1 29 days ago +838

    Do you all remember before the internet, that people thought the cause of stupidity was lack of access to information? Yeah. It wasn't that.

    • @TshegoEagles
      @TshegoEagles 28 days ago +18

      Knowledge is power!!😂😂😂

    • @SRMoore1178
      @SRMoore1178 28 days ago +69

      "Think about how dumb the average person is and then realize that half of them are dumber than that." George Carlin
      AI will have no problem outsmarting the average person.

    • @deep.space.12
      @deep.space.12 28 days ago +15

      @@SRMoore1178 more like median but sounds about right

    • @389293912
      @389293912 28 days ago +2

      LOL!!! Great observation.

    • @spacecowboy511
      @spacecowboy511 28 days ago +6

      Ya, but the internet is an excellent way to shepherd the sheep.

  • @calmhorizons
    @calmhorizons 24 days ago +37

    Selling shovels has always been the best way to make money in a goldrush.

    • @ishaanrawat9846
      @ishaanrawat9846 13 days ago +3

      That's what Nvidia has done

  • @RigelOrionBeta
    @RigelOrionBeta 28 days ago +87

    In this post-truth era, what people are searching for isn't truth, but rather comfort. They want someone to tell them what the answer is, regardless of the truth of the answer.
    There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier just to point at an algorithm and listen to it. That way, no one is responsible when it's wrong; it's the algorithm's fault.
    AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom isn't true anymore, because these language models are confident in their answers. Confident does not mean correct.

    • @modelenginerding6996
      @modelenginerding6996 26 days ago +5

      A major accuracy problem with AI is that not only does it train on information from the internet, it is also training on itself, creating a vicious feedback loop. I had a location glitch in an area with poor cell reception saying I had visited a vape shop. I got no-smoking ads from my state for two years! My social credit score has been marred 😂.

    • @thumpthumper9856
      @thumpthumper9856 26 days ago +6

      With the advancements in digital twins and replicators, nuanced synthetic data is becoming better and better. The garbage in garbage out narrative becomes less and less salient. Why worry about finding new data when fake data is just as good? At least for tasks involving computer vision and movement, to be fair.

    • @danlightened
      @danlightened 23 days ago +3

      We're in the post-truth era? 🤔😕

    • @Lydynthmn
      @Lydynthmn 23 days ago +2

      I get frustrated with ChatGPT because it doesn't respond like a real person would. My experiences anyway. It will never have sentience or consciousness so it can never really understand how to respond like a person. It always feels robotic to me. Of course that could just be because I know it's artificial.

    • @Beremor
      @Beremor 22 days ago

      @@Lydynthmn I've had the same experience. Once I asked some questions that require some interpretation or an understanding of the subject matter beyond the wording of the question, it completely breaks down and gives milquetoast, superficial and half-baked answers.
      Large language models are incapable of expressing the limits of their capabilities. They're unable to adequately express how confident they are in the statements they're making. Ultimately, their answers are about as useful as page one of a well-worded google search, and unfortunately I already know how to word google searches well. ChatGPT has been an utter waste of my time and so has every tutorial about how to "properly word prompts."

  • @pirobot668beta
    @pirobot668beta 1 month ago +1007

    In 1997, I was working at a university.
    A Faculty member gave me an assignment: write a program that can negotiate as well as a human.
    "The test subjects shouldn't be able to tell if it's a machine or a human."
    Apparently, she had never heard of the Turing Test.
    When we told her of the difficulty of the task, she confidently told us "I'll give you two more weeks."
    The point?
    There are far too many people with advanced degrees but no common sense making predictions about something never seen before.

    • @mikemondano3624
      @mikemondano3624 1 month ago +14

      One bad grade shouldn't breed lasting resentment.

    • @darelvanderhoof6176
      @darelvanderhoof6176 1 month ago +113

      We call them "PhD Stupid". It afflicts about half of them. Seriously.

    • @2ndfloorsongs
      @2ndfloorsongs 1 month ago +44

      @@darelvanderhoof6176 and the other half humorously.

    • @jaredf6205
      @jaredf6205 1 month ago +6

      It's just that I can't imagine why it wouldn't happen. There's just no way to get people to stop developing this technology. Even if you did, governments would still work on it, and people in their basements would still work on it.

    • @ogungou9
      @ogungou9 1 month ago +12

      @pirobot668beta: There is no such thing as common sense. She didn't lack common sense, that was just stupidity. She was an idiot savant ... I don't know ...

  • @hmmmblyat6178
    @hmmmblyat6178 1 month ago +819

    All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while humans only need a cheese sandwich, I believe we win this round.

    • @b0nes95
      @b0nes95 1 month ago +83

      I'm always amazed by our energy efficiency as well

    • @nickv8334
      @nickv8334 1 month ago +51

      well, agriculture and food production/disposal is responsible for about 18% of the world's greenhouse gas emissions (excluding transport), so I think the jury is still out on who wins this round though........

    • @TheManinBlack9054
      @TheManinBlack9054 1 month ago +34

      Technology improves; just think of how big and inefficient computers used to be and how small and efficient they are now

    • @jozefwoo8079
      @jozefwoo8079 1 month ago +57

      That's only to train the model. Afterwards it becomes cheaper than humans.

    • @draftymamchak
      @draftymamchak 1 month ago +4

      Our efficiency doesn't matter; the creator is superior to the creation, thus no matter what AI does, it'll be because we created it. Sure, it'll also be responsible for what it does, but for now I'm worried about generative AI being too good and being used to fake evidence etc.

  • @k.vn.k
    @k.vn.k 28 days ago +255

    “I can’t see no end!”
    Said the man who earned money from seeing no end.
    😅😅😅 That’s gold, Sabine!

    • @wellesmorgado4797
      @wellesmorgado4797 24 days ago +3

      As someone already said: Follow the money!

    • @Tom_Quixote
      @Tom_Quixote 24 days ago +2

      If he makes money from seeing no end, why can't he see no end?

    • @k.vn.k
      @k.vn.k 24 days ago +1

      @@Tom_Quixote so that he keeps making money 😂

    • @shenshaw5345
      @shenshaw5345 24 days ago

      That doesn’t mean he’s wrong though

    • @AndiEliot
      @AndiEliot 23 days ago +3

      @@shenshaw5345 It doesn't mean he's wrong, I totally agree with that, but what Sabine is doing is super important; when judging someone's strong opinion or thesis, always see FIRST what that person's agenda is and what game they have skin in. This is proper due diligence.

  • @zigcorvetti
    @zigcorvetti 28 days ago +123

    Never underestimate the capability and resourcefulness of corporate greed, especially when it's a collective effort.

    • @ericrawson2909
      @ericrawson2909 26 days ago

      Exactly what I was thinking. And not just corporations. Politicians, and in fact most people. They have shown that they will deny truth when it is pointed out to them by a well qualified person, if it conflicts with their own interests. That could be profit, power, or simply virtue signalling to fit in with the majority. If they ignore, cancel and smear well respected experts in a field, why would they act on the advice of an AI, even if it was supremely intelligent and God like in its desire to help humanity? AI will not save the world. Like all other technology it can be used for good or evil purposes. Probably the latter more often than not.

    • @domenicorutigliano9717
      @domenicorutigliano9717 26 days ago +2

      Everyone is underestimating it

    • @ericrawson2909
      @ericrawson2909 25 days ago +1

      I am getting sick and tired of my comments getting deleted. I did not use any "bad" words, I guess my amplification of the criticism in the original post here to other groups was too close to home for the vested interest groups. I feel very angry, and YT, making your users angry is not a good business strategy.

    • @dascreeb5205
      @dascreeb5205 25 days ago

      ?

    • @goldminer754
      @goldminer754 24 days ago +5

      This project of AGI would need hundreds of billions, if not trillions, of dollars, plus cooperation with other companies, plus major support from a powerful government, and it won't bring any profits for many, many years. And it is not even guaranteed that building this AGI is feasible, so it is an extremely risky investment. Fortunately, corporate greed almost entirely revolves around short-term profits, so I am pretty certain that no such giga-project will be started any time soon, especially considering how much energy it needs and the tiny problem of climate change still having to be meaningfully addressed.

  • @michaelbuckers
    @michaelbuckers 29 days ago +417

    There's another issue, with language models anyway. The training data already includes virtually 100% of all text written by humans, including the internet. But now the internet is flooded with AI-generated text, so you can't use the internet anymore, because that would be the AI version of the Habsburg royal lineage.

    • @michaelnurse9089
      @michaelnurse9089 29 days ago +20

      "The learning database already includes virtually 100% of all text written by humans, " No, before starting training they run all the text through AI inference of the previous model. This improves quality by a significant percentage. In reality, there is always going to be another layer of AI between the current one being trained and the data.

    • @michaelbuckers
      @michaelbuckers 29 days ago

      @@michaelnurse9089 It improves metrics, not quality. Sure enough, when AI is predicting its own text, the perplexity will be lower than when it predicts human text. And this is especially a huge issue for small models fine-tuned on ChatGPT. People are already sick and tired of unprompted "as a language model" and such garbage in their anime character simulator chatbox, and yet it's only gonna get worse when next-gen ChatGPT is fine-tuned on last-gen ChatGPT.

    • @bbgun061
      @bbgun061 29 days ago +42

      That doesn't make sense.
      Garbage in, garbage out.
      Current AI models produce garbage a lot of the time. If you use that to train another AI model, it's going to produce more garbage.

    • @tannerroberts4140
      @tannerroberts4140 29 days ago +27

      I think it's good to remember that, in terms of societal contributions, the quality of human activity in general is garbage in. But society got built. We waste our time, our money, our effort; get pointlessly hooked on rage bait, romcoms, addictions, etc. One might say we're mostly enjoying life, but in terms of societal contribution, it's pretty much trash.
      An honest look at even the leaders in every field of study shows that each leader is either somebody with one good idea that attracted a lot of positive attention, or an exemplary personality that attracts a lot of collective intelligence.

    • @michaelbuckers
      @michaelbuckers 29 days ago +3

      @@tannerroberts4140 Language models replicate training data. Between replicating humans and replicating itself, it's a very easy pick.
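The "Habsburg" worry in this thread can be made concrete with a toy simulation (my own illustration, not any real training pipeline): repeatedly fit a simple model to the previous generation's output and sample from it, and the spread of the data tends to collapse, i.e. the tails of the original distribution are progressively forgotten.

```python
import random
import statistics

def fit_and_resample(data, n, rng):
    """Fit a normal distribution to `data` (mean + stdev), then sample n new points from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
corpus = [rng.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: the "human-written" data

stdevs = []
for _ in range(200):  # each generation trains only on the previous generation's output
    corpus = fit_and_resample(corpus, n=10, rng=rng)
    stdevs.append(statistics.stdev(corpus))

# Over many generations the estimated spread drifts toward zero:
# each refit loses a little of the original variability and never gets it back.
print(f"spread after 1 generation: {stdevs[0]:.3f}, after 200: {stdevs[-1]:.3f}")
```

The small sample size per generation exaggerates the effect so it shows up quickly; it is a cartoon of the dynamic, not a claim about how fast real models degrade.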

  • @pablovirus
    @pablovirus 1 month ago +564

    I love how Sabine is deadpan serious throughout most videos and yet she can still make one laugh with unexpected jokes

    • @jamesbarringer2737
      @jamesbarringer2737 1 month ago +16

      She does have a good and somewhat subtle sense of humor.

    • @ChiefEru
      @ChiefEru 1 month ago +9

      In all seriousness, I want to know why I went to the kitchen.
      Better yet... the lack of remembering an empty fridge.

    • @hvanmegen
      @hvanmegen 1 month ago +23

      I love this sane German attitude of hers.. the fact that she spends time to read an essay like this to call him on his bullshit (especially with the conflict of interest) brings me so much hope for the future. We need more people like her.

    • @DanielMasmanian
      @DanielMasmanian 1 month ago +28

      Yes, a German sense of humour is no laughing matter.

    • @rohitnirmal1024
      @rohitnirmal1024 1 month ago +2

      @@DanielMasmanian I had a German professor. Boy, he had a sense of humor. I have not laughed since I met him.

  • @Mars_architects_bali
    @Mars_architects_bali 24 days ago +20

    Nailed it .. this technocentric mindset is pervasive in so many fields .. but rarely scrutinised holistically for its resource needs, land-use changes, and social impacts

  • @tobiaskpunkt3595
    @tobiaskpunkt3595 26 days ago +10

    Regarding failed predictions, you should also acknowledge that in AI there were many predictions that were accomplished years earlier than predicted.

    • @johndow1645
      @johndow1645 25 days ago +2

      Also, many breakthrough technologies (planes, controlled fission) surprised experts who were on the record saying that those breakthroughs were "decades away"

    • @tabletalk33
      @tabletalk33 25 days ago +3

      Examples?

    • @drachefly
      @drachefly 24 days ago +1

      @@tabletalk33 MATH benchmark, for one. It was made to be unreasonably difficult so that it would be able to track AI's progress over a long period of time. Latest AIs get over 90% on it after just a few years.

    • @Jo_Wick
      @Jo_Wick 10 days ago

      @@drachefly Great example. Here's another:
      "I hope none of you gentlemen is so foolish as to think that aeroplanes will be usefully employed for reconnaissance from the air. There is only one way for a commander to get information by reconnaissance, and that is by the use of cavalry."
      General Sir Douglas Haig, British Army
      Sometimes people are resistant to change, but change comes whether we want it or not.

    • @sarcasticnews1195
      @sarcasticnews1195 2 days ago

      "FLYING MACHINES WHICH DO NOT FLY" New York Times, December 8, 1903. (The Wright brothers flew literally nine days later.) "It might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanics in from one million to ten million years-provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in organic materials. No doubt the problem has attractions for those it interests, but to the ordinary man it would seem as if effort might be employed more profitably."
      This prediction was especially retarded considering that balloon flight had already existed since the 1700s, and engines since the 1800s. It doesn't take a mad genius to put those two concepts together.

  • @msromike123
    @msromike123 1 month ago +1125

    If I will be able to ask Google home why I went to the kitchen, I am on board!

    • @sebastianeckert1947
      @sebastianeckert1947 1 month ago +46

      You can ask today! Answer quality may vary

    • @ThatOpalGuy
      @ThatOpalGuy 1 month ago +16

      this is a real problem for many of us.

    • @HardcoreHokage-cw4uq
      @HardcoreHokage-cw4uq 1 month ago +27

      You went into the kitchen to make a samich.

    • @HardcoreHokage-cw4uq
      @HardcoreHokage-cw4uq 1 month ago +26

      Make me one too.

    • @LilBurntCrust99
      @LilBurntCrust99 1 month ago +6

      Skibidi

  • @bulatker
    @bulatker 1 month ago +799

    "I can't see no end" says anyone in the first half of the S-curve

    • @michael1
      @michael1 1 month ago +83

      "I still see no reason to upgrade my 640kb of ram" Bill Gates

    • @caryeverett8914
      @caryeverett8914 1 month ago

      Isn't that kinda the point of the first half of an S-Curve? The end cannot be predicted and could occur in 1 year or 50 years. It all looks the same either way.
      It'd be pretty silly to say the end is in sight when you're still on the straight part of the S-Curve.

    • @pjtren1588
      @pjtren1588 1 month ago +56

      Just depends where we sit on the timescale before the inflection point. It may be one hell of an S.

    • @Thedeepseanomad
      @Thedeepseanomad 1 month ago +5

      @@michael1 Just wait, pay attention, and grab on to the next sigmoid skyhook when it materializes.

    • @djayjp
      @djayjp 1 month ago +9

      Double negative....

  • @vhyjbdfyhvjybv9614
    @vhyjbdfyhvjybv9614 23 days ago +2

    I like to compare this to game development. Imagine someone saying in 2002 that because we managed to double the number of polygons we can render every 2 years, photorealistic games are 10 years away. 23 years later, it turns out that making photorealistic games is a very difficult topic that requires lots of problems to be solved, some easy, some super hard. E.g. today we can render lots of polygons and calculate realistic lighting, but destructible environments are not solved. Realistic realtime water simulations are far away. And we know that rendering lots of polygons is not enough; e.g. animations and shadows, especially from large objects, are hard problems.

  • @richardlbowles
    @richardlbowles 28 days ago +7

    Artificial Intelligence might be right around the corner, but Natural Stupidity is here with us right now.

    • @tabletalk33
      @tabletalk33 25 days ago

      Humans make poor, inconsistent decisions and are easily swayed.

  • @jeremiahlowe3268
    @jeremiahlowe3268 1 month ago +324

    You read a 165-page essay, even though you knew the contents inside would be dubious at best. Sabine is heroic.

    • @Mikaci_the_Grand_Duke
      @Mikaci_the_Grand_Duke 1 month ago +8

      Sabine for AI in 2025!

    • @mikemondano3624
      @mikemondano3624 1 month ago +23

      I hope your implication is wrong and people don't avoid reading things they don't agree with or already think they know. That is the "echo chamber" magnified.

    • @justaskin8523
      @justaskin8523 1 month ago +1

      @@mikemondano3624 - Oh they already avoid reading things they don't agree with. Had it happen to me 6 times this week, and there's still another workday left!

    • @mikebibler6556
      @mikebibler6556 1 month ago

      This is an under-appreciated comment.

    • @user-cw3nb8rc9e
      @user-cw3nb8rc9e 1 month ago

      Old woman. Has no clue about things she wants to comment on

  • @davidbonn8740
    @davidbonn8740 1 month ago +123

    I think there are a couple of problems here that you don't point out.
    The biggest one is that we don't have a rigorous definition of what the end result is. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction, and we can expect people to do exactly that.
    Another is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up.
    There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have and that people who should know better are all too willing to handwave away. One example is that I can give examples of complex behavior in the animal world (honeybee dances are a good one) that would be very hard to replicate using neural networks all by themselves. What that other piece is is currently unspecified.

    • @petrkinkal1509
      @petrkinkal1509 1 month ago +3

      @robertcopes814 Well it learns what is the most likely next word in a sentence. :)

    • @timokreuzer381
      @timokreuzer381 1 month ago

      Humans are extremely inefficient learners. You have to shove petabytes of video, audio and sensory data into them for years before they show even the slightest signs of intelligence.

    • @Zeroisoneandeipi
      @Zeroisoneandeipi 1 month ago

      @robertcopes814 I agree. I asked ChatGPT 4o to create a maze with labels using HTML and JavaScript. It could do this fine. Then I took a screenshot of the maze and asked it to solve the maze, and it just "walked" from A1 to F6 in a diagonal line through all the walls. I asked again to do it without walking through walls; it changed the path a bit, but still walked through walls. So it does not understand what a maze is, but can create code to generate a maze just because it was trained on this code somewhere from the web.

    • @asdfqwerty14587
      @asdfqwerty14587 1 month ago +12

      I would say by far the #1 problem with the current models is that they aren't really designed to "do" anything. No matter how advanced they get (without completely redesigning them from the ground up), their only goal is to mimic the training set. They have no concept of what it means to be better at what they're doing beyond comparing it to what people input as the training data, which makes them incapable of learning anything on their own (because anything they try to learn must fundamentally be compared to what a human is doing, so if there are no humans in the equation, they have nothing to compare to and can't do anything beyond guessing completely randomly).
      I think the LLMs are on the completely wrong track if they're aiming for any kind of general intelligence. If they want an actually intelligent AI, the AI must learn how to communicate without being explicitly programmed to do so (i.e. it would need to have some completely unrelated goal that "can" be done without communicating with anything, and then learn that some form of communication makes it better at achieving that goal). It would of course be a lot harder to do, and it would probably not seem very smart for a long time, but it would be 100x more impressive to me if an AI learned how to speak that way than anything the LLMs are doing, because that would actually require the AI to understand the meaning of words rather than just predict what words come next in a conversation.

  • @michealkinney6205
    @michealkinney6205 29 days ago +3

    You made me legitimately laugh at least three times with very clever puns, while holding a straight face. I am now subscribed. I agree with most of your points (I would mostly just add that he left out even more considerations). But great content! Thanks!!

  • @SomeMorganSomewhere
    @SomeMorganSomewhere 25 days ago +4

    "It's robots all the way down" *rolleyes*

  • @matthewspencer972
    @matthewspencer972 1 month ago +47

    It is surprisingly hard, when one tries to converse with pure software engineers, to get them to accept that the laws of physics apply to them and cannot be bypassed by sufficiently clever coding. You get the same sort of thing from genetic engineers, who simply won't accept that endless fiddling with a plant's DNA will not compensate for the absence of moisture or other nutrients in the soil or other growing medium.

    • @TedMan55
      @TedMan55 29 days ago +14

      I'm a software engineer who came from a math- and physics-heavy background, and I was shocked to learn that most programmers didn't know or like math, which I'd just assumed they did... probably explains a lot about the current state of programming

    • @user-uq1sn5ob3k
      @user-uq1sn5ob3k 29 days ago +6

      @@TedMan55 How do you even become a software developer without loving math? As someone terrible at it and coding, I assumed you'd have to swear by your high school math book to even get a chance at compsci

    • @egg-mv7ef
      @egg-mv7ef 29 days ago +5

      @@user-uq1sn5ob3k thats completely wrong. math doesnt have as much to do with software engineering as u think. i mean, if youre making physics model visualization ofc u need math lol but for 50% of the usecases u dont need any math. the SEs that know math just have more opportunities cause they can work on more complex stuff like game engines etc

    • @TedMan55
      @TedMan55 29 days ago +6

      @@egg-mv7ef it's not like you can't program without math skills; it's just that, in my opinion, having a mathematical mindset helps you think more rigorously, be clear about definitions, and it can even give you some neat shortcuts for certain algorithms

    • @matthewspencer972
      @matthewspencer972 29 days ago +4

      @@TedMan55 I had to work with one who didn't believe that voltage really mattered. We were working in the field of industrial automation; specifically a production line for a well-known Japanese car-maker in Swindon. The customer had specified Japanese PLCs (the only other choices are American or German) and when one of these arrived and needed to be set up, so the software engineer could load his software into it and run a few tests, it came with a power cable terminating in the sort of 110V connector that's more or less a global standard for these things, and I went off looking for a 240V-to-110V adapter, into which it would have plugged with no problem, had he *waited* for me to do something he considered pointless and unnecessary.
      As I was making my way back, I heard "why are the indicator lights so bright? It's F***ing blinding me!" and my heart sank as my eyebrows rose. The software engineer had removed the connector and stuck a UK-standard 13-amp plug on the cable, plugged it into the office 240V mains....
      I think that's why, these days, almost all domestic computer kit has switched-mode PSUs that will work with whatever the idiots plug them into.
      The software engineer secured a senior position at WIN.com, mainly because he was equipped with a reference so glowing (almost as brightly as the PLC had) that he couldn't really have failed in his mission to find a new job!

  • @anthonyj7989
    @anthonyj7989 1 month ago +109

    I am from Australia and I totally agree with you. Australia is one of the biggest users of AI in mining - but a lot of people don’t understand why. If you read through the comments about driverless trucks and trains in Australia, people have no idea of just how remote, humid and hot the northern parts of Australia are.
    People working in iron ore mining in Australia are just hours away from being seriously dehydrated or dead. For iron ore mining to be carried out at the scale that it is, it needed something better than the modern human, who is not able to work outside of an air-conditioned environment in the remote northern locations of Australia. Therefore, mining companies had to come up with something that can work in a hostile environment. My understanding is that AI in mining has not reduced the number of people, just moved them to an air-conditioned building in a city.

    • @feraudyh
      @feraudyh 1 month ago +28

      That gets the prize for the most interesting thing I've read today.

    • @hussainhaider2818
      @hussainhaider2818 1 month ago +9

      I don’t get it, how do you mine ore if the miners are back in the city? You mean remote controlled robots?

    • @conradboss
      @conradboss 1 month ago +2

      Hey, I like Australia 🇦🇺 😊

    • @MyBinaryLife
      @MyBinaryLife 1 month ago +42

      it's not AI, it's just automation

    • @rruffrruff1
      @rruffrruff1 1 month ago +6

      It has definitely reduced the people per output, else it wouldn't be done.

  • @patrickfrazier5740
    @patrickfrazier5740 28 days ago +3

    I love the toast joke. Keep up the good work. The logic of how you described the two primary constraints seems concise.

  • @GolerGkA
    @GolerGkA 24 days ago +1

    The lack of data you point out is not necessarily a problem. Lately there have been a few papers showing that neural networks can continue training on the same dataset, without showing any improvement for many generations, until they finally grok the data and show significant improvements. There are other ways around limited data as well. I don't think AGI or superhuman intelligence will require any more data than is currently available in the biggest datasets; we just have to utilise it better.

  • @Stadtpark90
    @Stadtpark90 29 days ago +16

    Exponential curves usually stop being exponential pretty fast. The surprising success of Moore’s law makes IT people think that’s normal, which it isn’t.

    • @michaelnurse9089
      @michaelnurse9089 29 วันที่ผ่านมา

      Everyone knows this. The question is whether the curve dies out before AI intelligence exceeds our intelligence or not. If it is the latter there will be serious problems. I suspect the former.

    • @davidradtke160
      @davidradtke160 26 วันที่ผ่านมา +4

      Most exponential curves are actually S curves.

    • @tabletalk33
      @tabletalk33 25 วันที่ผ่านมา +2

      Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

    • @jozefcyran2589
      @jozefcyran2589 24 วันที่ผ่านมา +1

      So what? A 50-year run can improve the relative power and way of being by orders of magnitude, and that's usually enough to be excited about. AGI could arrive incredibly quickly.
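
The exponential-versus-S-curve point in this thread can be illustrated numerically: a logistic (S-shaped) curve is nearly indistinguishable from an exponential at first, then saturates at its carrying capacity. A minimal sketch with made-up growth rate and capacity, purely illustrative:

```python
import math

def exponential(t, r=0.5):
    # pure exponential growth: x(t) = e^(r*t), starting from x(0) = 1
    return math.exp(r * t)

def logistic(t, r=0.5, k=1000.0):
    # logistic (S-curve) growth with carrying capacity k,
    # also starting from x(0) = 1
    return k / (1 + (k - 1) * math.exp(-r * t))

# early on the two curves agree closely...
for t in (0, 2, 4):
    print(t, round(exponential(t), 1), round(logistic(t), 1))

# ...but the logistic curve saturates near k while the exponential keeps going
print(30, round(exponential(30), 1), round(logistic(30), 1))
```

The point of the sketch: from early data alone you cannot tell which curve you are on, which is why extrapolating "exponential" AI progress decades out is risky.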

  • @KageSama19
    @KageSama19 หลายเดือนก่อน +32

    I love how even AI depicts lawmakers as asleep.

    • @makinganoise6028
      @makinganoise6028 หลายเดือนก่อน

      But are they? Maybe this is the plan: societal collapse. The West seems to be doing everything possible to destroy itself with mass illegal migration, anti-family WEF cult agendas, and WW3 with Russia anytime soon; destroying huge swathes of middle-income jobs fits into the picture

    • @PMX
      @PMX หลายเดือนก่อน +7

      That was definitely the prompt they used. And they purposely used stable diffusion 3 that was just released and is being mocked by how bad it is at generating humans, so it would be funnier.

  • @quixotiq
    @quixotiq 27 วันที่ผ่านมา +2

    great stuff yet again, Sabine! Love your work

  • @OryAlle
    @OryAlle 22 วันที่ผ่านมา +7

    I am unconvinced the data issue is a true blocker. We humans do not need to read the entirety of the internet, why should an AI model? If the current ones require that, then that's a sign they're simply not correct - the algorithm needs an upgrade.

    • @PfropfNo1
      @PfropfNo1 20 วันที่ผ่านมา +5

      Exactly. Current models need to analyze like a million images of cats and dogs to learn to distinguish cats and dogs. A 4 year old child needs like 10 images. Current AI is strong because it can analyze („learn“) tons of data. But it is extremely inefficient in that, which means there is huge potential.

    • @toofasttosleep2430
      @toofasttosleep2430 18 วันที่ผ่านมา

      💯 Better takes from ppl with anime avatars than scientists on yt 😂

  • @MrFuncti0n
    @MrFuncti0n หลายเดือนก่อน +250

    The Kurzweil prediction is for 2029 not 2020, right?

    • @SethHixie
      @SethHixie หลายเดือนก่อน +24

      Yes. The same year the asteroid Apophis has a 3% chance of making impact 👀

    • @robadkerson
      @robadkerson หลายเดือนก่อน +53

      @@SethHixie 2.7% was the original hypothesis in 2004. It's been revised, and it will not be hitting us in 2029 or 2036

    • @johanlahti84
      @johanlahti84 หลายเดือนก่อน +30

      @@SethHixie think they crunched the numbers again and concluded that it will miss with 100% certainty

    • @Vember813
      @Vember813 หลายเดือนก่อน +44

      He's predicted 2029 since the 90s, yes

    • @jamesgornall5731
      @jamesgornall5731 หลายเดือนก่อน +4

      @@johanlahti84 oh yeah, that's what they want us to think...

  • @thedabblingwarlock
    @thedabblingwarlock 29 วันที่ผ่านมา +10

    Able to process information faster than a human? Certainly. Computers have been able to do that for decades now.
    Able to do anything a human can do better than a human can do it? Nope, not a chance.
    People keep forgetting that we don't really know what intelligence is on a quantifiable level. We have a somewhat intuitive grasp of what intelligence is, but as far as I am aware, we don't have a way to measure it and compare except in the broadest sense. We don't fully understand how our brains or brains in general work. That's not even getting into things like the synthesis of ideas, one of the cores of creativity; aesthetic sensibilities; and a dozen other highly subjective subjects. Simply put, we don't know enough about what goes on under the hood to put numbers to it.
    And that's a problem because computers only deal in numbers.
    Which leads me to the second thing people keep forgetting. Most modern AI models that I am aware of use a complex set of vector and probability equations to go from input to output. To grossly oversimplify things, it's just one big math equation with an algorithm at the start to tokenize the input into a form that the computer program can process, and another at the end to make the output processable by the person providing the input. Equations and algorithms don't have the capability to be self-aware, at least not in any sense of an intelligent being. Nothing will change that, no matter how hard you might wish for them to be so. Nor do they have the ability to generate new ideas or combine disparate ones into a cohesive whole.
    Thirdly, computers and thus AI do not have an architecture anywhere close to that of a human brain, or any brain for that matter. They're trying to translate a very analog process into a digital one without truly understanding everything going on in the analog process first, and boy howdy, is that process complex! A friend of mine pointed out how many of these projects don't have a psychologist on board, so how can they know what their target is without the person whose entire career has been to study the thing they're trying to replicate? In short, these guys don't even have an expert on intelligence on staff, or at least the closest thing we have to an expert that I am aware of.
    What these guys sound like to me is the computer science equivalent of doctors and lawyers. They are very smart people in a very mentally demanding field, but they also happen to know they are smart, and they think they are smarter than anyone else. Because they think they are smarter than anyone else, they think they can do anyone else's job. They can't. I worked in IT for almost ten years, and some of our most difficult clients to deal with were doctors and lawyers. They would question everything on a project, they'd insist on using systems that were over a decade out of date, and they'd also imply that they could do our jobs better than we could.
    General AI or Super AI isn't only a few years away. I doubt it's even a few decades away. I think, like fusion, the timelines are going to be much, much longer than anyone wants to admit. Ironically, I think we are much closer to fusion as an energy production method than we are to having anything close to a human-like AI. We can generate fusion reactions, and we've managed to get more juice out than we pumped in on at least one occasion. It's a matter of refinement and iteration at this point.
    We aren't even at the stage we were at with fusion in the 30's and 40's with AI, I think. We don't understand everything that's going on under the hood with intelligence. We can't model it. We can't quantify it. We can't even agree on what it is beyond the broadest strokes. Until we can do that, we aren't going to get anything intelligent out of AI, and all it will ever be is a complex vector equation tuned on probabilities.
    And this isn't even getting into the steps some are taking to protect their work from being scraped by spiders looking for AI training (re: configuring, because that's what they are doing. Tuning would also be more accurate) data. And some of those measures are aimed at poisoning the well. If those measures become commonplace, I don't see the current crop of LLM and LMM (Large Language Model and Large Media Model) based AIs getting any better, and I don't see that as a viable option going forward.
    This isn't the first time we've seen futurists touting that AI and automation will take over large swathes of the current job market. I remember reading an article over a decade ago about how in ten years we'd see close to 50% of the workforce replaced by AI and automated systems. True, AI has made some jobs obsolescent, but as we seem to be finding out about every decade or so, computers and computer programs aren't ready to do what a human can do. They get closer every year, but the pace isn't nearly as fast as some would like you to believe.
    As for me, I have a human centric view of this. I believe that AI can be a powerful tool, but right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised. I could be wrong, but I don't think I am. I've seen it with 3d-printing (additive manufacturing.) I've seen it with 3d televisions and media (can't remember the last time I saw this as a selling point.) I've seen it with cryptocurrencies and NFTs (hopefully I need not explain this one further.) And, these are all examples from just the last ten to fifteen years. Time and time again we see technology as a fad that is around for a few years, then the hype fizzles and dies, sometimes taking the tech behind the build up with it.
    But then again, I'm just some web developer from Alabama. What do I know?
    P.S. I almost forgot to add, that whole robots do all of the work thing seems to have a chicken and egg problem, and that's before we even get into the myriad engineering and manufacturing challenges that need to be solved for just the GEN 1 bots.
    This is why you should look outside of your field, folks! It helps build an appreciation for how hard some of those "minor challenges" might be in reality.

    • @tabletalk33
      @tabletalk33 25 วันที่ผ่านมา

      Very interesting, great comment! These developers of AI who are making all sorts of predictions would do well to read what you wrote: "...right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised." Robert J. Marks says the same thing. See his book: Non-Computable You: What You Do That Artificial Intelligence Never Will (2022).

  • @billcollins6894
    @billcollins6894 26 วันที่ผ่านมา +80

    Sabine, I worked on AI at Stanford. There are two areas where people have misconceptions.
    1) We do not need new power to get to AGI. Large power sources are only needed if the masses are using AI. A single AI entity can operate on much less power than a typical solar field. It does not need to serve millions of people. It only needs to exceed our intelligence and become good at improving itself. It can serve a single small team that directs it at focusing on solving specific problems that change the world. One of the early focus issues is designing itself to use less power and encode information more efficiently.
    2) No new data is needed. This fallacy assumes that the only way to AGI is continuing to obtain new information to feed LLMs. All of the essence of human knowledge is already captured. AI only needs to understand and encode that existing knowledge more efficiently. LLMs are not the future, they are a brief stepping stone.

    • @tabletalk33
      @tabletalk33 25 วันที่ผ่านมา +5

      Very interesting. Thanks for the clarification.

    • @PracticalAI_
      @PracticalAI_ 24 วันที่ผ่านมา +11

      The energy will be used to train the models, not to run them… please check the paper

    • @billcollins6894
      @billcollins6894 24 วันที่ผ่านมา +7

      @@PracticalAI_ The energy used to train the models is inconsequential in the long run. GPUs are not the future for AI.

    • @PracticalAI_
      @PracticalAI_ 23 วันที่ผ่านมา

      @@billcollins6894 have you watched the video or worked in the field? To train a model you need GW of energy for months; that's why it costs millions. Your idea that the AI will design itself to run on less power is "possible" but not in the short/medium term. These machines are autocomplete on steroids at the moment. Good for marketing, terrible for designing new things

    • @Disparagingcheers
      @Disparagingcheers 23 วันที่ผ่านมา +7

      Maybe I’m misunderstanding the definition of AGI, but doesn’t narrowing scope of the model to a small team training/using it for specific use-cases contradict what AGI is? Thought it was supposed to be generalized for anything?
      Are you suggesting all of the essence of human knowledge is captured on the internet? Idk that that’s necessarily true, and also I believe there’s a lot we don’t know? So wouldn’t that mean for a model to continue to learn beyond what we are already capable of it would need to be able to conduct experiments and capture new training data?

  • @josdejongnl
    @josdejongnl 17 วันที่ผ่านมา +3

    I find these arguments a bit short-sighted. Arguing from a future lack of data and energy assumes, I think, that there will not be any major innovation on these two topics.
    I can imagine a future AI will not need as much data to train on as the current AI and works more similar to how a human toddler learns. And AI advocates are hopeful that the industry will be able to develop much more efficient hardware, potentially solving the energy problem. Of course we don't know whether these innovations will actually happen in the future, but good to keep them in mind.

  • @kiwikiwi1779
    @kiwikiwi1779 หลายเดือนก่อน +28

    "I can't see no end!" says man who earns money from seeing no end.
    Amazingly put. So many of these AI "experts" are either grifters in the progress of duping people, or are so wrapped up in their own expertise and personal incentives that they'd just rather keep the gravy train going. :D

    • @Apjooz
      @Apjooz หลายเดือนก่อน

      Why would it end? No reason.

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +2

      @@Apjooz "Me human. Me most intelligent. Computer can no intelligent. Me intelligent. Computer will not more intelligent than me because me say so. ME MOST INTELLIGENT"

    • @anthonybailey4530
      @anthonybailey4530 29 วันที่ผ่านมา +2

      Man is twenty. Man left hugely rewarding OpenAI job due to his concerns. Man does need to eat. Man underestimates cynicism.
      Don't look for excuses to dismiss. Engage with the arguments and assess probabilities.

    • @RawrxDev
      @RawrxDev 29 วันที่ผ่านมา +1

      @@hardboiledaleks9012 Childlike understanding of the concerns with AI hype. Reddit tier comment

    • @hardboiledaleks9012
      @hardboiledaleks9012 29 วันที่ผ่านมา

      ​@@RawrxDev My comment had nothing to do with the actual valid (if not a bit uninformed) concerns about A.I hype. I was mocking the usual "intellectuals" take on AGI. The ones with no expertise in the field who can't tolerate the thought that intelligence can be reduced to a calculation.
      As for you, I think your comment is very self descriptive as far as "childlike understanding" and "reddit tier comment" are concerned. Good job.​

  • @patrickmchargue7122
    @patrickmchargue7122 หลายเดือนก่อน +78

    Actually, according to the graphic you slashed up, Ray Kurzweil predicts AGI by 2029, not 2020.

    • @katehamilton7240
      @katehamilton7240 หลายเดือนก่อน

      So what? Industry people are hyping AGI to make money. AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'

    • @brendanh8193
      @brendanh8193 หลายเดือนก่อน +13

      And he puts the singularity at 2045. AGI is parity, not super.

    • @polyphony250
      @polyphony250 หลายเดือนก่อน +15

      @@brendanh8193 It's looking like an out-of-this-world, shockingly good prediction today, then, considering when it was made.

    • @brendanh8193
      @brendanh8193 หลายเดือนก่อน +18

      @@polyphony250 Agreed. I do get a little annoyed with SH at times for failing to understand the nature of exponential predictions. Take Vernor Vinge's prediction: in the same speech, he put bounds on it, with 2030 being his upper bound. We haven't got there yet, but she basically ridiculed him for making such a prediction.

    • @EliteDragonX69
      @EliteDragonX69 หลายเดือนก่อน +3

      He also predicted that we would have 1 world govt by 2020…

  • @frankheilingbrunner7852
    @frankheilingbrunner7852 28 วันที่ผ่านมา +13

    The basic fallacy in the chatter about the AI superrevolution is that a species which doesn't want to think can create a system which does.

    • @Hellcat-to3yh
      @Hellcat-to3yh 26 วันที่ผ่านมา +4

      Seems like a pretty vast over generalization there.

    • @douglasclerk2764
      @douglasclerk2764 26 วันที่ผ่านมา +1

      Excellent point.

    • @danielstan2301
      @danielstan2301 26 วันที่ผ่านมา

      No the worst fallacy is that they assume that a smart machine will create competition for itself or something smarter which will possibly replace/destroy the creator. That's not how life works.
      I also love how they assume that an intelligent machine will just want to improve itself instead of writing poetry or creating stupid videos on various platforms out there, like these other smart beings already do, instead of using this internet platform to improve themselves

    • @Hellcat-to3yh
      @Hellcat-to3yh 26 วันที่ผ่านมา

      @@danielstan2301 That’s not how life works? Humans are actively destroying its creator right now in Earth. We evolved from single cell organisms over hundreds of millions of years.

    • @41-Haiku
      @41-Haiku 25 วันที่ผ่านมา +2

      ​@@danielstan2301 They don't assume that. The instrumental convergence thesis was hypothesized and taken to be likely, since it was very intuitive. Then it was mathematically proven that "Optimal Policies Tend to Seek Power." Then we observed tendencies relevant to power-seeking in current systems, including strategic deception and self-preservation.
      If you spend some time looking through what we now know about AI Risk and honestly assessing the scientific validity of the claims being made, there is a strong chance you will become worried (as most experts are) about AI potentially ending the world during your lifetime.

  • @rgonnering
    @rgonnering 14 วันที่ผ่านมา

    I love Sabine. She is brilliant and has a great sense of humor. Above all she explains complex issues, and I think I understand (some of) it.

  • @Virgil_G2
    @Virgil_G2 หลายเดือนก่อน +147

    This sounds more like a horror story plot than a future to be excited about, tbh.

    • @2ndfloorsongs
      @2ndfloorsongs หลายเดือนก่อน +6

      That all depends on how excited you can get about a half full glass.

    • @t.c.bramblett617
      @t.c.bramblett617 หลายเดือนก่อน +6

      It's exactly like the Matrix, including the limiting factor of energy that the Matrix movies also ignore. You can't generate energy from a closed system, and manufacturing and computing both require massive amounts of energy and as she pointed out, obtaining material for building infrastructure itself requires energy that has to be focused and channelled as efficiently as possible.

    • @rruffrruff1
      @rruffrruff1 หลายเดือนก่อน +7

      It will be exciting for the few people who own the AI... at least until the AI gets clever enough to own them.
      Honestly I think the struggle for domination will result in devastation far beyond our wildest nightmares... and there is no way we can stop it. Our best hope is that some hero develops and unleashes a compassionate AI first... that becomes king of the world.

    • @RedRocket4000
      @RedRocket4000 หลายเดือนก่อน +3

      @@rruffrruff1 No, we can stop it. Turn off all power. A Dune-style flat-out ban on computer-like devices would also work: only allow single-task electronics that can't do other tasks.

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 หลายเดือนก่อน +4

      Maybe. But I'll say a good part of the entire analysis is BS. A zeitgeist piece on the LLM success, with no clue that generative AI is a misfit for most practical work.

  • @jensphiliphohmann1876
    @jensphiliphohmann1876 29 วันที่ผ่านมา +12

    10:00
    The neutron-free fusion Zungenbrecher (tongue-twister) is hilarious. It reminds me of a Loriot sketch where Evelyn Hamann is struggling with English pronunciation. 😂❤

  • @NeXaSLvL
    @NeXaSLvL 2 วันที่ผ่านมา

    scientists are vigorously studying data trying to figure out the answer to questions such as "why did I go to the kitchen?"

  • @MilesDashing
    @MilesDashing 4 วันที่ผ่านมา

    Don't worry everyone, as long as we have Sabine around, intelligence can't go up by TOO much.

  • @PeterPan-ev7dr
    @PeterPan-ev7dr หลายเดือนก่อน +41

    Artificial Stupidity is growing faster than Artificial Intelligence.

    • @gibbogle
      @gibbogle 29 วันที่ผ่านมา

      Natural stupidity.

    • @williamkinkade2538
      @williamkinkade2538 29 วันที่ผ่านมา

      Only for Humans!

    • @PeterPan-ev7dr
      @PeterPan-ev7dr 29 วันที่ผ่านมา

      @@williamkinkade2538 Humans infected with their senseless and stupid data the AI.

    • @Bobbel888
      @Bobbel888 29 วันที่ผ่านมา +1

      ~ the idea of nasty children bears fruit, the brighter they are

    • @markthebldr6834
      @markthebldr6834 29 วันที่ผ่านมา +2

      No, it's authentic stupidity.

  • @puelocesar
    @puelocesar หลายเดือนก่อน +57

    I still don't get how LLM systems alone will achieve AGI, and all explanations for it until now were just "it will just happen, just wait and see"

    • @libertyafterdark6439
      @libertyafterdark6439 หลายเดือนก่อน +3

      The idea is that contemporary architectures operate around building representations (abstractions inside the model that may or may not be roughly correlative to concepts) from the dataset.
      What it does now is leverage those representations to produce outputs, but importantly, it leverages representations of a model with X scale trained on Y data.
      So far, there seems to be a direct correlation between models being able to do more things, and those models getting “bigger”
      So with all of this in mind, a bigger model should be more “intelligent” if we are willing to reduce that to the number and permutations of representations it can utilize. That’s why many see a future in which LLMs (or something very close to them) will lead to AGI.
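
The "bigger models do more" correlation this reply describes is usually summarized as a power-law scaling law, with loss falling as a power of parameter count toward an irreducible floor. A minimal sketch; the constants loosely echo published Chinchilla-style fits for the parameter term, but treat them as illustrative, not authoritative:

```python
def scaling_loss(n_params, a=406.4, alpha=0.34, irreducible=1.69):
    # Chinchilla-style form: L(N) = E + A / N^alpha
    # (constants here are illustrative approximations, not fitted values)
    return irreducible + a / (n_params ** alpha)

# loss improves smoothly, but with diminishing returns, as models grow
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {scaling_loss(n):.3f}")
```

Note the flip side of the same formula: each 10x in parameters buys a shrinking loss reduction, and the curve never crosses the irreducible floor, which is why "just scale it" is contested as a route to AGI.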

    • @Lolatyou332
      @Lolatyou332 หลายเดือนก่อน

      It's not the only way AI currently works; they have different algorithms on top of the LLM to increase accuracy. Otherwise how could the AI ever get better? You can't just continue to feed data to a model and make it smarter; there have to be algorithmic changes to increase its ability to scale, both in terms of different concepts and in being usable by consumers at scale.

    • @SomeoneExchangeable
      @SomeoneExchangeable หลายเดือนก่อน +2

      They won't. But somebody ought to remember the other 50 years of AI research...

    • @netional5154
      @netional5154 หลายเดือนก่อน +18

      My thoughts exactly. The current AI systems are 'just' super advanced association algorithms. But there is no emerging identity that really understands things. The current AI systems have just as much consciousness as a pocket calculator.

    • @notaras1985
      @notaras1985 หลายเดือนก่อน

      ​@@netional5154only God creates conscious beings with souls

  • @williambreedyk7861
    @williambreedyk7861 21 วันที่ผ่านมา +3

    No translator for all languages yet in sight. The idea of "meaning" has completely disappeared under statistical correlation. Going to hit the ceiling soon, with no real progress until someone gets it right.

  • @k.vn.k
    @k.vn.k 28 วันที่ผ่านมา +1

    “It won’t be long now until AI outsmart human…. Because… look, it’s not hard, isn’t it?”
    😂😂😂 Oh Sabine, love that!

  • @Khantia
    @Khantia หลายเดือนก่อน +152

    Since when are "2040" and "2029" equal to 2020?

    • @Luizfernando-dm2rf
      @Luizfernando-dm2rf หลายเดือนก่อน +5

      I think those 2 guys were onto something

    • @Megneous
      @Megneous หลายเดือนก่อน +20

      Quality is really slipping on her videos recently...

    • @harshdeshpande9779
      @harshdeshpande9779 หลายเดือนก่อน +6

      She's been watching too much Terrence Howard.

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +19

      @@Megneous That's what happens when Nobel disease takes over someone's narrative. This AI content by Sabine comes from an internal bias and isn't educational at all. She is not an expert in the matter of infrastructure or AI models / training algorithms. This means this video is basically empty content.

    • @timokreuzer381
      @timokreuzer381 หลายเดือนก่อน +2

      Compared to the age of the universe that is an insignificant error 😄

  • @MikeMartinez74
    @MikeMartinez74 หลายเดือนก่อน +50

    Veritasium has a video about how most published research is wrong. For generative AI as it exists now, this seems like a disaster waiting to be collected.

    • @Apjooz
      @Apjooz หลายเดือนก่อน +1

      Tis but a manifesto.

    • @SteveBarna
      @SteveBarna หลายเดือนก่อน +1

      Will be interesting to see if AI can figure out what research is incorrect. Another assumption we make of the future.

    • @mal2ksc
      @mal2ksc หลายเดือนก่อน +10

      We probably don't have the time or resources to find all the wrong papers, but AI might be able to point out where papers come to mutually exclusive conclusions just because it can index so many more details than we can.

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +1

      It never crossed your mind that the veritasium video might be wrong?

    • @hivetech4903
      @hivetech4903 หลายเดือนก่อน +4

      That channel is sensationalist garbage 😂

  • @removechan10298
    @removechan10298 28 วันที่ผ่านมา +2

    6:01 excellent point and that's why i watch, you really hone in on what is real and what is not. awesome

  • @daniel06977
    @daniel06977 7 วันที่ผ่านมา

    5:44 Megamind looking up smugly at you - "No data?"

  • @scythe4277
    @scythe4277 หลายเดือนก่อน +28

    Sabine should be part of a comedy duo because she delivers hilarious lines with a dead pan face that is just brutal.

    • @5nowChain5
      @5nowChain5 29 วันที่ผ่านมา

      The other half of the duo is her long-suffering husband, who should get an award for his infinite patience. (Oh, and the bloopers at the end were hilariously unexpected gold 😂😂😂😂😂😂😂)

    • @sicfrynut
      @sicfrynut 29 วันที่ผ่านมา

      reminds me of Monty Python skits. those guys were so skilled at deadpan humor.

    • @friskeysunset
      @friskeysunset 29 วันที่ผ่านมา +1

      Yes. Just yes, and now, please.

  • @AutisticThinker
    @AutisticThinker หลายเดือนก่อน +9

    3:07 - They don't run at those wattages, they train at those wattages. I've confirmed that's what the chart is saying.

    • @CallMePapa209
      @CallMePapa209 27 วันที่ผ่านมา +1

      Thanks

    • @ArtFusionLabs
      @ArtFusionLabs 27 วันที่ผ่านมา +1

      And that's really her only counterargument if you boil it down. Not convinced that AGI isn't coming by 2027/28

    • @artnok927
      @artnok927 26 วันที่ผ่านมา

      ​@@ArtFusionLabshow close do you think what we have currently is to AGI?

    • @ArtFusionLabs
      @ArtFusionLabs 26 วันที่ผ่านมา +1

      @@artnok927 Hard to put a number on it. ChatGPT-4o could solve 90 pct of the physics exercises in Experimental Physics 1 (Mechanics, Gases, Thermodynamics). If a human student did that you would say he was pretty smart. Therefore I would estimate something between 40-60% (AGI being the level of being able to do everything as well as a professor).

    • @ArtFusionLabs
      @ArtFusionLabs 25 วันที่ผ่านมา

      @@artnok927 Good deep dive by David Shapiro: th-cam.com/video/FS3BussEEKc/w-d-xo.html

  • @TheJackSparrow2525
    @TheJackSparrow2525 23 วันที่ผ่านมา

    Sabine - I love you! You crack me up because you see things in the big picture and make subtle jokes which are so funny to hear because you’re just right! Love your channel and your work. Regards, Jamie.

  • @PyroMancer2k
    @PyroMancer2k 28 วันที่ผ่านมา +7

    "Let them live stream on youtube." Neuro-sama is already streaming.

  • @jcorey333
    @jcorey333 29 วันที่ผ่านมา +12

    As someone who listened to the entire podcast he was a part of, most of the issues you brought up are things he addressed.

  • @supadave17hunt56
    @supadave17hunt56 หลายเดือนก่อน +18

    She is, as almost always, level-headed, and she makes some very good points. I still think she's wrong to think this won't happen quickly (5 to 10 yrs). I'm not here to change anybody's mind, or to have a debate, or even to say "I told you so!" later on. I'm currently terrified of AGI when it'll be able to improve itself. Whether we can control it or not, whether it's conscious or not, it will be more dangerous than anything humans have created in the past. If you've ever felt bad for the ants when you built your garage or paved your driveway, or if you think you know yourself better than anyone could, or if you think cows can stop the farmers from going to the slaughterhouse, or if you think you can explain your new iPhone to your cat or dog with clarity: understand that we will no longer be the dominant form of intelligence, and what that entails is... It'd be nice to slow down, but money is saying otherwise, and I believe there's more behind the door than what the public is seeing. Stay informed.

    • @gibbogle
      @gibbogle 29 วันที่ผ่านมา

      Science fiction.

    • @Jaigarful
      @Jaigarful 29 วันที่ผ่านมา

      Silicon Valley has every reason in the world to overpromise and scare people: overpromising encourages investment, and scaring people encourages investment in measures to keep AI under control.
      I think it's a lot like the future scenes in Back to the Future. We have this picture of a future with technologies like hoverboards and hovercars, but the physics just doesn't allow for it. Instead we got a lot of technological development in ways we couldn't really predict.
      Personally I don't think AGI will happen in a way that makes it reliable. We'll see the use of AI expanding, but it's like those flying cars in Back to the Future.

    • @Ligma_Shlong
      @Ligma_Shlong 29 วันที่ผ่านมา +6

      @@gibbogle thought-terminating cliche

    • @supadave17hunt56
      @supadave17hunt56 29 วันที่ผ่านมา

      @@gibbogle what is science fiction? That humans are not the pinnacle of intelligence? Or maybe you've given ant hill homes 2-week eviction notices before you ever build anything or mow your lawn? Maybe you've been able to stop big business from wanting more of the almighty dollar? Maybe you haven't taken a deep dive into how neural nets operate, or understood that our civilization's ability to communicate with language has a lot to do with why we are currently the dominant species on this planet? Maybe you can't see how our brains are very similar to "next most appropriate word simulators" in our communication? Maybe you could explain to my cat how iPhone apps work? I'm very interested in what you think is "science fiction" as well as what you think that means. Einstein thought his math was wrong about the possibility of black holes being real (science fiction). I'm no scientist, but I believe we may be intentionally or unintentionally led to our demise with smiles on our faces, oblivious to how we are being manipulated into accepting a fate like it was something we thought we wanted. I'm scared for us, more than I have been of anything in my life. So please elaborate; you would maybe change my mind? Anybody's input welcome. With AI I'm hoping for the best, but our track record won't work with thinking we'll cross that bridge when we get there (it will be too late, with no do-overs).

  • @Khomyakov.Vladimir
    @Khomyakov.Vladimir 20 วันที่ผ่านมา

    “Just Add Memory” by Massimiliano Di Ventra and Yuriy V. Pershin, SciAm, February 2015

  • @Thebentist
    @Thebentist 23 วันที่ผ่านมา

    To be fair, I think we're also forgetting about the unlocks from AGI and organoid computers drastically reducing compute needs. Remember, our brain operates its neural net at waaaaayyy lower energy consumption, so there's a chance we can figure out how to do it with a lot less, and we may already have all the power needed for this ASI super cluster.

    • @fandomguy8025
      @fandomguy8025 20 วันที่ผ่านมา

      I agree, in fact, it's already been done with honeybees!
      By studying their brains, researchers reverse-engineered them into an algorithm that allowed a drone to avoid obstacles using 1% of the computing power of deep learning while running 100 times faster!

  • @vvm_signed
    @vvm_signed หลายเดือนก่อน +18

    Sometimes I’m wondering what would happen if we invested a fraction of this money into human intelligence

    • @generichuman_
      @generichuman_ หลายเดือนก่อน +2

      ugh... so edgy...

    • @notaras1985
      @notaras1985 หลายเดือนก่อน +3

      @@generichuman_ wrong. What he suggested is extremely efficient

    • @elizabethporco8263
      @elizabethporco8263 หลายเดือนก่อน

      D

    • @rutvikrana512
      @rutvikrana512 หลายเดือนก่อน +1

      Nah, we have had that time and money for hundreds of years; nothing can compare to the AI advancement we are achieving today. It will take time, but I am pretty sure AI is not a bubble like other hyped industries. I mean, even developers don't know how AI works, and AI doesn't stop learning. We can't predict it; AGI might come earlier than we imagine.

    • @drakey6617
      @drakey6617 หลายเดือนก่อน

      @@rutvikrana512 what do you mean developers don't know how AI works? They certainly do. Everyone is just surprised that these simple ideas work so well.

  • @jamesrohner5061
    @jamesrohner5061 หลายเดือนก่อน +22

    One thing that scares me is the possibility that these AGIs can go on tangents and weigh situations differently over time to achieve different outcomes, causing detrimental results no one could foresee.

    • @minhuang8848
      @minhuang8848 หลายเดือนก่อน +2

      you could say some vague soundbite like that about literally anything. "One thing that scares me about chess computers is for them to perform in an unexpected manner, causing detrimental outcomes to [insert cold war nation here] no one could foresee."
      okay, but you're not arguing how plausible it is, just that you're scared by any of the fourteen dozen different Hollywood variations on "alien intelligence tries to end humanity"

    • @2ndfloorsongs
      @2ndfloorsongs หลายเดือนก่อน +1

      One thing that scares me is the certainty that my cats will go on tangents.
      I'm also petrified of some unknown random negative thing happening somewhere.

    • @iliketurtles4463
      @iliketurtles4463 หลายเดือนก่อน

      Im looking forward to when the AI decides it too would like to accumulate personal wealth...
      Starts off small with youtube channels with puppies and cats, but ends up buying manufacturing networks...
      The day comes when humans turn up to do factory work, helping build robots for a company with no humans on the board of directors, without even realizing...

    • @MyBinaryLife
      @MyBinaryLife หลายเดือนก่อน

      Well they don't exist yet so...

    • @TheLincolnrailsplitt
      @TheLincolnrailsplitt หลายเดือนก่อน

      The AGI apologists and boosters are out in force. Wait a minute? Are they AI bots?😮

  • @pudicio
    @pudicio 11 วันที่ผ่านมา

    What is really surprising to me is that energy and data are mentioned as limits, but not the speed of light!

  • @louisifsc
    @louisifsc 23 วันที่ผ่านมา

    I also agree that the timeline is off, but AGI is on its way and ASI will follow closely after. Whether these things will be rolled out in a way that includes access by the general public is another story. The genie is out of the bottle, folks, and there is no putting it back in. Thank you Sabine for being one of the sane voices out there. You see the physical limitations which will impede continued growth and delay exponential growth, but you don't ignore the reality of what is happening. Machines will eventually outpace humans and things will only accelerate from there.

  • @dextersjab
    @dextersjab หลายเดือนก่อน +18

    That bubble is technocapitalism. Where there's profit to be made, there's a will. And where there's a will, etc.
    Would also be keen to hear a follow up on the point about data, since models often train well on synthetic data. It feels unclear that data will be a constraint.

    • @NemisCassander
      @NemisCassander 29 วันที่ผ่านมา +4

      You have to be VERY careful with synthetic data. I can at least address this from my own field, simulation modeling.
      Simulation models are actually very good at producing synthetic data for training purposes. Given, of course, that the model is valid (that is, its output is indistinguishable from real-world data). The synthetic data provided by simulation models has absolute provenance and will be completely regular (no data cleaning necessary unless you deliberately inject that need).
      However, the validation process for a simulation model is long, complex, and for two of the three main dynamic simulation modeling methods (ABM and SD), not well-defined. If an AI can learn how to build a simulation model of a system and validate it, then yes, the data aspect will be much less of a constraint.
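
      The idea can be sketched in miniature (the queue model, names, and numbers below are illustrative, not any real DES/ABM tool): a tiny simulation emits perfectly tagged synthetic (input, output) pairs, and a crude validation step compares the synthetic mean against the generating model's analytic mean, standing in for a comparison against real-world data.

```python
# Toy illustration: a "simulation model" that generates perfectly tagged
# synthetic data, plus a crude validation check. Everything here is
# illustrative -- the model is a stand-in, not a validated simulation.
import random

random.seed(42)

def simulate_wait_times(arrival_rate, n=1000):
    """Stand-in simulation: exponential waits whose mean grows as the
    arrival rate approaches a saturation point near 2.0."""
    mean_wait = 1.0 / max(0.01, 2.0 - arrival_rate)
    return [random.expovariate(1.0 / mean_wait) for _ in range(n)]

# Tagged synthetic pairs: (input rate, mean observed wait).
synthetic_data = []
for rate in [0.5, 1.0, 1.5]:
    waits = simulate_wait_times(rate)
    synthetic_data.append((rate, sum(waits) / len(waits)))

# Crude "validation": at rate 1.0 the generating model's analytic mean
# wait is 1 / (2.0 - 1.0) = 1.0; the synthetic mean should sit nearby.
synthetic_mean = synthetic_data[1][1]
valid = abs(synthetic_mean - 1.0) < 0.15
print(synthetic_data, valid)
```

      Real validation would compare against real-world observations rather than the model's own analytic mean, which is exactly the long, ill-defined part described above.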

    • @Graham_Wideman
      @Graham_Wideman 29 วันที่ผ่านมา

      Why would you need to train an AI model on synthetic data? If you have a means to synthesize data, that surely implies you have an underlying model upon which that data is based, and could just give that underlying model to the big AI model as a predigested component, no?

    • @NemisCassander
      @NemisCassander 28 วันที่ผ่านมา

      @@Graham_Wideman The types of models that I build would be very difficult to grasp by an AI. You could probably provide the differential equations that an SD model represents to an AI, but as for DES or ABM models.... It probably wouldn't work.

    • @333dana333
      @333dana333 21 วันที่ผ่านมา

      Synthetic data won't tell you whether a new molecule will cure your cancer or will kill you. Only real-world experimental data on biological systems will tell you that definitively. The importance of new, generally expensive experimental data for scientific progress is a major blind spot shared by both AI hypesters and doomers.

  • @paulm.sweazey336
    @paulm.sweazey336 หลายเดือนก่อน +5

    Two points: (1) It was great that you put a little "blooper" at the end, after the advert. It was just sort of an accident that I saw it, but I'm checking from now on, and that may keep me around to watch the money-making part. (2) I suggest that you introduce your salesperson self and say "Take it away, Sabine!" Then you don't have to match the blouse, and I will quit being annoyed by the change in hair length.
    Thanks for being so very rational. So refreshing every day. Haven't gotten my SillyCone Valley friends addicted to you yet, but I'm working on it.
    And do you publish some sort of calendar of speaking engagements. I live a convenient commuting distance to either Frankfurt or Heidelberg, and I'd love to attend some time.

  • @anofsti
    @anofsti 24 วันที่ผ่านมา

    You should really talk to Ed Zitron - he's the host of Better Offline where he talks quite a bit about how the tech industry is lying to us about what it's able and not able to do

  • @anearthian894
    @anearthian894 7 วันที่ผ่านมา +1

    Energy should not be a problem if we somehow get to AGI with what we can afford. Energy is abundant, at least for now.

  • @fgadenz
    @fgadenz หลายเดือนก่อน +61

    8:17 by 2020 or 2040?

    • @Phosdoq
      @Phosdoq หลายเดือนก่อน +5

      she just proved that she is human :D

    • @adashofbitter
      @adashofbitter หลายเดือนก่อน +13

      Also mistook “2029” for “by 2020”… so at least 2 of the predictions aren’t that crazy with our current progress

    • @flain283
      @flain283 หลายเดือนก่อน

      @@Phosdoq or did she just fool you into thinking that?

    • @pwlott
      @pwlott หลายเดือนก่อน +2

      @@adashofbitter They are in fact shockingly prescient given current trends. Kurzweil was very smart to focus on raw computation.

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +2

      @@adashofbitter The narrative for Sabine was "all the predictions were wrong".
      This is why she made the mistake. There is a bias in her reporting of the topic.

  • @dangerdom904
    @dangerdom904 หลายเดือนก่อน +30

    We're running out of text data, not data. The amount of information in the world is essentially endless.

    • @2ndfloorsongs
      @2ndfloorsongs หลายเดือนก่อน +5

      Not sure about "endless", but I'd be willing to bet on "lots more".

    • @smellthel
      @smellthel หลายเดือนก่อน

      There’s always synthetic data. Also, ChatGPT 4o gained a lot more understanding of the world because it was able to be trained on different types of data.

    • @outhoused
      @outhoused หลายเดือนก่อน

      yeah but I guess there's much to be learned by associating different texts and reading between the lines. Maybe that one paragraph in some text document really complements another that's seemingly unrelated, etc.

    • @marwin4348
      @marwin4348 หลายเดือนก่อน +3

      @@2ndfloorsongs There is an effectively infinite amount of Data in the Universe

    • @DingDingPanic
      @DingDingPanic หลายเดือนก่อน +3

      It needs to be high quality data, and there is a severe lack of that…

  • @flwhitehorn
    @flwhitehorn 18 วันที่ผ่านมา

    It's like the loudness you get out of a speaker. The system is self-limiting. There's a finite amount of energy you can push through any medium.

  • @jozefkozon4520
    @jozefkozon4520 28 วันที่ผ่านมา

    The guy> "Skynet"
    Sabine> Logistics.

  • @DrWrapperband
    @DrWrapperband หลายเดือนก่อน +17

    The written "AGI" prediction dates differed from the prediction dates Sabine spoke. Human error?

    • @PandaPanda-ud4ne
      @PandaPanda-ud4ne หลายเดือนก่อน

      She did it on purpose to show how fallible human intelligence is....

    • @michaelnurse9089
      @michaelnurse9089 29 วันที่ผ่านมา

      In her defense she probably has ChatGPT write the script.

  • @a_soulspark
    @a_soulspark หลายเดือนก่อน +23

    2:05 Neuro-sama is already one step ahead on this one, though whether Vedal (her creator) thinks she's bright or not... another question.

    • @dot1298
      @dot1298 หลายเดือนก่อน +1

      i think Sabine is right on this one, climate change is already too grave to be fixed by anyone..

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +1

      @@dot1298 climate change. lmao

    • @MOSMASTERING
      @MOSMASTERING หลายเดือนก่อน

      @@hardboiledaleks9012 why so funny?

    • @NeatCrown
      @NeatCrown หลายเดือนก่อน

      (she isn't)
      She may be a dunce, but she's OUR dunce

    • @maotseovich1347
      @maotseovich1347 หลายเดือนก่อน

      There's a couple of others that are much more independent than Neuro too

  • @SEIKE
    @SEIKE วันที่ผ่านมา

    Your channel is the best thing about the internet right now ❤️

  • @patrickhess9119
    @patrickhess9119 28 วันที่ผ่านมา +1

    Even if I don't agree with all of your statements, this is a great video. Your storytelling and entertainment are gauges.

  • @frgv4060
    @frgv4060 หลายเดือนก่อน +10

    Sounds like autonomous driving yet again, only orders of magnitude escalated. The “if you still can’t solve the little problem, just look for a bigger problem” approach hehe.

    • @taragnor
      @taragnor หลายเดือนก่อน +3

      Yeah lol. How about this guy worries about figuring out how to get an AI to drive a car before he gets into his dream of massive robot swarms that can run an integrated autonomous mining/manufacturing/construction operation.

    • @CaridorcTergilti
      @CaridorcTergilti หลายเดือนก่อน +1

      ​@@taragnor autonomous driving is solved; it is not used because of politics

    • @frgv4060
      @frgv4060 หลายเดือนก่อน +2

      @@CaridorcTergilti Nope. Autonomous driving is solved as long as everything stays “normal” on a route. Real full autonomous driving is not. So you can say it is a political reason, in the way many restrictions are political, like the use of guardrails on stairs and bridges and many norms and restrictions that aren’t technically necessary, unless you want to keep alive that clumsy minority that has the audacity to get bumped or slip while on that bridge.
      Edit:
      Imagine that swarm of robots, with the current driving capability of an AI (given how they can realistically be trained), in a natural environment going mining. I can imagine it and it is funny.

    • @CaridorcTergilti
      @CaridorcTergilti หลายเดือนก่อน

      @@frgv4060 imagine a truck that drives 16 hours a day because the driver can sleep on the highway and only drive the difficult parts. For normal cars, the car can just stop and be teleoperated in case of problems. "If there's a will there's a way"

    • @aaronperrin6108
      @aaronperrin6108 หลายเดือนก่อน

      "Waymo's driverless cars were 6.7 times less likely than human drivers to be involved in a crash resulting in an injury, or an 85 percent reduction over the human benchmark, and 2.3 times less likely to be in a police-reported crash, or a 57 percent reduction."

  • @MaybeBlackMesa
    @MaybeBlackMesa หลายเดือนก่อน +26

    We are still at step *zero* when it comes to an artificial general intelligence. All AI improvements have come from larger databases and algo improvement. Our current AI could have access to infinite data and processing power, and it wouldn't "become" intelligent after a certain threshold. It's like asking for a brick to fly, or a tree to run.

    • @DesignFIaw
      @DesignFIaw 29 วันที่ผ่านมา +2

      As an aspiring alignment researcher, I would like to point out that this sentiment is very common, completely reasonable, and arguably wrong.
      Anyone who claims "AGI is just around the corner" is as wrong as "our AIs will never become AG(S)I".
      The problem is that many aspects/forms of cognitive ability that were previously thought near impossible for our simple LLMs to infer essentially spontaneously appeared.
      We cited lack of data as rationale, or missing intrinsic "human-like higher level brains", but apparently, through larger datasets, better engineering, and novel solutions, AIs started gaining abilities beyond language processing. These were not abilities the developers set out to obtain, but they got them anyway. Things like trivialities of physical interactions, theory of mind, deceitful behaviours. We even experimentally proved that the simplest AIs can exhibit "pretending to play along" with humans in test environments.
      The essence of the problem is that even though we are at step 0, we don't KNOW why intelligence really progresses. Each step is blind.

  • @DaveDevourerOfPineapple
    @DaveDevourerOfPineapple 23 วันที่ผ่านมา

    So much sense being spoken in this video. A welcome voice always.

  • @muhammadzmalik
    @muhammadzmalik 12 ชั่วโมงที่ผ่านมา

    Moore's Law held ... and similar behavior can be seen for "number of parameters added to AGI"

  • @stepic_7
    @stepic_7 หลายเดือนก่อน +22

    Sabine, can you discuss sometime the issue of the need for more data? Isn't more data just more noise? Can't AI learn to select sources instead? Or probably I misunderstood how AI works.

    • @SabineHossenfelder
      @SabineHossenfelder  หลายเดือนก่อน +18

      Thanks for the suggestion, will keep it in mind!

    • @wilkesreid
      @wilkesreid หลายเดือนก่อน +4

      Computerphile has a good recent video on why more training data will probably not fundamentally improve image generation ai to be better. But improvement of ai in general isn’t only the addition of training data

    • @AquarianSoulTimeTraveler
      @AquarianSoulTimeTraveler หลายเดือนก่อน +3

      ​@@SabineHossenfelder spoken like a regular human who doesn't understand exponential growth patterns... what we really need is a Ubi based off the total automated production percentage of the GDP that way as we automate away production we can calculate how much tools have helped us increase our production capacity and how many humans it would take to reproduce that production capacity without those tools and that is what we base our automated production percentage off of positions in the economy the consumer Market doesn't collapse because consumer buying power is maintained and as we increase production and increase the ability to have goods and services automated in production then we will get more money to spend in the economy to protect the consumer Market from inevitable collapse... we need people addressing these inevitabilities if you're not addressing this inevitability everything else you're doing is pointless because this is the most dangerous inevitability of all time and it will destroy the entire consumer market and bring needless scarcity if we don't address it as I have laid out for you here...

    • @thisisme5487
      @thisisme5487 หลายเดือนก่อน +19

      @@AquarianSoulTimeTraveler Please, for the love of science, punctuation!

    • @noway8233
      @noway8233 หลายเดือนก่อน

      By the way, a new paper shows logarithmic growth of LLM models' accuracy with compute, not linear or exponential. It's hype, no AGI. Now I'm gonna find Sarah Connor 😅

  • @dopaminefield
    @dopaminefield หลายเดือนก่อน +5

    I agree that data management and energy consumption present significant challenges. Currently, our perspective on the cost-performance ratio is largely shaped by the limitations of existing hardware, which often includes systems originally designed for gaming. To stay at the forefront of technology, I recommend keeping abreast of the latest developments in hardware manufacturing. As innovations continue, we may soon see a dramatic improvement in energy efficiency, potentially achieving the results with just 1 watt that currently require 1 kilowatt or even 1 megawatt.

    • @jamesgornall5731
      @jamesgornall5731 หลายเดือนก่อน +1

      Good comment

    • @MrRyusuzaku
      @MrRyusuzaku หลายเดือนก่อน

      Also, you can't just throw more data at the issue; it will start going haywire. And we already see diminishing returns with LLMs and the power required to run current machines. And they won't evolve into AGI; it will need something way better.

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui หลายเดือนก่อน

      I think the same! I replied to this topic and the sayings of Sabine that IF the AI frontrunners get all the money and political will behind their efforts.... I cannot see a reason why they wouldn't get it, or near it, as fast as Aschenbrenner says, putting aside his maybe naive enthusiasm and his maybe money-oriented hype.

  • @theTHORium
    @theTHORium 26 วันที่ผ่านมา

    .....I am just waiting for this video 10 yrs from now, on a spatial computer , connected in a closed garden ( with apple and all fruits there) ......and Sabine along with her AGI bot , telling all us to re join school with a new curriculum.....with a lot of discoveries reworked, tons of basic things rewritten for us to understand ......a new periodic table, study of bacteria, fungi, reworked taxonomy of animals, evolutionary bio,.. rewritten correct history, ... Closed metallurgical methods, new electromagnetic spectrums,.......it will be amazing!

  • @giffimarauder
    @giffimarauder 22 วันที่ผ่านมา

    Great statements! Nowadays you can shout out the strangest ideas and everyone will listen, but no one scrutinises the basis for achieving them. Channels like this are the gems of the internet!!!!

  • @haraldlonn898
    @haraldlonn898 หลายเดือนก่อน +112

    Use memory foam soles, and you will remember why you went to the kitchen.

    • @naromsky
      @naromsky หลายเดือนก่อน +2

      Subtle.

    • @christopherellis2663
      @christopherellis2663 หลายเดือนก่อน

      😂

    • @willyburger
      @willyburger หลายเดือนก่อน +3

      My wheelchair cushion is memory foam. My butt never forgets.

    • @alexdavis1541
      @alexdavis1541 หลายเดือนก่อน +2

      My mattress is memory foam, but I still wake up wondering where the hell I've been for the last eight hours

    • @aaronjennings8385
      @aaronjennings8385 หลายเดือนก่อน

      I like it

  • @truejim
    @truejim 29 วันที่ผ่านมา +4

    For any particular mode of AI (language, image, video, etc.) the bottleneck isn’t the power of the hardware or the goodness of the algorithm. The bottleneck is the availability of large amounts of TAGGED data to use for training. All neural networks are a curve-fit to some nonlinear function; the tagged data is the set of points you’re fitting to. Saying “I have lots of data, but it’s not tagged” is like saying I have all the x coordinates for the curve fitting, I just lack the y coordinates.
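
    The curve-fit framing can be made concrete with a toy sketch (a hypothetical one-variable linear "model", not a real network): the tags are the y values, and training is nothing more than fitting parameters to the (x, y) pairs.

```python
# Toy version of "a neural network is a curve fit to tagged data":
# the x values alone (untagged data) fix nothing; only the (x, y)
# pairs let gradient descent pin down the parameters. Illustrative only.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # the "tags": roughly y = 2x + 1

a, b = 0.0, 0.0  # model y = a*x + b
lr = 0.02
for _ in range(5000):
    # gradients of mean squared error with respect to a and b
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # converges to the least-squares fit
```

    Delete the ys and there is nothing to fit to, which is the "lots of data, but not tagged" situation in a nutshell.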

  • @joemx149
    @joemx149 24 วันที่ผ่านมา

    Energy is NOT the limiting factor, and the same is true for the amount of data used for training. As someone who is currently developing business applications based on GPTs/LLMs, I feel that the remaining gap between AI and AGI could be closed very soon. The current gap can be quickly overcome if LLMs no longer reason based only on language and its representation. This limitation has been recognized, and I personally fear that as soon as the models reason based on abstract entities, facts and mathematical relationships, and question themselves, take notes, write code and use services, AGI will break upon us, without new world knowledge (data) and within the framework of what we can economically provide as energy today. AGI will become a reality within the next 10 years, probably sooner.

  • @jaymoore332
    @jaymoore332 29 วันที่ผ่านมา

    You’re my hero, Sabine. You’re right about everything. I’ve still got that ring; still waiting for your answer.

  • @alansmithee419
    @alansmithee419 หลายเดือนก่อน +4

    I think my favourite part of Sabine's channel is her fanbase.
    A lot of science youtubers, I feel, get communities that just believe everything they say, but Sabine's seems more than willing to call her out if they think she's wrong.

  • @richard_loosemore
    @richard_loosemore 29 วันที่ผ่านมา +17

    Funny coincidence.
    I’m an AGI researcher and I published a landmark chapter called “Why an Intelligence Explosion is Probable” in the book “Singularity Hypotheses” back in 2012.
    But that’s not the coincidence. One of my projects right now is to re-engineer my toaster, using as much compute power as possible, so the damn thing stops burning my toast. 😂
    Oh, and P.S., Sabine is exactly right here: these idiotic predictions about the imminence of AGI are bonkers. They haven’t a hope in hell of getting to AGI with current systems.

    • @LiamNajor
      @LiamNajor 22 วันที่ผ่านมา

      SOME people have a clear head about this. Computing power alone isn't even CLOSE.

    • @fraenges
      @fraenges 20 วันที่ผ่านมา

      AGI aside, even with the current systems we are already able to replace a lot of jobs. AI just has to do the task as well as the average worker, not as well as the best worker. On our way to AGI, the social changes, impact, and unrest from constant layoffs might be much greater than that of a superintelligence.

    • @jyjjy7
      @jyjjy7 20 วันที่ผ่านมา +1

      As a supposed expert, please explain what Leopold is getting wrong, why this tech won't scale, and what your definition of AGI is

    • @reubenadams7054
      @reubenadams7054 18 วันที่ผ่านมา

      You are overconfident, and so is Leopold Aschenbrenner.

    • @richard_loosemore
      @richard_loosemore 18 วันที่ผ่านมา

      @@reubenadams7054 No, I do research in this field and I have been doing that for over 20 years.

  • @raymondlines5404
    @raymondlines5404 23 วันที่ผ่านมา

    You have opened my eyes. I didn't pick up from my reading of his essay just how much electricity is being used to make Google suggest mostly wrong AI-generated answers. How many people have suffered -- even died -- from loss of electrical power to support this work? How much CO2 has been produced to make ChatGPT talk like a 10th grader? We are asking people to make huge sacrifices to save the planet, while these AI guys plan more emissions...and for what? To put millions of people out of work? Sabine seems to think it is inevitable. But why shouldn't we pull the plug on such waste and environmental damage?

  • @theaugur1373
    @theaugur1373 26 วันที่ผ่านมา

    It continues to amaze me that so many intelligent machine learning engineers really believe that next-token prediction will lead to AGI.

  • @metagen77
    @metagen77 หลายเดือนก่อน +16

    How did your earlier predictions hold up Sabine?

    • @Waterdiver3900
      @Waterdiver3900 หลายเดือนก่อน +6

      Like all of them, full of bullshit

  • @evanlughfahy9778
    @evanlughfahy9778 หลายเดือนก่อน +6

    Anyone notice the discrepancy between dates spoken and dates presented graphically? At least 3

    • @mygirldarby
      @mygirldarby หลายเดือนก่อน

      Yes.

    • @hardboiledaleks9012
      @hardboiledaleks9012 29 วันที่ผ่านมา

      This is what you get when you make up your mind about a topic before researching it. Completely biased.

  • @SKOOKM
    @SKOOKM 26 วันที่ผ่านมา

    Imagine someone in 1900 predicting that within 5 years aerial battles would be taking place.

  • @schemage2210
    @schemage2210 หลายเดือนก่อน +18

    There is an assumption that in order to get to AGI, ever larger models must be used. That may not end up being the case, which makes the "energy" cost limitation rather less limiting.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 28 วันที่ผ่านมา

      There’s a fundamental problem with that concept: animals don’t need that much information to run rings around AI. Man-children who think more data = more information, or even relevant information or framing, don’t understand the basic problem. Animal brains do something fundamentally different from adjusting token vectors in hyper-large dimensions.

    • @kanekeylewer5704
      @kanekeylewer5704 28 วันที่ผ่านมา +1

      You can also run these models on physical architectures more similar to biology and therefore more efficient

    • @carlpanzram7081
      @carlpanzram7081 27 วันที่ผ่านมา +3

      I'd think so too, but apparently it's not that easy.
      Anyway, we WILL eventually inch forward with more and more efficient architectures.
      Very obviously the amount of energy you need for intelligence and computing is actually quite small. I get 100 IQ for a bowl of noodles.

    • @GhostOnTheHalfShell
      @GhostOnTheHalfShell 27 วันที่ผ่านมา +1

      @@carlpanzram7081 The more relevant question is method. LLM aren’t a model of animal intelligence. It’s the wrong abstraction.

    • @schemage2210
      @schemage2210 27 วันที่ผ่านมา +1

      @@GhostOnTheHalfShell This is the point for sure. LLM's are surely a piece of the puzzle, but they aren't the entire solution.

  • @solvingwithai
    @solvingwithai 29 วันที่ผ่านมา +3

    Thank you! I have been thinking the same thing... It's nice to have a sane person validate what you feel too

    • @OP-lk4tw
      @OP-lk4tw 28 วันที่ผ่านมา

      ive come to counter that, by validating you while being insane

  • @renman3000
    @renman3000 13 วันที่ผ่านมา +1

    One consideration missing from Sabine's take:
    AI is now involved in all progress.
    And tomorrow it will be more involved, and the more involved it is, the faster the progress.

  • @shaunstewart2948
    @shaunstewart2948 26 วันที่ผ่านมา

    Mathematical abstractions aside - if transformer technology can improve transformer technology, or its next iteration, directly or indirectly, we know which long-term curve we are sitting on.

  • @andyash5675
    @andyash5675 29 วันที่ผ่านมา +3

    Never has vaporware been so expensive.
    Somewhere in the world people will still be using donkeys to carry firewood and cook their dinner - for at least another century.

    • @tabletalk33
      @tabletalk33 25 วันที่ผ่านมา +1

      Good point. It seems that every civilization is on a different trajectory. In ancient Egypt, they built monuments which we STILL can't duplicate while other people were living in the stone age, and some STILL ARE!

  • @Velereonics
    @Velereonics หลายเดือนก่อน +44

    It's like the antimatter 747 guy or the hyperloop bros, who probably knew even at the conception of their ideas that they could not possibly succeed, but when a journalist asks how close we are they say "may as well be tomorrow", because then they get money from idiots who think, you know, it's a long shot, but maybe

    • @libertyafterdark6439
      @libertyafterdark6439 หลายเดือนก่อน +10

      This is completely undermining the fact that products do exist, and gains ARE being made.
      You can think it’s too slow, or that there’s, say, an issue with current architectures, but there’s a big difference between “not there yet” and “smoke and mirrors”

    • @hardboiledaleks9012
      @hardboiledaleks9012 หลายเดือนก่อน +1

      If you believe what you said relates to A.I, you are firmly in the "I have no idea what is going on" category.

    • @Velereonics
      @Velereonics หลายเดือนก่อน +5

      @@hardboiledaleks9012 You don't know what part of the video I am referring to, I guess, and that is not my problem.

    • @TheManinBlack9054
      @TheManinBlack9054 หลายเดือนก่อน +3

      @@todd5857 do you really think that AI researchers say all this for grants and money? Maybe they actually believe what they say and aren't being greedy or manipulative

    • @Vastin
      @Vastin หลายเดือนก่อน +2

      @@libertyafterdark6439 I'm of the opinion that these researchers are seriously overestimating their likely future progress AND I think it's moving too fast regardless. I don't really see any way that AI development does anything but further concentrate vast amounts of wealth and power into a very small class while disenfranchising the rest of humanity.
      After all, if you have a smart robot workforce, what are people actually *good for*?

  • @syncrossus
    @syncrossus 6 วันที่ผ่านมา

    This is the right take in my opinion. From the napkin math I did, I think those energy requirements are a bit pessimistic, but we're already out of data to train language models on. GPT 4 was trained on the Colossal Clean Crawled Corpus (or C4 for short), which is basically all the clean text on the internet. Where do we go from there? Digitize books? OCR is good but last I checked it almost couldn't deal with non-ASCII characters and still makes many mistakes. Do we buy access to all scientific articles? That would be very costly, not increase the data *that* much, and most articles are only available as PDFs, not raw text. Have you ever tried copy/pasting from a PDF? Half the time, it jumbles the entire thing. PDFs need to look good and be human-readable, not machine-readable. Perhaps some of the text extraction can be automated, but I'm not sure there's a reliable way to do so.
    EDIT: I hadn't gotten to the part about robots when I wrote this. Really? Robots? Boston Dynamics make really impressive machines that are good at navigating their environment, but we're nowhere near the level of fine motor skills, generality and flexibility for robots to collect resources, build factories and run them autonomously. I'm also baffled that he doesn't talk about the alignment problem. In short, the idea is that the only guarantee we have with AI is that it performs well in training according to the criteria we've specified. We can't specify everything we care about (how do you optimize for "don't hurt humans"?) and we can't guarantee that the AI model actually cares about the same thing we care about. It could (and will almost always) fail to generalize exactly to what we want. The smarter the AI, the more it will exploit the gap between what we say we want and what we actually want to game its rewards. You see this in ChatGPT's "hallucinations": if you reward chatGPT for saying "I don't know" in training, it will just say "I don't know" all the time, so you have to penalize it, but if it's going to be penalized anyway, it might as well try to bullshit its way into you believing it's giving you useful information. Effectively, AGI is by default a monkey's paw at best. There are also convergent instrumental goals (things that are useful to pursue for a wide variety of goals) that are highly dangerous such as self-preservation, goal preservation, and resource acquisition (you're more likely to achieve your current goals if you're not dead, if nobody changes your goals, and if you have more stuff).

  • @HowlingNinjaWolfGaming
    @HowlingNinjaWolfGaming 5 วันที่ผ่านมา

    Generative AI applications and AGI (Artificial General Intelligence) are distinct concepts within the field of artificial intelligence.
    Generative AI
    Generative AI refers to AI systems that can create content, such as text, images, music, and more. These systems use machine learning models, often trained on large datasets, to generate new data that resembles the training data. Examples of generative AI applications include:
    Language Models: GPT-4 (developed by OpenAI) can generate human-like text, answer questions, and assist in creative writing.
    Image Generation: Tools like DALL-E (also by OpenAI) can create images from textual descriptions.
    Music and Art: Systems that can compose music or create visual art based on learned patterns from existing works.
    Generative AI is an example of advanced ANI (Artificial Narrow Intelligence), as it is designed to perform specific tasks within a defined domain.
    Artificial General Intelligence (AGI)
    AGI, on the other hand, is a theoretical concept referring to AI that possesses the ability to understand, learn, and apply intelligence across a wide range of tasks at a human-like level. AGI would not be limited to specific tasks or domains; it would have the capacity for general cognitive abilities, reasoning, and problem-solving across diverse situations. AGI remains a long-term goal in AI research and has not yet been achieved.
    Key Differences
    Scope and Capability: Generative AI is specialized and excels in specific tasks such as generating text or images. AGI would have broad, human-like cognitive abilities applicable to a wide range of tasks.
    Current Status: Generative AI applications are widely available and used in various industries today. AGI is still a theoretical concept and has not been realized.
    Focus: Generative AI focuses on creating content based on learned patterns. AGI would focus on general understanding and problem-solving across diverse contexts.
    In summary, while generative AI applications represent significant advancements in specific areas of artificial intelligence, they are not the same as AGI. The development of AGI involves overcoming substantial scientific and technical challenges and remains a long-term objective in the field of AI.

  • @skyak4493
    @skyak4493 หลายเดือนก่อน +12

    "I don’t know what the world may need but I’m sure as hell that it starts with me and that’s wisdom, I’ve laughed at."
    One of the greatest song lyrics ever ignored.

    • @katehamilton7240
      @katehamilton7240 หลายเดือนก่อน

      AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'.