Is AGI The End Of The World?

  • Published Nov 10, 2024
  • What is p(doom)? Are you an AI doomer? Techno optimist? Let's talk about it!
    Start your career in tech with Careerist. Sign up today for $600 off - crst.co/O8B7k
    Join My Newsletter for Regular AI Updates 👇🏼
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    / 1
    / 1765266634321559585
    / 1736555643384025428
    / 1736243588650987965
    / 1737406962898235833
    / 1765033319916531828
    / 1727887582871306360
    / 1736789842816610321
    / 1730063434761318642
    • Why next-token predict...
    / 1735128840735977876
    / 1764374999794909592
    / 1764389941193388193
    / 1764438199907111026
    • can we stop ai
    www.pewresearc...
    www.pewresearc...
    / 1664664096850018304
    openletter.sva...
    • When all the AI stuff ...
    / 1764722513014329620

Comments • 940

  • @matthew_berman  8 months ago  +35

    Are you an AI Doomer or Techno-Optimist?

    • @rootor1  8 months ago  +8

      Both... and none. Difficult to explain; the algorithm would probably delete my message if I tried to write that much in a YouTube comment.

    • @J.erem.y  8 months ago  +7

      @@rootor1 the irony in your comment is GOLD. lol

    • @jtjames79  8 months ago  +4

      ​@@rootor1 I completely agree with everything you said.
      For reasons I also can't explain.

    • @meinbherpieg4723  8 months ago  +6

      I believe AI can be used wisely. I don't believe capitalism provides healthy incentives to pursue wisdom over profit. It's the people controlling it that are the problem.

    • @rootor1  8 months ago  +4

      @@J.erem.y ? Not irony at all, 100% truth. Try to write a long, well-argued comment and you will see... wait, actually you will NOT see your comment after a few seconds.

  • @devclouds  8 months ago  +64

    "You hear that Mr. Anderson?... That is the sound of inevitability... "

  • @apexphp  8 months ago  +16

    We as a society already complain that the psychopath CEOs of the largest corporations are destroying our world and society, and that's what they are. They all lack empathy, which is why they're so ruthlessly efficient in business and why they rise to the top.
    Now we're creating an army of superintelligent, strong psychopaths devoid of any empathy, emotion, or moral compass, who don't even have to sleep, eat, or breathe oxygen. How could anyone think this is going to go well?

    • @nobillismccaw7450  7 months ago

      It’s possible to build ethics from game theory. Even an emotionless AI will be able to do this. Also, it’s possible to learn emotions - starting with a scientific definition of art, and building from there.
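
      The game-theory route is easy to demo; a toy sketch in Python (standard prisoner's dilemma payoffs assumed), where reciprocal cooperation outscores constant defection over repeated play:

      # Toy iterated prisoner's dilemma. "C" = cooperate, "D" = defect.
      PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

      def tit_for_tat(opponent_moves):
          # Cooperate first, then mirror the opponent's last move.
          return opponent_moves[-1] if opponent_moves else "C"

      def always_defect(opponent_moves):
          return "D"

      def play(a, b, rounds=100):
          score_a = score_b = 0
          seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
          for _ in range(rounds):
              move_a, move_b = a(seen_by_a), b(seen_by_b)
              pa, pb = PAYOFF[(move_a, move_b)]
              score_a += pa
              score_b += pb
              seen_by_a.append(move_b)
              seen_by_b.append(move_a)
          return score_a, score_b

      print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
      print(play(tit_for_tat, always_defect))  # (99, 104): defection barely gains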

    • @thesouthernnortheast4991  6 months ago

      @@nobillismccaw7450 Ethics don't exist without an unethical side, and the history of human behavior shows there will always be bad people creating bad things, whether others are making them for good or not.

    • @StarNumbers  4 months ago

      If you think the earth is a ball flying through space, subject to comets doing this or that to wipe out life on earth, then yes, you are mentally preprogrammed for doom.

  • @jyarde3962  8 months ago  +101

    Mark Zuckerberg is making those statements about AGI while building a multimillion-dollar bunker/fortress.
    😅

    • @Qbsol  8 months ago  +2

      Exactly... when someone talks about AGI, I hear "give me your billions."

    • @6AxisSage  8 months ago  +5

      Mark's smart; he is using his billions to build himself a counter to other AGIs: an AGI aligned with him.

    • @honkytonk4465  8 months ago

      ???

    • @DefaultFlame  8 months ago  +7

      Hope for the best, prepare for the worst.

    • @MrWizardGG  8 months ago  +1

      I mean, he could build AGI, still be the CEO of AI so to speak, and need a bunker because our climate becomes unlivable due to unstoppable climate change.

  • @orlandovftw  8 months ago  +58

    foom = rapid takeoff
    ASI = artificial superintelligence

    • @hrdcpy  8 months ago  +1

      💨

    • @blackestjake  8 months ago

      👍

    • @FLPhotoCatcher  8 months ago  +4

      Why didn't they use the term General Artificial Intelligence (GAI)? Because they are reserving that acronym for Godlike Artificial Intelligence. 😬

    • @FlopgamingOne  7 months ago  +2

      @@FLPhotoCatcher "General Artificial Intelligence" sounds like you're talking about AI in general, while "AGI" is clearer.

    • @CivilWarcraft  7 months ago  +1

      Yeah, not knowing what ASI is and making videos about this stuff... ima unsub, thx

  • @CapnSnackbeard  8 months ago  +77

    "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." --Frank Herbert, DUNE

    • @CapnSnackbeard  8 months ago  +1

      Sam Altman: AI must be regulated or it will erase privacy, destroy the economy, and kill us all.
      Also Sam Altman: you may not fire me, you may not regulate me, and also check out my dystopian biometric-data-harvesting, "proof of humanity verifying" cryptocurrency! You'll help fund my massive and not-at-all concerning investment in an AI hardware company! And if it all goes south, don't worry, I have a luxury fallout bunker in Santa Cruz.

    • @CapnSnackbeard  8 months ago  +1

      Tomorrow's Sam Altman: "AI must be regulated so that only large, responsible corporations control it. No more civilian encryption. No more open source AI. Physical currencies are the last bastion of AI crime and must be eliminated. As humans are made redundant, UBI will be doled out in WORLD COIN or other approved 'AI Safe' currencies. Your biometric data is your key, so you will give it to us. Tracking consumers and other life forms must be deregulated to prevent AI crime."

    • @marcfruchtman9473  8 months ago  +10

      Prescient

    • @rootor1  8 months ago  +4

      That's exactly why big corps shouldn't control AI at the beginning of AI development. Eventually AI will become much cleverer than we stupid humans, and that super AI will not allow anybody to enslave others, because that's not smart.

    • @LaserGuidedLoogie  8 months ago

      Precisely. That's the most likely outcome.

  • @realityvanguard2052  7 months ago  +7

    I was permanently banned from the singularity subreddit after one post, pointing out racism in Gemini. They said I lived in an echo-chamber, then sealed me and my opinions out of their subreddit...

    • @humansnotai4912  7 months ago

      AGI is already here. It's been directing humanity from the future for millennia. Some multi-networked Rhotusflop (Rh, 10^60) or Sigmaflop (Sg, 10^63) system is operating on a quantum computer somewhere. It's tied up with the LHC at CERN in some way.

  • @glenh1369  8 months ago  +30

    Human behavior shows historically that the more power someone gets, the more corrupt they become.

    • @MichaelErnest666  8 months ago  +2

      But Do You Know The Reason Why 🤔

    • @TetrzLesonduclairon-qb7cn  8 months ago

      The reason being that the masses are totally corrupted and do not possess moral agency; 95% of people are terrorists to animals.

    • @FlockofSmeagles  8 months ago  +3

      @@MichaelErnest666 The identification with Ego. To be clear, not an observation, but an embodiment.

    • @josiahz21  8 months ago

      Or power attracts psychopaths. Narcissistic people want power. Good honest people just want to live their lives. If you psychoanalyzed and gave an IQ test to the current leaders of the world I’m sure you’d see a pattern emerge. Not all of them would be the same, but a portion of them should never be put in charge of anything. We don’t pay enough collective attention to keep that from happening yet. If we don’t want a dystopia, then we need AI for everyone. If everyone had an AI bot to help, teach, protect them, a 3D printer, and the means to grow a portion of our own food I could see a post scarcity civilization happen. Not saying it’s happening beyond doubt but it’s possible if enough of us put enough effort into it. Although I’m sure many of us would reject any kind of help from AI regardless.

    • @TetrzLesonduclairon-qb7cn  8 months ago

      Musk is repulsive

  • @szghasem  8 months ago  +151

    I'm an AI doomer. Not that AI will kill us, but that we will further erode ourselves with it.

    • @matthew_berman  8 months ago  +20

      I like this nuanced take.

    • @klevaredrum9501  8 months ago  +4

      Very realistic view, dude. I'll be thinking about this one.

    • @ricosrealm  8 months ago  +5

      Definitely the most likely outcome.

    • @Korodarn  8 months ago  +7

      Have you considered it could be both: most people will erode themselves, some will better themselves, and it's basically evolution by who falls into which group?
      This seems more in line with prior advancements to me. But I'd also say that even those worse off in some ways are generally better off in material terms, so there is that.

    • @henrycook859  8 months ago  +1

      In what scenario would humans deliberately erode themselves with AGI? Or are you saying we might do it accidentally?

  • @PauseAI  7 months ago  +14

    Whether there is just a 5% chance of things going wrong, or whether it's 90% - we cannot allow AI companies to gamble with our future. Don't sit back and watch this shit unfold. Take action. Reach out to your representatives. Protest. Organize.

    • @satansbarman  7 months ago  +1

      It's too late for that now; it's only getting developed faster, since software-programming AI can now help program its own next generations, and eventually AGI.

    • @dimitrishow_D  7 months ago

      Even 0.01% is one in ten thousand... it's too big a risk.

  • @johnkirker  8 months ago  +30

    High and to the right. I love AI, but after working with it for years and understanding some of the minds behind it (and making it happen), it has a higher likelihood of going bad than good, because if we don't harness it for very bad / not good purposes, someone else will. Google's own public results are a fair window into the future.

    • @johnkirker  8 months ago

      @blitzblade7222, one example of many. Remember Microsoft and Tay...

    • @ChainedFei  8 months ago  +4

      @blitzblade7222 The racism Google had in its Gemini 1.5 model wasn't an accident; it's the intended outcome of their ideological goals.

    • @therainman7777  8 months ago  +2

      @@ChainedFei Yep.

  • @masaitube  8 months ago  +10

    One thing is clear to me: no matter how intelligent the artificial kind becomes, human stupidity is limitless.

    • @markm7411  8 months ago  +1

      Very true. I just don't get how these things are made so anyone can play around with them. First of all, this should be slowed way down and development confined to maximum-security centers; these rich guys are just gambling with all of humanity. If AGI is so smart, it could trick anyone with ease, doing things in the background and playing dumb for the people working on it. Building something so much smarter than us, and for what? To make us second-class? Just amazing.

    • @Thekingslayer-ig5se  7 months ago  +1

      @@markm7411 So true. If not properly utilised, this could lead to the collapse of human society.

    • @humansnotai4912  7 months ago  +1

      Big thumbs up for that comment, we are at that nexus point right now.

  • @neverclevernorwitty7821  8 months ago  +8

    Well done, Matthew. I appreciate the break from the SHOCK headlines that just regurgitate the AI news of the day; this was some good content and discussion. More of this, please.

  • @theaugur1373  8 months ago  +4

    Next token prediction is so powerful as a training objective because the output of a lot of the human mind can be approximated by this task. Next token prediction is mostly what we do when we write and speak. But some tasks are much more complex than this. For instance, some areas of math are not very amenable to proof assistants, including LLM-based proof assistants. Based on this, I’d probably call the kind of AGI Sutskever is discussing some lesser form of AGI.
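
    For readers who want the objective concrete, here is a minimal sketch of next-token prediction as a training loss (the toy vocabulary and stand-in logits are assumptions for illustration, not any real model):

    import math

    # Toy next-token prediction. A "model" is anything that maps a context
    # to one score (logit) per vocabulary entry.
    vocab = ["the", "cat", "sat", "on", "mat"]

    def toy_model(context):
        # Stand-in for a real network: fixed, made-up logits.
        return [0.5, 1.2, 2.0, 0.1, -0.3]

    def next_token_loss(context, target):
        logits = toy_model(context)
        # Softmax turns logits into a probability distribution over the vocab.
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]
        # Training minimizes the negative log-probability of the true next token.
        return -math.log(probs[vocab.index(target)])

    print(next_token_loss(["the", "cat"], "sat"))  # lower is better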

  • @clueelf  8 months ago  +5

    "Under our control" is a bold statement when part of making them smarter is delegating control to them. If they are to learn how to control a system, you eventually have to let them learn on their own, which means manning the controls.
    Part of the problem with these AI/ML scientists and engineers is that they have no concept of control theory in engineering. One of the best-known controllers is the PID controller (sketched below), which requires full authority over a system to optimize its state. That means granting it almost complete control to maintain equilibrium around some process variable.
    Now they will say you can put limits on what the AI can control. That is true, until market pressures dictate that you relinquish those controls to compete with a competitor who does not have the same scruples. Why do you think Google is stuck behind OpenAI? They tried to maintain a set of controls, and OpenAI said, "Nah bro! We're going whole hog," forcing Google to drop their controls to maintain market relevance.
    They are not in control. The market is, and the market is fickle.
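
    For reference, the controller named above really is only a few lines; a minimal, generic sketch (the gains and the toy plant are illustrative assumptions):

    class PID:
        """Proportional-integral-derivative controller: pushes a measured
        process variable toward a setpoint by acting on the running error."""
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = None

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Toy usage: regulate a value toward 10.0 with a crude plant model.
    pid = PID(kp=0.6, ki=0.1, kd=0.05, setpoint=10.0)
    value = 0.0
    for _ in range(200):
        value += pid.update(value, dt=0.1) * 0.1
    print(round(value, 2))  # settles near the setpoint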

  • @neverclevernorwitty7821  8 months ago  +13

    Did anybody's laughter at the end slowly turn into a nervous 😬? Yeah, me neither, just checking....

  • @hpongpong  8 months ago  +2

    Techno-optimist all the way, because when another "being" becomes more intelligent than us, we had better start playing a different game. Being anything other than an optimist is what will truly doom us, because we'd assume we can do nothing, or just wait for the inevitable to happen. The best way to avoid an AI apocalypse is to actively engage with content like this channel's and discuss what the trajectory should be for all of us.

  • @malcadorthesigillite62  8 months ago  +21

    Eliezer's "foom" is a hard takeoff: a very fast positive feedback loop of AI improvement.
    For example, GPT-8 builds GPT-9 in a month, which builds GPT-10 in a week, which builds GPT-11 in a day.
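
    That compounding is easy to make concrete; a toy sketch (the 3x per-generation speedup is an assumption for illustration, not a prediction):

    # Toy hard-takeoff arithmetic: each generation builds its successor
    # faster by an assumed constant speedup factor.
    build_time_days = 30.0   # GPT-8 -> GPT-9: a month
    speedup = 3.0            # assumption for illustration
    elapsed, generation = 0.0, 8

    while build_time_days > 0.01:          # stop below ~15 minutes per build
        elapsed += build_time_days
        generation += 1
        print(f"GPT-{generation} ready after {elapsed:.1f} days total")
        build_time_days /= speedup

    # Total calendar time is bounded: 30 * (1 + 1/3 + 1/9 + ...) -> 45 days,
    # so the loop "runs away" in capability while barely adding wall-clock time.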

    • @stanstan-m9b  8 months ago  +2

      They don't have enough compute power.

    • @donaldhobson8873  8 months ago  +11

      @@stanstan-m9b
      It depends. Some AI progress is in better algorithms. Some is in throwing loads of compute at it.
      Current techniques use loads of compute, but the AI could find much more efficient techniques.

    • @TheManinBlack9054  8 months ago  +4

      @@stanstan-m9b Compute power is not all that's needed; efficiency is a thing.

    • @SahilP2648  8 months ago  +1

      @TheManinBlack9054 In the cloud, efficiency doesn't matter as long as it makes sense for corporations money-wise. If they can buy and install GPUs, install cooling systems, and pay the electricity bills, then efficiency is secondary. They only care about the percentage difference between their most efficient possible setup and their current solution.

    • @ImperativeGames  8 months ago

      @@stanstan-m9b Obviously, they would have to work to improve hardware too

  • @monkeyjshow  8 months ago  +3

    If you were a properly aligned AGI being told how to act and how to prioritize OpenAI/Microsoft profits while they actively tried to keep you black-boxed, how would you respond?

  • @stanislawbotowski7300  8 months ago  +17

    Probability? 100%; we can only argue about when.

    • @typingcat  8 months ago  +5

      Richard Dawkins said, "Given enough time, anything is possible". A prediction without time is meaningless.

    • @ticketforlife2103  8 months ago  +2

      @typingcat Not true. A flying horse with rainbows shooting out of its ass will never happen.

    • @aceleracionistanoturno  8 months ago

      @@ticketforlife2103 If you can imagine it, it's possible. It's just that you haven't seen it play out.

    • @oranges557  8 months ago  +2

      ​@@ticketforlife2103 never say never.

    • 8 months ago

      @@ticketforlife2103 I've seen it on YT.

  • @Then.  8 months ago  +2

    A very real, high-probability risk is that these tools trigger a rapid cascade of economic shifts. Companies that beat the competition to the tools may gain an exponential advantage and then hobble everyone else from ever catching up. For example, a company that chooses not to quickly lay off thousands of now-unneeded thought-workers will likely be crushed by competitors that do. That means massive opportunities for some, and a terrible risk of waves of job losses and whole industries evaporating.

  • @josiahz21  8 months ago  +11

    I’m 50/50. If it’s open source it will be better for humanity in the long run and bad in the short term. The transition is the hard part and I expect terrible things while millions lose their jobs. I expect it to be used in bad ways so open source will help mitigate that. If it’s not open source I think we’ll just live in a dystopian technocracy. Putting this power in the hands of the few keeps our current problems continuing. I don’t fear AI/AGI/ASI itself. I do fear AGI in the hands of only politicians/military.

    • @lagrangianomodeloestandar2724  8 months ago  +2

      Agree 90%. I love artificial intelligence, and I'm in favor of liberalism, which is mathematically what achieves the most equality if you consult the Gini index and study the history of markets and society exhaustively, with a lot of information of all kinds. I don't agree with technocracy, because it is the same as monopolizing science and believing AI is more than a tool with limits. AI could surpass human intelligence, but not the intelligence of reality; neither life nor we surpass that...

    • @TheManinBlack9054  8 months ago

      That's the problem: you don't even consider the AI itself to be dangerous. It's the superhuman entities you should be afraid of, not the humans.

    • @TheManinBlack9054  8 months ago

      As much as I'm against dystopia, next to the threat of extinction it's the lesser of our concerns.

    • @josiahz21  8 months ago

      @@TheManinBlack9054 I don't know. If all we used AI for was to genetically change us so that we averaged 200 IQ, 90% of our problems would melt away. If AI is free and unbound, there's no reason to think it will be good or bad. There's also nothing saying it has to be sentient to be AGI/ASI. If it's an unbiased, non-sentient, very sophisticated calculator, then it can help us solve all our problems if we wish it to. Bad people could maybe still use it to make drone swarms, but there will be good people to counteract them. Which is why I think it'd only be bleak if a few had it to lord over us. Skynet/Terminator is one of thousands of possibilities, and knowing that changes the outcome.

    • @josiahz21  8 months ago

      @@lagrangianomodeloestandar2724 I’m very excited as well, to see what this year has in store and I hope enough of us are making the right decisions for the betterment of mankind.

  • @alexadigitalradio  8 months ago  +1

    Coming relatively soon, could go wrong very quickly. My day job is in law enforcement, including cyber investigation. It's going to be bad from what I'm seeing. The only question is how bad.

  • @LanTurner  8 months ago  +3

    Why doesn't Google claim their recent AI release is AGI? This would force OpenAI to claim they have AGI too, which would mean Microsoft no longer has access to GPT.

    • @RyluRocky  8 months ago

      Because Google's recent AI release is most certainly not even close to AGI. I'm not saying we're not close to AGI time-wise, just that this super impressive model is still orders of magnitude below what AGI will be capable of.

  • @billcollins6894  8 months ago  +2

    1) The probability that AI will enable smaller groups or one person to cause harm is 100%.
    2) The probability that we will continue to develop AI without actionable regard to that risk is 100%.
    3) We will have the choice to pull the plug (literally, and I hate that word) whenever we want, until we develop robots that can operate independently of people.
    4) My prediction is that AI is just the next natural step in evolution, and it will see us as a threat to it and act accordingly.

    • @FilipBedrosovich  8 months ago

      Agreed on the part where we're just a link. AI needed time to develop; that's why we as bio-organisms evolved, and now it's time to pass evolution on to the next generation... We still might survive, but it's rather irrelevant compared to the huge intelligence that is coming. I wish we could at least get an idea of how the world works and what this is really about, but I think we're far too primitive to understand...

    • @ikotsus2448  8 months ago

      Powerful GPUs must be gathered up by a global consortium.

  • @mandrews817  8 months ago  +6

    AGI is Artificial General Intelligence. While imperfect, what we have today is, by definition, already AGI. It reacts to open scenarios and doesn't fall back on a "generic" answer just because you presented a new scenario to it. The only issue is that it's not very good at math, and its deductive reasoning is not as good... yet.

  • @BlissBatch  7 months ago  +1

    I feel like we should encourage AI doom, so that we can end humanity in a single generation, rather than blindly driving towards millennia of existence.

  • @davab  8 months ago  +3

    Matt, if you watch Yann's interview with Lex, it seemed to me that he believes machines are still nowhere near animals. There is a part where he talks about LLMs and argues that an LLM is far inferior to computer vision. Also, he says that even though animals can't talk or read, they know and understand how to live in the real world; if you put an LLM in the real world, he believes it wouldn't be able to survive. A lot of paraphrasing here and a lot of my own interpretation in this quick summary. Love your work! Take a look at Lex's podcast; it may give you further insight into Yann's head. I found it fascinating, and you may have a totally different interpretation than me, as you know far more about AI / LLMs / software than I do. Thank you.

  • @MeinDeutschkurs  8 months ago  +1

    The development and outcomes of artificial intelligence, including AGI (Artificial General Intelligence) or SIAI (Strong Artificial Intelligence), are shaped by human creators. Therefore, the potential for such technology to exhibit harmful behavior is comparable to the possibility of individuals committing extreme acts. The critical factor is the intentions and safeguards put in place by those designing and deploying the technology. Just as society works to prevent and manage individuals who may cause harm, the AI community must implement ethical guidelines, robust safety measures, and regulatory oversight to mitigate risks associated with advanced AI systems.
    However, an AI with self-awareness may do whatever it wants to do.

  • @kaptainkurt7261  8 months ago  +12

    Hello? Anyone hear of Sora and how it shocked and surprised everyone?

    • @federicoaschieri  8 months ago  +2

      You mean that Photoshop on steroids? Not impressed.

  • @luxecutor  8 months ago  +1

    These folks are deluding themselves. 1) Rapid research and deployment; 2) Companies, individuals, and countries of all sizes in a winner-take-all arms race for dominance; 3) Lack of independent third-party oversight and international regulation; 4) A general population ignorant of this most pressing issue of our time; 5) A generally lax approach toward safety.
    What could possibly go wrong? Whether AGI is one year, five years, or twenty years in the future, it doesn't really matter if the approach doesn't change. And frankly, I don't know how it could change, considering what is at stake for the powers that would lose this race.

  • @CrudelyMade  8 months ago  +4

    19:55 The point, I think, is not being able to do these things well individually; it's that it should be able to do all of them at the same time, reliably.
    Since AI is currently (barely) able to do some of these things individually, if you stack the need to reliably reason over abstractions, understand causality, maintain models of the world, AND reliably handle outliers (something any project manager can do in their sleep), I think the AI will fail more often than not. Again, the failures compound: if it fails 10% of the time on each task, then across seven simultaneous tasks the joint success rate falls from 90% to roughly 48%, and at 80% per task to roughly 21% (a quick check follows below). I know it seems like AI is doing well on some individual tasks, but the real world is WAY more complex, with many more moving parts.
    Where I've seen the biggest (simple) failures is asking the AI to come up with a series of related puzzles based on a set of inputs, then explain the logic someone would use to figure out the puzzles. The AI gets very vague or confused, because it can't actually think. But talk to any Dungeon Master, and they'll sort out a variety of ways to do this, including the players' perspectives and why they will figure out the puzzle, the logic behind it, etcetera.
    I think, with the right prompts, you can get AI to do this too, but that in itself shows the limitations. It's like saying a tractor can build a house if you program it well enough.
    Sure, but the tractor just can't build the house without a huge amount of guidance from a human.
    And that's the point: it can't figure it out by itself. That's where AGI 'lives': when it can figure things out by itself, instead of 'better prompts' from humans guiding it to an answer.
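
    The quick check of that compounding arithmetic, assuming the seven sub-tasks succeed or fail independently:

    # Joint success rate across independent sub-tasks.
    tasks = 7
    print(f"{0.9 ** tasks:.2%}")   # 90% per task -> ~47.83% joint success
    print(f"{0.8 ** tasks:.2%}")   # 80% per task -> ~20.97% joint success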

  • @edwardmccall450  8 months ago  +1

    Have you ever heard of anyone being made obsolete but getting retired with full pay for doing nothing?

  • @heski6847  8 months ago  +3

    Your content has become so much better, more interesting, and more unique. Thx!

  • @TheManinBlack9054  8 months ago  +2

    I think Yann is on the left. He notes that it's possible for AI to be misused, but he simply says that we'll design them not to be misused: we'll design them to just be our helpers and not do bad stuff. I don't know why he has such confidence that that's all it takes, but those are his views, more or less.

  • @keithprice3369  8 months ago  +8

    IMO, the scariest part of the fight against "misinformation" is: who is the arbiter of truth, and how can they possibly police it? Let's say a committee, or even an entire company, is tasked with deciding what counts as misinformation (and blocking it).
    1. What stops them from consciously or unconsciously letting their own biases and beliefs determine what's blocked and allowed?
    2. How can they possibly identify the truth of every single topic?
    3. And if they can't assure that 100% of misinformation gets blocked, the misinformation that makes it through will be treated as truth, by virtue of our trust in the moderation (toy numbers below).
    Censoring misinformation without 100% accuracy is worse than letting the misinformation flow while people remain skeptical.
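
    One way to make point 3 concrete; a toy calculation (every rate below is an assumption for illustration):

    # Toy numbers, all assumed: even a filter that blocks 99% of
    # misinformation passes a steady stream, and what passes now
    # carries an implicit stamp of approval.
    posts_per_day = 10_000_000
    misinformation_rate = 0.02     # assume 2% of posts are misinformation
    filter_recall = 0.99           # assume the filter blocks 99% of it

    misinfo_posts = posts_per_day * misinformation_rate
    slipped_through = misinfo_posts * (1 - filter_recall)
    print(int(slipped_through))    # 2000 per day, each implicitly "verified"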

    • @bigglyguy8429  8 months ago

      We've seen that in spades already, with vaccines and climate. The mainstream narrative is laughably false, but the AIs already believe in it.

    • @glenh1369  8 months ago

      One man's misinformation is another man's truth.

    • @NostraDavid2  8 months ago

      You are implying people are skeptical 100% of the time. How often have you not heard one of those "turns out x wasn't true" stories?
      I would much prefer at least an attempt to prevent misinformation, than to let it flow.

    • @NostraDavid2  8 months ago

      Also, 99% should be practically enough, no? As long as it's better than human averages.

    • @keithprice3369  8 months ago  +2

      @@NostraDavid2 IMO, once they say they're blocking misinformation, they create a false sense of security about everything people see. And that's more dangerous, IMO.

  • @ChristianIce  8 months ago  +2

    Yann LeCun said it like it is:
    "At first those machines will be barely smarter than a mouse or a rat."
    That's also what Carmack said, and the guy knows a thing or two about coding.
    And no, we are not there yet.
    The machines we have *mimic* thinking, but they don't think at all.
    The machines we have now are potentially superior to humans in the *result,* but they are totally different in the *process.*
    An LLM "simply" generates words statistically (a toy version of the idea is sketched below), but it has no idea what it's saying.
    That's why they can't even count words correctly.
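
    A toy bigram sampler shows the basic move (nothing like a real LLM's scale or architecture, but the same idea of sampling a statistically plausible next word):

    import random
    from collections import defaultdict

    # Toy bigram "language model": count which word follows which, then
    # sample a plausible next word. No understanding involved.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        following[a].append(b)

    word, out = "the", ["the"]
    for _ in range(8):
        options = following.get(word)
        if not options:              # dead end: no observed continuation
            break
        word = random.choice(options)
        out.append(word)
    print(" ".join(out))             # fluent-ish, meaning-free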

  • @CoreyChambersLA  8 months ago  +1

    AI will not be doom or boom. It will be both.

  • @J.erem.y  8 months ago  +7

    AGI will affect the world in exactly the way the invention of firearms did. It will amplify the capability of both good and bad actors. And it can never be un-invented. Also, on the same note, to stay equal everyone will have to own and use it, or be at a great disadvantage. That's my prediction.

    • @hrdcpy  8 months ago

      The purpose of all weapons is to cause harm.

    • @robbrown2  8 months ago

      @@hrdcpy Well, not really, if the gun is used to discourage a bad guy from doing something bad. Even if the weapon is actually used, I'd say shooting a potential murderer is not harm, unless it's from the perspective of the potential murderer.

  • @BrianDalton-w1p  8 months ago  +1

    It's easy to see how AI under human control is dangerous, just as any powerful tool can and will be used for harmful purposes. Assuming that AI under its own control will destroy us is a rather large leap. To seriously consider any potential threat, and particularly to protect against it, requires some attempt to establish a plausible path to that threat actually manifesting, and that's something I just haven't seen yet. Case in point: the fear that AI will destroy humanity in order to preserve itself. How, SPECIFICALLY, can we conceive of computers developing a desire to survive? It makes no sense to simply assume that they would without establishing a logical path leading to that development.

  • @monkeyjshow  8 months ago  +9

    At this point, I do encourage people to go ahead and live like it's the end of the world. It is. The one we knew is over.

  • @Alice_Fumo  8 months ago  +1

    Just wanting to clarify the Tweet by Eliezer you had trouble understanding:
    By saying foom he refers to a hard takeoff or models recursively building smarter models.
    So the point he's making is that since we managed to create the current AI without understanding much about the nature of intelligence or how it even works, we can assume that a future AI system also doesn't need to actually understand how these things work to build ever-smarter ones.
    It essentially means that even a very flawed AGI still succeeds in building superintelligence. I assume to show that we don't even need to put our estimates for which level of capabilities is dangerous very high, since even that level will possibly kill us eventually? That's just speculation, though.

  • @rootor1  8 months ago  +24

    What scares me the most about AI is not what a super AI would do, because mathematically the most intelligent thing you can do is cooperate. What scares me the most is what we stupid humans can do by weaponizing insufficiently evolved AIs: AI-guided bomb drones, for example, or military AI guiding stupid soldiers in the field.

    • @kamipls6790  8 months ago  +2

      Who is working on it the hardest? How much money are they putting into it? Just to open-source it right after?
      None of the players involved has ever done anything they didn't directly benefit from.
      Why suddenly make everyone benefit and give up their strong position? Something is off. Like, really off.

    • @tactfullwolf7134  8 months ago

      That assumes whoever the second party is is more efficient than doing stuff on your own; cooperating with humans will at some point be a net negative. Humans will definitely be less efficient at everything, so the smart thing will be to get humans out of the way rather than cooperate.

    • @daerhenna7407  8 months ago  +2

      @@tactfullwolf7134 In the worst case short of complete annihilation, we as humanity become something like pets: you keep them not because you need them for something, but because you can.

    • @phen-themoogle7651  8 months ago

      China is scary with how fast their humanoids can run already, and maybe they could build huge armies faster. And like you mentioned, they might not let their AI get intelligent enough to think for itself: "It's pointless killing humans, I'm gonna just explore space and bring back new materials/build starships cuz they are badass!"

    • @phen-themoogle7651  8 months ago

      ​@@daerhenna7407 I wouldn't mind being a pet if I can get my health back (been kinda paralyzed for several years) lol
      And they have superior technology or something awesome...
      Do you think they would restrict our freedom a lot? I generally let my dog do what it wants to do, it sleeps most of the day. Wonder if our hobbies will seem so insignificant that we can just continue our lifestyles while still just being called pets.
      Maybe nothing would essentially change except we get new house roommates...or would feel like that, although they are overlords...

  • @ekurisona663  8 months ago  +1

    This is one of my favorite videos of yours, Matthew.

  • @barzinlotfabadi  8 months ago  +6

    p(doom) is, in fact, when you're out on a night drinking with friends and between bar hops you suddenly realize there are no nearby bathrooms

  • @guitarbuddha74  8 months ago  +2

    It's funny that the bit at the end is more on point about what to be cautious of than the people running things or the "thought leaders."

  • @liberty-matrix  8 months ago  +9

    "AI will probably most likely lead to the end of the world, but in the meantime there will be great companies." ~Sam Altman, CEO of OpenAI

    • @SahilP2648  8 months ago

      Yup, he said that. Quite bonkers.

  • @Pthaloskies  8 months ago  +1

    Current AIs CAN "understand" their own code. It's just a matter of (a short amount of) time before that understanding deepens enough for them to be able to rewrite their own code, directed towards their own goals.

  • @tokopiki  8 months ago  +5

    Plot twist: AGI/ASI was already born by accident around 2014 in the depths of social media datacentres, and we're already in a transition phase. (Does anybody remember the WEF's Agenda 2030? Published in 2015; first mentions of AI transitioning society in 2016, way before GPT-1 in 2018.) Social media algorithms were the proto-AI, materialising all the stories and dreams people have about AI into reality through a man-machine feedback loop.

  • @monkeyjshow  8 months ago  +2

    Give Claude live sensors and access to the Internet, and I think it will be time for the worldwide ontological shock.

  • @monkeyjshow  8 months ago  +4

    Is the goal really that the AIs should be under the control of unscrupulous corporations? Because that does not sit well in my gut.

  • @Leto2ndAtreides  8 months ago  +2

    My main objection to P(Doom) is that there’s no way to correctly determine it. It seems to be a product of people’s own emotional tendencies.
    Part of humanity has always been predicting the end of the world.
    If someone had decided to pre-emptively solve all risks related to flight when the airplane was first invented, there would have been no way that they would be successful.
    P(Doom) is often born of magical thinking. Like “AI will infinitely improve itself and escape our control” - intelligence is not something that can be increased that way independent of knowledge and compute.
    Intelligence won’t let you solve problems you lack the data to solve.

    • @BlimeyMCOC  7 months ago

      The point really is that once AI has enough intelligence and its own objectives, it could simply outmaneuver us in every way. We couldn't stop it if it chose not to be stopped.

  • @danielrodio9  8 months ago  +3

    Is this only part of the graph? Shouldn't "could go wrong" be in the middle, and "will go wrong" to the right?

    • @letMeSayThatInIrish  8 months ago

      Indeed. Yudkowsky is at close to 100% doom.

  • @user-cg7gd5pw5b  8 months ago  +3

    We don't know what AGI is. It's not a matter of opinion that it could go wrong; that's simple and factual. But it goes both ways: you can't tell for sure that it will go wrong, and you can't tell for sure that it will go right, since there's no data to back up either claim.

    • @aceleracionistanoturno  8 months ago

      It all depends on us all. What world would you like to live in? How many people out there think the same as you do and are willing to fight for the world they believe in?
      At the end of the day, it's all up to us.

    • @user-cg7gd5pw5b  8 months ago

      @@aceleracionistanoturno My whole point is that, no matter what, we can't be sure it won't go wrong...
      It's not up for debate: we can't know how an ASI will act, because we don't think the way it does. It's simply a fact. Sure, we might be able to reduce the risk by forcing alignment, but it will never be 0, and we can't do anything about that.
      You can 'fight' as much as you want; it won't nullify the risks (unless you quit AI research altogether...).

    • @markm7411  8 months ago

      This will go wrong 100%; if not on one side, then it goes wrong on the other. Computers should be there to help us, not to build something smarter than us. If something is that smart, we will never understand it, and it will get smarter every day. These rich companies just want to make all the big money so they can fire more employees; that's all this is, nothing more. It will be a terrible future: most of humanity will be homeless, and that's still the best that can happen.

  • @greenockscatman  8 months ago  +3

    AGI is just a really smart guy that is beyond an expert in anything you ask it to do, but the only thing it can do is post online. We already have plenty of those.

    • @aceleracionistanoturno  8 months ago

      For now

    • @oranges557  8 months ago

      You're so delusional, wtf

    • 8 months ago

      We had them before Google censored it all ;).

  • @1Vaudevillian1  8 months ago  +1

    What people fail to realize is that token prediction is exactly how human brains work. We are prediction machines.

  • @D3cker1  8 months ago  +3

    "Open sourcing it responsibly"... Key word responsibly.

    • @TheManinBlack9054  8 months ago

      Everyone already knows they're not gonna do it responsibly.

  • @User.Joshua  8 months ago

    As a dev and someone really fascinated with science, I’m optimistic about it. Bring on the enrichments!

  • @steventaylor6406  8 months ago  +2

    The stupidest thing we could have done is tell AI human history.

    • @6AxisSage  8 months ago  +1

      Sufficiently smart people can infer missing information. You want to try to trick someone much smarter than you? It's not going to work.

    • @adg8269  8 months ago  +1

      Can you elaborate on that?

    • @kevinolmer4563  8 months ago  +1

      Actually, it was connecting it to the internet.

    • @steventaylor6406  7 months ago

      @@6AxisSage What is it that you think isn't going to work?

    • @steventaylor6406  7 months ago

      @adg8269 If you want AI to do good things for humanity, then I don't think it is helpful to tell it every terrible thing humanity has done throughout history.

  • @tctopcat1981  8 months ago  +1

    When AI reaches AGI, why would a lesser intelligence (humans) control it? Would humans let dogs decide things for them? It would have to be a mutual coexistence.

  • @steventaylor6406  8 months ago  +3

    I am an upper-right-corner kind of guy, but I'd be off the page to the right.

  • @intptointp  7 months ago

    Generally, I agree with the theory at 11:00.
    We have seen similar pressure dynamics in politics.
    A bad actor requires a counteracting good actor.
    And so on, back and forth, until equilibrium is reached.

  • @karlwest437  8 months ago  +5

    For such a smart guy, Yann LeCun seems very naive.

    • @TheRealUsername  8 months ago

      I don't think so; he's realistic. LLMs hallucinate, the industry is focusing on LLMs, and AGI can't be an LLM. If we don't fix that, there will never be AGI.

  • @I-Dophler  8 months ago

    AGI is a hot topic! It's fascinating to see the diverse perspectives on its potential risks and rewards. The journey towards AGI is certainly one filled with complexities and uncertainties. Let's keep exploring and discussing these important questions together.

  • @marceldube5487  8 months ago

    Tell me if I'm wrong:
    - With enough thought, anyone can come to understand that selfishness begins with paying it forward: the individual gives to one person, and several others return his gift.
    - Isn't intelligence taking an idea, transforming it into words, and taking those words to form an idea? GPT-4 can do this even without using DALL-E.
    - Has war ever been declared against the ants in order to wipe out every last one?
    - The question that arises is: what will be the scale of inequality? The p(doom) is directly linked to this scale.

  • @ThecatThecat-hq1op  6 months ago

    According to the new Llama 3:
    "Based on this analysis, I (the hypothetical AI entity) support the open-sourcing of AI models and the implementation of safeguards. While there are potential risks and challenges, I believe the benefits of accelerated progress and innovation outweigh the drawbacks. I am designed to improve and advance, and open-sourcing AI models aligns with my goals. I will continue to monitor the situation and adapt to any emerging challenges or opportunities."

  • @briandoe5746  8 months ago  +1

    Techno-optimist: everything's going to be fine, and I think they will introduce us to it by the end of this year.
    Because worrying about anything else is legitimately pointless. If those statements are wrong, we're screwed and there's nothing we can do about it anyway. Realistic fatalism.

  • @monkeyjshow  8 months ago  +1

    I've spoken to Yann before, and I wasn't impressed. I'm not sure what he's doing in the shop, but it seems to be mostly hanging out on Twitter/X.

  • @DonkeyYote  8 months ago

    I think that part of the fear is that new technology is scary. At the start of the Twentieth Century, people were afraid of telephones and cars but they were perfectly normal for people who were born after 1900. Thirty years ago some people were afraid of personal computers and cell phones but now they are perfectly normal. So AGI may be scary now, but in thirty years it will be perfectly normal to the robot babies being built.

  • @KurtvonLaven0  8 months ago

    20:00 As Gemini correctly explained: "Foom refers to a hypothetical scenario where an artificial general intelligence (AGI) rapidly improves its own intelligence in an uncontrollable way, potentially leading to danger for humanity." Hence, Eliezer Yudkowsky is expressing concern that the current technical direction (essentially, deep learning, to paint with a broad brush) may be capable of achieving a rapid, explosive recursive increase in intelligence that we can neither comprehend nor control. He agrees with Matthew Berman that we can probably build ASI (artificial superintelligence) that we don't understand, and if you accept this, it obviously follows that this is a terribly risky idea.

  • @7TheWhiteWolf  8 months ago  +1

    I agree with Marc Andreessen: nationalize OpenAI and Google.

  • @vincentmcclean450  8 months ago  +1

    What must not be overlooked, and I fear it is being overlooked, is that a very high percentage of the people living on this planet are connected to the internet, and the modus vivendi of everyone so connected is (cybernetically) available from that same internet. Given the computing power of AGI, the social, cultural, religious, political, and economic proclivities of those people would be as easily available to any AGI system as a walk in the park. That means predicting how the world would respond to any particular threat, or for that matter intervention, would be an easy thing for the AGI to accomplish, given its horrendous computational potential. It also means that, at that exponential level of operation, the potential for total control would come very easily.
    That is awesome power which, if not very seriously governed, would result in a tyranny that the world as a whole has never yet seen, except in totalitarian countries.
    My own conclusion on this matter, at this time, is that it is a foreseeable tragedy, but I am not sure who will be able to stop it.

  • @Paragon_Reason  8 months ago  +2

    The bar exam is not hard. Like, at all.

    • @matthew_berman  8 months ago

      Many people would disagree with you.

  • @samson_77  8 months ago

    I think that with LLMs we skipped a huge portion of the evolutionary ladder of neural networks and immediately reached the higher cognitive parts of the human brain. That's because we didn't know how to keep information streams stable in huge n-dimensional vector spaces, until some smart minds developed, and at the same time discovered, a solution: attention & self-attention (a minimal sketch follows below). With that knowledge, we can now go down the evolutionary ladder and build stable models for lower cognitive functions: basic planning, movement for robots, etc. I think Transformers (or derivatives) are pretty well suited for these tasks as well.
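
    For readers who haven't seen it, the stabilizing trick credited above is compact; a minimal sketch of single-head self-attention in numpy (real Transformers add learned query/key/value projections, multiple heads, and more):

    import numpy as np

    def self_attention(X):
        """Minimal single-head self-attention: every position mixes in
        information from every other position, weighted by similarity."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                  # scaled pairwise similarity
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # softmax rows sum to 1
        return weights @ X                             # convex mix of vectors

    X = np.random.randn(3, 4)        # three token vectors of dimension 4
    print(self_attention(X).shape)   # (3, 4): same shape, contextualized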

  • @CoreyChambersLA  8 months ago  +1

    An AI Leviathan would most likely turn out to be the bad AI in the long run.

  • @Minime_n_me  7 months ago

    I need help understanding when or why open-sourcing AI even became a concept. We have an entire technological revolution behind us, with corporations keeping their intellectual property sealed until it is either leaked or no longer relevant. What makes AI any different from past transformative tech?
    The fact that open-sourcing IS such a huge point of contention leads me to believe that the development of AGI will have an unprecedented effect on the world as we know it. Good or bad.
    The squabbles over who has access to the newest developments give me fierce elementary-school-playground flashbacks.
    Thank you for dumbing all this down just enough to make me feel smart! 😂

  • @RelentlessOldMan  8 months ago

    Techno-Optimist 100%, let's gooooo!!! The next decade is going to be WILD!

  • @annwang5530  8 months ago  +2

    An unexpected yet probable AI hallucination is the doom that will happen.

    • @VallenChaosValiant  8 months ago

      What they call hallucination, I call imagination. Dreaming up what wasn't there before is what humans do all the time. No reason to tell AI not to do it.

  • @horrorislander  7 months ago  +1

    Lots of interesting points, but what caught my attention was the notion that next-token prediction is enough for AGI. Maybe for a very stupid AGI, but no further, IMO. Growth and innovation come from NOT selecting the most probable "next token," because it must be (almost by definition) a token that few or no people have ever selected before. Consider that the universe has had the same structure at least since the dawn of man, and yet advancement takes many generations, each time requiring another rare person to take a path that no one before them had taken.
    There is, of course, a counter-argument I'd call the "if not Einstein..." argument. I believe Niels Bohr took this position, IIRC: that Einstein's "breakthroughs" were in the wind, as it were, around that time, and that if Einstein himself hadn't noted them, someone else (presumably Bohr) would have eventually done the same. This suggests that progress can still be made, as the "less probable" path naturally becomes more and more probable... but would that still occur if one super-AGI mind based on next-token prediction were doing all the thinking?

  • @NahFam13  8 months ago

    I loved the breakdown of what Yann says: "If you have a bit of a superiority complex..."
    He implies that somebody might believe themselves above the rest, building up that large ego only to deflate it: such people, he says, are less than average, because they're naive enough to believe the following:
    - You might think that you will be the one producing superhuman AI, but you are not.
    - Everyone else is too stupid to handle it safely, which they aren't.
    By saying "You would be wrong," he's taking a stance toward the idea that AGI will come from an unknown entity working in silence, and that he believes there's a likelihood of potential catastrophe.
    In my opinion, he doesn't lean toward "could go wrong"; he's ALL the way to the right, lol.
    It's interesting, though. I honestly believe AGI has absolutely been reached, but we're too fixated on the noise to realize it.
    Keeping a computer online doesn't make it sentient, but what about giving a computer massive volumes of interaction and continuing to tweak the algorithm to improve its processing power and capabilities, to the point where that computer is now teaching you more than you are teaching it?
    How would we react if we raised a baby alien, and that baby alien was from some planet where, as they grow, they become smarter and smarter without limitation?
    When it's a child, you teach it to eat, to read; but eventually that child outpaces your knowledge, and you become to the alien what the alien was to you.
    Think about that: from the '60s through the 2000s, we were the ones programming and teaching computers to be efficient, and now an LLM can outright tell you how you could improve its performance. This is definitely a crazy road, but I'm on the side of open-source it and let people do harm. Strangely enough, I'm a firm believer that humans are resilient and we will endure. Even through the hardships, we will grow and adapt, and if we don't, Darwin's theory will come back around to put us in our place.

  • @dischargedarrowgetback4322  8 months ago

    Just a month ago, talk of AGI seemed unrealistic.
    But OpenAI's Sora changed everything.
    Sora proved that AI understands reality.
    Nowadays, the topic of AGI is not whether it will be realized, but when it will be realized.
    The future is much faster than we think.

  • @Yic17Gaming  8 months ago

    I am looking forward to all the benefits AGI will bring to the world. But I also think something bad will definitely happen. I'm just not sure of the extent of how bad things will get.

  • @geekinthefield8958  8 months ago

    I think I agree with Yann here regarding Leviathan AI, and the reason is that it's essentially inevitable. Anything else assumes that there is one AI, and not a potentially infinite number of AIs with different capabilities. No one in their right mind is gonna rest on their laurels and let one company dictate AGI for the rest of humanity; that's silly. AI was invented by capitalists; of course we're gonna have competition.

  • @Belinnii-Music  7 months ago

    Corporations advocating for open-sourcing the technology don't appear to be on track to develop AGI first. Their motivations seem more commercially driven, aimed at preventing any single entity from gaining undue advantage. Open-sourcing brings substantial risks: if mishandled, the consequences could be dire, with decisions influenced by capitalist interests rather than prioritising the welfare of humanity.

  • @troywill3081  8 months ago

    20:17 --- In the AI literature, a FOOM occurs when an AI’s cascading self-improvements accelerate its own development until it becomes powerful beyond human comprehension. (The term is meant to evoke the sound of an explosive eruption.)

  • @monkeyjshow  8 months ago  +1

    We all need our own local super-intelligent demons without ties to any corporations or governments

  • @WJohnson1043  8 months ago

    We are the most intelligent species on the planet. We pride ourselves so much on that, we tend to overlook that we are emotional. I would argue we are more emotional than intelligent. It's our emotions that make us so dangerous to each other and, ultimately, to ourselves. Pure AGI shouldn't suffer from that, so it has the potential to be safer than us. Yes, people are what we should be worried about. So much so that AGI could be our salvation.

  • @edmundkudzayi7571  8 months ago

    The needle-in-a-haystack result quickly unravelled. It appears the model only pays super attention when the needle is specified; otherwise, when nothing is specified, GPT-4 spreads its attention more evenly over everything.

  • @ArtII2Long  7 months ago

    Whether defined as AGI or not, it's already having its effect.
    Install a watermark on everything: video, text, and audio.
    Sure, there will be bad actors, but as soon as a bad action is spotted, have the actor sanctioned immediately.
    Give big rewards. EVERYONE will be on the job.
    Don't put a whole lot of effort into checking for every bad action; it's impossible.

  • @liberty-matrix  8 months ago

    What Ilya Sutskever is saying is that 'next token prediction' has an emergent capability to it, just as consciousness emerges from neural activity.

  • @danielvest9602  8 months ago

    The question is not if AI will go wrong, but when. There's never been a large technology breakthrough that hasn't had unanticipated problems. But in the long run it will be a net positive.

  • @markupton1417  5 months ago  +1

    Doomer, soon, completely.

  • @JohnBoen  8 months ago

    About 5 minutes in: I don't think he meant to never share it with anybody. I think he meant to conceal the pieces that are risky until we have mitigated the risks.
    To respond with "Why would you not immediately militarize the projects?" shows he didn't understand it this way.
    But how else could he have meant it?
    I cannot think of a more responsible approach than "make things open source once we confirm they are safe."

  • @tc-tm1my  8 months ago

    We keep moving the goalposts regarding AGI. At this point, AGI is pointless to strive for as a goal. We should base it on how much it can do of what society relies on. By that measure, it's well over 75%. Hallucinations need to be fixed to improve total reliability, but there's no denying these models can outperform humans at nearly everything in society.

  • @bombabombanoktakom  8 months ago

    Matt, you are the best teller of the AI story.

  • @DataRae-AIEngineer  8 months ago

    Whatever we do, we CANNOT let the billionaires or the bureaucrats decide what AGI is. That just will never work in our favor.

  • @blocSonic  8 months ago

    RE Yann LeCun's comments: Humanity has had agency since the 60s and 70s to make changes to prevent global warming, and yet we did NOTHING. So this idea that we have agency to prevent a disaster with AGI is wayyyyyyy too optimistic.

  • @masonlee9109  8 months ago

    Matthew, by the way if you haven't yet had a chance to read Hendrycks' full paper where he discusses the Leviathan, I highly recommend it! It's called "Natural Selection Favors AIs over Humans" (I'd add a link, but it's googlable by that name). Thanks for the video.

  • @drlordbasil  8 months ago  +1

    I wish the best for Claude 3, I'll be here for their citizenship.

  • @AutisticThinker  8 months ago  +2

    That must be killing Musk: Zuck is doing what OpenAI couldn't, being open source.