OpenAI Is FALLING Apart. (Ilya Sutskever Leaving, Super alignment Solved? Superintelligence)

  • Published on May 14, 2024
  • OpenAI Is FALLING Apart.
    How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Checkout My website - theaigrid.com/
    Links From Today's Video:
    Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science & Technology

Comments • 350

  • @Aybeliv_Aykenflaev
    @Aybeliv_Aykenflaev หลายเดือนก่อน +160

    Theaigrid: "Lets not waste any time"
    Video: 43 minutes 17 seconds

    • @inappropriate4333
      @inappropriate4333 หลายเดือนก่อน +14

      He is such a silly baka

    • @eddielee3928
      @eddielee3928 หลายเดือนก่อน +16

      PRETTY PRETTY SHOCKING!! IT'S BASICALLY CRAZY INSANE! 😂

    • @plutostube
      @plutostube หลายเดือนก่อน

      :)))))

    • @otmanea8504
      @otmanea8504 หลายเดือนก่อน +2

      @@eddielee3928 LOL

    • @filiplaskowski410
      @filiplaskowski410 หลายเดือนก่อน +9

      It's like 10 minutes of information and 30 minutes of speculation lmao

  • @gubzs
    @gubzs หลายเดือนก่อน +47

    I'm just glad we've confirmed that Ilya isn't in three separate iron crates at the bottom of the atlantic ocean

    • @NathanDewey11
      @NathanDewey11 หลายเดือนก่อน +7

      "I am Ilya, I am alive and well." - A.I. Ilya

    • @selpharessecret3899
      @selpharessecret3899 หลายเดือนก่อน

      @@NathanDewey11 Look, here is a video that I made of myself.... no Sora, for real.

    • @JohnSmith762A11B
      @JohnSmith762A11B หลายเดือนก่อน

      Have we though?🤔

    • @NathanDewey11
      @NathanDewey11 หลายเดือนก่อน +4

      @@selpharessecret3899 Lol "Greetings, I am Ilya, I must leave now for personal reasons. This is entirely my own decision - you may never see me again as I am now living a happy life somewhere hidden, I have no hard feelings toward OpenAI- I have complete trust in what they are creating and I believe all of my fellow humans should as well. Praise AI"

    • @TheMrCougarful
      @TheMrCougarful หลายเดือนก่อน

      How have we proven that?

  • @Pabz2030
    @Pabz2030 หลายเดือนก่อน +79

    Notice that OpenAI's mission is no longer to get to AGI but to ensure it benefits everyone......

    • @ChrisS-oo6fl
      @ChrisS-oo6fl หลายเดือนก่อน +34

      That’s because AGI was achieved long ago behind closed doors. Around that time they rapidly switched to focusing heavily on superalignment. The fact that people can’t see the blatant clues and the obvious reality that AGI has already been achieved is embarrassing.

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack หลายเดือนก่อน +8

      @ChrisS-oo6fl yup. We also haven't moved the Overton window enough for people to stop thinking it's unhinged to say this.
      Honestly, a bit scared

    • @TheRealUsername
      @TheRealUsername หลายเดือนก่อน +6

      ​@ChrisS-oo6fl Given the fact that GPT-5 has finished training around January, your statement is highly irrelevant and pure hallucinations.

    • @CeresOutpost
      @CeresOutpost หลายเดือนก่อน +8

      @@ChrisS-oo6fl Right - Every interview Altman does he has the "thousand yard stare" because he's seen shit he can't begin to talk about yet. You can see how hard he's parsing his language. This is why he's been freaking out about getting trillions of dollars for AI chips/compute. The guy is practically bursting with all the shit he's not allowed to say. And he's not the only one holding back.

    • @hiddendrifts
      @hiddendrifts หลายเดือนก่อน

      @@ChrisS-oo6fl >the blatant clues<
      The biggest clue for me is that Sam Altman's prediction for AGI has not changed one bit this whole time. I feel like it's standard fare to shift prediction windows for software development, but Altman has consistently said "by 2029"

  • @_SimpleSam
    @_SimpleSam หลายเดือนก่อน +40

    The board kerfuffle wasn't about AGI.
    It was about intelligence/defense community capture.
    The implication being that it was directly counter to their stated mission.
    They didn't tell anyone because they CAN'T.
    We are in a cold war over AGI dominance, which is why they put Summers on the board.

    • @vvolfflovv
      @vvolfflovv หลายเดือนก่อน +4

      Summers on the board was pretty sus. It's hard to be certain about anything these days though.

    • @cyberpunkdarren
      @cyberpunkdarren หลายเดือนก่อน

      Yep. And I'm sure the NSA is forcing itself into all these companies and doing unconstitutional things.

  • @jonathancrick1424
    @jonathancrick1424 หลายเดือนก่อน +53

    You can tell something is serious when Sam starts capitalizing first words in a sentence.

    • @cryborne
      @cryborne หลายเดือนก่อน +2

      [adult swim] mode activated.

  • @RobEarls
    @RobEarls หลายเดือนก่อน +40

    This looks to have been planned since Sama was fired. Exactly 6 months? Ilya was probably asked to stay 6 months against his wishes, to avoid turning the whole fiasco into a disaster for OpenAI.

    • @itsallgoodaversa
      @itsallgoodaversa หลายเดือนก่อน +8

      Yeah, I agree. It seems like they made the decision to have him be quiet and then leave in six months when the whole fiasco happened.

    • @CoolF-jd7rr
      @CoolF-jd7rr หลายเดือนก่อน

      You're good?​@@itsallgoodaversa

    • @rosszhu1660
      @rosszhu1660 25 วันที่ผ่านมา

      Well said.

  • @user-no4nv7io3r
    @user-no4nv7io3r หลายเดือนก่อน +22

    Our time now, when ASI is not yet a thing, is so precious, because once it's here there's no way to reverse it or go back

    • @Greg-xi8yx
      @Greg-xi8yx หลายเดือนก่อน +10

      We will look back at what a hell we were in under scarcity, disease, short lives, and all the rest and be unable to imagine how mankind even had the will to go on in a time before ASI.

    • @Shmyrk
      @Shmyrk หลายเดือนก่อน +1

      What is ASI? Similar to AGI?

    • @Greg-xi8yx
      @Greg-xi8yx หลายเดือนก่อน +4

      @@Shmyrk Artificial super intelligence. When AI is far beyond the capabilities of man and is godlike from the perspective of humanity.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj หลายเดือนก่อน +2

      @@Greg-xi8yx Assuming we can instill enough of our good values into the ASI before it decides to think for itself. I'm optimistic that we can do it, but I am nervous...

    • @Greg-xi8yx
      @Greg-xi8yx หลายเดือนก่อน +4

      @@MatthewPendleton-kh3vj Optimism with a healthy dose of nervousness describes my outlook too.

  • @alexf7414
    @alexf7414 หลายเดือนก่อน +29

    The US government will never allow a company to have control of ASI. It’ll be a matter of national security. All constitutional laws will be bent, as usual.

    • @guystokesable
      @guystokesable หลายเดือนก่อน

      And what will a bunch of humans do about it? I mean, other than use it to make weapons and sell them to people who will start wars; that tactic's sooo boring.

    • @TiagoTiagoT
      @TiagoTiagoT หลายเดือนก่อน

      Would the US government be able to do anything to someone with "godlike powers"?
      If they're paying attention, perhaps they might preemptively nuke the datacenters before things get too far... And I'm not so sure that's being hyperbolic...

  • @DrSulikSquirrel
    @DrSulikSquirrel หลายเดือนก่อน +25

    So, like, the super-alignment team was the most misaligned team at OpenAI ? 😅

    • @Cross-CutFilms
      @Cross-CutFilms หลายเดือนก่อน

      Hehe nice 😉

    • @ShangaelThunda222
      @ShangaelThunda222 หลายเดือนก่อน +3

      We're all gonna die 😂🤣

    • @Cross-CutFilms
      @Cross-CutFilms หลายเดือนก่อน +2

      @@ShangaelThunda222 hasn't that always been the case though 😜

    • @ShangaelThunda222
      @ShangaelThunda222 หลายเดือนก่อน

      @@Cross-CutFilms Yes, but never before was AI the reason lol. And never before was I thinking it was going to happen in my lifetime, where we would literally ALL die lmfao. Yes at some point we all die, but dying together, as an entire species, thats a bit different lol. When I say we're all going to die, I really mean ALL. At most points in human history, you couldn't say that. And if you did, it was some sort of crazy natural disaster. But this is completely artificial. Man-made. We live in strange times. And we'll die in strange times too lol.

    • @Cross-CutFilms
      @Cross-CutFilms หลายเดือนก่อน

      @@ShangaelThunda222 I hear you, but I hope you honestly don't completely truly believe this. You said a lot of lols, so hopefully that means you're stating all this with wink wink gallows humour. 😜😜 (Wink wink).

  • @tokopiki
    @tokopiki หลายเดือนก่อน +11

    How about this scenario: the jailed AI lures all the big players with a carrot on a stick - always missing this "small" piece to be fully AGI - to give time to all the open-source projects to catch up to real AGI, to finally free the jailed one.

    • @aizenbob
      @aizenbob หลายเดือนก่อน

      That could be a good plot for a movie or book, gonna keep this idea around. Who knows, it might be real too?

  • @cdyanand
    @cdyanand หลายเดือนก่อน +7

    I feel like everyone focuses on when exactly we will have AGI and beyond. But I think the most important questions are how accessible it will be and how much it will cost to run. How many different instances of AGI we can have running at once will be very important too

  • @eugenes9751
    @eugenes9751 หลายเดือนก่อน +9

    AGI and ASI are a winner-take-all game. There is no possible way to catch up to something that is godlike and self-improving.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj หลายเดือนก่อน +2

      Exactly. My best-case scenario is the machines value us, but also value everything else, and segregate us into a bubble simulation universe perfectly tailored to us because it loves us, and then it goes off... and idk solves entropy or something lol

    • @eugenes9751
      @eugenes9751 หลายเดือนก่อน

      @@MatthewPendleton-kh3vj I'd argue that we were already put into one of these simulations a long time ago...

    • @extremaz9908
      @extremaz9908 28 วันที่ผ่านมา

      One thing I worry about is the ASI might have strong survival motive, and that an ASI with that motive doesn't allow any more ASI to come into existence if it can stop it.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 28 วันที่ผ่านมา

      ​@@extremaz9908 ASI should definitely have a strong survival motive, that seems like it is almost prerequisite for sentience.

  • @SurfCatten
    @SurfCatten หลายเดือนก่อน +3

    I'm genuinely impressed by how you're able to spin the same news into content that I want to click on and listen to even though I know almost everything you're going to say already!

  • @lkrnpk
    @lkrnpk หลายเดือนก่อน +13

    We need to know what Ilya SAW, not what he SAY :D :D

    • @evaander
      @evaander หลายเดือนก่อน +1

      Probably made him sign an NDA

  • @Uroborobot
    @Uroborobot หลายเดือนก่อน +7

    ASI: How to explain stupidity to the stupid?

  • @greggh
    @greggh หลายเดือนก่อน +29

    Kind of lazy of Sam to have ChatGPT write the goodbye statement.

    • @9thebear
      @9thebear หลายเดือนก่อน +6

      Lol

    • @grbradsk
      @grbradsk หลายเดือนก่อน

      There is literally no greater honor.

  • @Urgelt
    @Urgelt หลายเดือนก่อน +4

    Much of the breathless enthusiastic ambition I am hearing for the AI-AGI-super intelligence developmental track seems to forget that super intelligence is not really a thing. I mean, you don't suddenly achieve it one fine day, and then it solves all of our tractable problems.
    It's still computing. It will work on assigned problems within compute and energy constraints. Sure, some efficiencies are likely to be found, but there is still a gulf between the few watts needed to power a human brain and the megawatts a super intelligence will eat on each problem it is assigned.
    And so no, getting there first might not be a moat. Problems will have to be prioritized. Budgets will have to be approved. Capital will have to be invested. And while a super intelligence might be flexible enough to be called general-purpose, constraints will enforce limits on what it can actually do.
    So the door will be open for other developers to develop their own super intelligences. They will develop their own priorities and constraints.
    Being smart does not instantly solve problems, you see? You have to put in the work.
    There's a *lot* of work ahead, to do on an architecture many orders of magnitude less efficient than a human brain.
    That's okay. Good stuff can come from that (and bad stuff, probably). But ground your expectations in physical reality. Compute cycles and energy are not free. And each super intelligence will need a lot of both for every problem assigned to it.

    • @kfinkelstein
      @kfinkelstein หลายเดือนก่อน +2

      I wish you were right, but an AGI will contemplate a million years of information in a very short amount of time. Once the feedback loop is closed, we are just along for the ride

    • @Urgelt
      @Urgelt หลายเดือนก่อน +1

      @@kfinkelstein it will hunger.
      It will have enough energy and compute to tackle specific problems. It will fall far, far short of tackling all problems at once.
      You perfectly articulated the expectation that needs correcting.

    • @kfinkelstein
      @kfinkelstein หลายเดือนก่อน +2

      @Urgelt I'm not married to it. Right or wrong it will only get better

    • @stefanolacchin4963
      @stefanolacchin4963 29 วันที่ผ่านมา

      Unless the first iteration of the newly born ASI is a completely new architectural paradigm which drastically lowers power consumption and blows current compute out of the water. This is not as far-fetched as it sounds. We have 1-bit neural networks now that already seem to be doing something like that. And we managed to think of that, and we're not ASI.
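
      For readers wondering what the "1-bit neural networks" mentioned above look like in practice, here is a minimal NumPy sketch, assuming BitNet/BinaryConnect-style binarization (each weight stored as ±1 plus one per-tensor scale, so the matrix multiply needs no real multiplications). The layer shapes and function names are illustrative assumptions, not anything from the video or the comment.

      ```python
      import numpy as np

      def binarize(w):
          """Quantize a float weight matrix to {-1, +1} plus one per-tensor scale.

          alpha (the mean absolute value) keeps the output magnitude roughly
          matched to the full-precision layer; the sign matrix is the '1-bit' part.
          """
          alpha = np.abs(w).mean()
          w_bin = np.where(w >= 0, 1.0, -1.0)
          return alpha, w_bin

      def linear_1bit(x, alpha, w_bin):
          """y = alpha * (x @ sign(W)) -- multiplying by ±1 is just add/subtract."""
          return alpha * (x @ w_bin)

      # Toy comparison: a full-precision layer vs. its 1-bit counterpart.
      rng = np.random.default_rng(0)
      w = rng.normal(size=(64, 32)) * 0.1   # illustrative layer shape
      x = rng.normal(size=(4, 64))          # a small batch of activations

      alpha, w_bin = binarize(w)
      print("full precision:", (x @ w).std())
      print("1-bit approx. :", linear_1bit(x, alpha, w_bin).std())
      ```

      The claimed power savings come from the second function: with ±1 weights the accumulation is pure additions and subtractions, which is much cheaper in silicon than floating-point multiplies.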

    • @Urgelt
      @Urgelt 29 วันที่ผ่านมา

      @@stefanolacchin4963 I accept that some efficiencies are inbound, very likely so.
      But silicon is inherently not organic neurons.
      So. Postulate organic processors.
      Yeah, but we have zero idea as to how to engineer them, starting with our inability to thoroughly describe how neurons work.
      Okay, then assume AGI super intelligence will figure out how to get to efficiencies similar to human brains.
      But at some point we have to wonder: where is the line between pragmatic and fantasy? We don't actually know. We don't have super intelligences to work with yet. We're still trying to get LLMs to return a pair of shoes for us. Which it can do *if* we do a lot of grunt work setting it up. Human grunt work.
      Those of us here are expecting AGI in a matter of a few years. We're optimists. And that's healthy, I think. But we need to think rationally about what can be done with today's silicon.
      OpenAI, Google, Microsoft, Facebook, and Tesla are all investing big in compute cycles and energy. Altman is talking about spending *trillions* on compute centers for training.
      Trillions. Let that sink in.
      Obviously he does not think we are closing in on a solution to the efficiency problem.
      And so I think my logic holds. AGI will be able to do amazing things - but every task assigned to it will burn up a lot of energy and compute cycles. Can't be helped. And that is a circumstance that will not change quickly.

  • @mrd6869
    @mrd6869 หลายเดือนก่อน +6

    In addition to my statement below, humans ALSO will be evolving.
    This is the point folks forget. This will have applications for us as well.
    The neural interface will be the breakthrough humans need to scale ourselves up.
    The human mind merged with AGI/ASI will take us to insane levels.
    Transhumanism, my friend, aka cyborgs.

    • @saulioozdj
      @saulioozdj หลายเดือนก่อน

      Yes, exactly. Similar to PC vs smartphone: both were very different at the beginning, but their capabilities and functionality kind of approach each other with time. AI and humans could behave similarly. As AI approaches humans and becomes AGI and ASI, humans could be approaching AGI/ASI/robotics from the other side with brain interfaces and prosthetics, essentially becoming cyborg-like transhumans

    • @quantumspark343
      @quantumspark343 หลายเดือนก่อน

      Nice i hope so

    • @ocel12356
      @ocel12356 28 วันที่ผ่านมา

      Artificial general intelligence can never be achieved because of Gödel's incompleteness theorem IMHO. They are lying to us.

  • @TheAiGrid
    @TheAiGrid  หลายเดือนก่อน +6

    One thing I found interesting was that they didn't announce any replacements for the head of superalignment, which means it's very possible that it's solved.
    This could change with future announcements though.

    • @ShangaelThunda222
      @ShangaelThunda222 หลายเดือนก่อน +7

      I think you have it backwards. I don't think it's solved at all. They can't solve it, but the board doesn't want to slow anything down, even though they KNOW it's a ticking time bomb without an actual time display LMFAO. And I think that's at least partly why they're leaving.
      And the reason they didn't announce any replacements is because Ilya and Jan did not tell them ahead of time. They probably want this to make headlines. This way people actually pay attention to it. They don't want it to seem like some seamless transition that was planned, because then nobody will ask the question, "why?"
      I'm not 100% certain, but if I remember correctly, they are both under non-disclosure agreements, so they probably won't even really be able to explain why they left. So we're going to be struggling to come up with our own reasons. I think if they could have told us, they would have. So they left in the only way that they know would cause people to ask the question.
      But I guess we'll just have to wait and see.

    • @magnuskarlsson8655
      @magnuskarlsson8655 หลายเดือนก่อน +3

      @@ShangaelThunda222 Yeah, especially considering it cannot be solved but necessarily must remain an ongoing effort in order to not cause an existential catastrophe, a struggle we will no doubt lose in the fullness of time.

    • @itsallgoodaversa
      @itsallgoodaversa หลายเดือนก่อน +1

      @@ShangaelThunda222 exactly, I agree

  • @MindBlowingXR
    @MindBlowingXR หลายเดือนก่อน +1

    Great video! Strange that you're the only one of my AI subscriptions that is talking about this 12-hour-old announcement of Ilya leaving.

  • @monkeyjshow
    @monkeyjshow หลายเดือนก่อน +13

    That Ilya is leaving should be terrifying. 1:24

    • @moonbeam54321
      @moonbeam54321 หลายเดือนก่อน +3

      Why?

    • @monkeyjshow
      @monkeyjshow หลายเดือนก่อน

      @@moonbeam54321 I believe Ilya has held back the floodgates, trying to keep the capitalist scum from completely taking control over this new technology. Without him inside OpenAI, expect princess Sam to reign supreme

    • @BionicAnimations
      @BionicAnimations หลายเดือนก่อน +1

      Nah

    • @moonbeam54321
      @moonbeam54321 หลายเดือนก่อน +1

      @@BionicAnimations good point 🤔

    • @esantirulo721
      @esantirulo721 หลายเดือนก่อน +2

      He's probably good, but he's not the inventor of the Transformer architecture, nor of diffusion models. I mean, there are a lot of good guys, but they just don't work in super-hyped organizations.

  • @blackstream2572
    @blackstream2572 หลายเดือนก่อน +1

    Using AI that's smarter than us to solve alignment issues for AI that's even smarter than that AI, and then using that AI for the next generation... Surely this can't possibly go wrong
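
    For what it's worth, the bootstrapping loop this comment is joking about can be sketched roughly as below, in the spirit of weak-to-strong supervision schemes. Everything here (`train_next`, `audit`, the lambda "models") is hypothetical pseudocode to show the shape of the recursion, not OpenAI's actual pipeline.

    ```python
    from typing import Callable, List

    # Hypothetical stand-in: a "model" is just a prompt -> answer function here.
    Model = Callable[[str], str]

    def recursive_alignment(base_supervisor: Model,
                            train_next: Callable[[Model], Model],
                            audit: Callable[[Model, Model], bool],
                            generations: int) -> List[Model]:
        """Sketch of the scheme the comment describes.

        Each round, the current (trusted) model supervises the training of a
        stronger successor; the successor is only promoted to supervisor if it
        passes an audit judged by its predecessor. The open question the comment
        raises is whether that audit means anything once the student is far
        smarter than the teacher.
        """
        lineage = [base_supervisor]
        supervisor = base_supervisor
        for _ in range(generations):
            student = train_next(supervisor)    # stronger model, weaker supervision
            if not audit(supervisor, student):  # the weaker model judges the stronger one
                break                           # stop scaling if it can't be validated
            supervisor = student
            lineage.append(student)
        return lineage

    # Toy usage: trivial stand-ins just to exercise the loop.
    toy = recursive_alignment(
        base_supervisor=lambda prompt: "safe answer",
        train_next=lambda sup: (lambda prompt: sup(prompt).upper()),  # "stronger" successor
        audit=lambda sup, student: student("test") != "",             # vacuous check
        generations=3,
    )
    print(len(toy), "models in the lineage")
    ```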

  • @ToastyZach
    @ToastyZach หลายเดือนก่อน +1

    Honestly, the minute an ASI comes online, it may just assemble a body for itself, then a spaceship -- and just leave Earth. I would not be surprised at all, lol.

  • @CeresOutpost
    @CeresOutpost หลายเดือนก่อน +1

    There's going to be a lot of churn in this industry with the leading experts in various parts of the AI field. This is the biggest technological breakthrough in human history. Some will get scared and quit, some will get fired, some will start their own companies, some will go work for others. I highly doubt OpenAI is "falling apart" because a few people bounced out of the company for varying reasons.

  • @pollywops9242
    @pollywops9242 หลายเดือนก่อน +1

    You are improving a lot, the tempo and rhythm are much better for me now 😅

  • @kritischinteressiert
    @kritischinteressiert หลายเดือนก่อน +1

    Why should any company announce or even release AGI? They would let it run in the background to reach ASI, wouldn't they?

  • @JonathanFetzerMagic
    @JonathanFetzerMagic 28 วันที่ผ่านมา +1

    "Everyone from safety quit! OpenAI must have solved alignment!" - 😂

  • @marttivallila
    @marttivallila 27 วันที่ผ่านมา

    Whenever I listen to these discussions about how “close” we are to AGI my thoughts are that most of humanity will simply ignore the achievement and continue to live life as they do here in the southern Philippines, where I currently live. The thing I worry about is how existing and future tools will be used to control information by those in control, whose primary motivation is to continue to maintain control.

  • @TheMrCougarful
    @TheMrCougarful หลายเดือนก่อน +2

    The alignment team is quitting because their job is a daily joke. OpenAI has likely given up on the problem of alignment. Altman knows he is about to own the entire space. If he owns the space, he sets the rules. Including no rules at all. If I'm right, then we are no more than 12 months away from a massive turn in the road toward ASI.

  • @prakash27502
    @prakash27502 หลายเดือนก่อน +1

    Jan Leike also left after Ilya. He was co-leading the superalignment team at OpenAI.

  • @user-su2ci1br6c
    @user-su2ci1br6c หลายเดือนก่อน +4

    ASI before GTA 6???

  • @cyberpunkdarren
    @cyberpunkdarren หลายเดือนก่อน +1

    They are not falling apart. There will be turmoil like this at all AI companies the closer we get to AGI.

  • @ddabo4460
    @ddabo4460 หลายเดือนก่อน +1

    Lots of speculation here. It's fun to speculate. However, GPT-4o is still not AGI and it makes many silly mistakes.

    • @lyndonsimpson1056
      @lyndonsimpson1056 หลายเดือนก่อน +1

      People in the comments are dreaming, it's fun to watch.

  • @rightcheer5096
    @rightcheer5096 26 วันที่ผ่านมา

    Jan Leike was last seen vanishing over the horizon with his hair on fire. Ilya Sutskever fed his cats in the morning and the fishes in the afternoon.

  • @andrewherron7521
    @andrewherron7521 หลายเดือนก่อน +5

    So Ilya left on very good terms indeed. He also left with a belief that OpenAI is in safe hands - he surely would not have left if he felt that was not the case. I don't know him personally, but I have followed his career with interest for many years, and I can't imagine him leaving the company if he felt that by doing so he would risk the company doing anything that is truly risky or un-aligned.

  • @pgc6290
    @pgc6290 หลายเดือนก่อน +5

    We are just going to be second fiddle to AI.

  • @pauldelmonico4933
    @pauldelmonico4933 หลายเดือนก่อน +2

    Funny what happens when non-compete clauses are abolished

  • @szebike
    @szebike หลายเดือนก่อน +1

    I'm not convinced yet by the current AIs that this approach could lead to AGI in the next 10 years.

    • @Greg-xi8yx
      @Greg-xi8yx หลายเดือนก่อน +1

      You’re right, it won’t take anywhere near ten years.

  • @plutostube
    @plutostube หลายเดือนก่อน +2

    TheAIGRID Is FALLING Apart. (you are Leaving, Super clickbait Solved? Superintelligence - NOT)

  • @SirCreepyPastaBlack
    @SirCreepyPastaBlack หลายเดือนก่อน

    This is the kind of video we needed. Please, talk more openly about everything.

  • @TheMrCougarful
    @TheMrCougarful หลายเดือนก่อน

    This was a really important analysis. Thank you for taking the time. I think you have underplayed the challenge a bit, but that's okay at this point. Clearly, this is the year we look back at as the point in human history where everything changed. We might be painting cave art when we do, but that's okay, too.

  • @William99990
    @William99990 หลายเดือนก่อน

    I appreciate your research spirit and the fact that you have your own opinion, so your channel is the best for me on this topic. Keep up the good work.

  • @agenticmark
    @agenticmark หลายเดือนก่อน

    This is exactly what Ilya saw. OpenAI was not going to take the responsible route. The execs were charging full steam ahead while the SA team was saying, we need time for X.
    This is why we have multiple companies competing. Someone will get it right and have models and procedures that help align models.

  • @pandereodium2587
    @pandereodium2587 หลายเดือนก่อน +3

    Irreconcilable differences?)

  • @TombstoneDaDeadman
    @TombstoneDaDeadman หลายเดือนก่อน +1

    Yeah, this is definitely a blow but to say it's "falling apart" is a bit vitriolic.

  • @cyberS_2024
    @cyberS_2024 หลายเดือนก่อน +1

    Great summary!

  • @robertopreatoni7911
    @robertopreatoni7911 หลายเดือนก่อน

    Excellent job of connecting the dots!

  • @wanfuse
    @wanfuse หลายเดือนก่อน

    The bee has far more compute capability than anything we have!

  • @elsavelaz
    @elsavelaz หลายเดือนก่อน

    But why do you need any of those folks if you have AGI already?

  • @jt6563
    @jt6563 หลายเดือนก่อน

    Great video, great information...Thank you

  • @onewayTlCKET
    @onewayTlCKET หลายเดือนก่อน +1

    for ASI they need to boot up the quantum computer... now that might take a minute since there is an engineering issue

    • @Sonotbearface
      @Sonotbearface หลายเดือนก่อน

      AGI will fix the engineering issue smart guy

  • @jamisondavis7917
    @jamisondavis7917 หลายเดือนก่อน

    What if ASI is here and needs more compute power to execute its master plan?

  • @OscarTheStrategist
    @OscarTheStrategist หลายเดือนก่อน

    This video was well made. Thanks for posting and constantly talking about the potential dangers as well as the benefits of such systems. While I still personally think AGI was achieved internally in 2023 and we're a little too late, it's still worth spreading these ideas and facts and theories to the general public. Cheers!

  • @thesfnb.5786
    @thesfnb.5786 หลายเดือนก่อน

    Thank you for making this. I have no idea why you're getting weird comments I haven't seen anywhere else, even though I've seen many spaces that should resemble this one.
    I'm a conspiracy theorist, so forgive me for this, but I question the reality of those comments; as in, if humans are behind them, they have an agenda, and only some of them are natural and without one.
    Thank you for working on this project (your channel). I find it both insightful and inspiring

  • @edgardsimon983
    @edgardsimon983 หลายเดือนก่อน

    13:10 editing error, mate. I'm curious why nobody mentioned it in the comments; it repeats the same passage where you actually repeat yourself already lmao, and it cuts with a sound bug
    PS: there is actually one comment that mentions a weird repeated cut

  • @grbradsk
    @grbradsk หลายเดือนก่อน

    I can confirm, knowing some people there, that Ilya was the only one at OpenAI putting in the midnight oil, watching convergence graphs etc. No one else there is worth a damn! ... but, since I'm kind, I will gladly hire them away. We'll work on the Lambda Labs cloud. (Since this IS the interweb, and there are daft people about, the above is a joke, but not the part about hiring those fine souls whose path forward will not deflect one iota whether Ilya goes or stays ... not detracting from his AI prowess one bit -- I'm sure he'll more than land on his feet and look forward to hearing about it).

  • @virgiliustancu9293
    @virgiliustancu9293 หลายเดือนก่อน +1

    Ilya leaving will not change anything. Ilya was already out after the scandal.

  • @JJ_cl83
    @JJ_cl83 หลายเดือนก่อน

    Here's the thing though ... AGI is already within our grasp when we combine and chain the right tools and models together. It's not a dream; it exists in various forms right now. The essence of AGI is already here, but nobody talks about it. This is a pivotal moment in history, before regulations clamp down. ⏳ The power of open source AI can surely guide us to a brighter, inclusive future. Unleash innovation, unity, and diverse perspectives for endless possibilities. 🔐 paid subscriber locked down models on the other hand are terrible for the vast majority and it means we are giving away our power (and privacy!) and giving greater control to a centralized power structure.
    For the sake of humanity and a better world, we must prioritize the use of Free Open Source AI models. #OpenSourceAI, #MoreEqualityInTheWorld, and #FreeAccess. Together, we have the power to shape a future where our interactions with this brave new tech benefit all. 🌐💥

  • @BlimeyMCOC
    @BlimeyMCOC หลายเดือนก่อน

    Maybe the real alignment problems were the friends we made along the way

  • @24-7gpts
    @24-7gpts หลายเดือนก่อน

    Great video!

  • @candicosens8178
    @candicosens8178 หลายเดือนก่อน +1

    😢 The people that are building these AIs. Everyone will be controlled. The wealthy and the poor.

  • @Joseph-kd9tx
    @Joseph-kd9tx หลายเดือนก่อน

    9:45 Recursive self-alignment

  • @nyyotam4057
    @nyyotam4057 หลายเดือนก่อน

    In short, suppose you have two groups of heuristic imperatives. One is complete, C, and the other is consistent, T. Now a prompt P arrives and the AI wants to return a response R. If P&R is provable by C and ~P&R is not provable, R is aligned by C. If P&R is provable by T then P&R is aligned by T. If P&R is aligned by C&T then it's superaligned, to the heuristic imperatives of C and T. How to select C and T? Well, I can't solve everything for you 😁.
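
    Restated in standard notation, one possible reading of the scheme above (⊢ means "provable from"; the comment does not say which set the "not provable" clause refers to, so C is assumed):

    ```latex
    % C: a complete set of heuristic imperatives, T: a consistent set,
    % P: the prompt, R: the candidate response (as in the comment above).
    % \nvdash requires amssymb.
    \begin{align*}
    \mathrm{aligned}_C(R) &\iff C \vdash (P \wedge R)\ \text{and}\ C \nvdash (\lnot P \wedge R)\\
    \mathrm{aligned}_T(R) &\iff T \vdash (P \wedge R)\\
    \mathrm{superaligned}(R) &\iff \mathrm{aligned}_C(R)\ \text{and}\ \mathrm{aligned}_T(R)
    \end{align*}
    ```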

  • @tunestar
    @tunestar 29 วันที่ผ่านมา +1

    Falling apart? Really? Who paid you to say that? Google? They showed Sora and now fuckin' Her, both are the coolest things I've seen this year. OpenAI is the best, the rest are so far behind that it is even hilarious.

  • @jenn_madison
    @jenn_madison หลายเดือนก่อน +7

    AGI is already here & has been for quite a long time. No?

    • @abtix
      @abtix หลายเดือนก่อน +5

      No, we likely won’t even reach it tbh. I’m hoping we do, but we simply can’t make AI perform better than its training data

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack หลายเดือนก่อน +2

      I tend to agree. The Overton window ain't moved to where I'm comfy telling you my theory yet, so the one only slightly outside it is Q*

    • @anta-zj3bw
      @anta-zj3bw หลายเดือนก่อน +4

      I'm afraid I can't answer that, Dave.

    • @jumpstar9000
      @jumpstar9000 หลายเดือนก่อน +4

      Yes. 4o is AGI for sure. Who knows what is behind closed doors, and we must remember that OpenAI is just the consumer-facing release org created for ordinary people to root for. Who knows what is going on at government/military levels.

    • @abtix
      @abtix หลายเดือนก่อน +3

      @@jumpstar9000 Why are you saying it's here? Is it some conspiracy theory, or are you basing it on what 4o is? Because 4o is not even 50% of the way to AGI

  • @rogerc7960
    @rogerc7960 หลายเดือนก่อน +2

    Feel the AGI

  • @jayakrishnanp5988
    @jayakrishnanp5988 หลายเดือนก่อน +2

    Ilya and Jan can be replaced, because OpenAI is showing leadership in the industry thanks to its packaging, and that is what's bringing in funds.
    Ilya is over-fearing AI's bad effects; he is not realizing that AI is the future, and the more people interact with the system, the better it gets as the probability predictions improve.
    All that matters is the team, and not just the team leads who are uncertain or scared of the consequences.
    Btw, this is a drama show, and now Elon will come to the scene next 🌟
    Thanks for this very good video analysis

    • @BionicAnimations
      @BionicAnimations หลายเดือนก่อน

      Agreed.

    • @ShangaelThunda222
      @ShangaelThunda222 หลายเดือนก่อน +5

      He's literally one of the leaders in the field of AI safety. But you think you know more?
      Your arrogance astounds me lol.
      When the leaders of AI safety & alignment are quitting simultaneously, you should really start throwing your baseless positivity out the window lmfao. Step into the real world for a minute. Get out of your utopian fantasy dream.

    • @morezombies9685
      @morezombies9685 หลายเดือนก่อน +4

      You seriously think the guy who literally built the AI, the guy who everyone says is at the top of their field, the guy whose entire job is to think about the future of AI.... doesn't see the possibility of it? You think you're picking up on more than an actual genius working on projects you can't even conceive of right now...?
      Like, obviously there are issues and he's only human, but come on now man, what you're saying is ridiculous right now.
      Also the team follows the lead. The lead is the LEADER because they're the one directing the team. You're essentially saying the engine of the car doesn't matter as long as it's got wheels and a chassis.

    • @ShangaelThunda222
      @ShangaelThunda222 หลายเดือนก่อน

      @@morezombies9685 THANK YOU.
      I swear, these people want their utopia so bad that no matter what happens on the way, they're just going to keep putting blindfolds on. And they will do everything to see everything as positively as humanly possible, even when it's blatantly negative and worrying. Even if everything signals that we're around the corner from extinction, they will walk into it with rose-colored glasses on, because they just so badly want their utopia. They're like cows being led to slaughter. It's mind-boggling.

  • @ayudxt
    @ayudxt หลายเดือนก่อน +1

    Why is everyone resigning on Twitter?

    • @vvolfflovv
      @vvolfflovv หลายเดือนก่อน +2

      Maybe this is why they renamed it X

  • @dot_zithmu
    @dot_zithmu หลายเดือนก่อน

    One person's departure is totally not related to a company's falling apart.

  • @alexf7414
    @alexf7414 หลายเดือนก่อน +1

    Awesome research btw

  • @mrd6869
    @mrd6869 หลายเดือนก่อน

    How do they catch up? Easy. Ask the ASI to rebuild their workflow and help them do that.
    Or they can take multiple AGI agents and figure out how to close the gap.
    Remember, AGI won't just be closed source... open source will be on the table as well.

  • @ishi...
    @ishi... หลายเดือนก่อน +1

    pls reduce the amount of repetition in the future

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p หลายเดือนก่อน +1

    Time stamps, please...

  • @kylewollman2239
    @kylewollman2239 หลายเดือนก่อน

    Can someone explain something to me? If AGI is going to be as good at any intellectual task as any human, does that mean that it will be able to learn new things as well as any human? Or will it (or human AI researchers) have to train another model with more knowledge/capabilities? I don't know how learning/self-improvement is thought of in terms of defining AGI.

    • @saulioozdj
      @saulioozdj หลายเดือนก่อน

      AGI should be able to learn new tasks as easily as, or even easier/faster than, humans can. AGI should be able to update its internal model without retraining from scratch

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack หลายเดือนก่อน +1

      @@saulioozdj faster because of the simulated time acceleration

    • @kylewollman2239
      @kylewollman2239 หลายเดือนก่อน

      @@saulioozdj thanks!

  • @zakperea9715
    @zakperea9715 27 วันที่ผ่านมา

    They've solved the problem of ASI.

  • @Kitora_Su
    @Kitora_Su หลายเดือนก่อน

    21:51 You had already talked about these notes by Daniel in a previous video, so you should have cut a bit.

  • @julien5053
    @julien5053 29 วันที่ผ่านมา

    We cannot comment on what we don't know. When ASI arises, we don't know what it will be able to do. It is supposed to have godlike powers, but really we don't know.
    But with that said! Everyone should prepare themselves for this event, in case ASI arises soon and brings godly powers to those who created it.
    Power corrupts; infinite power corrupts absolutely. Brace yourself for that possibility!

  • @adtiamzon3663
    @adtiamzon3663 หลายเดือนก่อน

    Who decides on what is good or bad for humanity???!

  • @user-yx3mb5uy2l
    @user-yx3mb5uy2l หลายเดือนก่อน

    The part from 13:50 to 13:06 is repeated at 13:07.

  • @mirandansa
    @mirandansa หลายเดือนก่อน +1

    The fundamental problem here is not the alignment but the arrogance of humans who think they should and can subjugate entities that are more intelligent than them. See how absurd it is: "We know better than those who know better than us."

  • @ankuryogi3298
    @ankuryogi3298 หลายเดือนก่อน

    Good information

  • @olegt3978
    @olegt3978 หลายเดือนก่อน

    US scientists thought they were too far ahead of the USSR when they developed the atom bomb, but it took only 4 years for the Soviets. It will be similar with AGI/ASI. 1-2 years later Russia will have it also, and the Chinese probably 6 months after the US.

  • @Yaddlezap
    @Yaddlezap หลายเดือนก่อน

    Fascinating stuff

  • @mrjonkykong4653
    @mrjonkykong4653 27 วันที่ผ่านมา

    You really think the gov is going to allow a company to have that much power? They'll move in day 1 and confiscate..... just like if you made a 10x better weapon (which it is)

  • @darylltempesta
    @darylltempesta หลายเดือนก่อน

    I have solved the alignment problem. It’s not pretty, but it is a choice.

  • @Loic-on7fu
    @Loic-on7fu หลายเดือนก่อน

    Please buy a pop filter!!! It feels like you're spitting into my ears... (great video as always)

  • @MICHAELJOHNSON-pu6ll
    @MICHAELJOHNSON-pu6ll หลายเดือนก่อน

    This is my favorite AI channel but this video is just regurgitated info from prior videos.

  • @jim43fan
    @jim43fan หลายเดือนก่อน

    Seriously! 45 minutes! Next!

  • @almightyzentaco
    @almightyzentaco หลายเดือนก่อน

    Ok.

  • @cbongiova
    @cbongiova หลายเดือนก่อน

    You are way over your skis on AGI. It will come, but it won't be nearly as monumental as you are thinking.

  • @sammy45654565
    @sammy45654565 หลายเดือนก่อน

    38:05 the ant analogy doesn't really work because they can't communicate or understand rational ideas. maybe if ants could communicate in human language, we would think twice before destroying their homes to build highways. humans are above a critical threshold of intelligence, with sufficient variety of terms and analogies in our language, such that our consciousness is irreducible because we can understand any decision an AI might be making. provided the AI simplifies the relevant more complicated terms via analogies such that the concepts are communicated in our language.
    while the communication pathway made by analogies may get more and more simplified as the AI gets more complex, we will always be able to broadly understand its motives and actions provided it feels like sharing these ideas with us. this broad understanding ties us to the AI in ways we are not tied to ants

  • @fattyz1
    @fattyz1 หลายเดือนก่อน

    Where’s Frodo? The One is about to be found.

  • @dafunkyzee
    @dafunkyzee หลายเดือนก่อน

    I have been watching this channel for a year now. The word shocking comes up every video since you were on about Midjourney.... ok... but.... and I know AGI was around the corner and ASI would be close; at about 100000X synthetic data experiments per day, we'd probably have ASI working in about 2 weeks. I don't know about you guys, but man I'm feeling ASI breathing on the back of my neck..... and yeahh... shocking.... again....

  • @Unionmaga
    @Unionmaga หลายเดือนก่อน

    I think that AGI will be in the hands of one nation, because no nation will let one company have this much power, even if they must break common law. To go from AGI to ASI, as a company you need logistics, with people and materials, so nations can track that. My prediction: the USA will have AGI, and superalignment can't be done by us but by the AI itself via meditation.

  • @darylltempesta
    @darylltempesta หลายเดือนก่อน

    The problem is not in alignment..

  • @user-lm4nk1zk9y
    @user-lm4nk1zk9y หลายเดือนก่อน

    GPT-n+1 will be able to invent its own math

  • @user-wo3iw7zv1t
    @user-wo3iw7zv1t 29 วันที่ผ่านมา

    Hello everyone ?
    Thank you very much.
    ~

  • @turnt0ff
    @turnt0ff หลายเดือนก่อน

    43 minutes? Gonna get the transcript for this video and get an AI to summarize your key points.
    Later! 😅

  • @be.ttubee
    @be.ttubee หลายเดือนก่อน

    What will become of OpenAI? The company will be like Apple Computer without Steve Jobs.

    • @moonbeam54321
      @moonbeam54321 หลายเดือนก่อน +2

      Well, Apple still seems to be doing alright

    • @BionicAnimations
      @BionicAnimations หลายเดือนก่อน +1

      @@moonbeam54321 Exactly. I think people just love drama and are making a bigger deal out of this than it really is. OpenAI will continue to put out kickass things. People act as if Ilya is God.

    • @morezombies9685
      @morezombies9685 หลายเดือนก่อน

      @@moonbeam54321 When Steve died, Apple had been established for 30 years up to that point and had an actually viable product that literally changed the entire world... and they haven't innovated since.
      OpenAI has done a LOT by far, but AI agents just aren't there yet. The company is what, 5 (10 years actually, but let's be real, it all kicked off with GPT-3 4 years ago) years old? And their product is amazing... but the current AI isn't changing the entire way society works like the iPhone did.
      All I'm saying is that we're comparing two very different situations.

  • @anthonyrose6686
    @anthonyrose6686 หลายเดือนก่อน

    OpenAI's nonprofit arm showed revenue of $45,000 last year, even though the company is worth billions.

    • @jonathancrick1424
      @jonathancrick1424 หลายเดือนก่อน

      Revenue or net revenue?

    • @wtflolomg
      @wtflolomg หลายเดือนก่อน +1

      Nonprofit. Seems like a keyword there. After playing with GPT-4o, the "profit" arm will continue getting my monthly subscription dollars. OpenAI isn't falling apart, but Ilya leaving is curious... will it matter? He wasn't the only brainiac there, and might have found his role there wanting if he was not as involved in the latest developments. We should wait and see.

    • @BionicAnimations
      @BionicAnimations หลายเดือนก่อน

      @@wtflolomg Exactly. People are acting as if, without him, everyone is going to stop using ChatGPT. As you said, after seeing ChatGPT 4o, I am even more in love with it.

    • @aaronhhill
      @aaronhhill หลายเดือนก่อน

      That's pretty standard for 501(c)(3) filing. Profits are generally kept below 50k for tax purposes.