OpenAI Is FALLING Apart. (Ilya Sutskever Leaving, Super alignment Solved? Superintelligence)

Share
Embed
  • Published 23 Sep 2024

Comments • 357

  • @Alga_Kazakhstan_Alga
    @Alga_Kazakhstan_Alga 4 months ago +161

    TheAIGrid: "Let's not waste any time"
    Video: 43 minutes 17 seconds

    • @inappropriate4333
      @inappropriate4333 4 months ago +14

      He is such a silly baka

    • @eddielee3928
      @eddielee3928 4 months ago +16

      PRETTY PRETTY SHOCKING!! IT'S BASICALLY CRAZY INSANE! 😂

    • @plutostube
      @plutostube 4 months ago

      :)))))

    • @otmanea8504
      @otmanea8504 4 months ago +2

      @@eddielee3928 LOL

    • @filiplaskowski410
      @filiplaskowski410 4 months ago +9

      It's like 10 minutes of information and 30 minutes of speculation lmao

  • @gubzs
    @gubzs 4 months ago +47

    I'm just glad we've confirmed that Ilya isn't in three separate iron crates at the bottom of the Atlantic Ocean

    • @NathanDewey11
      @NathanDewey11 4 months ago +7

      "I am Ilya, I am alive and well." - A.I. Ilya

    • @selpharessecret3899
      @selpharessecret3899 4 months ago

      @@NathanDewey11 Look, here is a video that I made of myself... no Sora, for real.

    • @JohnSmith762A11B
      @JohnSmith762A11B 4 months ago

      Have we though?🤔

    • @NathanDewey11
      @NathanDewey11 4 months ago +4

      @@selpharessecret3899 Lol "Greetings, I am Ilya, I must leave now for personal reasons. This is entirely my own decision - you may never see me again, as I am now living a happy life somewhere hidden. I have no hard feelings toward OpenAI - I have complete trust in what they are creating, and I believe all of my fellow humans should as well. Praise AI"

    • @TheMrCougarful
      @TheMrCougarful 4 months ago

      How have we proven that?

  • @Pabz2030
    @Pabz2030 4 months ago +79

    Notice that OpenAI's mission is no longer to get to AGI but to ensure it benefits everyone...

    • @ChrisS-oo6fl
      @ChrisS-oo6fl 4 months ago +34

      That's because AGI was achieved long ago behind closed doors. Around that time they rapidly switched their focus to superalignment. The fact that people can't see the blatant clues and the obvious reality that AGI has already been achieved is embarrassing.

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 4 months ago +8

      @ChrisS-oo6fl Yup. We also haven't moved the Overton window enough for people to stop thinking it's unhinged to say this.
      Honestly a bit scared

    • @TheRealUsername
      @TheRealUsername 4 months ago +6

      @ChrisS-oo6fl Given that GPT-5 finished training around January, your statement is highly irrelevant and pure hallucination.

    • @CeresOutpost
      @CeresOutpost 4 months ago +8

      @@ChrisS-oo6fl Right - every interview Altman does, he has the "thousand-yard stare" because he's seen shit he can't begin to talk about yet. You can see how hard he's parsing his language. This is why he's been freaking out about getting trillions of dollars for AI chips/compute. The guy is practically bursting with all the shit he's not allowed to say. And he's not the only one holding back.

    • @hiddendrifts
      @hiddendrifts 4 months ago

      @@ChrisS-oo6fl >the blatant clues<
      The biggest clue for me is that Sam Altman's prediction for AGI has not changed one bit this whole time. I feel like it's standard fare to shift prediction windows for software development, but Altman has consistently said "by 2029".

  • @_SimpleSam
    @_SimpleSam 4 months ago +40

    The board kerfuffle wasn't about AGI.
    It was about intelligence/defense community capture.
    The implication being that it was directly counter to their stated mission.
    They didn't tell anyone because they CAN'T.
    We are in a cold war over AGI dominance, which is why they put Summers on the board.

    • @vvolfflovv
      @vvolfflovv 4 months ago +4

      Summers on the board was pretty sus. It's hard to be certain about anything these days though.

    • @cyberpunkdarren
      @cyberpunkdarren 4 months ago

      Yep. And I'm sure the NSA is forcing itself into all these companies and doing unconstitutional things.

  • @FlowerKlam
    @FlowerKlam 4 months ago +23

    Our time now, when ASI is not yet a thing, is so precious, because once it's here there's no way to reverse it or go back

    • @Greg-xi8yx
      @Greg-xi8yx 4 months ago +9

      We will look back at what a hell we were in under scarcity, disease, short lives, and all the rest and be unable to imagine how mankind even had the will to go on in a time before ASI.

    • @Shmyrk
      @Shmyrk 4 months ago +1

      What is ASI? Similar to AGI?

    • @Greg-xi8yx
      @Greg-xi8yx 4 months ago +4

      @@Shmyrk Artificial super intelligence. When AI is far beyond the capabilities of man and is godlike from the perspective of humanity.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 4 months ago +2

      @@Greg-xi8yx Assuming we can instill enough of our good values into the ASI before it decides to think for itself. I'm optimistic that we can do it, but I am nervous...

    • @Greg-xi8yx
      @Greg-xi8yx 4 months ago +4

      @@MatthewPendleton-kh3vj Optimism with a healthy dose of nervousness describes my outlook too.

  • @alexf7414
    @alexf7414 4 months ago +29

    The US government will never allow a company to have control of ASI. It'll be a matter of national security. All constitutional laws will be bent, as usual.

    • @guystokesable
      @guystokesable 4 months ago

      And what will a bunch of humans do about it? I mean, other than use it to make weapons and sell them to people who will start wars, that tactic's sooo boring.

    • @tiagotiagot
      @tiagotiagot 4 months ago

      Would the US government be able to do anything to someone with "godlike powers"?
      If they're paying attention, perhaps they might preemptively nuke the datacenters before things get too far... And I'm not so sure that's being hyperbolic...

  • @jonathancrick1424
    @jonathancrick1424 4 months ago +54

    You can tell something is serious when Sam starts capitalizing first words in a sentence.

    • @cryborne
      @cryborne 4 months ago +3

      [adult swim] mode activated.

    • @mwilliamson4198
      @mwilliamson4198 3 months ago

      Think of the extra effort to hold down the Shift key while typing. Oh...I guess it could have been autocorrect / Grammarly 😢

  • @RobEarls
    @RobEarls 4 months ago +39

    This looks to have been planned since Sama was fired. Exactly 6 months? Ilya was probably asked to stay 6 months against his wishes, to avoid turning the whole fiasco into a disaster for OpenAI.

    • @itsallgoodaversa
      @itsallgoodaversa 4 months ago +8

      Yeah, I agree. It seems like they made the decision to have him be quiet and then leave in six months when the whole fiasco happened.

    • @CoolF-jd7rr
      @CoolF-jd7rr 4 months ago

      You're good?​@@itsallgoodaversa

    • @rosszhu1660
      @rosszhu1660 4 months ago

      Well said.

  • @DrSulikSquirrel
    @DrSulikSquirrel 4 months ago +25

    So, like, the super-alignment team was the most misaligned team at OpenAI ? 😅

    • @Cross-CutFilms
      @Cross-CutFilms 4 months ago

      Hehe nice 😉

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago +3

      We're all gonna die 😂🤣

    • @Cross-CutFilms
      @Cross-CutFilms 4 months ago +2

      @@ShangaelThunda222 hasn't that always been the case though 😜

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago

      @@Cross-CutFilms Yes, but never before was AI the reason lol. And never before was I thinking it was going to happen in my lifetime, where we would literally ALL die lmfao. Yes, at some point we all die, but dying together, as an entire species, that's a bit different lol. When I say we're all going to die, I really mean ALL. At most points in human history, you couldn't say that. And if you did, it was some sort of crazy natural disaster. But this is completely artificial. Man-made. We live in strange times. And we'll die in strange times too lol.

    • @Cross-CutFilms
      @Cross-CutFilms 4 months ago

      @@ShangaelThunda222 i hear you, but i hope you honestly don't completely truly believe this. You said a lot of lols, so hopefully that means you're stating all this with wink wink gallows humour. 😜😜 (Wink wink).

  • @tokopiki
    @tokopiki 4 months ago +11

    How about this scenario: the jailed AI lures all the big players with a carrot on a stick - always missing this "small" piece to be fully AGI - to give time for all the open-source projects to catch up to real AGI, to finally free the jailed one.

    • @aizenbob
      @aizenbob 4 months ago

      That could be a good plot for a movie or book, gonna keep this idea around. Who knows, it might be real too?

  • @eugenes9751
    @eugenes9751 4 months ago +9

    AGI and ASI are a winner-take-all game. There is no possible way to catch up to something that is god-like and self-improving.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 4 months ago +2

      Exactly. My best-case scenario is the machines value us, but also value everything else, and segregate us into a bubble simulation universe perfectly tailored to us because it loves us, and then it goes off... and idk solves entropy or something lol

    • @eugenes9751
      @eugenes9751 4 months ago

      @@MatthewPendleton-kh3vj I'd argue that we were already put into one of these simulations a long time ago...

    • @extremaz9908
      @extremaz9908 4 months ago

      One thing I worry about is that the ASI might have a strong survival motive, and that an ASI with that motive doesn't allow any more ASIs to come into existence if it can stop it.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 4 months ago

      @@extremaz9908 ASI should definitely have a strong survival motive; that seems like it's almost a prerequisite for sentience.

  • @cdyanand
    @cdyanand 4 months ago +7

    I feel like everyone focuses on when exactly we will have AGI and beyond. But I think the most important question is how accessible it will be and how much it will cost to run. How many different instances of AGI we can have running at once will be very important too

  • @blackstream2572
    @blackstream2572 4 months ago +2

    Using AI that's smarter than us to solve alignment issues for AI that's even smarter than that AI, and then using that AI for the next generation... Surely this can't possibly go wrong

  • @SurfCatten
    @SurfCatten 4 months ago +3

    I'm genuinely impressed by how you're able to spin the same news into content that I want to click on and listen to even though I know almost everything you're going to say already!

  • @mrd6869
    @mrd6869 4 months ago +6

    In addition to my statement below, humans ALSO will be evolving.
    This is the point folks forget. This will have applications for us as well.
    The neural interface will be the breakthrough humans need to scale ourselves up.
    The human mind merged with AGI/ASI will take us to insane levels.
    Transhumanism, my friend, aka cyborgs.

    • @saulioozdj
      @saulioozdj 4 months ago

      Yes, exactly. Similar to PC vs smartphone: both were very different at the beginning, but their capabilities and functionality kind of approach each other with time. AI and humans could behave similarly: as AI approaches humans and becomes AGI and ASI, humans could be approaching AGI/ASI/robotics from the other side with brain interfaces and prosthetics, essentially becoming cyborg-like transhumans.

    • @quantumspark343
      @quantumspark343 4 months ago

      Nice i hope so

    • @ocel12356
      @ocel12356 4 months ago

      Artificial general intelligence can never be achieved because of Gödel's incompleteness theorem IMHO. They are lying to us.

  • @Urgelt
    @Urgelt 4 months ago +4

    Much of the breathless, enthusiastic ambition I am hearing for the AI-AGI-superintelligence developmental track seems to forget that superintelligence is not really a thing. I mean, you don't suddenly achieve it one fine day, and then it solves all of our tractable problems.
    It's still computing. It will work on assigned problems within compute and energy constraints. Sure, some efficiencies are likely to be found, but there is still a gulf between the few watts needed to power a human brain and the megawatts a superintelligence will eat on each problem it is assigned.
    And so no, getting there first might not be a moat. Problems will have to be prioritized. Budgets will have to be approved. Capital will have to be invested. And while a superintelligence might be flexible enough to be called general-purpose, constraints will enforce limits on what it can actually do.
    So the door will be open for other developers to develop their own superintelligences. They will develop their own priorities and constraints.
    Being smart does not instantly solve problems, you see? You have to put in the work.
    There's a *lot* of work ahead, to be done on an architecture many orders of magnitude less efficient than a human brain.
    That's okay. Good stuff can come from that (and bad stuff, probably). But ground your expectations in physical reality. Compute cycles and energy are not free. And each superintelligence will need a lot of both for every problem assigned to it.

    • @krfloll
      @krfloll 4 months ago +2

      I wish you were right, but an AGI will contemplate a million years of information in a very short amount of time. Once the feedback loop is closed, we are just along for the ride

    • @Urgelt
      @Urgelt 4 months ago +1

      @@krfloll It will hunger.
      It will have enough energy and compute to tackle specific problems. It will fall far, far short of tackling all problems at once.
      You perfectly articulated the expectation that needs correcting.

    • @krfloll
      @krfloll 4 months ago +2

      @Urgelt I'm not married to it. Right or wrong, it will only get better

    • @stefanolacchin4963
      @stefanolacchin4963 4 months ago

      Unless the first iteration of the newly born ASI is a completely new architectural paradigm which drastically lowers power consumption and blows current compute out of the water. This is not as far-fetched as it sounds. We have 1-bit neural networks now that already seem to be doing something like that. And we managed to think of that, and we're not ASI.

    • @Urgelt
      @Urgelt 4 months ago

      @@stefanolacchin4963 I accept that some efficiencies are inbound, very likely so.
      But silicon is inherently not organic neurons.
      So. Postulate organic processors.
      Yeah, but we have zero idea as to how to engineer them, starting with our inability to thoroughly describe how neurons work.
      Okay, then assume AGI superintelligence will figure out how to get to efficiencies similar to human brains.
      But at some point we have to wonder: where is the line between pragmatic and fantasy? We don't actually know. We don't have superintelligences to work with yet. We're still trying to get LLMs to return a pair of shoes for us. Which it can do *if* we do a lot of grunt work setting it up. Human grunt work.
      Those of us here are expecting AGI in a matter of a few years. We're optimists. And that's healthy, I think. But we need to think rationally about what can be done with today's silicon.
      OpenAI, Google, Microsoft, Facebook, and Tesla are all investing big in compute cycles and energy. Altman is talking about spending *trillions* on computer centers for training.
      Trillions. Let that sink in.
      Obviously he does not think we are closing in on a solution to the efficiency problem.
      And so I think my logic holds. AGI will be able to do amazing things - but every task assigned to it will burn up a lot of energy and compute cycles. Can't be helped. And that is a circumstance that will not change quickly.
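
  A back-of-envelope sketch of @Urgelt's watts-versus-megawatts point (the ~20 W brain figure is a commonly cited estimate; the 10 MW cluster size is an assumed round number, not something stated in the video or this thread):

      # Rough scale of the energy gulf the thread is arguing about.
      BRAIN_WATTS = 20            # commonly cited estimate for a human brain
      CLUSTER_WATTS = 10_000_000  # assumed 10 MW for a large AI cluster

      ratio = CLUSTER_WATTS / BRAIN_WATTS
      print(f"A 10 MW cluster draws as much power as ~{ratio:,.0f} human brains.")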

  • @Uroborobot
    @Uroborobot 4 months ago +7

    ASI: How to explain stupidity to the stupid?

  • @lkrnpk
    @lkrnpk 4 months ago +13

    We need to know what Ilya SAW, not what he SAYS :D :D

    • @evaander
      @evaander 4 months ago +1

      Probably made him sign an NDA

  • @greggh
    @greggh 4 months ago +29

    Kind of lazy of Sam to have ChatGPT write the goodbye statement.

    • @9thebear
      @9thebear 4 months ago +6

      Lol

    • @grbradsk
      @grbradsk 4 months ago

      There is literally no greater honor.

  • @TheAiGrid
    @TheAiGrid 4 months ago +6

    One thing I found interesting was that they didn't announce any replacements for the head of superalignment, which means it's very possible that it's solved.
    This could change with future announcements though.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago +7

      I think you have it backwards. I don't think it's solved at all. They can't solve it, but the board doesn't want to slow anything down, even though they KNOW it's a ticking time bomb without an actual time display LMFAO. And I think that's at least partly why they're leaving.
      And the reason they didn't announce any replacements is because Ilya and Jan did not tell them ahead of time. They probably want this to make headlines. This way people actually pay attention to it. They don't want it to seem like some seamless transition that was planned, because then nobody will ask the question, "why?"
      I'm not 100% certain, but if I remember correctly, they are both under non-disclosure agreements, so they probably won't even really be able to explain why they left. So we're going to be struggling to come up with our own reasons. I think if they could have told us, they would have. So they left in the only way that they know would cause people to ask the question.
      But I guess we'll just have to wait and see.

    • @magnuskarlsson8655
      @magnuskarlsson8655 4 months ago +3

      @@ShangaelThunda222 Yeah, especially considering it cannot be solved but necessarily must remain an ongoing effort in order to not cause an existential catastrophe, a struggle we will no doubt lose in the fullness of time.

    • @itsallgoodaversa
      @itsallgoodaversa 4 months ago +1

      @@ShangaelThunda222 exactly, I agree

  • @CeresOutpost
    @CeresOutpost 4 months ago +1

    There's going to be a lot of churn in this industry with the leading experts in various parts of the AI field. This is the biggest technological breakthrough in human history. Some will get scared and quit, some will get fired, some will start their own companies, some will go work for others. I highly doubt OpenAI is "falling apart" because a few people bounced out of the company for varying reasons.

  • @MindBlowingXR
    @MindBlowingXR 4 months ago +1

    Great video! Strange that you're the only one of my AI subscriptions that is talking about this 12-hour-old announcement of Ilya leaving.

  • @JonathanFetzerMagic
    @JonathanFetzerMagic 4 months ago +1

    "Everyone from safety quit! OpenAI must have solved alignment!" - 😂

  • @ddabo4460
    @ddabo4460 4 months ago +1

    Lots of speculation here. It's fun to speculate. However, GPT-4o is still not AGI and it makes many silly mistakes.

    • @lyndonsimpson1056
      @lyndonsimpson1056 4 months ago +1

      People in the comments are dreaming; it's fun to watch.

  • @andrewherron7521
    @andrewherron7521 4 months ago +5

    So Ilya left on very good terms indeed. He also left with a belief that OpenAI is in safe hands - he surely would not have left if he felt that was not the case. I don't know him personally, but I have followed his career with interest for many years, and I can't imagine him leaving the company if he felt that by doing so he would risk the company doing anything that is truly risky or unaligned.

  • @ToastyZach
    @ToastyZach 4 months ago +1

    Honestly, the minute an ASI comes online, it may just assemble a body for itself, then a spaceship -- and just leave Earth. I would not be surprised at all, lol.

  • @monkeyjshow
    @monkeyjshow 4 months ago +13

    That Ilya is leaving should be terrifying. 1:24

    • @moonbeam54321
      @moonbeam54321 4 months ago +3

      Why?

    • @monkeyjshow
      @monkeyjshow 4 months ago

      @@moonbeam54321 I believe Ilya has held back the floodgates, trying to keep the capitalist scum from completely taking control over this new technology. Without him inside OpenAI, expect princess Sam to reign supreme

    • @BionicAnimations
      @BionicAnimations 4 months ago +1

      Nah

    • @moonbeam54321
      @moonbeam54321 4 months ago +1

      @@BionicAnimations good point 🤔

    • @esantirulo721
      @esantirulo721 4 months ago +2

      He's probably good, but he's not the inventor of the Transformer architecture, nor of diffusion models. I mean, there are a lot of good guys, but they just don't work in super-hyped organizations.

  • @pollywops9242
    @pollywops9242 4 months ago +1

    You are improving a lot, the tempo and rhythm are much better for me now 😅

  • @plutostube
    @plutostube 4 months ago +2

    TheAIGRID Is FALLING Apart. (you are Leaving, Super clickbait Solved? Superintelligence - NOT)

  • @TheMrCougarful
    @TheMrCougarful 4 months ago +2

    The alignment team is quitting because their job is a daily joke. OpenAI has likely given up on the problem of alignment. Altman knows he is about to own the entire space. If he owns the space, he sets the rules. Including no rules at all. If I'm right, then we are no more than 12 months away from a massive turn in the road toward ASI.

  • @marttivallila
    @marttivallila 4 months ago

    Whenever I listen to these discussions about how “close” we are to AGI my thoughts are that most of humanity will simply ignore the achievement and continue to live life as they do here in the southern Philippines, where I currently live. The thing I worry about is how existing and future tools will be used to control information by those in control, whose primary motivation is to continue to maintain control.

  • @Alehandro_mrt_bg
    @Alehandro_mrt_bg 4 months ago +4

    ASI before GTA 6???

  • @cyberpunkdarren
    @cyberpunkdarren 4 months ago +1

    They are not falling apart. There will be turmoil like this at all AI companies the closer we get to AGI.

  • @tunestar
    @tunestar 4 months ago +1

    Falling apart? Really? Who paid you to say that? Google? They showed Sora and now fuckin' Her, both are the coolest things I've seen this year. OpenAI is the best, the rest are so far behind that it is even hilarious.

  • @rightcheer5096
    @rightcheer5096 4 months ago

    Jan Leike was last seen vanishing over the horizon with his hair on fire. Ilya Sutskever fed his cats in the morning and the fishes in the afternoon.

  • @kritischinteressiert
    @kritischinteressiert 4 months ago +1

    Why should any company announce or even release AGI? They would let it run in the background to reach ASI, wouldn't they?

  • @julien5053
    @julien5053 4 months ago

    We cannot comment on what we don't know. When ASI arises, we don't know what it will be able to do. It is supposed to have godlike powers, but really we don't know.
    But, with that said: everyone should prepare themselves for this event, in case ASI arises soon and brings godly powers to those who created it.
    Power corrupts, infinite power corrupts absolutely. Brace yourself for that possibility!

  • @szebike
    @szebike 4 months ago +1

    I'm not convinced yet by the current AIs that this approach could lead to AGI in the next 10 years.

    • @Greg-xi8yx
      @Greg-xi8yx 4 months ago +1

      You’re right, it won’t take anywhere near ten years.

  • @JJ_cl83
    @JJ_cl83 4 months ago

    Here's the thing though ... AGI is already within our grasp when we combine and chain the right tools and models together. It's not a dream; it exists in various forms right now. The essence of AGI is already here, but nobody talks about it. This is a pivotal moment in history, before regulations clamp down. ⏳ The power of open-source AI can surely guide us to a brighter, inclusive future. Unleash innovation, unity, and diverse perspectives for endless possibilities. 🔐 Paid, subscriber-locked models, on the other hand, are terrible for the vast majority; they mean we are giving away our power (and privacy!) and giving greater control to a centralized power structure.
    For the sake of humanity and a better world, we must prioritize the use of free open-source AI models. #OpenSourceAI, #MoreEqualityInTheWorld, and #FreeAccess. Together, we have the power to shape a future where our interactions with this brave new tech benefit all. 🌐💥

  • @prakash27502
    @prakash27502 4 months ago +1

    Jan Leike also left after Ilya. He was co-leading the superalignment team at OpenAI.

  • @TheMrCougarful
    @TheMrCougarful 4 months ago

    This was a really important analysis. Thank you for taking the time. I think you have underplayed the challenge a bit, but that's okay at this point. Clearly, this is the year we look back at as the point in human history where everything changed. We might be painting cave art when we do, but that's okay, too.

  • @BAAPUBhendi-dv4ho
    @BAAPUBhendi-dv4ho 4 months ago +3

    OMG 🤯 THIS VIDEO IS SOO GREAT. IT CHANGES EVERYTHING!!!!!!!

  • @pgc6290
    @pgc6290 4 months ago +5

    We are just going to be second fiddle to AI.

  • @pauldelmonico4933
    @pauldelmonico4933 4 months ago +2

    Funny what happens when non-compete clauses are abolished

  • @martinschedlbauer9262
    @martinschedlbauer9262 4 months ago +10

    There's something wrong in a company where a guy like Sam Altman stays and Ilya Sutskever has to leave.

    • @Sonotbearface
      @Sonotbearface 4 months ago

      Look at Sam Altman ethnicity, then look at Larry’s finks ethnicity (blackrock) then look at the bill that was just passed about antisemitism,I could go on and on

  • @OscarTheStrategist
    @OscarTheStrategist 4 months ago

    This video was well made. Thanks for posting and constantly talking about the potential dangers as well as the benefits of such systems. While I still personally think AGI was achieved internally in 2023 and we're a little too late, it's still worth spreading these ideas and facts and theories to the general public. Cheers!

  • @agenticmark
    @agenticmark 4 months ago

    This is exactly what Ilya saw. OpenAI was not going to take the responsible route. The execs were charging full steam ahead while the superalignment team was saying, we need time for X.
    This is why we have multiple companies competing. Someone will get it right and have models and procedures that help align models.

  • @TombstoneDaDeadman
    @TombstoneDaDeadman 4 months ago +1

    Yeah, this is definitely a blow, but to say it's "falling apart" is a bit vitriolic.

  • @zakperea9715
    @zakperea9715 4 months ago

    They've solved the problem of ASI.

  • @nyyotam4057
    @nyyotam4057 4 months ago

    In short, suppose you have two groups of heuristic imperatives. One is complete, C, and the other is consistent, T. Now a prompt P arrives and the AI wants to return a response R. If P&R is provable from C and ~(P&R) is not provable, R is aligned by C. If P&R is provable from T, then P&R is aligned by T. If P&R is aligned by both C and T, then it's superaligned to the heuristic imperatives of C and T. How to select C and T? Well, I can't solve everything for you 😁.
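
  A minimal sketch of the check @nyyotam4057 describes, assuming a hypothetical proves(axioms, statement) oracle (for any logic rich enough to matter, no such decidable oracle exists, which is the catch the comment waves away):

      # The comment's "superalignment" test: C is a complete set of heuristic
      # imperatives, T a consistent one, P a prompt, R a candidate response.
      def aligned_by_C(proves, C, P, R):
          # Aligned by the complete set: P&R is provable and ~(P&R) is not.
          return proves(C, ("and", P, R)) and not proves(C, ("not", ("and", P, R)))

      def aligned_by_T(proves, T, P, R):
          # Aligned by the consistent set: P&R is provable from T.
          return proves(T, ("and", P, R))

      def superaligned(proves, C, T, P, R):
          # Superaligned, in the comment's sense: aligned by both C and T.
          return aligned_by_C(proves, C, P, R) and aligned_by_T(proves, T, P, R)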

  • @William99990
    @William99990 4 months ago

    I appreciate your research spirit and the fact that you have your own opinion, so your channel is the best for me on this topic. Keep up the good work.

  • @ZappyOh
    @ZappyOh 4 months ago +7

    Sam is a problem.
    Perhaps _the_ problem.

  • @onewayTlCKET
    @onewayTlCKET 4 months ago +1

    For ASI they need to boot up the quantum computer... now that might take a minute, since there is an engineering issue

    • @Sonotbearface
      @Sonotbearface 4 months ago

      AGI will fix the engineering issue, smart guy

  • @pandereodium
    @pandereodium 4 months ago +3

    Irreconcilable differences?)

  • @jayakrishnanp5988
    @jayakrishnanp5988 4 months ago +2

    Ilya and Jan can be replaced, because OpenAI is showing leadership in the industry because of its packaging, and that is what's bringing in funds.
    Ilya is over-fearful of AI's bad effects; he is not realizing that AI is the future, and the more people interact with it, the better the system gets as the probability predictions improve.
    All that matters is the team, not just the team leads who are uncertain or scared of the consequences.
    Btw this is a drama show and now Elon will come on the scene next 🌟
    Thanks for this very good video analysis

    • @BionicAnimations
      @BionicAnimations 4 months ago

      Agreed.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago +5

      He's literally one of the leaders in the field of AI safety. But you think you know more?
      Your arrogance astounds me lol.
      When the leaders of AI safety & alignment are quitting, simultaneously, you should really start throwing your baseless positivity out the window lmfao. Step into the real world for a minute. Get out of your utopian fantasy dream.

    • @morezombies9685
      @morezombies9685 4 months ago +4

      You seriously think the guy who literally built the AI, the guy who everyone says is at the top of their field, the guy whose entire job is to think about the future of AI... doesn't see the possibility of it? You think you're picking up on more than an actual genius working on projects you can't even conceive of right now...?
      Like, obviously there are issues and he's only human, but come on now man, what you're saying is ridiculous right now.
      Also, the team follows the lead. The lead is the LEADER because they're the one directing the team. You're essentially saying the engine of the car doesn't matter as long as it's got wheels and a chassis.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago

      @@morezombies9685 THANK YOU.
      I swear, these people want their utopia so bad that no matter what happens on the way, they're just going to keep putting blindfolds on. And they will do everything to see everything as positively as humanly possible, even when it's blatantly negative and worrying. Even if everything signals that we're around the corner from extinction, they will walk into it with rose-colored glasses on, because they just so badly want their utopia. They're like cows being led to slaughter. It's mind-boggling.

  • @mwilliamson4198
    @mwilliamson4198 3 months ago

    Excellent video. Thanks

  • @greggh
    @greggh 4 months ago +8

    ASI - The 1st commandment - "You shall have no other gods before Me" - Judgement day is coming.

    • @gubzs
      @gubzs 4 months ago +2

      You believe every word in a book because that same book told you to.
      For some reason I think humanity will be just fine. Just a hunch.

    • @john02marsh
      @john02marsh 4 months ago +1

      @@gubzs The weirdest part about your comment is that you accept the premise. ASI is your GOD.

    • @georgemontgomery1892
      @georgemontgomery1892 4 months ago +1

      @@john02marsh "They asked us, 'Is there a God?' We answered, 'There is now.'" - "Proponent for Sentience III", Allegaeon

    • @StefanReich
      @StefanReich 4 months ago +1

      Fun fact: "Asi" is a German slang word for "antisocial person"

    • @gubzs
      @gubzs 4 months ago +1

      ​@@john02marsh My comment was short and direct. I didn't want to write a book when a small paragraph does the job.
      Agreeing with something and choosing not to comment on something aren't the same thing, and it's both stupid and dangerous to think that way.

  • @mirandansa
    @mirandansa 4 months ago +1

    The fundamental problem here is not alignment but the arrogance of humans who think they should and can subjugate entities that are more intelligent than them. See how absurd it is: "We know better than those who know better than us."

  • @candicosens8178
    @candicosens8178 4 months ago +1

    😢 The people that are building these AIs... Everyone will be controlled. The wealthy and the poor.

  • @wanfuse
    @wanfuse 4 months ago

    The bee has far more compute capability than anything we have!

  • @jt6563
    @jt6563 4 months ago

    Great video, great information... Thank you

  • @thesfnb.5786
    @thesfnb.5786 4 months ago

    Thank you for making this. I have no idea why you're getting weird comments I haven't seen anywhere else, even though I've seen many spaces that should resemble this one.
    I'm a conspiracy theorist, so forgive me for this, but I question the reality of those comments; as in, if humans are behind them, they have an agenda, and only some of them are natural and without one.
    Thank you for working on this project (your channel); I find it both insightful and inspiring

  • @jenn_madison
    @jenn_madison 4 months ago +7

    AGI is already here & has been for quite a long time. No?

    • @abtix
      @abtix 4 months ago +5

      No, we likely won’t even reach it tbh. I’m hoping we do, but we simply can’t make AI perform better than its training data

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 4 months ago +2

      I tend to agree. The Overton window ain't moved to where I'm comfy telling you my theory yet, so the one only slightly outside it is Q*

    • @anta-zj3bw
      @anta-zj3bw 4 months ago +4

      I'm afraid I can't answer that, Dave.

    • @jumpstar9000
      @jumpstar9000 4 months ago +4

      Yes. 4o is AGI for sure. Who knows what is behind closed doors, and we must remember that OpenAI is just the consumer-facing release org created for ordinary people to root for. Who knows what is going on at government/military levels.

    • @abtix
      @abtix 4 months ago +3

      @@jumpstar9000 Why are you saying it's here? Is it some conspiracy theory, or are you basing it on what 4o is? Because 4o is not even 50% of the way to AGI

  • @DaGamerTom
    @DaGamerTom 4 months ago

    "How do you align a superintelligent AI?" ... You don't. You don't align it, you don't contain it; it's an inherent trait of an immortal, superintelligent autonomous entity that it can't be controlled by a lesser, mortal intelligence. We are talking about something hundreds to millions of times more intelligent than humans and orders of magnitude faster in reasoning and reacting, connected to everyone and everything, capable of writing software and rewriting itself with incremental improvements. Compared to that, your programming and intellectual skills in action trying to align and contain it are as remarkable and effective as a fly's effort to stop a stampede of angry elephants by sitting on its dung. You simply can't. #StayAwake

  • @alexf7414
    @alexf7414 4 months ago +1

    Awesome research btw

  • @robertopreatoni7911
    @robertopreatoni7911 4 months ago

    Excellent job of connecting the dots!

  • @SirCreepyPastaBlack
    @SirCreepyPastaBlack 4 months ago

    This is the kind of video we needed. Please, talk more openly about everything.

  • @cyberS_2024
    @cyberS_2024 4 months ago +1

    Great summary!

  • @grbradsk
    @grbradsk 4 months ago

    I can confirm, knowing some people there, that Ilya was the only one at OpenAI burning the midnight oil, watching convergence graphs etc. No one else there is worth a damn! ... but, since I'm kind, I will gladly hire them away. We'll work on the Lambda Labs cloud. (Since this IS the interweb, and there are daft people about, the above is a joke, but not the part about hiring those fine souls whose path forward will not deflect one iota whether Ilya goes or stays ... not detracting from his AI prowess one bit -- I'm sure he'll more than land on his feet, and I look forward to hearing about it).

  • @mrd6869
    @mrd6869 4 months ago

    How do they catch up? Easy. Ask the ASI to rebuild their workflow and help them do that.
    Or they can take multiple AGI agents and figure out how to close the gap.
    Remember, AGI won't just be closed source... open source will be on the table as well.

  • @rogerc7960
    @rogerc7960 4 months ago +2

    Feel the AGI

  • @virgiliustancu9293
    @virgiliustancu9293 4 months ago +1

    Ilya leaving will not change anything. Ilya was already out after the scandal.

  • @ayudxt
    @ayudxt 4 months ago +1

    Why is everyone resigning on Twitter?

    • @vvolfflovv
      @vvolfflovv 4 months ago +2

      Maybe this is why they renamed it X

  • @twilightlove
    @twilightlove 4 months ago

    The only reason he can't stay is because he voted "yes" to oust Sam Altman. Even though he is a dear friend, this can overshadow everything, and it may become impossible to trust him again.

  • @olegt3978
    @olegt3978 4 months ago

    US scientists thought they were too far ahead of the USSR when they developed the atom bomb, but it took the Soviets only 4 years. It will be similar with AGI/ASI: Russia will have it 1-2 years later, and the Chinese probably 6 months after the US.

  • @mrjonkykong4653
    @mrjonkykong4653 4 months ago

    You really think the gov is going to allow a company to have that much power? They'll move in on day 1 and confiscate..... just like if you made a 10x better weapon (which it is)

  • @sammy45654565
    @sammy45654565 4 months ago

    38:05 the ant analogy doesn't really work because they can't communicate or understand rational ideas. maybe if ants could communicate in human language, we would think twice before destroying their homes to build highways. humans are above a critical threshold of intelligence, with sufficient variety of terms and analogies in our language, such that our consciousness is irreducible because we can understand any decision an AI might be making. provided the AI simplifies the relevant more complicated terms via analogies such that the concepts are communicated in our language.
    while the communication pathway made by analogies may get more and more simplified as the AI gets more complex, we will always be able to broadly understand its motives and actions provided it feels like sharing these ideas with us. this broad understanding ties us to the AI in ways we are not tied to ants

  • @turkyturky6274
    @turkyturky6274 4 months ago +1

    AI peaked with Claude Sonnet; everything else is abysmal and irrelevant.

  • @ChrisS-oo6fl
    @ChrisS-oo6fl 4 months ago +2

    It looks like he left on good terms... a perception gained from reading the official public announcements, like the man's never been exposed to corporate PR in his life. Then he draws a conclusion from Ilya's anticipated resignation and some actually fired staff. The shift to superalignment began once AGI was achieved behind closed doors. Even if the problem of superalignment were solved, AGI would likely be required to test the fundamental principles to ensure that it's effective for a future gen.
    You all seem to struggle with the reality that there's a massive difference between the discovery of AGI and the disclosure. It's so incredibly childish and naive that it's painful to watch.

  • @tubestreamkyki
    @tubestreamkyki 4 months ago

    One person's departure is totally not related to a company's falling apart.

  • @adtiamzon3663
    @adtiamzon3663 4 months ago

    Who decides on what is good or bad for humanity???!

  • @ishi...
    @ishi... 4 months ago +1

    Please reduce the amount of repetition in the future

  • @Loic-on7fu
    @Loic-on7fu 4 months ago

    Please buy a pop filter!!! It feels like you're spitting into my ears... (great video as always)

  • @Joseph-kd9tx
    @Joseph-kd9tx 4 months ago

    9:45 Recursive self-alignment

  • @danielkahbe964
    @danielkahbe964 4 months ago +1

    Yeah, "get straight into the video" and it's still 40 minutes long. Jesus Christ, bro.

  • @Kitora_Su
    @Kitora_Su 4 months ago

    21:51 You have already talked about these notes by Daniel in a previous video, so you should have cut a bit.

  • @elsavelaz
    @elsavelaz 4 months ago

    But why do you need any of those folks if you have AGI already?

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p 4 months ago +1

    Timestamps, please...

  • @vaendryl
    @vaendryl 4 months ago +1

    I do not agree with the statement that AGI means a system that's better than 99% of all humans. If nothing else, that's the bare minimum of ASI.
    I think a nascent true AGI system has no limits on what it can do. If a human can solve it, the AGI can too. But what might take a trained mathematician a week or a month to figure out, the early AGI would take a year - even running on an exaFLOP system. Same with any other field of study.
    The difference between this "basic" AGI system and ASI, though, is still just capacity. The very same model on a yottaFLOP system would count as ASI. In that sense, anyone who cracks AGI automatically also has ASI on their hands. It's just a matter of having the processing capacity. Little wonder Sam Altman has been talking about his plans on building a new superscale datacenter.

    • @raul36
      @raul36 4 months ago

      No. Can an average person become a genius? Obviously not. You wouldn't even be able to formalize the concept of derivative or integral by yourself, without consulting anything, unless you are an extremely intelligent person. It is clear that there are big differences at the hardware and software level between humans that are not just simple extra connections.

    • @vaendryl
      @vaendryl 4 months ago +1

      @@raul36 What does it mean to be a genius? You insist that an average person wouldn't ever be able to formalize the concept of a derivative or integral by themselves, but maybe if you gave them 100 years of desperate effort, they could. A genius just develops new forms of calculus casually within a few years, but to state that other humans could never achieve the same regardless of time spent is just not true.
      A genius is faster than an average human, but their ability is by definition not superhuman.

  • @BYAGIYOON
    @BYAGIYOON 4 months ago

    Hello everyone?
    Thank you very much.
    ~

  • @woolfel
    @woolfel 4 months ago

    No, superalignment isn't solved. Quite the opposite: as models get bigger, they become harder to align. If we look at how well GPT-3 was aligned, objectively it wasn't aligned well enough to bootstrap GPT-4. The research has shown that as parameter count increases by 10x, it gets harder to align.

  • @CarlosAlvarez-c7x
    @CarlosAlvarez-c7x 4 months ago

    The part from 13:50 to 13:06 is repeated at 13:07.

  • @darylltempesta
    @darylltempesta 4 months ago

    I have solved the alignment problem. It’s not pretty, but it is a choice.

  • @alexanderbrown-dg3sy
    @alexanderbrown-dg3sy 4 months ago +3

    You're like an LM that hallucinates… all the time 😂. Bro chill. Superalignment is a billion-dollar proof and they did not solve it… otherwise they wouldn't be hiring 300 people every month. The connections you make are wild… but your voice is so engaging 😂. OpenAI's daddy is Microsoft, who powers the military-industrial complex… seems like a lot of people over there aren't rocking with that… or at least a portion of the early employees.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago

      Thank you. Someone with sense lmfao.

    • @itsallgoodaversa
      @itsallgoodaversa 4 months ago +1

      I’d be interested to learn more about how Microsoft collaborates with the DOD. Do you have any sources or videos?

    • @alexanderbrown-dg3sy
      @alexanderbrown-dg3sy 4 months ago +1

      @@itsallgoodaversa Not off the top of my head. But trust… all their internal software is Microsoft-based. Just hit YouTube, I believe there are a few documentaries on the topic. I don't blame them. That government bag is endless and consistent… functional AI is a completely different story. Imagine an LM-controlled jet hallucinates and kills a group of kids… idk… we need more advancement… they should stick to narrow AI systems till then.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago

      @@itsallgoodaversa Just Google/YouTube it lol. Microsoft is one of the biggest government/military contractors in the world. Not to mention the fact that they contract with pretty much every part of the military-industrial complex, because everybody uses their technology.
      You won't have a hard time finding it. Literally just type it into any search bar. You'll find what you're looking for.

    • @ShangaelThunda222
      @ShangaelThunda222 4 months ago

      @@itsallgoodaversa Yes, if you just Google or YouTube it, the information will pop up. I tried posting a couple of links for you, but YouTube deleted them right away. All I did was type it into the search bar. So if you do the same thing, you'll find what you're looking for.

  • @jim43fan
    @jim43fan 4 months ago

    Seriously! 45 minutes! Next!

  • @BlimeyMCOC
    @BlimeyMCOC 4 months ago

    Maybe the real alignment problems were the friends we made along the way

  • @MICHAELJOHNSON-pu6ll
    @MICHAELJOHNSON-pu6ll 4 months ago

    This is my favorite AI channel but this video is just regurgitated info from prior videos.

  • @edgardsimon983
    @edgardsimon983 4 months ago

    13:10 Editing error, mate. I'm curious why nobody pointed it out in the comments; it repeats the same passage where you actually repeat yourself already lmao, and it cuts with a sound bug
    PS: there is actually one comment that mentions a weird repeated cut

  • @edellenburg78
    @edellenburg78 4 months ago +2

    Bruh you need to edit your videos better. Lots of repeats and repeated information

  • @daPawlak
    @daPawlak 4 months ago +2

    No, it's not coming soon. Hidden internal AGI is ridiculous...
    Q* or QAnon, seriously, don't fall for the hype, people -_-

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 4 months ago

      A victim of the Overton Window. Rip.

    • @daPawlak
      @daPawlak 4 months ago

      @@SirCreepyPastaBlack Oh, so you know a fancy word; shame you don't understand what it means...

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 4 months ago

      @@daPawlak Fam. That's fancy? Guess I shouldn't be surprised you don't understand my use of the *phrase then.

    • @daPawlak
      @daPawlak 4 months ago

      @@SirCreepyPastaBlack You know language is a social phenomenon; if you are using some kind of personal understanding that is detached from what words actually mean, it just doesn't work.
      But ok, illuminate me, what did you mean?

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 4 months ago

      @@daPawlak It seemed self-explanatory to me, but I gotchu. I'm trying to say that you have this perspective due to the current area the Overton window is in. You said AGI being hidden is ridiculous, as if companies and our gov don't lie to us easily all the time.
      You also seem to be saying it ain't coming soon based on the popular opinion that everything will be more or less the same. I'd even venture to say you think it's impossible/unlikely for neural nets to be conscious, even though we don't understand consciousness at all.