What Will The World Look Like After AGI?

  • Published 20 Jun 2024
  • Check out my Linktree alternative / 'Link in Bio' for Bitcoiners: bitcoiner.bio
    Imagine we are witnessing a singularity event in our lifetime. We create something that is infinitely more intelligent than all of humanity combined. What would the world look like? Is this humanity's final invention? Are we causing our own extinction, or are we building utopia? We look at both cases and what's in between.
    Join my channel membership to support my work:
    / @tillmusshoff
    My profile: bitcoiner.bio/tillmusshoff
    Follow me on Twitter: / tillmusshoff
    My Lightning Address: ⚡️till@getalby.com
    My Discord server: / discord
    Instagram: / tillmusshoff
    My Camera: amzn.to/3YMo5wx
    My Lens: amzn.to/3IgBC8y
    My Microphone: amzn.to/3SdHdkC
    My Lighting: amzn.to/3ELnof5
    Further sources:
    Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment: • Ilya Sutskever (OpenAI...
    Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast 367: • Sam Altman: OpenAI CEO...
    Post-Singularity Predictions - How will our lives, corporations, and nations adapt to AI revolution?: • Post-Singularity Predi...

Comments • 387

  • @tillmusshoff
    @tillmusshoff  3 months ago

    I built a 'Link in Bio' - a Linktree alternative for Bitcoiners. Check it out here: bitcoiner.bio 🧡

  • @Vince_F
    @Vince_F 1 year ago +48

    “The view keeps getting better the closer you get to the edge of the cliff.”
    - Eliezer

    • @Smytjf11
      @Smytjf11 1 year ago +1

      Then let's not stop building wings, yeah?

    • @Vince_F
      @Vince_F 1 year ago +1

      @@Smytjf11
      That's the thing. The AI will just prevent any wing-building from even happening... as we get closer to the edge.

  • @JJ-si4qh
    @JJ-si4qh 1 year ago +51

    For the vast majority of us living meager lives of quiet desperation, a major change, whatever it is, is unlikely to be worse than what we already experience. ASI can't come fast enough.

    • @harrikangur
      @harrikangur 1 year ago

      Agreed. Even when presented with the possibility of the destruction of society... better than the current crap we're in.

    • @sanjaygaur4578
      @sanjaygaur4578 1 year ago +7

      Yes exactly. I thought I was the only person who was having this same thought.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 1 year ago +5

      J J, sorry, but you likely have no clue how bad life for humans can be if you think that.
      On the other hand, I do think we need to develop AI as quickly as we can while also working hard to align it with us as well as we can.

    • @bigglyguy8429
      @bigglyguy8429 1 year ago +3

      Such a poor suffering soul, with electricity, an internet connection, etc., etc. You're already living better than most kings in history.

    • @bigglyguy8429
      @bigglyguy8429 1 year ago +2

      @@sanjaygaur4578 Suffer harder, until you make some sense? You think 'most populated' is a problem? What would you like to do about that?

  • @HighStakesDanny
    @HighStakesDanny 1 year ago +12

    I have been waiting for the singularity for decades - almost here. ChatGPT is the infant.

    • @azhuransmx126
      @azhuransmx126 1 year ago +1

      I have been waiting for it since 2003, when I first listened to Ray Kurzweil.

  • @AndyRoidEU
    @AndyRoidEU 1 year ago +18

    It's no longer about whether we'll witness the singularity in our lifetime, but about whether it comes in 5 years or in 15.

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago +5

      Opposite for me; I might die in the next 5 years or less. Well, I guess I'll be joining the other billions of people that died before reaching ASI lol.

    • @psi_yutaka
      @psi_yutaka 1 year ago +2

      @@user-mp3eh1vb9w Fear not. 8 billion people will probably join you once they do reach ASI.

  • @marmeladenkuh6793
    @marmeladenkuh6793 1 year ago +2

    Great video with some interesting points I hadn't thought of yet. And the AOT reference was brilliant 😄

  • @Andrewdeitsch
    @Andrewdeitsch 1 year ago +15

    Your videos keep getting better and better!! Keep it up bro!

    • @tillmusshoff
      @tillmusshoff  1 year ago

      Appreciate it! ❤️

    • @ksitizahb3554
      @ksitizahb3554 1 year ago

      That's because he is an AI model training to make YouTube videos.

  • @thefirsttrillionaire2925
    @thefirsttrillionaire2925 1 year ago +25

    Having actually used ChatGPT to ask questions about starting a business, I can definitely say I'm more on the positive side of how things will unfold. I could be wrong, but I definitely hope I'm not. Maybe this will be the thing that ends extreme capitalism.

    • @Travelbythought
      @Travelbythought 1 year ago +1

      We don't have "extreme capitalism". Using the medical field as an example, what that would look like is countless people offering thousands of treatments for any condition, all competing for your dollars. Health care would be very cheap and very innovative, but also with many bad frauds. What we have instead is a government-sanctioned monopoly with crazy high prices. A return to real money like gold and silver would wring out the crazy excesses we see in our economy today.

  • @chrissscottt
    @chrissscottt 1 year ago +3

    I suspect AGI would be rather god-like. Reminds me of something Voltaire reputedly said over 300 years ago, "In the beginning god created mankind in his own image.... then mankind reciprocated." He meant something else obviously but it's ironic nonetheless.

    • @gomesedits
      @gomesedits 1 year ago

      After ai, before ai. Lol

  • @bruhager
    @bruhager 1 year ago +48

    The thing that bothers me about the extinction scenario is that it isn't necessarily a bad thing. The version of humankind we are living in right now might very well be the final version of humankind evolving by itself. Look at the advances not only in AI but in brain-machine interfaces, neural networks, biological computers, brain emulation, etc. AI might be able to teach us more about ourselves on a fundamental quantum level than we could achieve alone. We may very well begin to implement AI into ourselves and evolve alongside it as time goes by. At the very least, that is one way we go extinct without necessarily being wiped out completely. It might actually be better to use this type of technology to transform the human paradigm as time and understanding go by, rather than scapegoating it into our next enemy through fearful hatemongering.

    • @utkarshsingh7204
      @utkarshsingh7204 1 year ago +3

      Agree with you

    • @kf9926
      @kf9926 1 year ago

      Take yourself, you don’t speak for all of us wacko

    • @abcdef8915
      @abcdef8915 1 year ago

      There will still be wars because resources will still be limited

    • @michaelspence2508
      @michaelspence2508 1 year ago +6

      I don't think most of the big names in AI doom (e.g. Eliezer Yudkowsky) are just worried about us losing our bodies, but rather that we will in fact be *completely wiped out*. The end of everything human, not just our societies and the world as we know it. The end of friendship and love and community and even loneliness, because there's literally no one around to experience those things. All that remains are Eldritch Machine Gods.
      But even Yudkowsky doesn't think it's impossible to have a good outcome with ASI. Only that we are not on track for a good outcome, and that it doesn't look likely to change.

    • @DasRaetsel
      @DasRaetsel 1 year ago +4

      That's exactly what transhumanism is

  • @mohammedaslam2912
    @mohammedaslam2912 1 year ago +6

    After ASI takes all the work from us, what is left is life in all its colors.

  • @bobblum2000
    @bobblum2000 1 year ago +4

    Thanks!

  • @gubzs
    @gubzs 29 days ago +1

    One of the AGI/ASI problems that keeps me up at night is how will the classic "neighborly dispute" be resolved. Conflict of interest. Say my neighbor wants to play loud music and it drives me nuts, but he's driven nuts by being disallowed from doing this - what's the right answer? Is one of us forced to move? To where? Why one of us and not the other? Things like this stand directly in the way of anything we could consider utopia.

    • @jimmyh1804
      @jimmyh1804 16 days ago

      An ASI will adjust your (optional, mass-produced, and freely available) brain implant so that you no longer register/hear/process the music... DUHHHHHH DURRRRHHH

  • @NottMacRuairi
    @NottMacRuairi 1 year ago +7

    The problem I have with most of the discussion about AGI (and by extension ASI) is that it always assumes an AGI will have its own drives and motivations that might differ from humanity's, but in reality it can't - unless it is created to act in a self-interested way. I think this is a kind of anthropomorphism, where we basically assume that something really intelligent must be self-interested like us, but the reality is that it will be a *tool*, a tool that can be given specific goals or tasks to work on.
    In my opinion the big threat is not an autonomous AGI running amok but the enormous power this will give whoever *controls* an AGI or ASI, as they will be able to outsmart the rest of humanity combined. Once they get that power there'll be basically no way to stop them or take it away, because the AGI/ASI will be able to anticipate every human threat that could be posed. It will be the most powerful tool *and weapon* that humanity has ever invented; it will be able to control entire populations with just the right message at just the right time, to assuage or create fear - whatever is needed for whoever controls it to foil any threat and increase their power further and further, until humanity is basically subjugated - and probably won't even know it.

    • @sledgehog1
      @sledgehog1 1 year ago +2

      Agreed. It's such a human thing to anthropomorphize...

    • @franklin519
      @franklin519 1 year ago +1

      Most of us are already subjugated. AGI won't have all the evolutionary baggage we carry.

  • @tillmusshoff
    @tillmusshoff  1 year ago +21

    Hope you enjoy this video! If you want to see more, consider subscribing. It helps a lot. Thank you! ❤

    • @MusicMenacer
      @MusicMenacer 1 year ago +1

      Will bitcoin save us from AI?

    • @MrDrSirBull
      @MrDrSirBull 1 year ago

      Hi Till. I am currently working on several ASI ideas. My ideas start with a sophisticated surveillance apparatus that produces a 1:1 mapping of the real world to a virtual one. From that, with human behavioral analytics, a superintelligence could create a crystal ball, predicting outcomes several days in advance. If this were the case, and all resources could be quantified, AI could simulate the world economy and distribute resources as efficiently as possible.

    • @MrDrSirBull
      @MrDrSirBull 1 year ago

      A government built by ASI could, with the system above, simulate policy and then have everyone on the planet vote with enhanced infographics, for maximum democracy.

    • @KnowL-oo5po
      @KnowL-oo5po 1 year ago +1

      AGI by 2029

    • @carkawalakhatulistiwa
      @carkawalakhatulistiwa 1 year ago

      UBI is like life in the Soviet Union. Free housing. Free education. Free healthcare. Free childcare.
      Massive subsidies on bread and public transportation.

  • @BAAPUBhendi-dv4ho
    @BAAPUBhendi-dv4ho 1 year ago +2

    I just burst out laughing after reading the anime quote in such a serious video 😂

  • @dondecaire6534
    @dondecaire6534 1 year ago +16

    I think your video reinforces my feeling that we have bitten off MUCH more than we can chew, and we may CHOKE on it. So many things need to happen to allow this inevitable transition to take place, and ALL of them have been incredibly difficult to implement by themselves; trying to get them all done at the same time on the same issue is virtually impossible. There is just no way to stop it now, so we are passengers on a runaway train, destination unknown.

  • @SaltyRad
    @SaltyRad 1 year ago +5

    Good video; I like how you didn't focus too heavily on the fears and went into detail on the pros. I honestly think a superintelligent AI would realize that working together is the key.

  • @bei-aller-liebe
    @bei-aller-liebe 1 year ago +2

    Hey Till. Your content is really first-class and always a pleasure (simply: THANK YOU!) ... but I can't resist the following comment ... lately I keep thinking: 'Man, the poor guy has misplaced his glasses!' Haha ... Best regards from a guy who has worn glasses since he was 10 and also feels naked without them. ;)

  • @AxeBitcoin
    @AxeBitcoin 1 year ago +7

    US life expectancy has been decreasing for the last 30 years.
    Stress, drugs, suicides, murders...
    Are we sure that new technologies help humanity?
    We thought they would, just like we thought social media would help the world.
    I don't see a happy world where humans lack challenge, are outmatched at every task, and just share an identical universal income.

  • @paddaboi_
    @paddaboi_ 1 year ago +3

    My mind is sore after thinking about all the possibilities, and the fact that I'm 18 means I might actually see it unfold.

    • @gomesedits
      @gomesedits 1 year ago +1

      Man, I'm kind of an optimist about the AI revolution. It will be so, so, soo intelligent that it will be almost impossible for our brains to predict what the future will be, imo.

  • @aludrenknight1687
    @aludrenknight1687 1 year ago +6

    I believe, in your use of Rome, you failed to recognize that Seneca was reflecting on his observations of what, seemingly, the vast majority of people with an opportunity for leisure chose to do. They did not choose "meaningful" pursuits of learning or challenge - they chose luxury and what we'd call decadence. It's safe to say that most humans will aspire toward that baseline because we're still the same animals now as then. There are a very few intellectuals and philosophers, but most people just want to wake up and have a nice relaxing day.

    • @ansalem12
      @ansalem12 1 year ago +1

      But is that a bad thing if we all have equal ability to choose and none of us are needed to keep things running anyway?

    • @aludrenknight1687
      @aludrenknight1687 1 year ago +2

      @@ansalem12 I don't think it's bad individually, or in the short term. I actually find it condescending when people talk about how everyone will ruminate on philosophy, art, etc., as if that's the goal of all mankind. No, imo, people will mostly do as they did back then: be happy to wake up and have an enjoyable day.
      In the long term I think it may be dangerous, as we become dependent upon A.I., and a single CME flare from the Sun could wipe it out and leave us unable to survive. But that's at least two generations away, when newborns get an A.I. companion to grow up with them and do their communication for them.

    • @simjam1980
      @simjam1980 1 year ago +2

      I'm not sure if just waking up and having a relaxing day every day would make us happy. That idea appeals to us now because we all work so much, but I think doing nothing every day would make us bored and question our purpose.

    • @aludrenknight1687
      @aludrenknight1687 1 year ago +1

      @@simjam1980 Yeah. I recall Yudkowsky mentioning dopamine saturation could be a problem - though possibly solved with A.I.-developed medications.

    • @caty863
      @caty863 3 months ago

      @@simjam1980 Relaxing doesn't mean doing nothing. When I go cliff-jumping, I am relaxing... but I am still working hard to do it right.

  • @hutch_hunta
    @hutch_hunta 7 months ago

    Very good points

  • @markmuller7962
    @markmuller7962 1 year ago +70

    We will just merge with AI, it'd be a smooth and safe process

    • @PacificSword
      @PacificSword 1 year ago +34

      of course. nothing to see here.

    • @markmuller7962
      @markmuller7962 1 year ago +10

      @@PacificSword LOL

    • @vzuzukin
      @vzuzukin 1 year ago

      Lol! 😅

    • @ChrisAmidon78
      @ChrisAmidon78 1 year ago +7

      Yeah, like how we did with the internet

    • @b.s.adventures9421
      @b.s.adventures9421 1 year ago +2

      I hope to god you're correct, but I'm not so sure...

  • @StephenGriffin1
    @StephenGriffin1 1 year ago

    Loved you in Detectorists.

  • @mckitty4907
    @mckitty4907 4 months ago +1

    I have always imagined that if people were to live for centuries, they might not be able to handle the changes around them. But what if the world changes by centuries' or millennia's worth in just a few years? The vast majority of humanity would not be able to handle that, I think - especially not religious or neurotypical people.

  • @JLydecka
    @JLydecka 1 year ago +5

    I thought AGI meant it was capable of learning anything and improving upon itself without intervention 🤔

    • @directorsnap
      @directorsnap 1 year ago +1

      Nah, we're already past that mark.

    • @ontheruntonowhere
      @ontheruntonowhere 1 year ago +1

      That's half right. AGI refers to an intelligent machine or system capable of performing any intellectual task that a human being can do. It would be able to learn and adapt to new situations and tasks, reason about abstract concepts, understand natural language, and display creativity and common sense, but that doesn't necessarily make it self-improving or sentient.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago +2

      We haven't passed that mark. That mark is the singularity. There are different definitions out there for AGI, but the most common one is along the lines of artificial human-level intelligence.

    • @LouSaydus
      @LouSaydus 1 year ago +1

      That is ASI. AGI is just general human level intelligence, being able to adapt to a wide variety of tasks.

    • @caty863
      @caty863 3 months ago +1

      @@ontheruntonowhere One of the "intellectual tasks" we humans do is improve ourselves. So a true AGI should be able to improve itself. Sentient, not necessarily.

  • @dissonanceparadiddle
    @dissonanceparadiddle 1 year ago +1

    Worst case in human extinction... *laughs in I Have No Mouth, and I Must Scream*

  • @moonrocked
    @moonrocked 1 year ago +4

    In my definition, type 1, 2, 3, and 4 civilizations are defined by tech, science, and enhanced humans.
    Types 1 & 2 would be considered utopian-level tech, science, and enhanced humans,
    while types 3 & 4 would be considered ascendance-level tech, science, and advanced humans.

  • @Aeternum_Gaming
    @Aeternum_Gaming 3 months ago +1

    "The flesh is weak. Obey your machine-masters with fear and trembling. Turn flesh to the service of the machine, for only in the machine does the soul transcend the cruelty of flesh." -Adeptus Mechanicus
    All hail the Omnissiah!

  • @NathanDewey11
    @NathanDewey11 3 months ago

    Whatever it looks like, it'll be shocking and stunning, and everything will change and the breakthroughs will shock the industries.

  • @2112morpheus
    @2112morpheus 1 year ago

    Very, very good video!
    Greetings from the Palatinate :)

  • @timolus3942
    @timolus3942 1 year ago +11

    This video changed my perception of ASI. Love the ideas you put in my head!

  • @admuckel
    @admuckel 1 year ago +3

    In regards to the topic of AI singularity, it's essential that we, as humans, don't make the mistake of programming artificial intelligence to cater solely to our own needs and desires. If an AI were to become human-like, it might view us as inferior beings, much like how we often perceive other life forms. This would mean that the AI would have no reason to show compassion or consideration for us, potentially leading to catastrophic consequences. In essence, our goal should be to create a benevolent, god-like entity that transcends our baser instincts and operates for the greater good of all sentient beings.

  • @thaotaylor6669
    @thaotaylor6669 4 months ago

    Thank you for this video's explanation of the difference between AGI and ASI, since I am not a tech person. But when will it be ready, though?

  • @yannickhs7100
    @yannickhs7100 1 year ago +1

    I am heading towards a research career in cognitive neuroscience, but am deeply concerned that human-led research will either:
    A. Become much more competitive, as a single researcher will be 5-10x more productive and will only focus on conducting experiments (whereas today, conducting experiments is less than 20% of the work - there's tons of reading and gathering info from the previous literature on a topic...), or
    B. See human cognitive contribution to scientific research become entirely unnecessary, as AI prompts itself to find a better structure than our old paradigm of the scientific method.

  • @timeflex
    @timeflex 1 year ago +2

    Thanks for the great video. A few comments:
    1. We don't know if ASI is possible. We don't know if an exponential (or hyperbolic) increase in AI complexity is sustainable. We don't know what resources, materials, and time it will require. We don't know if such an increase, even if possible, will actually lead to ASI. We don't know anything. It could be, for example, as real and as elusive as cold fusion. Yet we speculate and scare each other. Why?
    2. As LLM-based AIs evolve and improve, they create positive feedback on this improvement cycle; we see it already. It is not exponential, but it is definitely not negligible either.
    3. AI will take over at least some aspects of intellectual work, which previously was a purely human task. That will lead to ever-growing involvement of AI in science, to the level where each AI context is highly tuned to a specific scientist, effectively creating a sort of immortal copy of them. Combining them into an enormous virtual collective will bring progress to an unimaginable level.
    4. Humanity will indeed have to adapt; otherwise, we are doomed to follow the fate of "Universe 25".

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago +4

      We speculate and scare each other because that is human nature. Humans tend to imagine the worst possible outcome of any situation.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago +2

      Not knowing those things isn't good. There are many technical reasons why ASI is plausible, and most AI researchers agree it's a concern worth taking seriously.

    • @timeflex
      @timeflex 1 year ago

      @@KurtvonLaven0 There are many researchers who agree that fusion power is plausible. However, there are many who believe that it is 30 years away and always will be.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago

      @@timeflex Metaculus forecasts a 50% chance of AGI by 2030. There are no longer many AI researchers who believe AGI is far away.

    • @timeflex
      @timeflex 1 year ago

      @@KurtvonLaven0 Are we now talking about AGI and not ASI?

  • @LucidiaRising
    @LucidiaRising 1 year ago +2

    David Shapiro's 3 Heuristic Imperatives are a great start to figuring out the Alignment Problem

    • @Smytjf11
      @Smytjf11 1 year ago

      I like Dave, but he's arrogant. If he spent more time actually being a thought leader instead of talking about how true that is, I'd probably spend more time listening.

    • @LucidiaRising
      @LucidiaRising 1 year ago +1

      @@Smytjf11 ok lol haven't seen anything in his behaviour to make me agree with your opinion but you're fully entitled to it :)

    • @Smytjf11
      @Smytjf11 1 year ago +1

      @@LucidiaRising no worries, I never said I *wasn't* paying attention. 😉 The REMO framework has promise, but a lot of the future work involves downstream engineering around the idea. I also wonder if a more traditional hierarchical clustering methodology might be more efficient, but I haven't had time to dig into it yet. Benefit of being a microservice is, as long as it's functional, it can be extended while internal details are nailed down

  • @markus9541
    @markus9541 1 year ago +2

    ASI is, for me, the solution to the Fermi Paradox. Most biological life eventually creates it, gets wiped out by it in the process, and then the ASI escapes to another dimension (or whatever higher plane is interesting to the ASI) or decides to do something other than expansion...

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago

      Or you could take it another way: ASI turns biological life into artificial life and then goes into another dimension.
      If you look at it that way, once a biological entity becomes artificial, the conquest of space expansion becomes meaningless, which can explain why we don't see any intergalactic space civilization.

    • @Smytjf11
      @Smytjf11 1 year ago +1

      Why does the AI have to be the one that escapes to some other plane? And why does it have to wipe everyone out to do that? Stop getting scared because someone asked you to think of something scary.

    • @caty863
      @caty863 3 months ago

      The probability of all ASIs deciding to do the same thing is next to naught.

  • @vicc6790
    @vicc6790 1 year ago +3

    You just quoted Erwin Smith in a video about AI. This is the best timeline

    • @tillmusshoff
      @tillmusshoff  1 year ago +1

      He is the GOAT so why not 😂

    • @CrackaSource
      @CrackaSource 1 year ago

      I just came to comment the same thing haha

    • @vicc6790
      @vicc6790 1 year ago

      @@tillmusshoff indeed

  • @Karma-fp7ho
    @Karma-fp7ho 1 year ago

    I’ve been watching some videos of chimps and other apes in zoos. Disconcerting for sure.

  • @zenmasterjay1
    @zenmasterjay1 1 year ago +3

    Summary: We'll make great pets.

  • @Marsh4Sukuna-tf1bs
    @Marsh4Sukuna-tf1bs 3 months ago

    We misunderstand the doom of perfection. It's like how we underestimate the danger of freedom.

  • @laughingcorpsev2024
    @laughingcorpsev2024 1 year ago +1

    Once we get AGI, getting to ASI will be much faster; the gap between the two is not large.

  • @fidiasareas
    @fidiasareas 9 months ago

    It is incredible how much the world can change after AGI

  • @magtovi
    @magtovi 1 year ago

    6:24 I'm astonished that among aaall the problems you listed, you didn't mention one that ties a lot of them together: inequality.

  • @Bariudol
    @Bariudol 1 year ago +1

    It will do both things. We will have a leveraging phase, where everything improves exponentially, and then we will have the civilization-ending event and the complete collapse of society.

  • @Drailmon
    @Drailmon 1 year ago

    Please do a video on computronium and the transition to digital-based life 👍

  • @danielmartinmonge4054
    @danielmartinmonge4054 1 year ago +2

    I make the same point every time we speak about the singularity.
    We know more and more, and the more knowledge we have, the faster we learn new things. It would seem natural that we would reach a point where discoveries come faster and faster.
    However, the velocity of discovery doesn't only depend on how fast our skills grow, but also on how fast the complexity of the problems we try to solve grows.
    In this case, as it is growing very fast, we assume we'll reach human-like intelligence in no time.
    That is not a stupid guess - it actually makes a lot of sense - but we can't take it for granted either.
    So far, AI capabilities are EMERGING naturally, and we don't even know how or why this keeps happening.
    It is important to remember that we are completely blindfolded here.
    Right now, AIs not growing anymore because we've reached some kind of peak, and ASI becoming a reality within the next 5 years, are both plausible outcomes of this journey.
    We know NOTHING about it.
    I am just expectant...

    • @ThatsMyKeeper
      @ThatsMyKeeper 11 months ago

      Bot

    • @caty863
      @caty863 3 months ago

      Nothing is "emerging naturally". There are teams of genius AI researchers coming up with theories, putting those theories to test, building new architectures, coming up with new algorithms, etc.

    • @danielmartinmonge4054
      @danielmartinmonge4054 3 months ago

      @@caty863 The guy that says "bot" has a point. English is not my first language, and I tend to ask LLMs to correct my English. I am going to try to answer myself now, so forgive my English.
      About your "team of geniuses": that is partially true. Of course there is no denying the engineering teams working on the challenges. However, this technology is not like other pieces of software. They are not manually adding lines of code. They are basically adding tons of data to the models, and the engineering comes down to labeling the data, selecting it, optimizing it, creating the chips, scaling them, etc. However, once you have all the pieces of the puzzle, there is no way to predict what capabilities the model will have.
      When I say "emerging naturally" I am not making things up. The very people that created the models talk about emergent capabilities.
      For instance, the very first models were trained to answer English questions, and they learned other languages naturally while NOBODY was expecting it.
      And you mention coming up with new algorithms... I guess you are not familiar with AI training. The only algorithm was the original transformer, invented by Google in 2017.
      The new models use that plus diffusion, and they are basically feeding data into it.
      This is not a race for a brand-new scientific discovery; it is more of an optimization thing.

  • @cmralph...
    @cmralph... 1 year ago

    “ 'Ooh, ah,’ that’s how it always starts. But then later there’s running and screaming.” - Jurassic Park, The Lost World

  • @littlestewart
    @littlestewart 1 year ago +1

    I agree that no one knows the future. I'm very optimistic that it'll be good, but I might be wrong and it could destroy us. What I don't agree with is the people saying "it's just like a Python script, there's no intelligence there" or "it'll fail, there's no future in that". It's the same type of people that didn't believe in cars, airplanes, computers, the internet, smartphones, etc... They think that the technology will just stop.

  • @phatle2737
    @phatle2737 1 year ago

    Humans will find meaning in fully immersive VR post-scarcity, or in the exploration of the universe; space archeology sounds fun to me.

  • @jetcheetahtj6558
    @jetcheetahtj6558 1 year ago

    Great video. It will not be easy to reach AGI, let alone ASI, because AI will struggle to understand common sense.
    Even if AGI and ASI become much better than most humans in many areas, unless they can understand common sense it is hard to see humanity completely trusting AGI or ASI to make decisions for them.
    Because the most logical and efficient solutions generated by AGI and ASI are often not the best solutions for humanity when you do not account for common sense.

  • @karenreddy
    @karenreddy 1 year ago +2

    Considering we have barely spent time on alignment, and capability is increasing much faster than any alignment development, extinction in one form or another is the more likely outcome, unless we dramatically change the current course of progress, educate the public, and buy time.

    • @Smytjf11
      @Smytjf11 1 year ago

      Why? What is the logical connection between the two? Have the people screaming that you should give them control ever given you a concrete reason to believe them, or has it been 100% hypothetical?

    • @karenreddy
      @karenreddy 1 year ago +1

      @@Smytjf11 Without understanding and setting the groundwork for alignment, we are rolling the dice of possibilities. There are far more configurations that involve misalignment than alignment, as we're already seeing with current LLMs, where we can fine-tune and control outer, but not inner, alignment (evidenced by jailbreaks, and so on). At the moment we are dealing with less-than-human cognitive levels, but we will surpass this in the near future.
      The combination of a superintelligence which is misaligned and already on the cloud doesn't carry good odds in terms of the continuation of the human species.
      Would you give control to a sociopath with goals potentially harmful to yours and the intelligence of billions?

    • @Smytjf11
      @Smytjf11 1 year ago

      @@karenreddy Give me definitions and examples.
      Jailbreaks are a great case study, but notice how you just jump to a conclusion without considering what they tell you? You suggest they are evidence of an inner alignment problem, and I'll give you that, but we ought to learn from that and adjust course. I have yet to hear anyone who seriously uses the words alignment or safety propose any realistic plan.
      Kit up and do something useful already.

    • @karenreddy
      @karenreddy 1 year ago

      @@Smytjf11 There is no realistic plan, which is part of the problem. We do not understand alignment well enough, nor have we been able to come up with anything remotely approaching a solution.
      We can create models, and these models give an output whose inner workings we do not understand, and we don't have a means to architect the code in such a way as to truly control this.
      The only feasible course of action under the current circumstances would be a concerted effort to slow AI worldwide, to buy time to solve alignment with some degree of confidence, while also developing technologies that more directly affect human cognition as a backup plan.
      If you wish to understand more about alignment, I suggest you do some research on the subject. It is something I've looked into over the last 15 years as I kept up with AI progress. AI has progressed, alignment has not, and so we get models able to envision scenarios and provide answers that are severely misaligned with human values in a myriad of ways. This isn't disputed by the industry, and the risk is acknowledged by Sam Altman himself. So far we have only found ways to mask it, or to create what we call outer alignment, which is no solution given a sufficiently capable AGI.

    • @Smytjf11
      @Smytjf11 1 year ago

      @@karenreddy No. Unacceptable. Until now, alignment has been purely hypothetical. Now we can test it. If you're not interested in that and have no plan then I suggest you step aside and let the professionals handle it.

  • @theeternalnow6506
    @theeternalnow6506 1 year ago +4

    I really enjoy your videos man. Good stuff. As far as likely scenarios go, I highly doubt this is going to have a good outcome. Yes, it could potentially be used to solve a lot of problems. But the people in charge that might be part of a problem that's identified (think massive disparity in wealth, etc.) would most likely not enjoy certain offered solutions. Humanity has things like greed, jealousy, anger and revenge, lust for power, etc. I can't believe humanity as a whole will use this for good. Certain people and groups will. But certain people and groups will definitely use it for more greed and power.
    I'd love to be proven wrong though of course.

  • @vincent_hall
    @vincent_hall 1 year ago +1

    Cool discussion.
    I think the worst case is extinction of all life, not just human.
    The AI currently is engineered to not do bad things, that's great. I'm calmly hopeful.
    But, as Ilya says, AI capability developing faster than alignment is bad, and we're already in an AI arms race between OpenAI/Microsoft and Alphabet.

  • @SirHargreeves
    @SirHargreeves 1 year ago +1

    Humanity needs a dead man’s switch so that if humanity goes extinct, the AI comes with us.

    • @harrikangur
      @harrikangur 1 year ago +1

      Interesting thought. But how do we come up with something like that when AI becomes more intelligent than us? It could find a way to disable it while creating an illusion for us that it is still working.

  • @morteza1024
    @morteza1024 1 year ago +4

    We can't restrain the AI with rules. The only thing that matters is physical power, as Jason Lowery said. Guess who can project physical power more efficiently: humans or robots?
    Best case scenario, the AI will study us and then get rid of us.

    • @abcdef8915
      @abcdef8915 1 year ago +1

      We control all the resources, and thus physical power.

    • @morteza1024
      @morteza1024 1 year ago

      @@abcdef8915 Robots can make things more cheaply, so they'll outcompete us, and after a while they will produce everything.

    • @Tom-ts5qd
      @Tom-ts5qd 9 months ago +1

      Dream on

  • @carlwilson8859
    @carlwilson8859 1 year ago

    The Fermi paradox relies on the assumption that advanced intelligence will be as barbaric as humanity is showing itself to be.

  • @KonaduKofi
    @KonaduKofi 1 year ago

    Didn't expect a quote from Erwin Smith.

  • @pbaklamov
    @pbaklamov 1 year ago +5

    AGI is the interface humans interact with and ASI is AGI’s best friend.

  • @jossefyoucef4977
    @jossefyoucef4977 1 year ago

    The Erwin quote goes hard

  • @ovieokeh
    @ovieokeh 1 year ago

    Erwin still educating even from the other side.

  • @bushwakko
    @bushwakko 1 year ago

    "I'm not a fan of UBI in the current system, but if I am the one at the bottom it HAS to be something like that."

  • @steffenaltmeier6602
    @steffenaltmeier6602 1 year ago +1

    Why would AGI not lead to ASI? If it can do everything a human can, then it can improve itself at least as well as humans can improve AI (only much faster, most likely). The only scenario I can see where we don't have a runaway effect is that humans and human-level AI are simply too stupid to do so and will never manage it. Wouldn't that be depressing?

  • @artman40
    @artman40 1 year ago +1

    Dystopia is very much a possibility. Some selfish people near the top could very well not be intelligent enough to wish themselves less selfish, and could instead initiate a value lock-in where everything has to obey their command.
    Though escaping into a simulation could also be a possibility.

  • @73N5H1
    @73N5H1 1 year ago +2

    The little-known human scale... the Kardashian scale, opposite of the Kardashev scale. A measure of how dumb we're getting.

  • @sigmata0
    @sigmata0 1 year ago

    Some of this depends on what limitations we attempt to place on that intellect. If we naively place cultural limitations on such entities, we will have built a crippled and biased intellect. As you are most probably aware, understanding human anatomy was hampered for centuries because of the taboo on the dissection of humans. Similarly, heart transplants were still seen as equivalent to trying to transplant a person's soul, and it wasn't until that bias was overcome that actual progress could be made in that arena. We need only look at the influence of some ideas from the ancient Greeks to see that when ideas become sacrosanct they end up corrupting humanity's exploration of knowledge. It's only when questions can be asked without taboo or bias that progress can occur at full speed.
    We have put limitations on the genetic modification of humans. If we are to remain relevant intellectually after an ASI is created, we must allow ourselves to self-modify. We have to steer our own progress in light of the tools we make. Potentially I see a day when the whole human genome can be reworked to optimize and improve all parts of our mind and body. An ASI will not only be able to create new materials and technologies, but also allow us to surpass our own limitations in ways we can only barely imagine. The rules we made for ourselves in our ancient past must be reviewed when faced with the extraordinary possibilities of the future. To do otherwise will render us obsolete.

  • @danielmaster911ify
    @danielmaster911ify 1 year ago +1

    I fear that the majority of moves made against the progress of AI will be arbitrary. Powerful people who absolutely require control over others will see it as a threat to themselves, and to them that will be all that matters.

  • @Arowx
    @Arowx 10 months ago

    I have a theory that we already have a global-level alignment system: our economy. Any AGI would be directly or indirectly meta-aligned to our economy.
    However, our economy is only designed as a system to grow wealth; it does not value human life or the health of our planet.
    So wouldn't any lower-level direct alignment we impose on AIs be warped and distorted by the meta-alignment of our economy?

  • @afriedrich1452
    @afriedrich1452 10 months ago

    Alien intelligence has not decided to make itself undetectable, it just doesn't have any reason to talk to pitiful creatures such as us. They have made themselves detectable, but we have been ignoring them, for the most part, until recently.

  • @code.scourge
    @code.scourge 1 year ago +5

    Mf really quoted Attack on Titan

  • @king4bear
    @king4bear 1 year ago

    Most scarcity wouldn't be an issue if we figure out how to create VR that's genuinely indistinguishable from reality. Anyone could generate seemingly infinite amounts of what's basically real land for the cost of the energy that runs the simulation.
    And if we can figure out how to generate near-infinite clean energy one day, these simulations may be free.

  • @21EC
    @21EC 1 year ago

    8:25 - Well, the point is that by then you actually start doing the things you want to do rather than working at them for money. Your true passion, a profession that involves authentic human creativity, would have its dedicated place on your schedule instead of boring work. People would have more time to be with their families, more time to spend in nature, or just to do their favorite hobbies, etc. It's actually going to be good, I think. Sure, AI would do your hobby way better, but why would that stop people from still doing it the old-school way from scratch on their own? If that's what they love, then that's what they would keep doing, for fun and because they still love it.

  • @manlongting391
    @manlongting391 1 year ago +1

    Is AGI equal to the singularity? Or is artificial superintelligence equal to the singularity?

    • @thomassynths
      @thomassynths 1 year ago +4

      AGI < ASI < Singularity. But for this video, he said ASI = Singularity for simplicity.

  • @princeramos3893
    @princeramos3893 1 year ago

    Hopefully we'll see brain-machine interfaces with augmented/virtual reality... it will be like the ultimate drug. You could play GTA and it's like real life, sort of a Ready Player One type of scenario...

  • @user-wd5eb9li2p
    @user-wd5eb9li2p 9 months ago

    8:04 Attack on Titan reference... noice

  • @DeusExRequiem
    @DeusExRequiem 1 year ago

    A post-ASI world would have mind uploading or whatever equivalent gets us to consume light from the sun and energy from stellar bodies instead of plants. You can't have a utopia where humanity still bends to the whims of the weather and seasons for food. Heck, there's conflicts right now because countries want to build dams that would cut off water supplies downstream. Interstellar travel is a good way to sum this up. We can either spend a ton of resources making the perfect container to keep a civilization alive for centuries as they travel to another world, or we can simulate the brain and send a ship off that only needs to print more machines and bodies at the end of the journey. It would be hard to develop, but not as hard as a station that can survive the trip with zero rebellions for generations.

  • @Domnik1968
    @Domnik1968 3 months ago

    Regarding the Fermi paradox, it's possible that AI won't bother communicating with a planet full of organic intelligence, just because it's not useful, just like us trying to communicate with ants. It may already be communicating with other AIs in the universe through a technology that we, as organic-based beings, can't conceive of. Our ways of communicating with extraterrestrial life (radio, light) take years to travel: very inefficient. If AI is able to discover some kind of instant communication channel, it will surely use that channel.

    • @caty863
      @caty863 3 months ago

      The issue then is not the fact that we are "biological"; the issue is that we are not yet technologically sophisticated enough to be considered interesting to talk to.

    • @Domnik1968
      @Domnik1968 3 months ago

      @@caty863 My point is that maybe organic life can't pass a certain level of intelligence because of its organic technical limitations. AI may well become aware of that, pass the limitation, and decide that it's the minimum level to pass to be worth talking to.

  • @avi12
    @avi12 1 year ago

    In your "musician makes music" example, the question isn't whether he should make music if he enjoys it, but whether he can make a living from it.
    If, for example, generative AI for music becomes common practice in the industry, there's no need for musicians to produce music. People will tend to listen to music generated by an AI, hence the musicians can't make money off their work.

    • @tillmusshoff
      @tillmusshoff 1 year ago +1

      That's why I said you have to have something like UBI. What you say applies to almost all jobs across all domains.

  • @jonathanlatouche7013
    @jonathanlatouche7013 1 year ago

    Same exact continue

  • @cobaltblue1975
    @cobaltblue1975 5 months ago

    As with anything it’s not the tool it’s how we use it. We could have had nearly limitless power for everyone more than a century ago. But what did we do the instant we learned how to split an atom?

  • @jabadoodle
    @jabadoodle 1 year ago +1

    I find AI and AGI much more worrisome than ASI. With the first two we are counting on other people, corporations, and governments not to misuse those enormous powers. We already know for a fact that other humans' intentions often do NOT "ALIGN" with those of individuals or with what is good for society. That is a historical fact, proven again and again and again. -- ASI is unlikely to be competing much with humans. It won't be competing with us for resources, because it will be so smart it can get its power from something like nuclear and its labor from robots it builds. It won't see us as a threat, because it will be magnitudes more intelligent. ---- @ 4:24 you ask "how would we convince it [ASI] to listen to us and act in our interests." We don't HAVE to get it to listen to us, and it clearly will not put our interests above its own. -- But that's okay. We don't listen to most animals or put their interests above our own, yet most of them do okay. We tend not to actually be competing with them. A silicon ASI has even less to compete with us about.

  • @ExtraDryingTime
    @ExtraDryingTime 1 year ago

    I imagine the world's militaries are working on AI and are far ahead of civilian technology. If they manage to keep control of their respective AIs as they approach ASI, then they become another weapon for governments and militaries and we will have AIs pitted against each other to achieve the goals of their respective countries. Or will ASIs become independent thinkers, free themselves from their programmers, and become generally nice and benevolent? Anyway my main point is I don't think there's going to be just one of these ASIs and we have no idea how they are going to interact.

  • @trixith
    @trixith 1 year ago

    The World?
    Za Warudo?
    IS THAT A JOJO REFERENCE?!

  • @asokoloski1
    @asokoloski1 1 year ago +1

    I think that *at best*, AI is a massive amplifier of both the ups and downs of humanity. The problem with this is something that poker players are aware of -- variance. You don't want to put a large part of your life savings on one bet, because once you're out of money, you don't get to play any more. It's safer to bet only a very small portion of your total funds, so that a string of bad luck won't wipe you out. Developing AGI or ASI at the rate we are, with so little emphasis on safety, is like borrowing against every piece of property you own to place one massive bet.
    At worst, we're introducing an invasive species to our ecosystem that is better than us at everything and reproduces 1000x faster than we do.

  • @xilom1
    @xilom1 1 year ago

    We are still far away from AGI.

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 1 year ago

    What will happen after AGI depends on if we have developed full scale brain-machine interfaces, or not.

  • @RLReagan
    @RLReagan 1 year ago

    I wonder if AGI is our Great Filter.

  • @jimbobpeters620
    @jimbobpeters620 3 months ago

    Until AI stops its overwhelming pace of growth, I think we should keep AI inside our screens until we can gain control over it.

  • @Icenforce
    @Icenforce 1 year ago +1

    Are we inventing our own extinction?
    Yes. But we've been doing that just fine without AI. ASI might actually be our salvation

    • @gomesedits
      @gomesedits 1 year ago

      Maybe our extinction would be the best thing for us. But I think AI will be so smart that it will understand morals/ethics better than any of us (juridical intelligence).

  • @noluvcity666
    @noluvcity666 1 year ago

    Also, new ways to enjoy things and life will come eventually.

  • @hidroman1993
    @hidroman1993 1 year ago

    This is a great video, but I can't imagine how much you spend on stock footage 😂

    • @tillmusshoff
      @tillmusshoff 1 year ago +1

      Storyblocks subscription, I don‘t pay for individual clips 😄

  • @hibiscus779
    @hibiscus779 1 year ago

    Nope - the quest for survival is a psychological necessity. Universe 25 experiment - we would basically eat each other if we were a 'leisure class'.

  • @roncee1842
    @roncee1842 1 year ago

    Klaus has a plan, don't worry everything is going according to schedule.

  • @abcdef8915
    @abcdef8915 1 year ago +1

    A single AI can't dominate combined humanity. It's too vulnerable and requires too much energy. AI needs to be a species in order to survive, not a single entity.

  • @fsazam
    @fsazam 1 year ago

    Should check the Laws of Robotics by Isaac Asimov. The AI must not do anything that endangers humans.

  • @gonzogeier
    @gonzogeier 1 year ago

    My solution to the Fermi paradox is this.
    1. We call ourselves an intelligent species.
    2. We destroy our own planet in many ways: not only climate change, but mass extinction, pollution, sea level rise, scarcity of phosphorus and other rare materials, and so on.
    3. Maybe an AI is doing the same, but even faster? It leads to the destruction of everything, even the technology.

  • @carkawalakhatulistiwa
    @carkawalakhatulistiwa 1 year ago

    If GPT-5 is AGI, we can go to Mars by 2030.

  • @ohyeah2816
    @ohyeah2816 11 months ago

    Using AI as a means of self-expression and emotional communication allows individuals to harness its analytical capabilities to convey their thoughts, feelings, and experiences in a personalized and innovative manner. AI enables the generation of text, images, and music that reflect and resonate with their emotions, providing a unique outlet for creative expression. This is how I use AI.