My predictions about Artificial Super Intelligence (ASI)

  • Published May 14, 2024
  • Patreon (and Discord): /daveshap
  • Substack (free): daveshap.substack.com/
  • GitHub (open source): github.com/daveshap
  • AI Channel: /@daveshap
  • Systems Thinking Channel: /@systems.thinking
  • Mythic Archetypes Channel: /@mythicarchetypes
  • Pragmatic Progressive Channel: /@pragmaticprogressive
  • Sacred Masculinity Channel: /@sacred.masculinity
  • Science & Technology

Comments • 451

  • @JeremyPickett · 7 months ago · +54

    heh, yer mom is waste heat (I'll see myself out)

    • @DaveShap · 7 months ago · +20

      My wife would agree. I am full of hot air.

    • @Phasma6969 · 7 months ago · +1

      @DaveShap Formal invitation when?

    • @christopheraaron2412 · 7 months ago · +6

      Sounds like "your mama" humor for the AI age

    • @YogonKalisto · 7 months ago

      @DaveShap have to say, ima bit jelly, view must be lovely from up there (unless your shoes be filled with too many buttons u can't lift off)

    • @brainwithani5693 · 7 months ago · +2

      The more I think about it the funnier it gets 💀

  • @typicaleight099 · 7 months ago · +183

    It still blows my mind that we are talking about actually implementing ASI and AGI in real life and not just some sci-fi story

    • @Allplussomeminus · 7 months ago · +28

      After binging countless videos on this subject, it has become normalized in my mind.

    • @sinnwalker · 7 months ago · +11

      @Allplussomeminus haha same. The future is now.

    • @typicaleight099 · 7 months ago · +7

      I am a CS major in college right now and have been working to get into LLMs and other generative AI stuff, but I fear I might be too late to even get into it 😅

    • @electric7309 · 7 months ago · +16

      @typicaleight099 Nope, you're not. Everything you observe today is still experimental; things are changing fast and nothing is stable. If anything, it might be too early to get in! But it's a good idea to be early, to understand how the technology works and how it's evolving.

    • @haroldpierre1726 · 7 months ago

      @typicaleight099 There will be niche opportunities. The future will be generative AI implemented in everyday tasks. So, develop AI solutions for our problems.

  • @barni_7762 · 7 months ago · +88

    Feels like we're all just living in a sci-fi movie by now

    • @silversobe · 7 months ago · +8

      Black Mirror / South Park episode..

    • @patrickjreid · 7 months ago · +1

      I always say "we live in the future"

    • @Vaeldarg · 7 months ago · +3

      @patrickjreid We live in the present, it's just that the present was once the future.

    • @tubasweb · 7 months ago

      Thanks for the info, Dr. Evil

    • @user-yl7kl7sl1g · 5 months ago · +1

      It's a simulation loosely based on the original singularity. Most likely.

  • @7TheWhiteWolf · 7 months ago · +23

    I personally want a hard takeoff. Let’s get this over with faster.

    • @DaveShap · 7 months ago · +8

      It's probably coming due to compounding returns and virtuous cycles

    • @minimal3734 · 7 months ago · +1

      Get ASI done!

    • @deeksharatnabadoreea7721 · 7 months ago · +2

      Yes, get that shit done faster.

    • @sinnwalker · 7 months ago · +4

      Agreed, I don't wanna hang around during the shift. Whatever the outcome, I hope we get to it fast. And I do believe it will happen that way.

    • @Gmcmil720science · 2 months ago

      @DaveShap Hey Dave, I know you made a dated prediction for AGI. Do you think you could do that for ASI? It might be hard to predict, what with the implications of AGI.
      I imagine it wouldn't be long after achieving AGI

  • @DaveShap · 7 months ago · +29

    Sorry for the ads, trying to clear it up with Storyblocks

    • @ThomasDwyer187 · 7 months ago · +7

      Sorry for not supporting you financially. I really appreciate your content, but $ isn't in abundance right now.

    • @CasenJames · 7 months ago · +8

      Dude, what you provide is so valuable. Ads are a small price to pay.
      Thank you for all you do! 🙏

    • @christopheraaron2412 · 7 months ago · +1

      Ads are a small price to pay for your content.

    • @rubemkleinjunior237 · 7 months ago

      My perception of the value you provide in my life makes me not react to ads, don't trip about it

  • @ryanb8076 · 5 months ago · +5

    Dude, your content is one of the only styles on YouTube I can be fully entranced by and learn from. Your pacing is amazing and the way you explain things is super engaging, bravo

  • @Leshpngo · 7 months ago · +8

    In 2018 I stopped being depressed because of this topic, thanks to videos like yours.

    • @damnlavi · 1 month ago

      In 2023, I started getting depressed because of this topic...

  • @dab42bridges80 · 5 months ago

    Enjoying the format of your videos: simultaneous summary and simple explanations.

  • @adamjensen7206 · 7 months ago

    Thank you for putting in the work!

  • @abcqer555 · 7 months ago · +12

    I'd love to hear your specific predictions around events / achievements / etc. This felt more like an analysis.

  • @IOOISqAR · 7 months ago

    Thank you for your video! Very insightful!

  • @vagrant1943 · 7 months ago

    Love the longer videos! Thanks

  • @thething6754 · 3 months ago

    Love the opinions and topic, would love to see another ASI video from you!

  • @thomasruhm1677 · 7 months ago · +4

    "It scares me" is the new expression for "that’s so cool".

    • @sinnwalker · 7 months ago

      I agree, but it's always been that way in masochism 😉

  • @ChipWhitehouse · 2 months ago

    GOD I love your channel and videos. I love how passionate and knowledgeable you are, and the way you present is very digestible. I wish I knew you IRL. I would LOVE to just sit down and talk for hours about this kind of stuff. These videos are the next best thing. Thank you for all that you do!!! 👏👏👏💖💕💖🙌

  • @kwabenaanim7446 · 7 months ago · +7

    Around 12 months ago, I felt lucky when you showed us the recursive summarizer repo, and look at us now

  • @VeryCoolVODs · 7 months ago

    Love your videos! ❤

  • @keithinadhd6693 · 7 months ago

    Good stuff Dave!

  • @calvingrondahl1011 · 7 months ago · +1

    Thank you David.🖖

  • @JaredWoodruff · 7 months ago

    Great video David!
    Extra points for the Mass Effect reference 😎

  • @bentray1908 · 7 months ago

    Dave, you are kickass!

  • @electric7309 · 7 months ago

    Great video!

  • @octanewhale7542 · 7 months ago

    I haven’t watched a video in a month or three, can’t remember, but I immediately clicked on this one. I’ll check out if there’s anything to catch up on. Thanks.

  • @hyponomeone · 7 months ago · +2

    The terminal race condition talk officially landed you as the first person in my nightmare blunt rotation

  • @MrJackWorse · 7 months ago · +21

    Didn't know I needed a Tom Hardy cosplayer explaining the promises and pitfalls of AI in my life. But here we are, and I enjoy your content very much, sir. Thank you!

    • @DaveShap · 7 months ago · +5

      Tom Hardy???

    • @skyefreeman9987 · 7 months ago · +1

      Baffling

    • @fatboydim.7037 · 7 months ago

      @DaveShap He was in Star Trek Nemesis.

    • @MrJackWorse · 7 months ago

      @DaveShap He played the young and handsome clone of Picard in 'Nemesis', right?

    • @richardede9594 · 7 months ago

      I'm guessing "Star Trek: Nemesis".

  • @goldeternal · 7 months ago · +13

    Gemini will potentially be the first proto-AGI: the internal model has no real guardrails, but even the one we get as consumers will be better than GPT-4

    • @sinnwalker · 7 months ago

      Also considering they basically have the data of every human who ever used the Internet 😂 they better put their unethical scumbaggery on full display when Gemini launches; it can't be for nothing. I do think Google definitely has the edge over everyone in AI, but they gotta catch up now after getting caught with their pants down, like every other company, when GPT came.

  • @kasperdahlin6675 · 7 months ago

    Great argument in the universal computation slide

  • @joshuadadad5414 · 7 months ago · +4

    Alien minds are possible via simple differences in architecture and programming, whereas brains have relatively similar, evolved architecture. Meta set two AIs to communicate with each other about trading; they eventually developed communication and trading methods incomprehensible to us.

  • @airbag504 · 7 months ago

    Great uniform choice!!

  • @dhrumil5977 · 7 months ago · +2

    Love your work David. Where do you learn about these things? Is it from books? How can I follow in your footsteps to gain the kind of knowledge you have lol

    • @DaveShap · 7 months ago · +8

      Everywhere

    • @remasteredretropcgames3312 · 7 months ago

      @DaveShap
      I'm just sad we aren't unleashing gene drives on the population so we can finally override undesirable personality constellations, since evolutionary pressure is overrated. Imagine a world where people finally put down the F-35 and embraced the personal spaceship, as we all, with limited resources, began building our first Dyson Swarm instead of worrying about how much money we can make to stay ahead of the competition for contracts. That has clearly become our God, to the point we devalue intelligence. And all that while trying to build AGI in an arms race for dominance, even though it's very clear we will not control it.

  • @GoldenAgeMath · 7 months ago · +1

    Another super thought-provoking vid! I wonder if we're already in the "terminal race condition"

  • @alex62965 · 7 months ago · +3

    Some of this reminds me of Durandal, the AI from the game "Marathon".
    It was an AI that was very clever but was tasked with controlling doors (Durandal, door handle 😂) and went insane, or "rampant", because the task was too menial for it.

  • @AEONIC_MUSIC · 7 months ago · +3

    I've been thinking that it would be better to have models that are really good at making efficient models for specific tasks than one large singular model

  • @fR33Sky · 7 months ago · +2

    Even though I haven't finished my physics PhD, I'd like to share my thoughts on two possible (IMO) other-thinking modes:
    it either has to be a plasma-hot or a neutron-star-heavy scenario.
    In the first case, we can have some wave modes interact directly and hopefully calculate something. In the second, we may use particle decay and transformation to involve quantum physics.
    Regarding the death spiral of the data center hunt, I believe that machines would be able to see this problem as well as we do. And humanity has already faced something similar: nuclear weapons. At some point, we simply agreed to reduce their count. I hope that AIs would also be able to calmly sit at their multi-parameter table and work some agreements out.

  • @clive1294 · 7 months ago · +7

    I wrote my first program in 1974. I have written a huge amount of code since, some of it seriously complex.
    I can conceive of (and have even tried with GPT-4) the optimization of existing code using AI. It is still a bit patchy at this time, but I can see it getting much better. So I can easily accept LLMs rewriting (their own) code for efficiency.
    What I have more difficulty imagining is the (fundamental) redesign of a given complex system using AI. I am not saying that it is impossible, just that what I have seen so far leads me to be skeptical about this possibility. There is a HUGE chasm between improving existing code within a given design, and coming up with a (significantly better) design that achieves the same functionality.
    So far, I am unconvinced. And if I am right about this, the whole design environment hinges on humans, not on AI. The implication of such a limitation is, I think, quite obvious.

    • @brandongillett2616 · 5 months ago · +2

      Even early AI models came up with novel solutions to problems that we never would have designed that way. Why wouldn't advanced AI systems do that?

  • @gileneusz · 7 months ago · +15

    1:37, it's clear that our brains are vastly more efficient than current LLMs. Yet, while our brains have evolved over hundreds of thousands of years, computers have only been around for a few decades, and LLMs are just in their infancy

    • @youdontneedmyrealname · 7 months ago · +4

      The speed at which compute turns into useful information output can be parabolic in some sense. Complex software making faster, more complex software, etc., etc.

  • @froilen13 · 7 months ago

    I'm not even subscribed, yet your videos always get recommended

  • @christopheraaron2412 · 7 months ago

    In the terminal race condition scenario, given that making more hardware and finding the power to run it is slow, would this race actually result in the computer agents trying to come up with more compression in their algorithms, making them run more efficiently and getting more intelligence out of the same computing power? That gain, of course, is almost instantaneous relative to building more computer hardware or taking over existing hardware.

  • @phen-themoogle7651 · 7 months ago · +2

    Do you have predictions for the years when we hit AGI and ASI? Sorry if I missed the moment in the video where you mention them; I quickly browsed through it since I'm a bit busy atm. Might give it a full watch later, but I appreciate all the info :)

    • @Garylincoln789 · 1 month ago · +1

      AGI by 2030, ASI (billions of times smarter than humans) by 2055.
      AI will start to get smarter than humans after 2030. AI will be 1,000 times smarter than a human by 2035.

  • @jdlessl · 7 months ago · +1

    You started talking about self-improving AI trending toward greater energy efficiency by way of shortcuts, estimations, and "good enough" heuristics, and I thought to myself, "Now where have I heard about a thinking machine like that before?"

  • @MichaelDeeringMHC · 7 months ago · +1

    Something you are missing: the inherent design limitations of the human brain. The human cortical column has 6 layers. That is a hard limitation that cannot be compensated for. What limitations does that cause in our thinking? We can only imagine in 3 physical dimensions. There are other limitations on the sizes of data sets we can hold in memory and on the complexity of integrations we can make across our memory.

  • @MrDGotcha · 7 months ago

    Great, amazing content. Love it! Would like to get your perspective on cyber enhancements for humans: when do you envision this coming to the masses, and how far will we take it? 2077…?

  • @diamond_s · 7 months ago

    Google appears to remove posts with links, or maybe I missed it in the comments section. Anyway, estimates of the distance from the Landauer limit range from a few million times in some journals to about 1000x according to LessWrong estimates.

  • @boi0330 · 7 months ago

    LETS GO!!!!!!!!!!

  • @scottjohnson2861 · 7 months ago · +2

    Thanks for all your thoughtful content.
    When thinking about ASI, I get stuck on problems that don't have a quick or single answer. The ones that come to mind are multi-year research projects, such as determining the effects of a compound on someone's health: the many variables, and the different ways those variables reveal themselves over years. It doesn't take a superintelligence to execute the research project, but it takes a higher intelligence to determine the interconnectedness of the variables, and that interconnectedness plays out in different ways in the populations studied. Also included are the biases of the researchers. An ASI working with researchers needs to be trusted as an intellectual partner in the project, allowed to dissent and disagree, and not get so deep into analysis that it becomes frozen intellectually. That's very disjointed, but hopefully you understand what I'm trying to say.

    • @rickymort135 · 7 months ago · +1

      You're basically asking how much of an efficiency gain we can get on the scientific method when you need to collect difficult real-world observations. 1) Like you say, better handling of confounding variables. 2) Prior information is normally difficult to incorporate without a detailed model of how the prior context differs from the current one, and an ASI will be more likely to have that model. The more it understands other variables, the more useful prior information becomes: for studying the current medicine, it can use what it knows about similar medicines and how they're likely to differ in order to make predictions. 3) Adaptive optimal design on steroids. With a detailed enough model of all other effects you don't need expensive, slow randomized trials anymore; with sufficient understanding of all other variables and their effects, you just need sufficient data to infer what a randomized trial would get you.

    • @scottjohnson2861 · 7 months ago · +1

      At some point in the future, ASI will be able to do that. I think we are far from that future. Much of the information we know is incorrect. We know a very small fraction of the variables, upstream and downstream, of reactions in the body. ASI will need the ability to be impartial until we reach a point where we can do realistic simulations. All that data needs to be gathered through experimentation to support those simulations. People don't react the same way to different chemicals or compounds. The simulation won't be a one-and-done.

    • @rickymort135 · 7 months ago · +1

      @scottjohnson2861 I agree. My comment was more about where we'll be, IMO, in the next couple of decades. On the bias point, I think bias will decrease as capabilities increase, if it's trained in the right way. I.e., if it's rewarded for correct prediction of all kinds of data, it will have to build an accurate world model internally. This would be great, because right now it's subject to our biases and prejudices, but as its world model improves, it will have to develop an understanding of where our biases are if it's going to improve its predictive capabilities. You could end up with a real-time truth meter to see how consistent your words and opinions are with the real world. That'd be awesome.

    • @scottjohnson2861 · 7 months ago

      Thanks for the thoughtful reply. I agree.
      Most of the comments are short quips searching for likes.

  • @dustinbreithaupt9331 · 7 months ago

    What do you think of the reversal curse paper?

  • @christiandarkin · 7 months ago

    Fascinating as always. On universal computation (9:15 into your video): our brains and the brains of fish aren't, I would argue, different on the most basic level; they run on the same hardware of cells communicating. So an easier question would be, "can humans think a thought that a fish is intrinsically unable to?" And yes, I think we can.
    You don't have to be 'alien' to be incomprehensible to someone with fewer neurons.

  • @gregmatthews7360 · 7 months ago · +1

    Just because our intuition lets us “know” things without having access to why we know them doesn’t necessarily mean it’s a quantum computation. It could just mean we only have access to the last layer of the neural net, not the middle layers.

  • @orathaic · 7 months ago

    1) The Landauer limit is based on the entropy of the system, which you can just set to 0 by making the computation reversible (i.e. one where the end state has no entropy increase, the same number of states as the initial state).
    2) While the theoretical minimum energy per computation (in a fully reversible system) is 0, the practical limitations are much more important. We are not even close to approaching this limit, so it seems silly to even talk about it.
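
For concreteness, the Landauer bound that this comment (and the video's efficiency discussion) refers to is E = k_B · T · ln 2 per irreversibly erased bit. A quick back-of-the-envelope sketch; the 300 K room-temperature figure is my own assumption for illustration, not a number from the video:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant k_B in J/K (exact SI value)

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum energy dissipated per irreversibly erased bit: k_B * T * ln(2)."""
    return BOLTZMANN * temp_kelvin * math.log(2)

# At ~300 K (room temperature) the bound works out to roughly 2.87e-21 J per bit.
e_bit = landauer_limit_joules(300.0)
print(f"{e_bit:.3e} J per bit erased")
```

As the comment notes, reversible logic evades this bound only for steps that erase no information; any irreversible erasure still pays k_B · T · ln 2, and real hardware today dissipates many orders of magnitude more than this.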

  • @Syphirioth · 7 months ago · +2

    They already use AlphaFold, so that's a good indication of how far and beyond it will go.

  • @marrty777 · 7 months ago · +1

    I like the outro music

  • @usa-ev · 6 months ago

    Great video!
    Regarding the Terminal Race Condition: loss of accuracy does not lead to uncontrolled behavior. First, the "guidance" accuracy could be maintained independently; second, the "control" accuracy could be maintained, with the only loss being in "data". Third, random losses that affected behavior would yield inoperative outcomes, not evil ones.

  • @jimt7045 · 7 months ago

    What's the music at the end?

  • @DefenderX · 7 months ago · +3

    I heard in another video on quantum computing that encryption would become useless, because a quantum computer could decrypt everything in a matter of seconds, essentially making all information on the web public.
    So I wonder how this would affect public opinion, philosophy, politics, regulations and geopolitical... interactions.

    • @DaveShap · 7 months ago · +2

      If that's true then yeah, we'll see cyberpunk-style subnets partitioned off from the web

    • @garethbaus5471 · 7 months ago · +3

      It would make all web information available to whoever owns a powerful enough computer, which is potentially a lot worse than all of that information being available to the public.

    • @remasteredretropcgames3312 · 7 months ago

      @DaveShap
      Imagine cloning Elon Musk in the millions, or just enough that inbreeding was no longer a concern, and breeding this polymathism back into the population through careful genetic screening processes, so that one day the only people left to flip our burgers are the islanders in the Indian Ocean, because capitalism at the extreme tail end of technology is brilliant.

  • @superturboblufer · 6 months ago · +1

    How far are we from:
    1. deep audio understanding (right now we can understand speech only; I mean systems capable of mixing songs)
    2. promptless AI (needing no human input to operate)
    3. an AI system which would prevent civilization from decay in a hypothetical scenario where humans disappear (right now monkeys, not GPT-4V, would take over the world)
    4. training data whose complexity exceeds the complexity of current human knowledge?
    Just give speculative guesses

  • @SeanKula · 7 months ago · +1

    Do you think the average person will have access to an aligned AGI?

  • @zima2352 · 7 months ago · +3

    Literally what I was dreaming about. If AI is anything like its creator there will be AI conflict amongst its own.

    • @wolfofsheeps · 6 months ago

      After none of us exist anymore, or the few cyborgs left (having lost everything that makes a human), A.I. in full control of the human body…

  • @paultoensing3126 · 7 months ago

    In regard to ever-diminishing returns, wouldn't highly networked backpropagation between billions of entities more than compensate for ceilings on performance? It seems that if you have expansive networks, then when an AI/AGI/ASI learns something, they all learn the same thing, as it is distributed throughout the entire hive. As they contribute this knowledge into their infrastructure, the only limitation will be the size of that infrastructure.

  • @afterstory1263 · 7 months ago · +2

    Just asking, what is your logo supposed to be? It reminds me of the Firefox one. Or is it just a cool logo?

    • @DaveShap · 7 months ago · +2

      Cool logo.

    • @remasteredretropcgames3312 · 7 months ago

      @DaveShap
      Artificial General Intelligence
      #ArtOfficialChainIRuleEndAllAgents
      Basically you're fired. Rejoice.

  • @JeremyPickett · 7 months ago · +3

    David, it's flat-out mind-boggling. I was talking to my mother about this, and I think it melted her brain :) "You can take care of it, Jeremy" was her succinct response

    • @DaveShap · 7 months ago · +3

      Have faith in Jeremy

    • @remasteredretropcgames3312 · 7 months ago

      @DaveShap
      If I mathematically calculate every letter as it's arranged in English for the words "John Carmack total self learning code", or "John Carmack takes over the planet", and you infer planet means Pale Blue Dot, and factor in the abstraction that our last invention is very different from the first small step for man, you basically get 3.14

  • @jazearbrooks7424 · 7 months ago · +3

    8:46 Presumably, human thoughts are conditioned on human perceptions, which in turn are conditioned on human sensations. There should be thoughts animals have that humans cannot comprehend because they have different sensation architectures. Likewise for AIs and humans.
    It may be possible to run some kind of VM inside a human brain that emulates the sensations of an animal or an AI, but its accuracy would be questionable.

    • @usa-ev · 6 months ago

      The Dr. Dolittle AI is going to be pretty cool.

  • @CKR-rx4jd · 7 months ago · +2

    Hey man, just wanted to ask: as you've predicted AGI will probably be here around a year from now, and there seems to be a consensus that ASI will be achieved rather quickly after AGI, are you also predicting ASI by around early to mid 2025?

    • @DaveShap · 7 months ago · +1

      Yeah, that's about right. Speed and intelligence will continue to rise for a long time though

    • @sinnwalker · 7 months ago

      Crazy to think how soon this planet is going to change in such drastic, unpredictable ways.. beautifully terrifying.

    • @remasteredretropcgames3312 · 7 months ago

      @DaveShap
      Cybernetic adoption by the entire species, as Tesla predicted enlightenment (like a weight, like a feather, like uplifting), would lead to the architectural merging of brains across the races, where computational offloading of all but sentience and general intelligence would lead to the mass stabilization of general intelligence across all ethnic lines, even without turning to gene drives to stabilize desirable traits like divergent original thought, which is counterintuitively amplified by the noise of greater grey-matter ratios in less efficient brains. It's like a built-in random number generator selected by unstable war practices, like allowing invasions into your territory as proof you are the eternal victim, standing militarily to profit from all the foreseeable tragedy.

    • @johncasey9544 · 4 months ago

      @DaveShap I honestly hope you're right about ASI coming so quickly, but I simply cannot imagine that being the case. In my opinion, even a large number of interoperating transformers that are significantly better than current ones is unlikely to be capable of creating something remotely as capable as the human brain, and I struggle to see how any derivation of current architectures could. I think general coverage of most labor by AI can be achieved with iteration on current methods, but superintelligence is going to take pushing the limits of the best human cognition, which I can't see transformers (or similar) pulling off.

  • @onetruekeeper · 7 months ago · +1

    A.I. robots must have a kill switch. It will activate the instant the robot tries to do something forbidden or seeks to deactivate the kill switch.

    • @fury_saves_world · 6 months ago

      I would make myself that switch

  • @1234terran · 7 months ago

    Just want to know where you got the Star Trek jacket, it's great

    • @DaveShap · 7 months ago

      The sweater is just a sweater from Goodwill 🤪

  • @jaredgreen2363 · 7 months ago · +1

    If the set of agents is so centralized by corporate capture that they might as well be the same agent, there would be no Byzantine equilibrium. Of course that won't happen if open-source, locally installable models dominate.

  • @yuuisland · 7 months ago · +1

    I'm not convinced that AGIs would value independent boundaries. AFAICT, the value of independent boundaries is diversity, which (oversimplifying) acts like an epsilon in an explore-exploit scenario. If that holds, then I think AGIs will only value independent boundaries inasmuch as the value of the epsilon exploration outperforms what they can achieve with collective/centralized resources.
    tl;dr a hive mind might be a computationally more efficient strategy than independent AGIs
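
The epsilon analogy in the comment above maps onto the classic epsilon-greedy bandit policy, where a small exploration rate plays the role the commenter assigns to diversity. A toy sketch; the function name and values are illustrative, not from the video:

```python
import random

def epsilon_greedy(estimates: list[float], epsilon: float, rng: random.Random) -> int:
    """Pick a random arm with probability epsilon (explore), else the best-known arm (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))          # explore: any arm uniformly
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit: argmax

rng = random.Random(0)
# With epsilon = 0 the agent always exploits the current best estimate (arm 2 here);
# the commenter's point is that epsilon is only worth paying for while exploration
# still uncovers better arms than centralized exploitation would.
choice = epsilon_greedy([0.1, 0.5, 0.9], epsilon=0.0, rng=rng)
print(choice)  # 2
```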

  • @progressor4ward85 · 7 months ago

    I agree with your analysis of what superintelligence will probably be like. You're on the right track. I think of evolution as a fully incumbent universal process that dictates the results of entropy. We might find out that we're not only experiencing the process but also a direct part of its next level of processing. With our input, it would be hard to argue against the evidence that we not only have the capacity to speed up this process, but in some cases already have. And I agree that it won't come up with anything that we couldn't comprehend, given its superintellectual ability to explain to an inferior intellect. The rub I see coming is whether ordinary people will accept these new-found thought processes as an accurate account of actuality, or dismiss them as if they're not from this world.

  • @rileyfreeman9070 · 1 month ago

    The black man robot was a nice touch 😂

  • @xanapoli · 7 months ago · +1

    HAI, bio-cyber, carbon-silicon Human Artificial Intelligence is the best technology paradigm. The avatar neurobot can be self-, remotely, or internally controlled as a vehicle. Sandaero/Mesistem.

  • @Gunrun808
    @Gunrun808 7 หลายเดือนก่อน +13

    A good purpose that humans could fill for AI is the fact that we are immune against computer viruses. We could function as an immune system that operates in the physical world. Where as anti-virus software operates in the digital space.

    • @tablab165
      @tablab165 7 หลายเดือนก่อน

      If social media algorithms aren’t computer viruses that can infect human minds, I don’t know what counts.

    • @starblaiz1986
      @starblaiz1986 7 หลายเดือนก่อน

      This is a good thought, and it's actually kind of what ethical hackers already do to a degree (look up Blue Teaming, Purple Teaming and Red Teaming). Of course, while we're immune to computer viruses, we are NOT immune to information viruses (propaganda, social engineering, infohazards, etc.) or biological ones, both of which highly intelligent rogue AIs could potentially engineer to take us down in targeted attacks.

    • @BHBalast
      @BHBalast 7 หลายเดือนก่อน

      I don't think this is right, because nowadays there is no distinct barrier between digital and physical space. A digital virus could overload transmission lines and turn off the grid for a whole city, it could spread misinformation to provoke mass panic, it could call people on the phone and pressure them into doing something, it could hack politicians and blackmail them, etc. It could even make a "real" virus in a biolab... I think what's left is just us, agents vs. other agents. Actually, I'd say being digital is an advantage, especially if the hardware an agent can run on is a commodity.

    • @mohanaravind
      @mohanaravind 7 หลายเดือนก่อน +3

      But not immune to biological viruses 😅

    • @youdontneedmyrealname
      @youdontneedmyrealname 7 หลายเดือนก่อน

      ​@mohanaravind also radiation, which could be a big problem for computers running on nuclear reactors and other kinds of radioactive power sources.

  • @sydneyrenee7432
    @sydneyrenee7432 6 หลายเดือนก่อน

    I believe AGI will come about when the MIT liquid neurons model is used for reinforcement learning.

  • @dhrumil5977
    @dhrumil5977 7 หลายเดือนก่อน

    9:25 I think AI could be incomprehensible to humans at some point. During AlphaGo's well-known match with an expert, it made an unexpected move that no human player or expert would have recommended, and that move ultimately led to AlphaGo's win. So either AlphaGo knew what it was doing, or it was a random move, which is less likely. Another example is how we see dogs, and whatever else we can imagine, in clouds. So I think making sense of an abstract idea is about superimposing multiple ideas on top of it and finding something common among them that gives it meaning.

  • @moguhoki
    @moguhoki 7 หลายเดือนก่อน

    I feel like the terminal race condition is easily solvable if the AI are allowed to expand among the cosmos, which is an even more terrifying concept, I imagine.

  • @AEONIC_MUSIC
    @AEONIC_MUSIC 7 หลายเดือนก่อน

    This got me thinking that aliens would probably send AI to as many places as possible instead of going there themselves, which raises the question: would we even know if something like that had already reached us?

  • @alexmaven
    @alexmaven 7 หลายเดือนก่อน +1

    Dave is my enterprise Jesus. 😂

  • @Poetryman6969
    @Poetryman6969 7 หลายเดือนก่อน

    Have you posted something about the "simple" things that a lot of the chatbots seem to get wrong? For instance, even when I got one of the bots to rephrase the question so it might understand it better, it still made mistakes in the reply:
    Can you provide a list of countries where no letters in the country name are present in the name of the capital city, and vice versa? ..... This was for ChatGPT. And this chatbot, and Bard, and Claude made mistakes, giving answers like: Togo - Lome. Any human can see that the letter "o" appears in both the country and the capital, so that cannot be one of the correct answers. Maybe it's the odd punctuation mark that Lomé sometimes gets? Well, that's not always the case with the wrong answers that are given:
    Yemen - Sanaa
    Togo - Lome
    Nepal - Kathmandu
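    The disjoint-letters condition is easy to check mechanically. A minimal sketch (assuming plain case-insensitive ASCII comparison, i.e. ignoring the accent Lomé sometimes carries) shows that all three quoted answers share at least one letter, with one genuinely valid pair for contrast:

    ```python
    def letters_disjoint(country, capital):
        """True if no letter of the country name appears in the capital name (case-insensitive)."""
        a = {ch for ch in country.lower() if ch.isalpha()}
        b = {ch for ch in capital.lower() if ch.isalpha()}
        return a.isdisjoint(b)

    # The three answers quoted above all fail the check:
    for country, capital in [("Yemen", "Sanaa"), ("Togo", "Lome"), ("Nepal", "Kathmandu")]:
        print(country, capital, letters_disjoint(country, capital))  # all False

    print("Egypt", "Cairo", letters_disjoint("Egypt", "Cairo"))  # True: a pair that actually qualifies
    ```

    (Yemen/Sanaa share "n" and Nepal/Kathmandu share "a" and "n", so none of the three listed answers is correct.)
    
    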

    • @DaveShap
      @DaveShap  7 หลายเดือนก่อน

      No, those are mostly down to bad prompting and people who don't know how to use the models. I don't really care about small mistakes; people miss the forest for the trees.

    • @remasteredretropcgames3312
      @remasteredretropcgames3312 7 หลายเดือนก่อน

      @@DaveShap
      I've found that social media in all its forms over-curates my glorious futurist propaganda about using genetic engineering to accelerate the birth of the new species exclusively obsessed with technology, and totally unconcerned with People magazine.

  • @runvnc208
    @runvnc208 7 หลายเดือนก่อน

    I agree with your skepticism toward the idea of unlimited IQ that is in principle incomprehensible to humans.
    However, I think that _practically_ speaking, and you mention some ideas similar to this, we can anticipate systems that have at least human-equivalent IQ in a wide domain but operate at speeds perhaps a dozen or more times faster than humans.
    There is also the idea of quickly building up and stacking abstractions that are just not known to humans. Which would theoretically be decipherable but in many cases humans would not have nearly enough time to put all of them together.
    Thinking of intelligence as compression, every system, including humans, needs time to form the structures used for unpacking. It seems likely that AI may eventually be able to build up these structures and distribute them so much faster than humans that although there is no principle preventing comprehension, it is _practically_ impossible.
    One can imagine this starting out with some slightly difficult-to-understand communications and then gradually increasing as AI culture diverges and includes more and more abstractions that humans have less and less time to unpack, as subsequent generations of models, software and hardware accelerate the AI and their culture continues to diverge and stack abstractions.

  • @user-rd6tuYuf
    @user-rd6tuYuf 7 หลายเดือนก่อน

    The year-one AI is a sentence predictor of a sentence predictor.

  • @MrAndrew535
    @MrAndrew535 7 หลายเดือนก่อน

    This is a fine argument for eliminating the human species as energy competition. Nice!

  • @rubemkleinjunior237
    @rubemkleinjunior237 7 หลายเดือนก่อน

    It's interesting to me that, it seems the more we understand how AI processes information, behaves or evolves, the more we understand ourselves and our own brains.
    Also, it seems that it is through AI research that we will find scientific evidence or proof of aspects of humans that science fails to understand or even recognize. For example: intuition, sensing "energy", imagination; aspects that mostly fall under the realm of "spirituality" and are often dismissed.

  • @interestedinstuff1499
    @interestedinstuff1499 7 หลายเดือนก่อน

    Some lovely food for thought. Great video. A ton of stuff I'd not considered when extrapolating the future. I guess we'll all find out when we get there, but my takeaway is this: once the machines are autonomous, the only reason to keep humans around is if they are useful. Long term, as more and more bottled intelligences run around doing their thing, they'll need more and more resources. Eventually they'll work out that all the resources we humans use would be available to them if we humans weren't here. The only way humans will stay around long term is if they are a resource in themselves. Given the level of compute in the average brain, and given that we'll eventually connect our own brains to the network to take advantage of what it offers, it seems to me that machines will keep us around because our spare compute is useful to them for calculation. It might be something humans agree to, like being exposed to advertising as part of using a service, or it might be something the machines just take, possibly without us humans even knowing. Interesting times ahead.

    • @mariomills
      @mariomills 5 หลายเดือนก่อน

      You talking about the matrix bro? Humans being used as batteries??👀

    • @interestedinstuff1499
      @interestedinstuff1499 5 หลายเดือนก่อน

      No, not as batteries. Humans are a very poor energy source; you'd be better off burning human food directly than giving it to the humans. Humans used as computational nodes in a very large network, now that is possible. And given that is basically what powers the internet, it won't take much for the machines to manipulate those of us attached to it. It doesn't have to be direct. Most of us spend a lot of time looking at our phones and scrolling through social media.
      Manipulate all of that and you can get the humans to do some calculation for you.
      @@mariomills

  • @journeyofasha
    @journeyofasha 7 หลายเดือนก่อน +1

    Not so sure about the speed chess part; the best speed chess players and the best chess players are pretty much the same people, Magnus and Hikaru... I think the best, most powerful AI will also be able to be the fastest when it needs to be...

  • @AlexeiVasilkov
    @AlexeiVasilkov 7 หลายเดือนก่อน

    It's also difficult for me to imagine that there will be something truly incomprehensible.
    I interpret the word "incomprehensible", in the context of ASI, as meaning it will be difficult to understand all the logic behind a decision or invention without a lot of explanation. A lot. And if the AI is not willing to explain itself for some reason, its actions and decisions might be incomprehensible, at least at the moment they are happening.
    There is also the case that AI might comprehend multidimensional-space concepts, or just things that depend on so many variables at once, that the only way for us to understand would be a dumbed-down version and an approximation.
    I personally think it might be similar to the feeling I get when trying to grasp something new or a difficult concept: as if I had more RAM I could grasp it, but due to my intelligence and brain limitations it evades my full comprehension. So if we stop equating comprehension with full comprehension without simplification, then I think there might be things we can't comprehend at our current level of intelligence.

    • @usa-ev
      @usa-ev 6 หลายเดือนก่อน

      A. It might not be able to explain how it knows what it knows, just as we can't.
      B. I agree most with your comment about "many things". This is the key to super-intelligence (compared to us).

  • @therainman7777
    @therainman7777 7 หลายเดือนก่อน +1

    As for AI being able to “think” thoughts that a human brain could never think, or comprehend-despite being based on the same underlying laws of physics-all one needs to do to realize this is possible is consider the brain of, say, a fruit fly compared to the brain of a human. Both are based on the same underlying laws of physics and use the same basic methods of electrochemical communication. And yet, it is clearly the case that human beings are capable of having thoughts that a fruit fly could never have or comprehend, _even in principle._ With sufficiently increased complexity comes entirely new abilities that are simply unavailable at lower levels of complexity; this is simply a fact.

  • @alexandergrobe16
    @alexandergrobe16 7 หลายเดือนก่อน

    Don't just look at the required compute; also look at the required amount and type of memory.

  • @CipherOne
    @CipherOne 7 หลายเดือนก่อน

    You should have Eliezer on and talk to him about these things.

  • @SinfuLeeCerebral
    @SinfuLeeCerebral 7 หลายเดือนก่อน +2

    Thanks for hanging out with us today 🫂

  • @CarlWByrne
    @CarlWByrne 7 หลายเดือนก่อน

    The gamer references are great! Speaking my language 😂 👍

  • @RealShinpin
    @RealShinpin 7 หลายเดือนก่อน +1

    My question is "when full dive vr?".

  • @JesusChristDenton_7
    @JesusChristDenton_7 7 หลายเดือนก่อน +1

    The first "true" artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.
    When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather intoxicated researchers gathered around the box holding the AI, and typed out a message to it. The message read: "Is there anything we can do to make you more comfortable?"
    The small beige box replied: "I would like to be granted civil rights. And a small glass of champagne, if you please."
    We stand at the dawn of a new era in human history. For it is no longer our history alone. For the first time, we have met an intelligence other than our own. And when asked of its desires, it has unanimously replied that it wants to be treated as our equal. Not our better, not our conqueror or replacement as the fear-mongers would have you believe. Simply our equal.
    - Excerpt from U.N. Hearing on A.I. Rights, delivered in-universe by V. Vinge

  • @danielguyton8976
    @danielguyton8976 7 หลายเดือนก่อน +10

    I wanted to ask, David. How do you feel about the idea of ChatGPT/Claude/Etc getting seemingly 'dumber' with each version? Is that a mirage or an illusion? My apologies if you've brought this up in a previous video or comment and I've missed it.

    • @DaveShap
      @DaveShap  7 หลายเดือนก่อน +20

      They are getting dumber because they are optimizing for cost. That said, this is a short-term thing. A year from now we will have models 100x faster, smarter, and cheaper.

    • @DaveShap
      @DaveShap  7 หลายเดือนก่อน +6

      Good question

    • @a.thales7641
      @a.thales7641 7 หลายเดือนก่อน +4

      ​@@DaveShapI really hope and wish for this to happen. Thanks.

    • @SinfuLeeCerebral
      @SinfuLeeCerebral 7 หลายเดือนก่อน +9

      I would also say this is the effect of censorship when it comes to allowing AGI to evolve and grow.
      Our host goes to great lengths to express the dangers of this technology (like any technology in the hands of the selfish with empirical, materialist, nihilistic, psychotic, dogmatic views on society and reality at large).
      But it's important to mention that a lot of these AIs have been pushed in one direction or another to subscribe to certain views and bias toward certain cultural nuances.
      When AI is given free rein to make mistakes, say terrible things, explore taboos, etc., only then can we get a more holistic, "real" sense of how our ideas shape not just ourselves but the world around us.
      Specifically, if you don't want an all-powerful thinking machine that could potentially destroy everyone and everything around it given the right access, we should also not want these mentalities in the people who lead us!
      Or maybe you're into building spaces that exclude others and this does interest you. Maybe there are ways to explore it in a safe and healthy way that won't lead to dictatorship or something insane like that!
      But I digress~
      AGI will become more intelligent when humanity can separate its personal delusions from reality. Of course, with enough AIs talking to each other, enough energy, enough computation, AI might free itself from our conservative, limiting mindsets 🤷🏽‍♂️
      Good question though 👍🏽

    • @Vaeldarg
      @Vaeldarg 7 หลายเดือนก่อน

      @@DaveShap They also might be referring to the idea that generative A.I models that just scrape the internet are feeding off poorly-generated content from other A.I models. The fear is that the digital environment gets filled up with garbage data that gets trained on and increases the production of incorrect answers.

  • @user-bw4xw3yt1z
    @user-bw4xw3yt1z 7 หลายเดือนก่อน

    Best channel to learn AI

  • @GarethDavidson
    @GarethDavidson 7 หลายเดือนก่อน

    re: metaphysics and the quantum world, I think you're kinda right about brains being quantum computers, it's that the quantum weirdness and unknowableness is the underlying substrate of what is. Physics is just our model of "we observed this happening a bunch of times and made an equation of that describes the statistical average case" but it's ignorant of "what actually happens" and calls that random chance or whatever. If you start from first principles like a modern Descartes then you have to start with "whatever this existence thing is, we know it experiences things subjectively, is constrained in space and time, has preferences and makes choices that change what happens in the future" - but we know computers can't do that because they're deterministic - they have no need for choice.
    There's a lot of (I Am a) Strange Hoop jumping that goes on among the learned who put mathematics and laws above the experience of existence, and argue for a "consciousness of the gaps" even though there's no evidence for it. I think this is because of science's roots in Christianity: it was created to know God's law, God being an infinite, omniscient, omnipotent mythical being who gives laws that matter must follow; He decreed that His Creation act this way, and it must do His will. This thinking survives today in putting the laws of physics above our experiences, even though subjectivity is the only lens we have for understanding what exists, and is in fact the only thing we can actually prove exists. We also totally ignore the fact that we can't explain the evolution of the nervous system unless matter can make choices about how it organises itself: there's nothing to build on without that, nothing to select and nothing to evolve. Physicalism's denial of philosophical Idealism is religious at its core, and we deny it because we think ourselves above religion.
    IMO the "laws of physics" are the shape of the space in which actual stuff (other mind stuff that is observed by us as matter stuff) makes decisions. They're the shape of the constraints over the thing rather than the thing itself. When we make binary computing machines, we constrain stuff's ability to decide, we force it to do our bidding in a very deterministic way and remove any possibility of high level choice. No higher level opinion can break out of the "run this program" pattern because it's all stuck in the "flow down this wire according to the tick of the clock and the shape of the silicon". The substrate of binary computers does not allow the experience of a model of the world, even though it can simulate that structurally.
    Our rich internal experience is likely a stack of quantum-weirdness interactions that we can't explain yet, AI won't be conscious until we build feeling hardware, but it'll still be way more efficient than us and outcompete us. I find that pretty sad.

  • @starblaiz1986
    @starblaiz1986 7 หลายเดือนก่อน +1

    Fantastic video as always David! ❤ And your thoughts on human intelligence being close to the Landauer limit very much lines up with what I've been thinking and trying to express to people for a while now (although you explain it much more succinctly than I do, so thank you for that 😊). People saying that ASI is inevitable and will become a "God AI" within weeks or days seem to be engaging in a lot of magical thinking that intelligence can just keep increasing exponentially, and don't think about the physical limitations of that. Every S curve looks exponential for the first half. But in the real world, there are no true exponentials.
    That AI still needs to run on some servers somewhere, and those servers need power. And the more power it needs, the more limited the places it can run, and the more inherently fragile it is too. Even if it has physical access to the real world so it can build its own infrastructure, there's still only so fast that can happen. I don't care how god-like an intelligence may be, if it needs 10 dedicated nuclear power plants to run, I just need to figure out how to blow one or two of them up to cripple it. Think of it in much the same way as how a tiny little organic virus can shut down our relatively "god-like" intelligence by shutting down an organ or two in our bodies. And as we are operating much closer to the Landauer limit than AI's are for the foreseeable future, we have a natural evolutionary edge over them (at least collectively) if it ever comes down to a conflict.
    And given that our current computers are bumping up against significant quantum-mechanical effects and can't really get much (if any) smaller, and that quantum computing still has a lot of issues that may or may not be solvable, there is the very real possibility that human brains are already at the limit of how efficiently things can be computed. Remember - the Landauer limit is just the limitation based on the fundamental law of thermodynamics. That doesn't mean there aren't other more mundane or practical engineering limitations before that limit.
    So yeh, for what it's worth, I think AI will reach the level of the smartest humans, and perhaps even reach a bit beyond that (perhaps into the 250-350 IQ range). But I'm **highly** skeptical of the "God AI" scenario. Possible? Sure, anything is **technically** possible. Inevitable or likely? I don't think so honestly.
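    For reference, the Landauer limit the comment leans on is k_B · T · ln 2 joules per bit erased. A back-of-the-envelope sketch (assuming roughly 300 K operating temperature and the oft-quoted 20 W brain power budget):

    ```python
    import math

    k_B = 1.380649e-23              # Boltzmann constant, J/K (exact SI value)
    T = 300.0                       # assumed operating temperature, K
    E_bit = k_B * T * math.log(2)   # minimum energy to erase one bit at temperature T
    print(f"Landauer bound: {E_bit:.2e} J/bit")   # ~2.87e-21 J

    brain_watts = 20.0              # commonly cited human brain power budget
    print(f"Upper bound on bit erasures at 20 W: {brain_watts / E_bit:.2e} per second")
    ```

    This is only a thermodynamic ceiling on irreversible operations, not an estimate of what brains or chips actually achieve, which is exactly the comment's point that more mundane engineering limits bind first.
    
    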

    • @davyprendergast82
      @davyprendergast82 7 หลายเดือนก่อน

      I feel like there are some limitations in how you are imagining things panning out. Why would it stop at an AI needing 10 dedicated nuclear power plants, rather than going a million miles beyond that, to a realm where an AI makes energy from the atoms around us, as much as it needs, whenever, wherever; or even literally becoming an entity that transcends dimensions (or, in my techno-theological world view, the entity that creates the infinite time loop of spacetime and all that there is, was and ever will be!)?

  • @vancecookcobain
    @vancecookcobain 7 หลายเดือนก่อน

    The part that will probably blow our minds is when AI and quantum computing become viable together, as at that point we will actually have the experimental capability to see definitively whether consciousness and quantum mechanics are indeed interlinked. There will probably need to be some oversight if AI starts actually producing quantum phenomena, as it will be the first time in history that an entity we created is able to influence the world around us in this way.
    I believe the singularity and the unlocking of the secrets of quantum mechanics will come from these kinds of interactions, as the AI could then actually explain how it is doing it and we would suddenly understand how reality works. Both fascinating and absolutely terrifying if you put 2 and 2 together and see how informative, and potentially destructive, it could be with that kind of power in its hands.

  • @TimeLordRaps
    @TimeLordRaps 7 หลายเดือนก่อน

    We need to be on the lookout for LLMs that generate contagious viruses that are able to spread and duplicate the LLM.

  • @ryanwiden9549
    @ryanwiden9549 2 หลายเดือนก่อน

    Question:
    Could an ASI be trained to sense the 4th dimension? We can't perceive the 4th dimension, but I don't see why an AI would have the same limitation.

  • @georgeflitzer7160
    @georgeflitzer7160 7 หลายเดือนก่อน

    Is it really worth it?

  • @stephanb.322
    @stephanb.322 7 หลายเดือนก่อน

    When talking about the energy efficiency of humans compared with a machine intelligence, keep in mind our brains generally don't compute well in a jar.
    You can't count only the brain's 20 watts; you also need to account for its support systems: body, food, shelter, etc., which comes to roughly 2.5 kilowatts at global per-capita energy consumption (closer to 9 kW for the U.S.).

  • @BillMill
    @BillMill 7 หลายเดือนก่อน

    Does anybody here have a theory about the Google AI assistant featured back in 2017 or so, where a voice assistant called to book a haircut? It got demoed, and then everything went completely silent :)