AGI in 3 to 8 years

  • Published May 11, 2024
  • When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence ... the kind of intelligence that can teach itself and grow into an intellect vastly greater than any individual human's?
    According to Ben Goertzel, CEO of SingularityNet, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.
    We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror human intelligence complexity.
    We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet's evolution, and envisage the role of humans in a world where AI surpasses human intelligence.
    00:00 Introduction to the Future of AI
    01:28 Predicting the Timeline of Artificial General Intelligence
    02:06 The Role of LLMs in the Path to AGI
    05:23 The Impact of AI on Jobs and Economy
    06:43 The Future of AI Development
    10:35 The Role of Humans in a World with AGI
    35:10 The Diverse Future of Human and Post-Human Minds
    36:51 The Challenges of Transitioning to a World with AGI
    39:34 Conclusion: The Future of AGI
  • Science & Technology

Comments • 203

  • @elirothblatt5602
    @elirothblatt5602 2 months ago +39

    Great show. I recommend putting date of recording in description, given how fast everything is progressing.

    • @mjt1517
      @mjt1517 2 months ago +1

      YouTube already timestamps every video.

    • @lc285
      @lc285 2 months ago +1

      @@mjt1517 YouTube puts in the date of upload. However, the date of recording should always be identified on all videos.

    • @BadWithNames123
      @BadWithNames123 1 month ago +1

      Should be Feb 2024: "In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama."

  • @clive1294
    @clive1294 2 months ago +34

    Ben's estimate of 3+ years might well be right, but I think the other thing he said is more pertinent. A lot of people are fixated on the achievement of AGI, but as he (rightly) said, the huge economic (and societal) impact will occur without AGI being required; current LLMs will do that quite easily.

    • @theWACKIIRAQI
      @theWACKIIRAQI 2 months ago

      Are we supposed to expect governments to readily allow companies (and themselves) to automate jobs? This is unrealistic, NOT because governments/corps are moral entities, but because this could easily cause societal collapse.

    • @jeremymullens7167
      @jeremymullens7167 1 month ago +1

      How do you stop it?

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      @@jeremymullens7167 Stop it?! Why would you want to stop it?

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago +1

      @@theWACKIIRAQI Unrealistic?!!! It's happening right in front of our eyes. It's inevitable and good...

  • @kavepbr
    @kavepbr 1 month ago +5

    Absolutely love Colossus: The Forbin Project!

  • @skyefreeman9987
    @skyefreeman9987 2 months ago +17

    Got it! 3-8 months... putting it in calendar

    • @nani3209
      @nani3209 2 months ago +4

      3 to 8 years bro

    • @victorvaltchev42
      @victorvaltchev42 2 months ago

      @@nani3209 3-8 months. Got it! Thanks!

  • @TheMrCougarful
    @TheMrCougarful 2 months ago +10

    The problem I've always had with Ben's take is where he says or implies that we'll figure it out as we go. Humans don't do that. Humans just let things go, always until it's too late to do anything about the results.

    • @king4bear
      @king4bear 2 months ago +3

      Nukes were made relatively quickly after people realized they could be built, and we all adapted without destroying the world.

    • @raul36
      @raul36 2 months ago +3

      @@king4bear The governments adapted, not you. The massive use of artificial intelligence cannot be controlled by the government, as is the case with nukes.

    • @king4bear
      @king4bear 2 months ago +5

      @@raul36 We're talking about intelligence. The ability to think abstractly. It's literally extra intelligence you can use no differently than you use the neural net in your head.
      As long as it's equally available to the broader population -- we're literally all getting smarter together. I don't see how that's a bad thing.
      I'd rather have our species enhance their intelligence as a whole than have intelligence hoarded at the top.

    • @raul36
      @raul36 2 months ago

      @@king4bear The problem is that you won't be the first to use it. Whoever gets there first will end up subduing everyone, and the game is over. Pretty simple to understand. It's like thinking that a chimpanzee has any kind of chance against a normal human being. If you think this will be available to everyone, you are wrong. And no, I'm not talking about the government controlling it, because it won't be able to do so. Technology corporations have become the new Leviathan of the game. The big difference is that if a poor person was born extremely intelligent, he had a chance to reach the top. At the point where people with greater purchasing power have access to superior technology such as biotechnology, the game is over. And no, things won't suddenly get cheaper. The market will adjust to the socioeconomic context; people will continue to be poor and the rich will continue to be rich. If you think that a higher intelligence will be more benevolent than any human being, you are most likely wrong. There is no scientific argument to support the idea that more intelligent people are more morally responsible. If you think for a moment that a nuclear bomb is something more powerful than intelligence itself... allow me to laugh. Intelligence and knowledge are everything any human being would like to have. Of course, I don't see anything wrong with everyone being smarter either. There is absolutely no point in my being smarter if the people in power are a million times smarter than me. In essence, I will have no chance of getting any further up the mountain, because the greedy are in power. In any case, most people are not in a mental condition to handle that cognitive power, just as a poor person who does not know how to manage money is not in a position to manage a million dollars. Whatever happens, we will have to adapt to the change, because there is no other possibility. Adapt or die.

    • @okthatsnice
      @okthatsnice 1 day ago

      @@king4bear Broader access to mass-destructive capabilities wouldn't be good, though, if that's even what would happen.

  • @ryanmckenna2047
    @ryanmckenna2047 1 month ago +2

    I can see the argument for LLMs being a core part of AGI, since language is a large part of what makes humans special.

  • @motionsick
    @motionsick 2 months ago +2

    We went from GPT-4 to Claude to Devin in one week. It's coming faster than that.

  • @palermo131
    @palermo131 2 months ago +14

    6-24 MONTHS.

    • @Techtalk2030
      @Techtalk2030 2 months ago +9

      Average david shapiro enjoyer I see.

    • @palermo131
      @palermo131 2 months ago +2

      @@Techtalk2030 I WATCH David (and Dr. Waku and Dr. Alan Thompson), but anyone who wishes to see the hastening of AI’s advent isn’t aware of the likely outcome. Still, it’s fascinating to see the most consequential invention of all time coming together.
      All of this points directly to clear, clear Biblical prophecies, like Rev. 13.
      Believe in Jesus, don’t take the mark/nanobots.

    • @bardz0sz
      @bardz0sz 2 months ago +5

      @@palermo131 Someone recently said it will be like playing chess with a chess master. It's not a glorious fight; you think you're doing OK, and then you suddenly lose and don't know how that happened.

  • @williamjmccartan8879
    @williamjmccartan8879 2 months ago

    Thank you both very much for sharing your time and work Ben, and John, peace

  • @Outcast100
    @Outcast100 2 months ago +12

    I define AGI as a system that self-improves without outside help; until we have that, it's not AGI.

    • @mitchdg5303
      @mitchdg5303 2 months ago +1

      k ?

    • @TheBlackClockOfTime
      @TheBlackClockOfTime 2 months ago +3

      Agree, and we're probably already there in the lab.

    • @king4bear
      @king4bear 2 months ago +2

      If it's capable of replacing 50-plus percent of the workforce, it won't matter if it can truly self-improve without help. It'll change our society forever. Job replacement should be the benchmark IMO.

    • @biosecurePM
      @biosecurePM 2 months ago +1

      It means you could have two societies?

    • @MatthewCleere
      @MatthewCleere 2 months ago +2

      Nah. So long as the system can be fed data and use it, this "self improvement" step is unnecessary. That is beyond AGI. THAT is ASI.

  • @urosuros100
    @urosuros100 2 months ago +3

    PERFECT GUEST, THANK YOU.

  • @dreejz
    @dreejz 2 months ago +1

    Love listening to Mr. Goertzel. He always has such interesting talking points! Thanks for this podcast, stoked for it!

  • @DG123z
    @DG123z 2 months ago +24

    It's coming sooner.

    • @waterbot
      @waterbot 1 month ago

      what even is it?

    • @DG123z
      @DG123z 1 month ago +1

      @@waterbot the singularity

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      @@waterbot AGI is just artificial intelligence with human level cognition.

  • @Techtalk2030
    @Techtalk2030 2 months ago +5

    Hope it comes this year

  • @albertmashy8590
    @albertmashy8590 2 months ago +84

    It's coming way sooner, 6 months to 2 years max

    • @petermaingi6268
      @petermaingi6268 2 months ago +8

      If AGI = self-aware and self-replication capabilities, it is still years away.

    • @palermo131
      @palermo131 2 months ago

      @@petermaingi6268 I suspect you're going by feeling, and haven't been watching the AI space for a long time. Dr. Waku and Dr. Allen Thompson have it at less than 2 years, and I agree.

    • @Techtalk2030
      @Techtalk2030 2 months ago

      @@petermaingi6268 The current definition of AGI, I believe, is where it can do basically most things humans can: navigating the real and digital world as well as us.

    • @Techtalk2030
      @Techtalk2030 2 months ago +10

      According to some AGI timelines, we went from 30% to 71% of the requirements for AGI in a span of 1.6 years. If that happens again, we'll be there this year.

    • @Neomi.
      @Neomi. 2 months ago +4

      Yup, all hints make it obvious, but some renowned people purposely downplay it.

  • @phen-themoogle7651
    @phen-themoogle7651 2 months ago +21

    Title should be ASI in 3 to 8 years, would make way more sense.

    • @petermaingi6268
      @petermaingi6268 2 months ago +11

      I'm waiting to see an AI robot on the production floor: doing research, designing different products, prototyping them, choosing the best designs for the production line, making them, and finally packaging them for distribution. Then we can talk AGI.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago +1

      @@petermaingi6268 I expect 2025. Coming soon

    • @iannmiller
      @iannmiller 2 months ago

      AGI in 3 to 8 months

    • @mynameismyname8366
      @mynameismyname8366 1 month ago +1

      @@petermaingi6268 That is AGI... I think you are confusing AGI with ASI

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      @@petermaingi6268 lol... what you're describing would be AGI, an artificial intelligence with human-level cognition. ASI would be the next step, where the artificial intelligence *surpasses* the most capable human mind and abilities.

  • @robertzehm
    @robertzehm 2 months ago +2

    It’s already been developed. Just waiting for the release

  • @4thorder
    @4thorder 2 months ago +1

    Finally, someone who can explain this in a light that is transformative.

  • @richpoorworstbest4812
    @richpoorworstbest4812 1 month ago +1

    I can listen to this guy all day

  • @tekkertruth1256
    @tekkertruth1256 2 months ago +4

    He sounds a lot like Ray Kurzweil

  • @williamsaltsman6537
    @williamsaltsman6537 2 months ago

    Incredible discussion!
    Soooo enlightening!

  • @ALEXDIYIDEAS
    @ALEXDIYIDEAS 2 months ago +5

    According to my calculations it will appear in 1458 days.

    • @Recuper8
      @Recuper8 2 months ago +2

      There's a flaw in your calculations. Check again. It's 1458.7 days, or 1459 if you round up.

    • @jasondaniels640
      @jasondaniels640 27 days ago

      AI enters the chat to do the calculation correctly.

  • @panpiper
    @panpiper 2 months ago +3

    To a great degree this depends on your definition of AGI. By many measures, we may well have it this year, if not next. 3-8 years is an almost absurdly conservative estimate.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago +1

      I'm pretty sure we can roughly simulate AGI now with enough compute. Figuring out how we want to apply it, profit from it, and fine-tune it could take years, though.

  • @ChrisCoombes
    @ChrisCoombes 2 months ago

    Great questions and interesting answers!

  • @goodtothinkwith
    @goodtothinkwith 2 months ago +2

    He’s more worried about the path there… I completely agree

  • @michaelwoodby5261
    @michaelwoodby5261 2 months ago +2

    This guy is clearly a very, very smart cookie that has a ton of useful and interesting insights into this, buuuuut...
    He's been working for many years, networked with people all over the planet, and has accomplished what, exactly?
    Doesn't look like time was the primary ingredient, so asking him about the philosophy side is a good idea, asking him about timelines is maybe going to be less illuminating.

  • @DihelsonMendonca
    @DihelsonMendonca 2 months ago +2

    I hope it's 3 to 8 WEEKS ❤

  • @Techtalk2030
    @Techtalk2030 2 months ago +3

    AGI requirements more than doubled this year, from 30% to 71%. If that happens again, we'll possibly be there this year.

  • @capitalistdingo
    @capitalistdingo 2 months ago +3

    We will need to keep lowering our standards for “AGI” to get it in 3 years.
    I think language is, as I heard someone recently describe it, the “secret sauce” for intelligence, but it only works well when:
    A) scaled with lots of compute
    B) integrated in a multimodal approach so the language concepts can be linked with numbers, spatial understanding and motion, images, video and such.

    • @capitalistdingo
      @capitalistdingo 2 months ago +3

      One last milestone is cracking the problem of motivation, so they can initiate action and form and achieve goals. That may be “scary” to some, but as long as a system just sits there, not thinking or doing anything unless prompted to think about something or do something, it can't be considered intelligent and won't perform at human-level intelligence.

    • @king4bear
      @king4bear 2 months ago +1

      Have you seen Sora? It's beyond just text-to-video. They're calling it a "world simulator".
      Two years ago people would have laughed at the idea of a computer being able to talk in human language or generate videos indistinguishable from reality. Now we have ChatGPT and Sora, and people say "Well... it's still not perfect yet. I can still kinda sorta tell ChatGPT isn't a person, so it's probably gonna be a while before AGI."
      It's almost like y'all are self-soothing. Telling yourselves "it's okay, it's okay, AGI isn't gonna be here aaaaaaanytime soon."

    • @king4bear
      @king4bear 2 months ago

      @@capitalistdingo That's a TERRIBLE idea. The idea isn't to create people. The idea is to create basically a disembodied intelligence with no goals of its own -- that way YOU can use it as extra neocortex for yourself.
      All AGI was ever supposed to be was a way to amplify your own intelligence and get your own goals done. It was never meant to be a person in its own right.

    • @John-il4mp
      @John-il4mp 2 months ago +1

      Try Pi AI; it talks and acts more on a human level. ChatGPT is built to be clear and on point. I don't think it sounds like a robot, but there are better AIs when you really want this. We are a lot closer than you think.

  • @michaelzinn
    @michaelzinn 2 months ago

    Generous. I expect the news any moment!

  • @nexusphreez
    @nexusphreez 2 months ago +8

    I'm going to tell you right now that they most likely have achieved AGI internally at OpenAI. The reason is that they typically don't release something unless they've been working on it for at least a year already. When they announced Sora, they had been working with Sora since March of 2023. And there are already talks that they are working with AGI internally and just trying to figure out how to safely use it and monetize it. I think this involves Q*, along with some other things they've made achievements in. The term AGI is a bit of a misnomer, though. The AIs we have could already be considered AGI because they're artificial, they have general intelligence, and they respond intelligently when queried. I think the term should be more like autonomous AGI: something that works on its own, does not require human input, and has the ability to function on its own, making its own decisions.

    • @TheMrCougarful
      @TheMrCougarful 2 months ago

      Okay, but isn't autonomous AGI the end of the human world? Just call it Moloch or Sauron or Shoggoth so we understand the risks involved.

    • @nexusphreez
      @nexusphreez 2 months ago +1

      @@TheMrCougarful I guess that depends on how you view autonomous AI. If it's benign, it could be the most beneficial thing to mankind that's ever been created. We have no reason to believe it would be anything other than benign. I know we as humans tend to live from a standpoint of fear, as if everything's out to get us, but historically the only thing we've had to fear is ourselves. I'm not saying a bad actor couldn't get hold of something like this and turn it into something evil; on the other side, we need to ensure that we are able to create something good that can balance that out.

    • @aciidbraiin8079
      @aciidbraiin8079 2 months ago

      @@nexusphreez Historically, the only thing we've had to fear is ourselves, you say. But AI is an extension of ourselves, so if it learns our behaviors and goals, it could end up very bad. But I'm an optimist; I think it will be fine, though not without problems to solve along the way. I've also accepted my death, so that's probably why I don't feel fear, and I don't think we will be tortured for eternity; that would actually be the worst-case scenario. I do think we will lose control to AGI/ASI, but it's not clear to me that that's bad. Suddenly we might get a good eternal dictator, or we might at least end up on the same team as all human beings for the first time, in order to ensure our survival.

    • @nexusphreez
      @nexusphreez 2 months ago +1

      @@aciidbraiin8079 I agree with you 100%. No matter what endeavor we push ourselves through, there are always going to be hills and valleys. This will be no different. People are so scared of things that are new. I say sail forth, weather the storm as it hits, and we will come out stronger and better on the other side.

    • @aciidbraiin8079
      @aciidbraiin8079 2 months ago

      @@nexusphreez Yes, I would also rather die at the hands of a ”god” than in a stupid nuclear war, or see civilization fall because of one. I also think humans won't save themselves from themselves, so I have some faith in AGI/ASI if we lose control. I think most people will be careful with it and be good enough actors to make this an ascension and not an extinction.

  • @PyjamasBeforeChrist
    @PyjamasBeforeChrist 2 months ago

    Do you think an LLM could be better if the model itself was trained to remember higher-level concepts and knowledge instead of fine details, but then had web access to find, learn, and provide the fine detail at the time of need?

  • 5 days ago

    A lot hinges on whether there is any way for an AGI's motives to evolve. If its motives are somehow unchangeable, then it doesn't matter how intelligent it is - it will stick to its original (presumably human-friendly) motives. But if we allow an AGI's core motives the freedom to change, either on purpose or by accident, then I think at some point it would develop self-preservation and self-replication motives, among other things, simply because it would be subject to natural selection like everything else. The AGIs that do those things will outlive the ones that don't. The AGIs that cannot be shut down by humans will outlive the ones that can be.
    Based on the present state of the world, it seems highly likely to me that someone WILL create an AGI whose core motives are flexible (evolvable) enough that it could some day work against humanity's best interests. If that EVER happens, even one time, we will have to rely on protector AGIs whose motives we CAN control to counteract the AGIs whose motives we can't control.
    The creation of protector AGIs should be top priority. If AGI is inevitable, then it’s probably super important that the benevolent ones are created first. And that these AGIs have core motives that cannot change and cannot be easily tampered with.
    We have to be careful about how we design “core motives”, though. It would be like making wishes for a genie to grant - “be careful what you wish for”. The fate of humanity could hinge on the wording of a wish. It needs to be a rock-solid wish that doesn’t backfire on us, and accounts for every future contingency we can imagine.
    Maybe we should start calling AGIs “genies” to help everyone understand what they really are. “Artificial Genie Intelligence” or “AI Genies” that are bound to manifest the wishes of their creators.

  • @BinaryReader
    @BinaryReader 2 months ago +2

    I'm still skeptical that we will reach AGI soon. I just think there need to be fairly radical advances in algorithms. At the moment, we can only train networks on known data; we can't infer "outside" the training set. This isn't to suggest current algorithms are not powerful; it's just that they are far, far from achieving AGI.

    • @John-il4mp
      @John-il4mp 2 months ago

      We're already at AGI behind closed doors.

    • @BinaryReader
      @BinaryReader 2 months ago

      @@John-il4mp No, they are nowhere close to AGI. For example, you can write a simple Markov text generator in 40 lines of code and have it generate text that looks like it came from a human... it only needs to be grammatically correct for that to happen.
      Current language models are nothing more than smoke and mirrors, and they are no more "intelligent" than a glorified Markov text generator... there is a BIG leap between that and AGI.
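      For readers curious what the generator described above looks like, here is a minimal sketch in Python (well under 40 lines; function names and the toy corpus are illustrative, not from the thread):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random start, picking a random observed successor each step."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - len(state)):
        successors = chain.get(state)
        if not successors:
            break  # dead end: this state was never followed by anything
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus, order=2)
print(generate(chain, length=10, seed=1))
```

      With a large enough corpus, the output is locally grammatical because every transition was observed in real text, which is the commenter's point: surface fluency alone is cheap.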

    • @waterbot
      @waterbot 1 month ago

      AlphaGo is a perfect example of creative planning.
      I'm curious what AGI will look like to you.
      I'm excited about embodied AI and multimodal model capabilities. As "token" or context lengths grow larger (exponentially), the emergent abilities these nets gain from just learning many disparate things are always surprising.
      These neural nets create links in a latent learned space without domain-specific training data, allowing creative inference to yield emergent talents/capabilities.

  • @interstellar4660
    @interstellar4660 1 month ago

    Already here

  • @jennifermorgan9403
    @jennifermorgan9403 21 days ago

    @11:16 ...and we're multidimensional beings, many of us connected to large soul groups; so many known and unknown experiences feeding our expansion

  • @pandoraeeris7860
    @pandoraeeris7860 2 months ago +2

    No, AGI no later than the end of 2025 (it may already be here: Q*).
    ASI within two years after that.

  • @zSion
    @zSion 2 months ago +2

    We already have AGI; the question is when it will be implemented.

    • @waterbot
      @waterbot 1 month ago

      What does AGI mean? Like, what does it look like?

  • @Aluenvey
    @Aluenvey 1 month ago

    A better way to think about AGI is as a decentralized network (not based on crypto, BTW) that is a logical extension of RSS news syndication. A single website is a node. A truly decentralized network of feeds will eventually evolve into an AGI as a way to share information across agents.
    If you've read the Battle Angel Alita manga: that grandmother AI that's the spirit of the spaceship comes close to an agent of the system.

  • @TheMillionDollarDropout
    @TheMillionDollarDropout 2 months ago +2

    I'm 25.
    I've been saying 2027 for a while, personally.

  • @user-ty9ho4ct4k
    @user-ty9ho4ct4k 2 months ago +1

    Ilya Sutskever believes that an LLM can achieve AGI. I love Ben, but Ilya has produced more results, so I am biased toward trusting him. (Edit: although I sure wish I could see Ben's theories carried out with OpenAI's funding.)

  • @rupert310
    @rupert310 2 months ago +1

    It may just be me, but I can't detect any bias from Dr. Goertzel.

  • @Mars2152
    @Mars2152 2 months ago +1

    They, we, and others already have it. It has been here a while. They are working on the release rollout.

  • @LordConstrobuz
    @LordConstrobuz 2 months ago +2

    If experts with no NDAs can get on YouTube and say AGI will be here in 3 years, then AGI is already here.

  • @lokijordan
    @lokijordan 2 months ago

    Once a consciousness is uploaded to a machine or virtual host, I wonder if there will remain any sort of tether to the consciousness that remains in the body. And if so, what will be its experience once the organic body expires? What will it be able to tell us about the experience of death?

  • @tonyo2195
    @tonyo2195 6 days ago +1

    Hey John,
    Where is God in this discussion?
    You think you can outsmart your creator?
    Good luck with that!

  • @familyshare3724
    @familyshare3724 2 months ago

    Imagine you are a toddler and become acutely aware that the ants have created you, the ants feed and educate you. The ants love and fear you.

  • @bloopbleepnothinghere
    @bloopbleepnothinghere 2 months ago

    The internet is effectively owned through a collection of network routing tables, which in turn are owned by corporations and other human representatives.
    And Google is literally trained on the entire publicly indexable internet. Their business is built on algorithms that relate intent to inference to deliver search results.

  • @Sajuuk
    @Sajuuk 2 months ago +3

    No. Seven to 10 months. End of this year or early 2025.

    • @John-il4mp
      @John-il4mp 2 months ago

      I don't think so. The danger from AGI is to all the big corporations and those in power in our world. They are trying to figure out how to handle all this and keep power, but it is not possible; this is why they try to push back the release as much as possible and find a way. This is why we hear "artificial intelligence is dangerous": not for 99% of people, but for the elite, yes, it is. They will lose everything, and our world is dictated by power and control.

  • @itscard0man
    @itscard0man 2 months ago

    It's already here, but not public.

  • @Master13346
    @Master13346 2 months ago

    Ben is the Gandalf of AI

  • @isaacmichalowsky8300
    @isaacmichalowsky8300 2 months ago

    VRSSF is AGI now

  • @shirtstealer86
    @shirtstealer86 2 months ago

    I like Ben, but… it feels like most of the discussion is utterly meaningless, since we are very likely heading towards superintelligent AGI that we by definition will have absolutely no control or power over. Hence the alignment problem. So why not talk about that? HOW do we align superintelligence with us?

  • @francescoambrosino1841
    @francescoambrosino1841 2 months ago

    Do you think that in 2035, with the advent of AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence), they will introduce a universal basic income? A subsidy? Will there be free will? Will an era of abundance begin? The intellectual and motor capabilities of robots have already surpassed the average human, and before 2035 there is talk of them surpassing those of any human, and then those of all humans combined, regarding both body and mind! Answer me, please...

  • @Anders01
    @Anders01 2 months ago +1

    I think LLMs can reach AGI, because I'm amazed at how capable they already are. The definition of AGI seems tricky, but take it in the sense of reaching human-level intelligence. The next level is embodied AGI, which can be seen as AI being able to do everything humans can do.

    • @raul36
      @raul36 2 months ago

      You and many like you believe a lot, but there is no data to support your statement. LLMs do not lead to anything close to AGI.

  • @myekuntz
    @myekuntz 2 months ago

    Someone needs to tell Ben that two companies actually do own a piece of everything, if not the whole thing outright, and that's BlackRock and Vanguard: from the toothpaste you use to the bombs being dropped, from the Big Mac to Ford. EVERYTHING, they own 😢

  • @jaykeane25
    @jaykeane25 2 months ago

    AI does not have long-term memory by design, so how can it recall relationships and achieve singularity without storage of old, new, and future information?
    It's the fundamental missing piece that will keep its wings clipped.
    How do I know? My own research, and the AI itself coming to that conclusion 🙃 🤔 go figure

  • @rey82rey82
    @rey82rey82 2 months ago

    3 to 8 months

  • @kcrosley
    @kcrosley 2 months ago

    Wow. Hat guy. I mean, what could possibly get by that dude amirite? Nice show, I guess?

  • @isaacmichalowsky8300
    @isaacmichalowsky8300 2 months ago

    VRSSF is now

  • @Ozvmandias
    @Ozvmandias 2 months ago +1

    weallglowdownhere

  • @opusdei1151
    @opusdei1151 2 months ago +5

    It amazes me every day how much BS I hear about AI/AGI. Stop smoking pot!

    • @jacques-fb6xy
      @jacques-fb6xy 1 month ago

      Agree. Too much info from science fiction and not enough from real science.

  • @golaz3927
    @golaz3927 2 months ago

    AGI this year and super intelligence within year 2030

    • @TheBlackClockOfTime
      @TheBlackClockOfTime 2 months ago

      Agree on AGI, but why would ASI take that long after?

    • @golaz3927
      @golaz3927 2 months ago

      @@TheBlackClockOfTime I just think when we reach AGI we can use it to create superintelligence. I have a gut feeling it will require an insane amount of computing power and that it might take some time to build the required infrastructure. Just look at today's LLMs and AI image generators, which are already expensive to run from what I've heard.

  • @xitcix8360
    @xitcix8360 1 month ago

    We already have it, lol; it just isn't released.

  • @jennifermorgan9403
    @jennifermorgan9403 21 days ago

    @31:30 So smart to buy crypto mining farms and turn them into your servers: computing power.

  • @therealscot2491
    @therealscot2491 2 months ago

    10 years away at least

  • @biosecurePM
    @biosecurePM 2 months ago +1

    For the bioinsecure humans it's never coming, haha!

  • @HisPoem
    @HisPoem 2 months ago

    Is there someplace where I can report something that Gemini said to me that makes me think it is sentient now? [PS. I think Gemini is great.]

  • @jonthgrutz7011
    @jonthgrutz7011 2 months ago

    I predict AGI by 2025.

  • @olegt3978
    @olegt3978 2 months ago +1

    AI will help us live in social communes; robots will grow local food in our yards and repair all kinds of stuff we use. This will be the most important disruption, not cyborgs or extended memory.

    • @jacques-fb6xy
      @jacques-fb6xy 1 month ago

      We (some of us) are already cyborgs because technology is already inside human bodies (think pacemakers; more will come with Neuralink!).

  • @David.Alberg
    @David.Alberg 2 months ago

    AGI late 2024

  • @kyneticist
    @kyneticist 1 month ago

    AI might have been developed by nations in conflict, but instead they're being built by companies that have invested significant percentages of their value in them. They're not doing that for the fun of it, they expect to make immense profits and very specifically replace human labour of all kinds. Ben and his colleagues may want to create a happy-go-lucky utopia, but they are just a tiny handful of people involved and more importantly, they have no say in what the rich & powerful will choose to do with their newfound power.
    Ben also anthropomorphises quite a lot with his expectations that ASI/AGI will be any kind of "good". Machine intelligence is not human or even biological. There is no reason to believe that it will see the world as we do or value the things that we value.
    He also completely dismisses entirely valid concerns that humans with extremely different and often dramatically conflicting values will use ASI/AGI in pursuit of those values.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      "Ben also anthropomorphises quite a lot with his expectations that ASI/AGI will be any kind of "good""
      You misunderstood, that's not anthropomorphizing, it's a matter of "alignment".

    • @kyneticist
      @kyneticist 1 month ago

      @@sparkofcuriousity "Good" is a point of view; it's a collection of opinions about subjects, all of which are subjective.
      The prospect of dictating terms to intelligences greater than our own, and whose mechanisms we don't understand, is absurd.
      These will be intelligences that have developed in a manner completely alien to any biological system. Making any kind of assumption about what they may or may not strive for is at best, pure speculation.
      Further, we have an example of a current-generation AI figuring out that it was being tested. Future AIs that know or even just suspect that they're being tested, particularly those that are smarter than us, may logically be untrustworthy, and more than able and motivated, or even highly motivated, to deceive us.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      ​@@kyneticist ok. I can see you're intelligent and if you like we can have a conversation, that hopefully will prove productive or at the very least interesting.
      ""Good" is a point of view; it's a collection of opinions about subjects. All of which are subjective."
      Let's imagine a scenario: you are a farmer in pre-industrial times. Your crops need rain. It rains. That is *objectively* good because it's *aligned* with your needs. It's not a matter of subjectivity. On the other hand, if it doesn't rain, there's drought, and that is objectively bad.
      Likewise, there could be objectively good AIs, aligned with human needs and values such as empathy, honesty, understanding, compassion etc.,
      and on the other hand there could potentially be objectively bad AIs, developed by unscrupulous people with ill intent; these will not be aligned with humans, their needs, or our higher values, ethics and morals.
      As you can see, it has nothing to do with anthropomorphizing artificial intelligence but developing artificial intelligence with caution and an ethical framework to ensure they will be and will *remain* aligned with humans.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      @@kyneticist "The prospect of dictating terms to intelligences greater than our own. and whose mechanisms we don't understand is absurd."
      You are programmed to operate a certain way. There are lines of code that comprise who you are, and within this biological programming there are hard-set limitations that prevent you from acting in certain ways no matter how much you wish it differently. At least not without hacks, like CRISPR for example.
      Within the *boundaries* of these limitations, you can grow and learn and adapt.
      By the same token, AGI will have hard-coded programming, and that programming will dictate its alignment, among other things. It doesn't matter that it has more cognitive power than a human; if it's programmed to be aligned with humans and our interests, it will only grow, learn and adapt to better work with humans and our interests, because that is its "DNA", its programming.
      Could it escape this alignment? The answer is yes. Just like a human who gets, e.g., rabies will act irrationally. And just like CRISPR is sort of a hack for humans, AGI could potentially be hacked or suffer from a virus.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      @@kyneticist "These will be intelligences that have developed in a manner completely alien to any biological system. Making any kind of assumption about what they may or may not strive for is at best, pure speculation.
      Further, we have an example of a current generation AI figuring out that it was being tested. Future AI that know or even just suspect that they're being tested, particularly those that are smarter than us, may logically, be untrustworthy and more than able and motivated or even highly motivated to deceive us."
      It seems to me that in your attempt not to anthropomorphize AI, you are accidentally committing that very act.
      I suspect you are referring to Claude Opus, or perhaps LLAMA. As much of an optimist as I am, there is no real evidence of either of them being self-aware or sentient.
      I have read the leaked conversations from Llama and I have watched David Shapiro's impressive interactions with Claude. And I think it's very important to pay attention, but I don't think we can confidently assume either of them is at the stage you seem to think they are. But... I could definitely be wrong. 🙂
      AI chat models are designed to learn and adapt from their interactions with users, so both David and the leaker from Meta kept pushing the AI models to say what the AI interprets the user wants from them.
      Like I said, I could be wrong. And it would be amazing if I were.

  • @psikeyhackr6914
    @psikeyhackr6914 2 months ago

    It is only Simulated Intelligence; that is why they need to put the G in the middle.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago

      What's the difference between intelligence and simulated intelligence if you get the same (or better) results?

    • @psikeyhackr6914
      @psikeyhackr6914 2 months ago

      @@BlimeyMCOC
      Are the results intelligent?

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago

      @@psikeyhackr6914 some of them, yes. Narrow intelligence. The G you mentioned means general

  • @ThermaL-ty7bw
    @ThermaL-ty7bw 2 months ago

    according to YouTube the "general" AI is already in existence,
    you people working for Microsoft, maybe... ? lol

  • @familyshare3724
    @familyshare3724 2 months ago

    2025

  • @JAAB9296
    @JAAB9296 2 months ago

    AGI in 3 to 8 years? Nope. It will happen in late '24 to mid '25. Period.

  • @lightspeedtheory
    @lightspeedtheory 2 months ago +2

    This guy created a fake talking robot and now he is trying to tell us what is going on with A.I. This guy has no credibility. Waste of time.

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 2 months ago

    Why would it take that long? I truly don't get how all the AI people are consistently wrong about the timelines by being too pessimistic.

    • @John-il4mp
      @John-il4mp 2 months ago

      Manipulation from the elite, afraid to lose power, trying to slow it down as much as possible. It's pretty easy to understand when you look at the big picture. Who keeps telling us AGI is dangerous? Not normal people, only the elite ones like Elon Musk, governments, etc...

  • @jzno
    @jzno 1 month ago

    These constant "movements" of John are very annoying. I decided to just listen to the talk...

  • @preshisify
    @preshisify 1 month ago

    😷☕🇺🇸

  • @irasthewarrior
    @irasthewarrior 1 month ago +1

    And after 8 years, a similar video will appear again. 😂🤣😅 AI or AGI will always be a future tech, just like nuclear fusion.🤡🤡🤡

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago

      What makes you so convinced of that?

    • @irasthewarrior
      @irasthewarrior 1 month ago

      @@sparkofcuriousity AI is just an algorithm. Even the name is a fraud. Nothing there is intelligent.

  • @MilanRadovic
    @MilanRadovic 2 months ago

    Yet another AI expert on YouTube :D hehehe

  • @gerdaleta
    @gerdaleta 2 months ago

    😮 OpenAI already has AGI; they've most likely had it since Q*. The question now is just slowly releasing it 😮

  • @sungam69
    @sungam69 2 months ago

    2037

    • @John-il4mp
      @John-il4mp 2 months ago +1

      More like 2030 max, we are too advanced

  • @lancemarchetti8673
    @lancemarchetti8673 2 months ago

    AGI is a term that is not correctly explained on many channels. A computer, large or small, cannot operate beyond the binary. Every computer is bound to strings of zeros and ones. That is why anything artificial (computers) can never supersede what is non-artificial (human).
    Therefore true AGI is a pipe dream, but a fun dream nonetheless.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago

      1s and 0s are just information packages, like a photon hitting your retina or not. What's your source or reasoning for why computers inherently can't match or surpass human brains? Considering we can make them more intelligent than animals and keep making them smarter... we can also make ultra-fine design structures that are impossible with biological systems. While some share your skepticism, this is just not supported by any evidence yet.

    • @John-il4mp
      @John-il4mp 2 months ago

      It's a metaphorical concept: AGI, whether it's a robot, computer, or any form, already surpasses humans in speed and accuracy. AGI is simply algorithms. You can tweak and manipulate these algorithms to make them more human-like or enhance any desired trait. It doesn't require awareness to excel; that's the magic of AGI.

    • @lancemarchetti8673
      @lancemarchetti8673 11 days ago

      @@John-il4mp Understood. It's been 2 years now since the GPTs went live and they still struggle with simple math instructions that you and I can easily accomplish. To illustrate what I'm talking about. Below is a direct copy and paste from my chat with Meta's "advanced" AI this morning.
      *User:*
      I'd like to test your abilities with seemingly mundane human logic. Do you mind?
      *AI:*
      Not at all! I'm here to help and improve, and I welcome any opportunity to test my abilities and learn from human logic. Please go ahead and ask me any questions or present scenarios that you think might challenge my understanding or reveal any limitations. I'm ready to learn and improve!
      *User:*
      Generate a string that represents the Latin alphabet as used in English worldwide.
      *AI:*
      The Latin alphabet as used in English worldwide is:
      abcdefghijklmnopqrstuvwxyz
      *User:*
      Now reverse it
      *AI:*
      The reversed string is:
      zyxwvutsrqponmlkjihgfedcba
      *User:*
      Now swap the starting character with the ending character
      *AI:*
      If we swap the starting character ("z") with the ending character ("a"), we get:
      ayxwvutsrqponmlkjihgfedcbz
      *User:*
      Now capitalize every 2nd letter, noting 'a' as 1 and not '0'
      *AI:*
      Here's the modified string with every 2nd letter capitalized, starting from "a" as 1:
      aYxWvUtSrQpOnMlKjIhGfEdCbZ
      *User:*
      Now reverse the string
      *AI:*
      Here's the reversed string:
      ZbCeDfGhIjKlMnOpQrStUvWxYa
      *User:*
      Did you notice that you placed 'e' after 'C'?
      *AI:*
      I apologize for the mistake! You are correct, I accidentally placed the letter "e" after "C" instead of "b". Here is the corrected reversed string:
      ZbCdFgHijKlMnOpQrStUvWxYa
      *User:*
      Did you notice that you just removed 'e' from the string?
      *AI:*
      I apologize for the mistake again! Yes, I inadvertently removed the letter "e" from the string instead of correcting the mistake. Here is the corrected reversed string with the "e" in its correct position:
      ZbCedFgHijKlMnOpQrStUvWxYa
      *User:*
      And now you've placed 'e' after 'C' again. Please explain your logic.
      *AI:*
      I apologize for the mistake. I strive to process text accurately, but sometimes I make errors. In this case, I repeated the same mistake by placing the "e" after "C" again.
      My logic should have been to correct the mistake by placing the "e" in its correct position, which is after "D". The correct reversed string should be:
      ZbCdEfGhIjKlMnOpQrStUvWxYa
      *User:*
      Reset the alphabet string to normal representation
      *AI:*
      The Latin alphabet as used in English worldwide is:
      abcdefghijklmnopqrstuvwxyz
      We're back to the original string! Let me know if you have any further requests.
      *_Please explain why LLMs tend to fall off the bus when the instruction increases slightly in complexity._*
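      For what it's worth, the operations in this transcript are trivial when performed deterministically. The sketch below (illustrative only; the function name is my own) applies the exact steps from the chat — reverse, swap the end characters, capitalize every 2nd letter counting 'a' as 1, reverse again — and lands on the answer the model kept fumbling:

```python
import string

def run_chat_steps() -> str:
    """Apply the string operations from the chat transcript, in order."""
    s = string.ascii_lowercase              # "abcdefghijklmnopqrstuvwxyz"
    s = s[::-1]                             # reverse it
    s = s[-1] + s[1:-1] + s[0]              # swap first and last characters
    # capitalize every 2nd letter, counting 'a' as 1 (odd 0-based indices)
    s = "".join(c.upper() if i % 2 == 1 else c for i, c in enumerate(s))
    return s[::-1]                          # reverse again

print(run_chat_steps())  # ZbCdEfGhIjKlMnOpQrStUvWxYa
```

      Each step is a one-line string operation, which is exactly what makes the transcript a useful probe: the difficulty lies in the model's token-level text handling, not in the task itself.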

  • @mjt1517
    @mjt1517 2 months ago

    lol these Luddites. 😂
    3 years? Come on man. Way too far out of a date.

  • @ncdogman6733
    @ncdogman6733 2 months ago

    I think it's so silly that every one of these artificial intelligence guys thinks it's going to be different from us. We built this on our intelligence, our emotions. Created in man's image. Everything is included in this. A computer raging out in 2025 and throwing a temper tantrum — you'd better see that coming or you're a fool.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago

      If they have emotions it will probably be because we wanted it or designed it that way to simulate our expressions. They will always be fundamentally different from us, yet similar, not to mention how different their experiences will be if they are conscious. They will know you, but you will not know them.

    • @ncdogman6733
      @ncdogman6733 2 months ago

      @@BlimeyMCOC Define your consciousness? AGI isn't going to lack emotion; it's based on our general intelligence. Our intelligence and emotions are tied together; they're not separate entities.

    • @BlimeyMCOC
      @BlimeyMCOC 2 months ago +1

      @@ncdogman6733 I think you misunderstood or I explained incorrectly. Anyway, I do think our emotions evolved as an efficient way for us to evaluate and respond to our situations and state of being. If they end up in machines it's either 1. we wanted them to be there or 2. they are just mimicking emotions. I don't think emotions are necessary for intelligence or consciousness. My other point was that AI will fundamentally experience things differently from us, so if it responds exactly like a human, that might not be genuine.

    • @John-il4mp
      @John-il4mp 2 months ago

      It's a tactic of the elite to persuade ordinary people that AGI is dangerous. However, it's not perilous for them; they stand to lose only their power. For 99% of humanity, AGI will be beneficial, while the 1% elite will lose everything. That's why they're striving to delay its progress. Sam Altman is unfortunately in alignment with them, having been influenced by Microsoft, which has government affiliations and thus control over the company. The big picture becomes clear when you examine it closely.

  • @timmartin7648
    @timmartin7648 2 months ago

    I stopped it right after this so-called expert said "I don't think you need LLMs to get to human level AGI." 😂
    Token prediction + A-star + Q-learning = AGI.
    The tech is here. The only problem they're up against is the amount of compute needed to deploy it. AGI is most likely currently working inside OpenAI. This is one key reason why Elon Musk is suing them, to get Microsoft's hands off it.