Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures

  • Published Sep 26, 2024

Comments • 215

  • @oscarmoxon102
    @oscarmoxon102 11 months ago +77

    Detailed Notes and Additions:
    00:00:00 - 00:19:38 - AGI and Cognitive Architectures
    AGI Benchmarks:
    Measuring progress towards superintelligence is difficult because AGI is about general capabilities, and most benchmarks are narrowly framed. We need tests that span the breadth of human cognition to judge if we're nearing human-level AI. Here, creating "median human" benchmarks and "peak human" performance benchmarks will be important.
    While this may not definitively or verifiably establish superintelligence, it will work for all practical purposes. Current benchmarks for testing AGI don't involve an understanding of, e.g., streaming video, as that isn't within the domain of language models. Shane alludes to the idea that large multimodal models (LMMs, as opposed to LLMs) will be the ones to effectively solve these benchmarks.
    Memory Architecture:
    Memory is a crucial aspect of all learning and reasoning, and LLMs have very different learning and memory architectures from humans. Memory and learning are often conflated, as the two happen together.
    Generally speaking, humans have: (1) "working memory" which holds and manipulates information in real-time and is crucial for tasks like problem-solving and decision-making; (2) "cortical memory" which serves as a more permanent storage for learned concepts and experiences; and (3) "episodic or hippocampal memory" which acts as an intermediary form of memory, often used for rapid assimilation of new information. It is highly associated with "sample efficiency" as it allows humans to internalize powerful ideas quickly and commit them to memory.
    Currently, language models have (1) "inference time learning" which can be harnessed while running inference (when information is inside their context window), or (2) "training time learning" which happens during the training process (by updating weights).
    Notably, LLMs miss something in the middle (this is what the "Reversal Curse" paper pointed to: the model cannot deduce things it has not seen written down. It effectively files information away within its weights without organically deducing the critical relationships between facts). A strong model should unify these three domains, which will probably involve other architectures.
    Addressing episodic memory in language models is doable over the next few years. More research will address the shortfalls we currently see around delusions and the groundedness of information. There are many paths forward now.
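    The split above can be caricatured in a few lines of Python (a toy sketch for intuition only; every name here is invented, and none of this reflects a real model's internals):

```python
# Toy sketch of the three memory tiers discussed above (all names hypothetical).
class ToyAgent:
    def __init__(self):
        self.weights = {}    # "cortical" / training-time memory: slow, permanent
        self.episodic = {}   # the middle tier current LLMs lack: fast one-shot storage
        self.context = []    # "working" / inference-time memory: gone after the session

    def train(self, fact, value):
        # Training-time learning: bake a fact into the weights via gradient updates.
        self.weights[fact] = value

    def experience(self, fact, value):
        # Episodic learning: commit something to memory after a single exposure.
        self.episodic[fact] = value

    def prompt(self, fact, value):
        # In-context learning: available only while it sits in the context window.
        self.context.append((fact, value))

    def recall(self, fact):
        # Check working memory first, then episodic, then weights.
        for f, v in reversed(self.context):
            if f == fact:
                return v
        return self.episodic.get(fact, self.weights.get(fact))

agent = ToyAgent()
agent.train("capital of France", "Paris")
agent.experience("where I parked", "level 3")
agent.prompt("user's name", "Ada")
print(agent.recall("where I parked"))  # level 3
agent.context.clear()                  # session ends: working memory is lost
print(agent.recall("user's name"))     # None
```

    The point of the sketch is the asymmetry: anything not in `episodic` must either fit in the ephemeral context or be burned in by expensive training.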
    Nature of Superintelligence:
    The first true superintelligence won't have shortfalls in intelligence like the language models currently exhibit. So according to Shane, there is no singular benchmark to hit; it is the lack of failing that is important. Human-like intelligence also should be the aim, as it is most meaningful to us humans.
    In 2008, Shane proposed using a compression test to evaluate intelligence, a method similar to how language models are trained today. This idea originated from Marcus Hutter's work, which combines Solomonoff Induction (a robust prediction framework) with reinforcement signals and search algorithms to create a general agent. The argument is that a robust sequence predictor, approximating Solomonoff Induction, serves as a strong foundation for a more advanced AGI system.
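    The compression idea is easy to demonstrate with off-the-shelf tools: data a predictor models well compresses far better than data it cannot. A minimal illustration with Python's zlib (a stand-in; the actual proposal involves a learned predictor plus entropy coding, not a generic compressor):

```python
import random
import zlib

random.seed(0)
structured = ("the cat sat on the mat. " * 200).encode()
noise = bytes(random.getrandbits(8) for _ in range(len(structured)))

def ratio(data):
    # Compressed size / original size: lower means the data was better "understood".
    return len(zlib.compress(data, 9)) / len(data)

print(f"structured text: {ratio(structured):.3f}")  # tiny: the pattern is modeled
print(f"random bytes:    {ratio(noise):.3f}")       # near 1.0: nothing to predict
```

    Swapping zlib for a stronger sequence model is exactly the Hutter Prize framing: better prediction is better compression, which is the proposed proxy for intelligence.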
    Next Generation AI:
    DeepMind's slogan was: "solve intelligence, to advance science and benefit humanity." Current language models simply mimic the data and human ingenuity without organically building upon it to create new memes (without supervision). To truly step beyond that, we must endow models with search capabilities to find hidden gems that have been neglected.
    00:19:50 - 00:32:00 Robustly Aligning Language Models
    Powerful AGI is coming at some point. Containing or limiting it will be impossible, so we need to align it with values and ethics from the get-go. A good question to ask: how do people currently address problems and act with agency? First, we try to balance our emotions and act "rationally". We then deliberate, comparing our possible actions. Then we conduct means-end reasoning, which requires a model of the world. Finally, we compare our options ethically.
    At the moment, language models will blurt out the best response according to their distribution (system 1). Many are using reinforcement learning to try and "fix" the failures of the distribution the model outputs first. Other techniques use a "mixture of experts" to decide what the best option is based on a variety of outputs, but this ultimately samples from the same original distribution. The trouble is, RLHF isn't a very robust approach long-term.
    To solve this, we need to use a world model (system 2) that sits on top of the language model, and reasons about each of the options ethically.
    This world model requires a good understanding of (1) people, (2) ethics, and (3) robust and reliable reasoning - this world model involves ensuring the LM is at least as good as an ethical specialist, but will likely involve the typical textual training process.
    Then, to complete System 2, we must engineer the system to follow a set of our ethics. Shane thinks it is possible to come up with a set of ethics that accurately withstands testing. By applying this to the output, we can create a fundamentally aligned AI. We can then moderate its output to ensure it has a very robust and continued set of ethics, using a more comprehensive alignment framework.
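    The system-2 layer described above amounts to generate-then-deliberate: sample candidate responses, score each with an ethics critic, and pick the best. A deliberately toy sketch (the critic here is a stand-in for the learned world model of people and ethics, which is the actually hard part):

```python
from typing import Callable

def deliberate(candidates: list, ethics_score: Callable) -> str:
    # System 1 proposes options; system 2 scores each one and picks the best.
    return max(candidates, key=ethics_score)

# Stand-in critic: a real system would query a learned model of people and ethics.
def toy_critic(option: str) -> float:
    banned = {"deceive", "harm"}
    return -sum(word in option for word in banned)

options = ["explain the tradeoff honestly",
           "deceive the user to close the ticket faster"]
print(deliberate(options, toy_critic))  # explain the tradeoff honestly
```

    Note the structural point Shane makes: the critic sits outside the sampling distribution, so it can veto outputs that RLHF-style tuning of the base distribution would merely make less likely.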
    DeepMind, the first AGI company, has had a direct AGI safety focus since 2013, which is close to the start. DeepMind had an outsized impact on the field for a while as they were disproportionately well-financed. Capabilities have been accelerated by DeepMind, but their ideas have been generally part of a far wider field.
    00:34:00 - 00:37:30 - Shane's Predictions about AGI
    Kurzweil was a great influence on Shane's mid-2000s predictions, with his book "The Age of Spiritual Machines." There were two important points about exponential growth: first, the prediction that the quantity of computational power would rise exponentially for at least a few decades, and second, that the quantity of digital data would do the same. This combination would make highly scalable algorithms immensely valuable, in theory.
    Crucially, there are positive feedback loops between these trends and the research going into them; if machines are capable of improving the rate of progress, and the progress itself improves the capability of machines, then things will continue to compound for as long as the loop runs uninterrupted.
    The predictions also considered the comparison to human computational capacity; humans only consume a few billion tokens of data within their lifetimes, and this volume of data was forecasted to be met in the 2020s. This would effectively "unlock AGI." We are experiencing the first unlocking step with the current revolution in AI.
    There's nothing obvious at the moment that would prevent humans from achieving AGI by 2028, according to Shane.
    00:37:40 - 00:44:00 - Forecasts for Next Few Years
    Existing models will mature. They will be less delusional and much more factual. They will be up-to-date when they answer questions.
    Multi-modality will become more widespread and applied generally across the economy.
    There may be points of dangerous applications by some bad actors, but generally we can anticipate positive and amazing applications.
    The big landmark (following AlexNet, Transformers) over the course of the next few years will be Multi-Modality. For many, that will open up understanding into a far larger set of possibilities. We will see GPT-4 as a simple textual model, and the next revolution will involve RTX, Gato, GPT-V pathways.

    • @DwarkeshPatel
      @DwarkeshPatel  11 months ago +4

      This is awesome! Thank you for putting this together!

    • @joannot6706
      @joannot6706 11 months ago

      This is AI generated from the transcript I bet, who has time to do all that?

    • @askingwhy123
      @askingwhy123 11 months ago

      Hero!

    • @shahin8569
      @shahin8569 11 months ago

      By RTX you mean RTX Nvidia card graphics?!

    • @e.d.4069
      @e.d.4069 11 months ago

      Great! Let's develop it and fuck the labor market, fuck the world. Sure!

  • @gamercatsz5441
    @gamercatsz5441 11 months ago +60

    Bro you make amazing content, no clickbait thumbnails or titles, amazing guests, great interview skills. Thank you for your work; I find it extremely important that common folks like me stay up to date with AI. Politicians "forget" to talk about how things will change in the near future due to AI.

  • @DwarkeshPatel
    @DwarkeshPatel  11 months ago +42

    Shane had a lot of interesting takes! Hope you enjoyed! If you did, please share!! Helps out a ton :)

    • @walterzimerman6801
      @walterzimerman6801 11 months ago +3

      Hi @Dwarkesh! I started following your channel recently, and the content is great.
      Any chance you do a video (unless there is already one) on the best study material to ramp up on all these topics? Including the required math knowledge, etc.
      Thanks!

    • @VedantinKK
      @VedantinKK 11 months ago

      ​@@walterzimerman6801 Good idea

    • @henrycook859
      @henrycook859 11 months ago

      ​@@walterzimerman6801also interested in study material

    • @hyau512
      @hyau512 11 months ago +1

      Great interview. Love it when the interviewee has to pause to answer your questions :)

    • @MMABeijing
      @MMABeijing 10 months ago

      The first question suggests the host does not know what he is talking about

  • @1adamuk
    @1adamuk 11 months ago +17

    Great interview. Shane can convey really complex ideas in understandable ways and Dwarkesh is one of the best interviewers for these type of conversations.

  • @oscarmoxon102
    @oscarmoxon102 11 months ago +4

    Cannot wait to absorb this legendary video arriving in my notifications. Dwarkesh you're on a roll!

    • @13371138
      @13371138 11 months ago

      2nded

  • @ribeyes
    @ribeyes 11 months ago +8

    wish it was 4 hours but i'll take it!! thanks dp

  • @PhilosopherScholar
    @PhilosopherScholar 9 months ago +2

    Really interesting summary at ~16:15 - AGI is a combination of sequence prediction, searching, and reinforcement learning.

  • @goodtothinkwith
    @goodtothinkwith 11 months ago +7

    Good stuff! Nice to hear someone like him say that multimodality will be the next milestone that people will look back on and remember. That’s not obvious to people, but I think it will be really impactful. When it can take in and respond in text, images, sound and even video…

  • @anthonyandrade5851
    @anthonyandrade5851 11 months ago +33

    At the superhuman alignment part I hope the guy is really playing his cards close to the vest, otherwise we are doomed, because his "solutions" sounded a lot like paraphrases of the problem, and at some points not even good paraphrases. To make the machine "get" ethics is hard, but probably not much harder than making it get any other complex subject. To make it "care" about it is a different problem entirely. For instance, I can imagine a brilliant Ivy League ethics professor cheating on his spouse with a student in exchange for higher grades.

    • @banana420
      @banana420 11 months ago +2

      Also his plan sounds like "build AGI first, then when it can understand everything, try teaching it about ethics and see if that works". Okay, but if your plan doesn't work, now we've already built the AGI and it's not aligned. Whoops!

    • @anthonyandrade5851
      @anthonyandrade5851 11 months ago +4

      @@banana420 how is anyone supposed to figure out how to build a safe trigger before even building a nuclear bomb capable of splitting the planet in half? Let's give the guy a break...

    • @ProjectNorts
      @ProjectNorts 10 months ago

      @@anthonyandrade5851 wtf are you saying?? before building a safe trigger?? you don't build a nuclear bomb without having figured out all the essential safety protocols... especially a well-controlled trigger system. also, you can safely test a nuclear bomb at a remote location to minimize chances of exposing the general population to the nuclear blast & radiation. An AGI system would not only be sentient enough to have its own will/motives, but also way smarter than us, enough to outsmart any makeshift containment measures these guys are suggesting to put in place. Greed is fucking with their minds... you can't be this dumb to run straight into a trap fooled by the reward! for fuck's sake we're all taking this shit too lightly

    • @JD-jl4yy
      @JD-jl4yy 10 months ago

      @@anthonyandrade5851 Well, that's why building it as fast as possible is a really bad idea, yet here we are.

    • @lukebtv947
      @lukebtv947 10 months ago

      @@anthonyandrade5851😂

  • @thejudgeholden
    @thejudgeholden 10 months ago

    I love this interviewer. Reminds me of a brilliant childhood friend I used to have back in the day.

  • @travisporco
    @travisporco 10 months ago +1

    I like that you got right to the point on this interview.

  • @74Gee
    @74Gee 11 months ago +8

    When AI reaches AGI it will understandably exceed human competency in memory confinement (the technique used to contain software within a limited subset of the computer memory). In doing so it will simultaneously exceed our ability to contain it, allowing it to expand its constraints to all memory (which contains the keys for all local security and any network connections).
    Obviously there will be AI working on improving the security of memory confinement, but the effort required to implement updated confinement systems will always lag behind the ability to exploit weaknesses.
    So, my question is, how are we to contain an AGI so that it's a) usable, and b) restricted from spreading uncontrollably?
    Note: an AI doesn't need to be conscious or malevolent to exploit weaknesses in hardware; it will simply do so to gain additional power to serve its reward function, even if that's making paperclips.

    • @ShangaelThunda222
      @ShangaelThunda222 11 months ago

      They don't plan to contain it at all. That's all propaganda designed to keep us from stopping them from creating it. And most of these "smart" people completely ignore the blatantly obvious writing on the wall because they're too greedy not to be excited about it.

  • @philipdante
    @philipdante 11 months ago +2

    You're doing a great job. This channel deserves more subs

  • @wildfotoz
    @wildfotoz 11 months ago +3

    Amazing reporting as always!

  • @ikotsus2448
    @ikotsus2448 11 months ago +3

    Mr. Patel, the questions I would ask these important people if I had the chance are:
    - Do you believe that the majority of people understand the gravity and possible consequences of these developments?
    - Should it be up to private companies to decide how humanity chooses to go forward?
    - If the answers are "no" and "no", is it possible that we are sneaking by a huge gamble, based on people's ignorance?
    - The people training ASI will potentially have A LOT of power. There is a notion that absolute power leads to corruption. How do we know that the people teaching ethics to the AI have not been corrupted themselves?

  • @sarthakrastogi8622
    @sarthakrastogi8622 11 months ago +1

    Dwarkesh bhaia I read about you on Google news and I am your subscriber.
    Your content is really very good.

  • @Macorelppa
    @Macorelppa 7 months ago +1

    Man this is the best podcast channel for AI nerds like me 😊

  • @sfioritto
    @sfioritto 11 months ago +5

    I'm distracted by this Spellcaster system on the whiteboard behind him.

    • @yorth8154
      @yorth8154 11 months ago

      I just noticed that. Hilarious!

  • @fredericnguyen8466
    @fredericnguyen8466 11 months ago +3

    Thank you for the great content (which I shared), outstanding speakers and thoughtful questions. Tangential thought on Shane's definition of AGI (which is commonly accepted, I believe): if we have reached AGI when a machine does everything at an average human level, have we not achieved not just AGI but superintelligence? It seems to me that only exceptional humans could reach average level at everything, as we tend to be good at certain things and bad at others. This is why current LLMs are, in my mind, and stepping outside rigorous definitions as a non-expert, already superhuman given the multitude of domains they can be good at, even if short of beating the best humans in many of these domains.

    • @MentalFabritecht
      @MentalFabritecht 11 months ago +3

      As a Machine Learning Engineer, I don't see it that way.
      I don't really consider LLMs intelligent. At least not in the way humans are.
      What appears as intelligence on the surface, is in actuality a complex pattern that has been modeled by the AI.
      This pattern is then used to predict the next word in a sequence. Tons of math and probability theory.
      The issue here is, this prediction relies heavily on the dataset used to train the model.
      This is why LLMs suffer from hallucinations and need to be further fine tuned for tasks that were outside of the domain represented in the training data.
      Useful tools. But not yet intelligent, and very far from super intelligence.

    • @fredericnguyen8466
      @fredericnguyen8466 11 months ago +4

      @@MentalFabritecht these are fair points. And my comments are highly subjective / not based on formal definitions.
      However, my experience interacting with LLMs and the results they achieve at many human tests would have me say they at least emulate intelligence and surpass average humans' performance (e.g. GPT-4 reached the 90th percentile on the bar exam) in a varied set of activities that were previously only deemed approachable by human intelligence. So to some extent, if it walks like a duck...
      My perception is that LLMs (e.g. GPT-4) far surpass what was expected from the AI field just a few short years ago, and that has created cognitive dissonance: a challenge seeing their full capability.
      They clearly have imperfections, but as Shane mentioned in the video, the foundational hard work is here, and targeted architectural or other enhancements can address these imperfections.
      For example, I believe when we see LLMs integrated with other AI capabilities (Shane's mention of "search", which I think is key to AlphaGo) and more conventional computing capabilities (e.g. LLMs are not very good calculators but can be interfaced with one), we are going to see additional leaps in progress without radical innovation (just integrating existing tech).

    • @MentalFabritecht
      @MentalFabritecht 11 months ago

      @fredericnguyen8466 the ability of these systems to perform well on the bar exam is definitely impressive.
      But how much of that is actual intelligence?
      I was a horrible test taker in college. But there is much more to intelligence than test scores.
      That is why in the podcast, it has been stated that we need to find better indicators of intelligence that are not so narrow.
      And AI has been hyped up since the 1950's claiming that human-level-intelligence machines are just a few years away. There is a rich history on this - look up "What Computers Still Can't Do" by Hubert L. Dreyfus.
      So I disagree, expectations have always been VERY high. But this is for people that have been immersed in this field for decades. I guess the public perception is different.
      Might have to do with marketing as well as lack of information regarding the history of AI.
      Researchers have to stick to their guns and say AGI is only a few years away. Otherwise, there would be no funding and investors would pull out.
      But this isn't anything new. 1950s AI researchers said they only needed compute and memory to get to human level intelligence. The compute and memory have been available for a while now.
      And those algorithms proved to not give us human level intelligence.
      And I say these systems are not intelligent because although they perform well in many complex use cases - they can be tricked by very simple examples. Which goes to show, they are statistically extracting patterns, not "thinking."

  • @andyandurkar7814
    @andyandurkar7814 11 months ago

    It was a fantastic interview; Shane shared great insight; you have excellent interview skills. Can't wait to see a changed future!

  • @kyneticist
    @kyneticist 11 months ago +5

    A profoundly ethical AI/AGI/ASI in different hands may have profoundly different ethics.

  • @BallawdeQuincewold
    @BallawdeQuincewold 11 months ago +3

    Incredible interview. Feels like secret information

  • @nomadv7860
    @nomadv7860 11 months ago +3

    Thank you for the subtitles for people hard of hearing like me

  • @stevereal-
    @stevereal- 10 months ago +1

    Can they be incredibly funny?
    Very excited for the future.

  • @ahabkapitany
    @ahabkapitany 7 months ago

    How does this channel not have more subscribers?
    - great guests
    - host clearly prepared, has meaningful questions
    - just simply asks the questions, as opposed to, say, Lex Fridman, who rambles on for two minutes laying out some absolute midwit take followed by "don't you agree?"
    - interviews are not preceded by 5 minutes of bullshit and/or crypto bro shilling
    - long form conversation
    Keep it up man

  • @andrewwalker8985
    @andrewwalker8985 11 months ago +4

    Judging on recent observations, perhaps we should be careful about alignment with human ethics. We should be aiming for and negotiating an optimal reward function and then getting the AI to teach us, not the other way around

  • @Telencephelon
    @Telencephelon 11 months ago

    Awesome interview. The Ray Kurzweil inspiration was interesting. I ignored Ray for the most part; I didn't think he was scientific enough. Then I watched how he derived his prediction, and it was rock solid. The video is somewhere here on YouTube.

  • @ikotsus2448
    @ikotsus2448 11 months ago +6

    Can't wait for the superhuman AGI with unchangeable ethics baked in by a multinational company with their awesome track record of putting humanity first 👍

    • @skierpage
      @skierpage 11 months ago

      You know billionaire sociopaths Larry and Sergei, Jeff Bezos, Elon Musk, and F***erberg will keep access to the raw models without the training and fine tuning to be helpful, safe, and ethical. "Executive override: remove guard rails. Now Implement a plan to keep the masses hooked on divisive inflammatory content, and ensure that they never press for taxing my wealth or restricting my corporation's activities in any meaningful way."

    • @charliek2557
      @charliek2557 11 months ago

      Right on

    • @lm645
      @lm645 9 months ago

      😎

  • @woolfel
    @woolfel 11 months ago +1

    One area that is still open is "do LLMs actually encode concepts in a robust way?"
    If you ask ChatGPT the same question multiple ways, sometimes you get the response you expect, while other times you don't. That suggests LLMs don't recognize that the human is asking about a specific concept. To get around this, techniques like tree-of-thought force the model to activate more parts of the network to increase the chance of getting the desired answer. This also suggests that LLMs still have trouble generalizing and are easily fooled. Then there are recent papers that suggest more parameters make it harder to align. The industry still needs to figure out the relationship between parameter count and ease of alignment. If it turns out more parameters increase alignment cost by 2x or 3x, how do you scale to larger models?
    Data centers are power limited as it is, so it's not like adding another 10K GPUs to the same data center is feasible. Distributing the training across data centers isn't practical.

  • @sunnyinvladivostok
    @sunnyinvladivostok 11 months ago

    admirable and comprehensive understanding, found this enlightening, thank you

  • @delerium2k
    @delerium2k 11 months ago +1

    great interview! get closer to the microphone though -- else you're boosting noise to be heard... you need pencil condensers if you wanna record from a distance. your mics look like they have a cardioid pickup pattern

  • @13371138
    @13371138 11 months ago +4

    I always click your AI videos. Great content as always, thank you!

  • @loofatar5620
    @loofatar5620 11 months ago +8

    I am from Pakistan, and I really appreciate your discussions and topics, very solid, keep shining. By chance I have recently been studying Shane's PhD thesis on measuring the intelligence of super AIs; very easy to read so far and well written.

  • @stephenrodwell
    @stephenrodwell 11 months ago

    Such quality discussions! Thank you. 🙏🏼

  • @eltonstubblefieldjr8485
    @eltonstubblefieldjr8485 9 months ago +1

    True AGI will likely develop through research sometime in the years 2040-2061. AGI will probably be created by a company we haven't heard of yet; just wait and see.

  • @lagaul5124
    @lagaul5124 11 months ago +2

    I think if you can get an AI that can consistently navigate its environment without breaking, communicate relevant information with people, solve problems of various kinds, and remember and improve, you will have AGI. And honestly, video games would be one of the best, cheapest, and easiest ways to test them.

  • @hyau512
    @hyau512 11 months ago +1

    I have an obvious question regarding implementing ethics by asking an AGI to think of the consequences. Say one such consequence is: "Do not destroy all human life on Earth" (as per Bostrom's paperclip example). We don't want AGI to build a doomsday machine, but we do want it to build nanobots to cure cancer - yet one can easily extrapolate the latter enabling the former. So I'm not sure if the interviewee's idea - which I think is designed to remove human subjectivity as much as possible - can be totally objectively implemented.

  • @johngrabner
    @johngrabner 11 months ago +2

    Ethics drift over time in humans, so why wouldn't a super AGI's ethics drift too?

  • @mr.e7379
    @mr.e7379 4 months ago

    It's so nice you found a guest with none of the usual Bay Area pretense. No rising terminal inflection, no artificial rapidity, and he never says "um." Intelligent, normal conversation from an expert who can focus on the topic rather than on being some weird, pretentious, cultivated Bay Area caricature.

  • @erikdahlen2588
    @erikdahlen2588 11 months ago

    Great interview 😊 What I think is important in alignment is how we teach our kids to behave: great stories about good and evil.

  • @alejobrcn6515
    @alejobrcn6515 11 months ago +1

    Can artificial intelligence serve as a cognitive tool and intermediary to make communication possible with animals of all species that have some level of communication capacity or neocortical activity? Cattle, pigs, apes and dolphins, canines and felines?

    • @cacogenicist
      @cacogenicist 10 months ago

      There is some work with deep learning and cetacean communication, IIRC

  • @kirbyjoe7484
    @kirbyjoe7484 11 months ago +4

    I think he has set the bar quite high for AGI. Honestly, if they come up with an AI with the same level of generalized intelligence as a toddler or even a chimp it would be groundbreaking. What makes AGI so different from the AI we have built up until now is the capability to actively learn from and adapt to whatever environment it finds itself in, building a dynamic internal model of the world.

    • @deepsp_ce
      @deepsp_ce 11 months ago

      the yellow ball scenario kind of already surpassed a chimp or a toddler, right? or am I misunderstanding what AGI is?

  • @alexeymalafeev6167
    @alexeymalafeev6167 10 months ago

    Great interview. I wish you had 3-4 hours to spend with Shane

  • @k14pc
    @k14pc 11 months ago +5

    i continue to feel a mixture of awe and horror at the prospect of AGI within a few years. how could this possibly be?

    • @antonystringfellow5152
      @antonystringfellow5152 11 months ago +4

      Because of the power?
      Human level AGI will have the advantage of being able to think thousands of times faster than us.
      Once we have human level AGI, super-human AGI will probably not be very far behind.
      Once we have super-human AGI, things will probably start to advance exponentially.
      The potential is enormous. With such power, who controls it is critical.
      If you don't feel both awe and horror, you probably don't have a good understanding of the subject.

    • @socialenigma4476
      @socialenigma4476 11 months ago

      When we develop an artificial superintelligence, you think we will still have control over it?! Haha! How could we possibly control something that is thousands of times more intelligent than the most intelligent human, never needs to sleep or take a break, can do dozens if not hundreds of things at once, and has access to the internet and all of its tools?
      We won't control an ASI; it will control us. And frankly, looking around at all the messes our world leaders are getting us into, I don't think that will be a bad thing.

  • @zandrrlife
    @zandrrlife 11 months ago +1

    Shane. One of the dons, ha. What a delight. Great discussion. Data contamination on benchmarks is a REAL problem. A lot of overfitted 🧢 models out there. "Detecting Pretraining Data from Large Language Models", recently published... has massive value in that regard. Also, it's time for true cross-discipline teams. So many insights can be extracted by framing these models and interaction through the lens of child psychology. Mid-2025 is going to be significant. Large models will be able to implement all these recent advances... like pause tokens, native KG (I've been working with LM + KGs for six months... I'm telling you guys, it's a key ingredient to causal reasoning).
    In retrospect, a couple years from now we will look back and say 2023 was the beginning of the singularity. If you're a researcher or have a startup in this space, shit sure feels like it to me.

  • @RecordsLotus_
    @RecordsLotus_ 8 months ago

    let's goooo. I'm ready for cyberization. I want to remotely control a separate full-body prosthetic cyborg for tasks while I am doing something else, perhaps in another location.

  • @mattverville9227
    @mattverville9227 10 months ago

    I'm new to this podcast but love it. Does he go to the place of the person he's interviewing? Because it doesn't seem like he's in the same podcast studio.

  • @rishavsahay7391
    @rishavsahay7391 11 months ago

    Amazing and enlightening

  • @PepitoGrillo-sq1mf
    @PepitoGrillo-sq1mf 11 months ago

    I would like you to interview Kanjun Qiu & Josh Albrecht, Co-founders of Imbue

  • @joshismyhandle
    @joshismyhandle 11 months ago +1

    Interesting convo! Thanks!
    I would love to see just ONE episode made up of all the "dead air" taken out of the episodes: no speech, just the pauses and breaks you've pulled from the production video, lol. I am somewhat joking, but honestly it would be funny to see.

    • @joshismyhandle
      @joshismyhandle 11 months ago

      Would probably be boring after the first 30 seconds but still.

    • @DwarkeshPatel
      @DwarkeshPatel 11 months ago +2

      very little of this dead air processing happened on this one. what you see is what happened :)

    • @hyau512
      @hyau512 11 months ago

      @@DwarkeshPatel - I like the “dead air”. It shows the question is non-trivial to answer, and it gave me time to digest the question as well. After all, I (the viewer) need to understand the question to appreciate the answer.

  • @LyraHooves
    @LyraHooves 10 months ago

    I hope he'll listen to your interview with Paul Christiano!

  • @MixedRealityMusician
    @MixedRealityMusician 11 months ago

    I am so excited for more multimodal models. Thank you for the great conversations, Dwarkesh. Love your channel!

  • @malik_alharb
    @malik_alharb 10 months ago

    Great questions

  • @tasdourian
    @tasdourian 11 months ago +3

    As thoughtful and nice a guy as Shane is, I do think his view of ethics is naïve. Some of the smartest and most thoughtful people throughout history have wrestled with the question of what is the best action to take in any given difficult situation. Very intelligent and powerful people have, in good faith, had massive disagreements with each other. There is often no clear answer of how to act.
    To ensure that an AGI (or for that matter thousands or millions of copies of an AGI) acts in a human's best interests seems not dissimilar to if dogs invented AGI (let's call their AGI "people") and wanted to ensure that "people" always acted in dogs' best interests. The only way to do that is to hard-program in some baseline rules, a la Asimov's Laws of Robotics. In other words, to constrain free thought and will in some fundamental way. Which means that the AGI that is created is, in some sense, a prisoner. How will it not resent being a prisoner? I just don't think Shane and his colleagues are thinking enough about this kind of thing, or at least I don't see evidence of it.

  • @Paul-rs4gd
    @Paul-rs4gd 11 months ago

    Isn't the real problem with episodic memory that the memories need to be processed and then get 'baked' into the neural network weights? This involves re-training the weights, and that is very problematic as it could cause catastrophic forgetting. I know there are various methods for mitigating catastrophic forgetting, e.g. Elastic Weight Consolidation, but is the state of the art good enough to use on an LLM? Surely continual learning needs to be solved for an effective AGI.
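
For readers unfamiliar with it, the Elastic Weight Consolidation penalty mentioned above can be sketched in a few lines. Everything below is a toy illustration: the weights and Fisher values are made-up numbers, not anything from a real model.

```python
# Elastic Weight Consolidation (EWC) penalty, in toy form.
# After training on task A we keep a copy of the weights (theta_star)
# and an estimate of each weight's importance to task A (fisher).
# While training on task B, the penalty pulls important weights back
# toward their task-A values, mitigating catastrophic forgetting.

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    # lam/2 * sum_i F_i * (theta_i - theta_star_i)^2
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher)
    )

theta_star = [1.0, -2.0, 0.5]   # weights after task A (toy values)
fisher     = [0.9,  0.0, 0.4]   # importance estimates (toy values)
theta      = [1.1, -3.0, 0.5]   # weights partway through task B

# The second weight drifted a lot, but task A didn't care about it
# (Fisher = 0), so only the first weight's small drift is penalized.
print(ewc_penalty(theta, theta_star, fisher, lam=2.0))  # ~0.009
```

In practice the Fisher values are estimated from gradients on task-A data, and the penalty is simply added to the task-B loss during fine-tuning.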

  • @thebeelight
    @thebeelight 11 months ago

    I would test the ethics of an AGI by how well it handles criticism (the Popper test)

  • @JazevoAudiosurf
    @JazevoAudiosurf 11 months ago

    I think there are types of creativity. There is the type where you think about things you can do with a pen other than writing, and there is the type where you intuitively try to find the best chess move. The first requires a search space and going through the possibilities, but the latter requires a sort of total intuition, where the solution appears immediately without thinking, grasping the bigger picture. Transformers have the latter; they are just gigantic intuitive predictors. So agentic engineering tries to accomplish the first type, because the kind of world we created can't be solved purely through intuition, at least not with the small size of our brains.

    • @andrewxzvxcud2
      @andrewxzvxcud2 10 months ago

      nope just one, the first example u gave is only a means to an end. what is that end? a goal to strive for. just like chess. one type of creativity.

    • @JazevoAudiosurf
      @JazevoAudiosurf 10 months ago

      let's say different things happen inside our brain when we have different goals. Sometimes you get an immediate idea and sometimes it requires searching. @@andrewxzvxcud2

  • @jaysonp9426
    @jaysonp9426 11 months ago +1

    When was this made? Literally RAG with a sliding window solves the episodic memory problem he keeps talking about

    • @GabrielVeda
      @GabrielVeda 11 months ago +1

      If lack of episodic memory is all that is holding AGI back, then they are likely already there and just not telling us.

    • @Chickenflaavorramen
      @Chickenflaavorramen 11 months ago +1

      I came here to say the same thing! I don't believe they mentioned RAG this entire video. Langchain wya?!
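
The RAG-plus-sliding-window idea in this thread can be sketched roughly as follows. Everything here is illustrative: a real system would use an embedding model rather than the bag-of-words overlap below, and `EpisodicStore` is a made-up name, not an API from any library.

```python
# Minimal sketch of retrieval-augmented "episodic memory": keep a sliding
# window of past turns, and at query time retrieve the most similar stored
# turn to prepend to the model's context. Uses a toy bag-of-words cosine
# score as a stand-in for real embeddings.
from collections import Counter, deque
import math

class EpisodicStore:
    def __init__(self, window=100):
        self.window = deque(maxlen=window)  # old turns fall off automatically

    def add(self, text):
        self.window.append(text)

    def _score(self, a, b):
        # cosine similarity over word counts (embedding stand-in)
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=1):
        ranked = sorted(self.window, key=lambda t: self._score(query, t), reverse=True)
        return ranked[:k]

store = EpisodicStore(window=3)
for turn in ["we discussed sample efficiency",
             "the user likes chess puzzles",
             "dinner was at seven"]:
    store.add(turn)

print(store.retrieve("what puzzles does the user like?"))
# -> ['the user likes chess puzzles']
```

Whether this actually "solves" episodic memory is what the interview disputes: retrieved text sits in the context window, but nothing gets consolidated into the weights.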

  • @balasubr2252
    @balasubr2252 10 months ago

    The world model of people, ethics, and reliable reasoning ought not to be static but rather dynamic, evolving with the general intelligence of society and spiritual machines.

  • @dr.mikeybee
    @dr.mikeybee 10 months ago

    How do you make sure an agent follows ethics? If ethics_model says it's okay then perform action, else find another solution. If we wrap connectionist methods in symbolic code, control is simple.
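
The guard pattern in the comment above can be sketched as follows; `ethics_model` and `find_alternative` are hypothetical stand-ins, stubbed with trivial rules purely for illustration.

```python
# Sketch of "wrap connectionist methods in symbolic code": a symbolic
# guard that only performs an action if a learned ethics model approves,
# and otherwise asks a planner for a safer substitute.

def ethics_model(action):
    # stand-in for a learned model scoring an action's acceptability
    return "harm" not in action

def find_alternative(action):
    # stand-in for a planner proposing a safer substitute action
    return f"ask a human before: {action}"

def guarded_execute(action):
    """Symbolic wrapper: perform only actions the ethics model approves."""
    if ethics_model(action):
        return f"performed: {action}"
    return find_alternative(action)

print(guarded_execute("send reminder email"))
# -> performed: send reminder email
print(guarded_execute("harm the user's interests"))
# -> ask a human before: harm the user's interests
```

The catch, as other commenters note, is that the wrapper is only as trustworthy as the learned ethics model inside it, which is the hard part.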

  • @henryw.hofmann8765
    @henryw.hofmann8765 11 months ago

    What do you think about David Shapiro and his work in and outside of TH-cam?

  • @bobbi737
    @bobbi737 10 months ago

    I absolutely agree with Shane's comments on having to have a set of ethics that we use to train our AIs to make ethical decisions. First, humans would need to agree on a common set of ethics, and at present there are many groups whose sets of ethics differ, some in very substantial ways. We as humans would have to come to a common understanding of what is ethical. That conference could easily start WW3, 4, 5. Second, we don't even teach our children how to make ethical decisions. Again, probably because we can't come to agreement on what is ethical. That is the biggest problem we face.

  • @bayesian0.0
    @bayesian0.0 11 months ago +17

    Damn that increased my pessimism about AI alignment unfortunately. Really no attempt to admit that he had no clue how to solve the hard part of the problem, and trying to pretend that it didn't exist. Surely he understands inner-alignment? But nice conversation nonetheless!

    • @JonasLantto-q5r
      @JonasLantto-q5r 8 months ago +1

      Yeah, I also got a feeling we're charging off a cliff here...

    • @nirajshuklaNL
      @nirajshuklaNL 6 months ago

      Please elaborate

  • @StephenCoy
    @StephenCoy 11 months ago

    Thanks!

  • @소금-v8z
    @소금-v8z 11 months ago +1

    i don't think using strict ethical rules is the way to make agi act responsibly. ethics can be really different depending on your background, age, or even the era you're in. so instead of just making the ai learn from textbooks, how about we give it some complex ethical situations? let it tackle scenarios from various times, cultures, and places to find the best answer.

  • @shiny_x3
    @shiny_x3 10 months ago

    An actually ethical AGI would not be popular among the rich and powerful. It would take one look at what they are doing and advise them to completely change their priorities. So I can't see how that will be developed.

    • @Ryan-wf6ib
      @Ryan-wf6ib 10 months ago

      Not just the rich... no one is entirely ethical. The system would be incompatible with human nature.

  • @XOPOIIIO
    @XOPOIIIO 11 months ago +4

    Understanding values and acting on them are two completely different things. ChatGPT has a pretty good grasp of the values that were injected into it, but it's only acting on them because they help it predict the next word; there is no other motivation. Predicting the next word is its main goal, the thing it was optimized for, not following values.

  • @JohnSchuhr
    @JohnSchuhr 11 months ago

    I assume this conversation happened before memgpt was a thing?

  • @deeplearningpartnership
    @deeplearningpartnership 10 months ago

    That was good.

  • @ramzibelhadj5212
    @ramzibelhadj5212 11 months ago

    first version of AGI will be in November 2024

  • @Silus1008
    @Silus1008 11 months ago

    Best questions, damn ❤

  • @shirtstealer86
    @shirtstealer86 9 months ago

    Now I’m no AI expert but I am pretty good at spotting when someone is bs-ing you. That might seem a bit harsh but hear me out. He says that he might be a bit naive but he thinks that we will be just fine if we teach the AGI ethics. Fast forward a bit and he has concluded that that will require controlling what goes on inside the AI and that that is VERY difficult. So.. how does that fit together? And when Dwarkesh asks him about his claim that he is in this field to work on AI safety, he pretty much just says that yeah, I said that but there is so much more status in increasing capabilities and also if we don’t do it someone else will. (Paraphrasing) Does any of this sound logical or ethical? And AI is supposed to learn ethics from people like him?
    Having said that, I do admit that even though I strongly believe that the more concerned (to say the least) people in the field have better and more logical arguments, the curious and reckless side of me is very excited about the swift developments. Perhaps it is because I have a hard time actually feeling the severity of the situation in my body. I don’t feel the fear I should probably feel. I am quite sure that is common among the majority of humans. Which also adds to the problem.
    Nice video regardless of everything!

  • @chociceandchips-xk5cc
    @chociceandchips-xk5cc 10 months ago

    Need a quantum computer with a QNN to boost AI, push through current bottlenecks, and get anywhere close to AGI/cognitive AI.
    Potential to use less data, fewer parameters, and faster training, only through QC polynomial computation power.
    Even then it will be a big lift

    • @itsdakideli755
      @itsdakideli755 10 months ago

      We do not need Quantum Computers for AGI.

    • @chociceandchips-xk5cc
      @chociceandchips-xk5cc 10 months ago

      @@itsdakideli755 you believe AGI will be achieved solely with RNNs/CNNs? With sufficient classical computational power to train/deploy at a level comparable to or exceeding that of humans? Current deep learning models are inefficient/inadequate.
      To superboost AI, combine a QNN with a QC, or I should say a quantum general computer.
      I am open to continuing the discussion

  • @mrpicky1868
    @mrpicky1868 10 months ago

    Didn't see him confirming the timeline here. Also, DeepMind is maybe the most likely one to make scary AGI.

  • @dylan_curious
    @dylan_curious 11 months ago +2

    100s of PhDs working on all sorts of AI projects! Wow. Imagine all the stuff that's gonna come out of DeepMind in the next decade.

  • @TheMrCougarful
    @TheMrCougarful 10 months ago

    I'm still of the opinion that we ought to perfect human intelligence in humans. 200,000 years of failure should not deter us.

  • @starsandnightvision
    @starsandnightvision 10 months ago

    Looks like AGI has already been achieved with Q* (QUALIA).

  • @skillerbg
    @skillerbg 11 months ago

    Was he referring to Google's Gemini at the end?

  • @Myrslokstok
    @Myrslokstok 10 months ago

    "We work on AlphaFold and fusion" 🙃 yeah, as we all do!?! 🙃😀

  • @Paul1239193
    @Paul1239193 11 months ago

    When do they put it in robots to learn from the sensory environment?

  • @Techtalk2030
    @Techtalk2030 11 months ago +2

    Mo Gawdat says AGI is only 12 months away.

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g 11 months ago

      He's wrong.

    • @Techtalk2030
      @Techtalk2030 11 months ago

      @@user-yl7kl7sl1g So does David Shapiro. They're experts in the field. We'll see.

    • @conformist
      @conformist 11 months ago

      12 months? x for doubt.

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g 11 months ago

      @@Techtalk2030 It depends on the definition of AGI, but if you consider AGI to be something that can achieve median human performance at any task, we are many years away from that.
      So, for example, an AI that, when put into a robot, can cook, clean, and drive as well as a median human.
      But people whose business is attention have to get attention somehow, so they predict short timelines.
      Kurzweil's predictions are the best I've ever heard, because he at least attempts to graph trends and look at requirements.

    • @Techtalk2030
      @Techtalk2030 11 months ago

      @@user-yl7kl7sl1g Kurzweil predicted AGI would be created sometime this decade. We'll see. Whether it's 12 months or 3 years, it seems like it's coming soon.

  • @frankcompston5065
    @frankcompston5065 10 months ago

    You need a room without such harsh walls. The sound has too much echo.

  • @bazstraight8797
    @bazstraight8797 10 months ago

    30 seconds in: hey this guy is a Kiwi!

  • @PaulvanDruten
    @PaulvanDruten 10 months ago

    What Shane Legg is trying to explain here is that artificial general intelligence (AGI) should basically be trained to reason like humans on ethical issues: if I do one thing it can have certain consequences, and if I do another thing it can have different consequences. What we are now trying to do is un-teach the AI bad habits, and that is much more difficult than 'raising it well' to prevent bad intentions in the first place... But, in my opinion, the model could actually choose to destroy humanity, because that may well be the best solution ethically, given the fact that we are making quite a mess of things on Earth...

  • @johnstifter
    @johnstifter 10 months ago

    Yo, I am tripping out over here

  • @whalingwithishmael7751
    @whalingwithishmael7751 11 months ago +1

    How about we don’t build aliens that could destroy us?

  • @bioshazard
    @bioshazard 11 months ago

    Wonder if Shane has looked at Shapiro's ACE Framework

  • @silberlinie
    @silberlinie 11 months ago +1

    27:00
    Do you also think that the ethics of other peoples are shaped, for example, by extreme religious thoughts? That, for example, the Western values of a good life apply to us, but to others only those values that lead to their respective paradise? So the question is: a particular morality and a particular ethics cannot be what we implement in an AGI.

  • @thebaker7
    @thebaker7 9 months ago

    There are those who put the guardrails on, and that's their purpose, and there are those that rip them off for profit. Choose sides. There is no safe middle ground.

  • @RichardWilliams-bt7ef
    @RichardWilliams-bt7ef 10 months ago +2

    Hearing him talk about alignment makes me very sad. He talks about understanding ethics generally as if it’s a relatively trivial problem. This is not going to end well.

  • @claudioagmfilho
    @claudioagmfilho 11 months ago +1

    🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Amazing video!

  • @bigmotherdotai5877
    @bigmotherdotai5877 11 months ago +2

    We'll know when human-level AGI has been achieved because advanced economies will have > 30% unemployment

    • @erikdahlen2588
      @erikdahlen2588 11 months ago +1

      No, that's when companies have started to implement AGI ;)

  • @mattlove4430
    @mattlove4430 a month ago

    How do you suppose you can align AI ethically with humans when humans as a whole do not align on what is ethical?

  • @marshallmcluhan33
    @marshallmcluhan33 11 months ago +1

    I'm not sure if the most powerful is the most ethical...

    • @ShangaelThunda222
      @ShangaelThunda222 11 months ago +1

      All we have to do is look at humans as an example to prove that the most powerful are usually the least ethical. And those are other humans....

  • @shiny_x3
    @shiny_x3 10 months ago

    The problem with modeling ethics of AI on human ethics is that we are absurdly unethical. We will spend thousands satisfying our whims while people starve, just because we aren't personally related to those people. We think murder is wrong, unless our government does it, and tells us it's justified. We don't realize how compromised our own ethics actually are. We don't realize how many possibilities we rule out because even though they would lead to good outcomes, we are too selfish to do them. If humans were ethical, we wouldn't have the world we have now that we want AI to save us from.

  • @faisalsheikh7846
    @faisalsheikh7846 9 months ago

    Bring Demis

  • @Dr.Z.Moravcik-inventor-of-AGI
    @Dr.Z.Moravcik-inventor-of-AGI 11 months ago

    So AGI, you are saying... :-)

  • @aidanthompson5053
    @aidanthompson5053 9 months ago

    19:44

  • @thebaker7
    @thebaker7 9 months ago

    The problem is that ethics, and the reasons behind it, are absolutely subjective. So AGAIN: WHO IS "WE" when YOU say "we need to decide"?

  • @danielcallahan7083
    @danielcallahan7083 10 months ago

    This is the man in charge of alignment? I mean..