General Intelligence: Define it, measure it, build it

  • Published on 14 Nov 2024

Comments • 54

  • @jb_kc__
    @jb_kc__ 13 days ago

    Probably one of the best presentations I've seen on AI.

  • @humnhumnhumn
    @humnhumnhumn 2 months ago +7

    "You are confusing the output of the process with the process itself"
    Very nice!

    • @aimorethanyouaskedfor
      @aimorethanyouaskedfor 1 month ago

      The argument against functionalism. It amazes me how many think that these are the same thing.

  • @jackswitzer5569
    @jackswitzer5569 27 days ago

    Incredible talk Doctor Chollet!

  • @peterdayton6796
    @peterdayton6796 2 months ago +3

    Excellent talk. Chollet deserves a lot of credit for developing, before the GPT craze, a benchmark that is so simple and yet on which GPT scaling has led to only modest improvements. Even if it does fall soon to ad-hoc approaches, lasting as long as it has is no small feat.

  • @DustinHunt-x2y
    @DustinHunt-x2y 3 months ago +10

    Excellent and succinct distillation of the current state of the industry and the path forward. Very inspirational!

  • @PaulTopping1
    @PaulTopping1 2 months ago +7

    Hey, that's the back of my head I see at the bottom right! Good presentation at a fun conference.

    • @binig.4591
      @binig.4591 2 months ago +2

      nice head

  • @samkee3859
    @samkee3859 2 months ago +12

    Chollet's perception and articulation are unmatched.

  • @juliocesar-io
    @juliocesar-io 2 months ago +1

    Refreshing view! ❤

  • @dizietz
    @dizietz 2 months ago +3

    I've been a big fan of Bongard problems, and the ARC test covers a similar problem space. I do think that approaches like Greenblatt's dynamic generation of solutions will push the state of the art forward. I also think that the kind of encoding the current models use and are trained on isn't fully conducive to this task, and more fine-tuned models might help.
    Also, I think that ARC puzzles should be abstracted further out to three dimensions and more complicated transformations, to encode patterns and transformations that even 150+ IQ test takers have difficulty recognizing.
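Greenblatt's approach, mentioned above, samples many candidate Python programs from an LLM and keeps the ones that reproduce the training pairs. A minimal sketch of that generate-and-test loop, with a tiny hand-written candidate pool standing in for LLM sampling (all names here are illustrative):

```python
# A toy sketch of Greenblatt-style generate-and-test on an ARC-like task.
# In the real approach thousands of candidate programs are sampled from an
# LLM; here a hand-written pool stands in for that sampling step.

def flip_h(grid):
    """Mirror a grid left-right."""
    return [row[::-1] for row in grid]

def flip_v(grid):
    """Mirror a grid top-bottom."""
    return grid[::-1]

def transpose(grid):
    """Swap rows and columns."""
    return [list(r) for r in zip(*grid)]

CANDIDATES = [flip_h, flip_v, transpose]

def solve(train_pairs, test_input):
    """Return the output of the first candidate consistent with every training pair."""
    for prog in CANDIDATES:
        if all(prog(x) == y for x, y in train_pairs):
            return prog(test_input)
    return None  # no candidate generalizes

train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]  # hidden rule: horizontal flip
print(solve(train, [[5, 6], [7, 8]]))           # [[6, 5], [8, 7]]
```

The verification step is what makes this more than guessing: a candidate is only kept if it exactly reproduces every demonstration pair.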

  • @johnkintree763
    @johnkintree763 2 months ago

    One definition of intelligence is the ability to solve problems in complex situations. There are many problems, from the local to the global level, that are not being solved well with the current level of intelligence. Our highest priority is creating collective human and digital intelligence.
    Language models in agentic workflows can extract entities and relationships from text and merge that knowledge into graph representations. Keeping the human in the loop is important to catch and correct mistakes made by the speech recognition and language models.
    People can select parts of conversations they have with digital agents to be merged into a global shared graph representation. A global platform that merges selected parts of millions of simultaneous conversations into a shared world model could be built with today's technology by the end of this year.
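The workflow described above can be sketched as merging (entity, relation, entity) triples into a shared graph. In practice a language model would extract the triples from conversation text; here they are given directly, and the example triples are illustrative:

```python
# Sketch: merging extracted knowledge triples into a shared graph,
# deduplicating repeated edges. Triple extraction itself would be done
# by a language model upstream of this step.

from collections import defaultdict

def merge_triples(graph, triples):
    """Merge (subject, relation, object) triples into the graph; sets dedupe edges."""
    for subj, rel, obj in triples:
        graph[subj].add((rel, obj))
    return graph

graph = defaultdict(set)
merge_triples(graph, [("Chollet", "created", "Keras"),
                      ("Keras", "integrated_into", "TensorFlow")])
merge_triples(graph, [("Chollet", "created", "Keras")])  # duplicate: no effect

print(sorted(graph["Chollet"]))  # [('created', 'Keras')]
```

Using a set per subject makes the merge idempotent, so the same conversation fragment can be submitted twice without corrupting the shared model.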

  • @oysterboulevard6623
    @oysterboulevard6623 2 months ago

    Great video!

  • @imad1996
    @imad1996 2 months ago

    Throwing a million dollars into an AI puzzle fuels substantial momentum toward AI research. And that is awesome.
    Thank you for differentiating between the output of the process and the process itself. Unfortunately, trending concepts become marketing slogans, such as Samsung Galaxy AI :).

  • @NitsanAvni
    @NitsanAvni 2 months ago +1

    20:20 Should we also measure agents with fewer / more shots? How many shots are needed to get an agent to a score of 90?

  • @G1364-g5u
    @G1364-g5u 2 months ago +4

    # AI Progress and Generalization: A Critical Review
    ## Chapter 1: The AI Hype of Early 2023
    **Timestamp:** 0:00 - 1:57
    - Overview of the peak AGI hype in early 2023.
    - ChatGPT, GPT-4, and Bing Chat were perceived as revolutionary, with claims that AI would drastically increase productivity and replace many jobs.
    - Despite the hype, the actual impact on employment has been negligible, and fears of mass unemployment were unfounded.
    ## Chapter 2: Limitations of Large Language Models (LLMs)
    **Timestamp:** 1:58 - 3:38
    - LLMs, including ChatGPT, have inherent limitations such as failing to understand context and falling into pattern matching rather than true comprehension.
    - These limitations are tied to the fundamental architecture and approach of current AI models, showing little progress over time.
    ## Chapter 3: Problems with Task Familiarity and Generalization
    **Timestamp:** 3:39 - 6:31
    - LLMs struggle with unfamiliar tasks, performing well only on tasks they have memorized.
    - Performance issues arise from the extreme sensitivity to phrasing and the inability to generalize from known tasks to new, similar ones.
    ## Chapter 4: The Misconception of AI Intelligence
    **Timestamp:** 6:32 - 13:51
    - Intelligence should not be equated with task-specific skill; true intelligence involves the ability to handle novel situations.
    - The speaker argues for a shift from task-based AI evaluations to measuring generalization and adaptability.
    ## Chapter 5: Redefining Intelligence and Measuring Progress
    **Timestamp:** 13:52 - 19:15
    - Intelligence should be viewed as the ability to synthesize new solutions and adapt to new situations.
    - The current benchmarks, based on human exams, are inadequate for assessing AI’s true generalization capabilities.
    ## Chapter 6: The Abstraction Reasoning Corpus (ARC) and Generalization Benchmarking
    **Timestamp:** 19:16 - 24:22
    - Introduction of ARC, a dataset designed to measure an AI's ability to generalize and perform novel tasks.
    - ARC aims to control for prior knowledge and experience, emphasizing the need for AI to infer solutions rather than relying on memorization.
    ## Chapter 7: The Role of Abstraction in AI and Human Intelligence
    **Timestamp:** 24:23 - 30:36
    - Abstraction is the key to generalization; intelligence depends on the ability to recognize and apply abstract patterns.
    - LLMs are currently limited to low-level abstraction and lack the capability to synthesize new models on the fly.
    ## Chapter 8: Integrating Type 1 and Type 2 Thinking for AGI
    **Timestamp:** 30:37 - 37:41
    - The next step in AI development involves combining Type 1 (intuition, pattern recognition) and Type 2 (logical reasoning) thinking.
    - Human intelligence excels because it merges these two forms of thinking, and AI needs to follow a similar path.
    ## Chapter 9: Combining Deep Learning with Program Synthesis
    **Timestamp:** 37:42 - 42:48
    - Future AI advancements will likely involve merging deep learning (Type 1) with program synthesis (Type 2) to handle complex, novel tasks.
    - This approach could significantly improve AI’s problem-solving capabilities and generalization.
    ## Chapter 10: Practical Applications and the Future of AI Development
    **Timestamp:** 42:49 - 45:12
    - Practical strategies for improving AI, such as using LLMs to generate and refine programs, show promise in advancing AI generalization.
    - The importance of innovative thinking and diverse approaches in overcoming current AI limitations.
    ## Chapter 11: The Need for New Breakthroughs and Intellectual Diversity
    **Timestamp:** 45:13 - 48:36
    - The speaker emphasizes that progress towards AGI has stalled due to a lack of new ideas and intellectual diversity.
    - The speaker suggests that the next breakthroughs are likely to come from outsiders rather than big tech labs.
    ## Chapter 12: Future Directions and Closing Thoughts
    **Timestamp:** 48:37 - 53:34
    - The development of AI tests and challenges like ARC 2 is discussed, aiming for more sophisticated and dynamic assessments.
    - Insights from observing human cognitive development, particularly in children, could inform AI research and the creation of more generalizable AI systems.
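Chapters 8-9 of the outline above describe combining Type 1 intuition with Type 2 discrete search. A toy sketch of that idea over a tiny three-operation DSL (everything here is illustrative; in a real system the `score` heuristic would be a trained neural network guiding the search):

```python
# Neural-guided program search, sketched: a cheap heuristic (Type 1) orders
# an enumerated space of candidate programs so that exact verification
# (Type 2) tries the most promising ones first.

from itertools import product

OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def run(program, x):
    """Execute a sequence of op names on input x."""
    for op in program:
        x = OPS[op](x)
    return x

def score(program, examples):
    # Stand-in for a learned guide: how many examples the program already solves.
    return sum(run(program, a) == b for a, b in examples)

def guided_search(examples, depth=3):
    """Enumerate programs up to `depth` ops, trying guide-preferred ones first."""
    candidates = [list(p) for n in range(1, depth + 1)
                  for p in product(OPS, repeat=n)]
    candidates.sort(key=lambda p: -score(p, examples))  # Type 1: intuition orders the search
    for prog in candidates:                             # Type 2: exact verification
        if all(run(prog, a) == b for a, b in examples):
            return prog
    return None

print(guided_search([(1, 4), (3, 8)]))  # e.g. ['inc', 'dbl'], i.e. x -> (x + 1) * 2
```

With a good guide, the verifier rarely has to look past the first few candidates, which is how this hybrid sidesteps the combinatorial explosion of blind search.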

  • @techsuvara
    @techsuvara 2 months ago +2

    Excellent reality check for AI. I bet this video won't have 1 million views like the hype videos.

  • @FamilyYoutubeTV-x6d
    @FamilyYoutubeTV-x6d 2 months ago

    This is really good!

  • @TooManyPartsToCount
    @TooManyPartsToCount 2 months ago +1

    It is not entirely clear that we humans perform some computational magic beyond merely relying on pre-learnt patterns. Perhaps it just 'feels like' we reach beyond the training set when we have those eureka moments, or imagine that we just came up with a solution to a novel problem. Perhaps we have such a tightly woven mesh of prior training examples that those 'eureka' moments are in fact inevitable, and only appear novel to us because we are unconscious of the combined results of all our prior training.

    • @ggir9979
      @ggir9979 2 months ago +2

      That's not really the point that the ARC challenge is tackling.
      The issue is that LLMs cannot do recursion; they cannot do step-by-step reasoning.
      When they write code for a computer program, they cannot interpret it and run it. Everyone who works with a coding assistant will tell you that they routinely produce code that will not run - very simple stuff, like simple loops that are easy to unfold. That's why you see a lot of neurosymbolic approaches being tried out nowadays (as explained in the presentation).
      Humans, on the contrary, can interpret and execute computer programs. Tedious and slow, but doable. We built airplanes and rockets before we had computers; someone had to run those algorithms.
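The reply's point, made concrete: unfolding a simple loop is a purely mechanical, step-by-step process that an interpreter (or a patient human) carries out exactly - precisely the kind of execution being described as unreliable in LLMs. A minimal illustration:

```python
# Unfold a running-total loop into the explicit step-by-step trace a
# human or interpreter would produce when executing it by hand.

def trace_loop(n):
    """Unfold `total += i for i in range(n)` into explicit steps."""
    steps, total = [], 0
    for i in range(n):
        total += i
        steps.append(f"i={i}, total={total}")
    return steps, total

steps, total = trace_loop(4)
for line in steps:
    print(line)  # i=0, total=0 ... i=3, total=6
print(total)     # 6
```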

    • @TooManyPartsToCount
      @TooManyPartsToCount 2 months ago

      @@ggir9979 Point about the ARC challenge taken. I was raising what is, at least to me, an interesting meta question: could our current understanding of what constitutes intelligence be lacking, especially as concerns our (human) particular brand of intelligence?
      There is no denying that LLMs are far from what I would call intelligent, though! More like data compression artifacts. Most claims that LLMs are more than that are just part of some marketing strategy (think Microsoft's 'Sparks of AGI' paper/lectures).

    • @ggir9979
      @ggir9979 2 months ago

      @@TooManyPartsToCount Good point!
      I don't know if you know Ben Goertzel; he has a lot of interesting things to say about the theory of mind and what intelligence is, going further than what you could see in this video.

    • @TooManyPartsToCount
      @TooManyPartsToCount 2 months ago

      ​@@ggir9979 This - th-cam.com/video/D8wxThDlVBc/w-d-xo.html
      And some other vids are where it started for me. Basically went back to school thanks to BG and JB :)
      Every time I see a YT video on AI with Sam Altman in the thumbnail I think 'why not Ben Goertzel?!' or at least Ilya or Andrej or Chollet or Bengio or......

    • @ggir9979
      @ggir9979 2 months ago +1

      @@TooManyPartsToCount The rarest of things: a nice and civilized exchange of ideas on the internet :-)
      I will have to agree 100% with you; any of these speakers is much more interesting to listen to than Sam Altman.
      I am not familiar with JB's work, so I'll check it out next, thanks for the tip!

  • @way2on
    @way2on 2 months ago

    We are not providing the same tools back to the core architecture, such as real-time inference.

  • @JanSmetana
    @JanSmetana 2 months ago

    16:00 "If you know how to drive only in very specific geo-fenced areas - that's intelligence".
    Isn't that a SKILL, as you defined it?
    INTELLIGENCE is the ability to drive anywhere (left, right, town, country ...), not just in my own 3x3 box of streets (which would be a learned skill, where you have already tried every possibility).

    • @micheldominic5963
      @micheldominic5963 2 months ago

      He said "you know that's less intelligent", not "that's intelligence".

  • @Maximooch
    @Maximooch 2 months ago +3

    “See you on the leaderboard for ARC-AGI” is quite a way to end it

  • @danecjensen
    @danecjensen 2 months ago

    Chapters (Powered by @danecjensen) -
    00:00 - Intelligence, benchmarks, and AI hype in 2023
    03:26 - LLMs' autoregressive responses
    07:36 - LLMs' inability to solve Caesar ciphers
    10:14 - ML models rely heavily on human labor
    11:41 - Minsky-style AI vs McCarthy-style AI
    15:25 - Intelligence spectrum: skilled, operational, efficient
    16:42 - AI models should not be evaluated using human exams
    17:29 - AI's next level of capabilities and efficiency
    18:53 - ARC-AGI, an AI benchmark for human intelligence
    20:49 - Data sets: Kaggle, ARC-AGI, ARC
    21:45 - Zach offers $1M ARC-AGI solution competition
    24:58 - Physicists describe intelligence and abstractions
    29:09 - AI's ability to master tasks efficiently
    30:50 - Two types of abstraction: value-centric and program-centric
    33:48 - Discrete program search for AGI
    34:27 - Program synthesis (PS) vs machine learning
    35:30 - Program synthesis overcomes combinatorial explosion, LLMs' limitations
    36:46 - AI combines chess and discrete search techniques
    41:17 - Deep learning components in discrete programs
    41:56 - Deep learning for ARC-AGI program synthesis
    43:19 - Program embedding for efficient search
    43:45 - Python ARC-AGI pipeline improvement
    44:55 - LLMs fall short of AGI, need breakthroughs
    47:50 - Breakthrough in ARC-AGI likely to come from an outsider, not big labs
    51:09 - Experiential learning and causality in children's learning
    52:16 - Humans are capable of few-shot program synthesis
    53:32 - Human cognition works on a fundamental level
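The 07:36 chapter flags LLMs' trouble with Caesar ciphers: models tend to decode shifts that are common in training data (such as ROT-13) but stumble on arbitrary keys, even though the algorithm itself is trivial. A minimal implementation for reference:

```python
# A Caesar cipher shifts each letter by a fixed key. The algorithm is
# trivial to state and run, which is what makes model failures on
# uncommon shift values telling.

def caesar(text, shift):
    """Shift alphabetic characters by `shift` positions, preserving case."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

msg = caesar("hello world", 13)  # ROT-13, the shift models handle best
print(msg)                        # uryyb jbeyq
print(caesar(msg, -13))           # hello world
```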

  • @immortalityIMT
    @immortalityIMT 2 months ago

    Don't we have enough AI to brute-force the model we need?

  • @dot1298
    @dot1298 2 months ago +2

    But what about AlphaProof, AlphaGeometry 2, Q-star, the upcoming Claude 3.5 Opus, and Gemini 2 Ultra (aka gemini-test in lmsys)?

    • @dot1298
      @dot1298 2 months ago +1

      and the 2025 project of a massive AI super-computer-cluster?

  • @aaronbeach
    @aaronbeach 1 month ago

    My AI professor in college (back in 2003), Christopher Riesbeck, said that the definition of AI was a moving target (he had already been doing AI research for 30 years at that point). He said, "Artificial Intelligence is the search for the answer to the fundamental question: Why are computers so stupid?" Even the ARC-AGI description on the page for this prize defends it by saying "It's easy for humans, but hard for AI."
    I have been convinced that the question of AI has been (and, until I see otherwise, will remain) the question of why a computer that can do many computational tasks orders of magnitude faster and better than a human still fails at certain basic (seemingly simple) human activities. A simple hypothesis yet to be rejected by the science is that a human brain is not a Turing machine - that although we use the analogy of computation to correlate what humans and computers do, they seem to be doing two different things.
    One simple example of this (that is probably wrong) is that the human mind is a quantum process of some sort, and that this explains why computers using pattern extrapolation require exponentially more parameters to achieve linear improvements in their ability to emulate it. But that's just one example of how the human mind could be something fundamentally different, with AI just using another process ("computation") to simulate it.

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 2 months ago

    I took a look at the ARC Prize... it... it doesn't make sense? A very smart human scores 22% on ARC... the top AI score is 46% on the leaderboard ALREADY? So... what are you trying to measure here? Who is best at 'guess the algorithm I'm thinking of'?

  • @techsuvara
    @techsuvara 2 months ago +1

    There are IQ tests for that :D. This also goes to show how flawed IQ tests are, especially when the children of those who wrote the tests seem to score the highest. (True story)

  • @imad1996
    @imad1996 2 months ago

    Somebody from space is listening: "Oh, poor humans, they still refer to abstraction as a blurred image, hahahahahahah." Abstractions have their own uses, but they could be a large source of limitations if we try to imitate them from humans. Abstractions could be human limitations in trying to understand matters - unless we refer to the AI model itself as the abstraction layer. I don't know how relevant that could be.

  • @rtnjo6936
    @rtnjo6936 2 months ago +11

    We don't need AGI; we need tools, not gods.

    • @FamilyYoutubeTV-x6d
      @FamilyYoutubeTV-x6d 2 months ago

      AGIs will not imply divine qualities worthy of worship. Ironically, the anti-AGI people who make this kind of comment are the ones who will be hyping future intelligent agents to the level of gods, while the people who are interested in using the technology for a variety of purposes will see them as tools.
      I see your point, though.

    • @spagetti6670
      @spagetti6670 2 months ago +1

      Ok bro!!!!

    • @tack3545
      @tack3545 2 months ago

      AGI would be such a good "tool" that it would transcend the standard meaning of the word. Why settle for basic tools when so much more lies just beyond the horizon?

    • @rtnjo6936
      @rtnjo6936 2 months ago

      @@tack3545 Why settle for getting a cat when you could go all out and buy a full-grown tiger? Why choose the basic option? Here's the reality: I'm in control of my environment. I'm the dominant species, and I understand this world with a high degree of predictability.
      With current technological advancements, I can expect even better tools, a more efficient society, and more property. So why on earth would I create another agent that could surpass me in every dimension?
      People who praise AGI for its potential to make the world a better place overlook the inherent risks; once it's unleashed, there's no turning back, and the very power that promises great things could also bring catastrophic consequences.
      Right now, we're caught in an AI rat race. Different labs, companies, and even countries are fiercely competing to develop the most advanced AI.
      Maybe, in the far future - when humanity is more advanced, when global cooperation is so high that we can make unified decisions, and when our understanding of AI is no longer trapped in today's black-box methods - then, and only then, might it make sense.
      The people making decisions about such tremendous technology today, with their average IQ of 130, are incomparable to the future generations who will possess far deeper wisdom about the consequences and complexities of such advancements.
      For now, there's no need to gamble with uncontrollable power when we can continue to improve humanity with the tools we have, advancing our standards of living, morality, rights, and freedom.

    • @rtnjo6936
      @rtnjo6936 2 months ago

      @@tack3545 lmao, they deleted my comment, let's post it again:
      why settle for getting a cat when you could go all out and buy a full-grown tiger? Why choose the basic option? Here’s the reality: I’m in control of my environment. I’m the dominant species, and I understand this world with a high degree of predictability. With current technological advancements, I can expect even better tools, a more efficient society, and more property. So why on earth would I create another agent that could surpass me in every dimension? It’s like making a deal with the devil. People who praise AGI for its potential to make the world a better place overlook the inherent risks of such immense power. Once it’s unleashed, there’s no turning back, and the very power that promises great things could also bring catastrophic consequences.
      Right now, we’re caught in an AI rat race. Different labs, companies, and even countries are fiercely competing to develop the most advanced AI. This race often leads to rushed decisions and prioritizing breakthroughs over safety. Maybe, in the far future-when humanity is more advanced, when global cooperation is so high that we can make unified decisions, and when our understanding of AI is no longer trapped in today’s black-box methods-then, and only then, might it make sense. The people making decisions about such tremendous technology today, with their average IQ of 130, are incomparable to the future generations who will possess far deeper wisdom about the consequences and complexities of such advancements. For now, there’s no need to gamble with uncontrollable power when we can continue to improve humanity with the tools we have, advancing our standards of living, morality, rights, and freedom.

  • @ceilingfun2182
    @ceilingfun2182 2 months ago

    Is this guy for real?

    • @randylefebvre3151
      @randylefebvre3151 2 months ago +5

      Yes - why? The presentation is on point.

    • @wege8409
      @wege8409 2 months ago +1

      Yes, this is the creator of Keras, the deep learning API that TensorFlow adopted as its official high-level interface.

    • @spagetti6670
      @spagetti6670 2 months ago +2

      Yes he is, and he's probably way smarter than you are.

    • @0113Naruto
      @0113Naruto 1 month ago

      You think we already have AGI?

  • @tankieslayer6927
    @tankieslayer6927 3 months ago +6

    There is no viable path to AGI with current hardware and algorithms.

    • @blahblahsaurus2458
      @blahblahsaurus2458 2 months ago

      It's a very good talk, and it helps me understand how I went wrong overestimating AI a year ago. But there are problems with assuming that the entire field is stuck and predicting that it will remain stuck for years.
      For one thing, no one person knows the 'current state of the industry', much like no one person knows the locations of every nuclear warhead in the world, or how it's unlikely that any one person knows the locations of all the major meth labs.
      We know a lot about what OpenAI and the rest are publishing. We can even say that these companies are motivated by keeping their stock prices high, and by selling their products to average people, so in general they all have a strong incentive to reveal and sell their technology as soon as they can. And they all have employees who can leak information anyway.
      At the same time, we know that all of them do billions of dollars of R&D that they are able to keep secret. We are used to discounting this unknown R&D, because it's been a while since there's been a major perceivable improvement in AI capabilities. It's hard to believe in the promise of Q* when we've been hearing nonsense about it for a whole year. However, in 2021 most experts thought it would be decades before something with the capabilities of today's generative AIs would arrive. We won't know until we know!
      But more importantly, people make the mistake of forgetting about countries and their resources and assuming that the _public_ state of the art is the _actual_ state of the art. It is virtually guaranteed that every major military power is experimenting with AI. Countries have very different incentives than companies. We can argue all day about what the potential of AI is and what the timeline is, and such arguments are reflected in the stock market, which drives companies to seek short term, and relatively likely returns on investment. And if everyone believes AI is bogus, investors will cash out, and private sector AI development will slow down. A large country on the other hand doesn't care if it spends a billion dollars on a top secret project and nothing comes of it. Every country understands that the *maximum potential value* of AI is very very high, and that the maximum potential *cost* of allowing your enemy to build an army of drones before you do is also very very high. Thus, a billion dollars is a small cost to pay for peace of mind, even if ten years pass and AGI fails to materialize. And almost as bad as losing the AI race is allowing your rivals to learn the state of your AI research.
      Because it's so difficult for anyone to find out if and when a different country creates AGI and starts building drones that build drones, a country will want to delay revealing their own AGI until the last possible moment.
      Suppose it's 2030, 2040, or December this year or whatever, and the US, China, and Russia all have big AI drone armies. Russia would be wise to keep its own AI secret, allow the US and China to fight, causing them to waste resources and reveal their technology, and then go on a blitz to try to conquer the world (or at least damage other countries' capacity to make AI and drones). But if Russia does that, how do they know if Saudi Arabia or Brazil or Jeff Bezos haven't been hiding even more powerful weapons?
      We can ask "why make weapons at all, why wouldn't countries cooperate". But it doesn't matter if 90% of militaries have benign intentions. The existence of that 10% of evil dictators with major resources will make everyone scared, paranoid, suspicious, and determined to defeat potential enemies as soon as possible.
      So we don't know the state of the race, but we do know that when chatgpt made news it was the starting signal for everyone to enter the race. Or, if these militaries were paying attention, they entered the race after image generators, or GPT-3, or even AlphaGo. These countries may be behind, on par with, or ahead of the private sector. They are going to extreme lengths to hide what they have as well as what they don't have. No one will want to reveal its drone army first, BUT, if you wait you also risk letting a very evil dictator or billionaire get a lead in the race.
      We will probably start finding out what all the militaries have been doing when the private sector makes significant achievements, whenever that may be. If robots start washing dishes competently, or driverless cars become widespread, or various professions become automated, it will no longer make sense for countries to pretend they don't have weapons. We will probably enter a phase where countries make displays of military power to discourage their enemies from attacking, but these displays will not reveal everything, and they may be bluffs.
      Eventually someone - US, China, some guys in a basement - will reach the conclusion that they are currently ahead in the race, or that they will soon fall behind in the race, or they'll just get nervous, and make the first attempt to defeat their rivals. This will then devolve into a global war.
      We can say - well we've had nukes for 80 years, but very little conflict between nuclear powers. Yes of course, because no one wins a nuclear war - everyone is harmed by nuclear retaliation, or radiation. No one wins a nuclear war, that is, unless they're in a bunker deep underground and have an army of drones that build drones that can survive radiation.

    • @coolcool2901
      @coolcool2901 2 months ago

      Extropic Thermodynamic Chips (Stochastic Processor)
      The deal breaker.