Pattern Recognition vs True Intelligence - Francois Chollet

  • Published on Nov 21, 2024

Comments • 190

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  15 days ago +15

    SPONSORED BY TUFA AI LABS (home of MindsAI)!
    Open research positions to work on ARC - tufalabs.ai/open_positions.html

  • @stephenwallace8782
    @stephenwallace8782 14 days ago +44

    This channel has shown me the whole world of AI in a serious fashion - the intersection of logic, reasoning, philosophy, and science....makes the hype-side seem a little silly compared to the incredibly rich content this channel puts out.

    • @stephenwallace8782
      @stephenwallace8782 14 days ago +1

      I'm not sure if -- in the sense of human beings -- we might have a kind of rich competence, with only performance enhanced or degraded depending on certain elements... But I really like François Chollet's understanding of intelligence a lot and think he is really carving the right way forward for the art - realistic, but considerate of the promise made by all of the influx of AI.

    • @MrgoldenRose
      @MrgoldenRose 5 days ago

      Totally agree

  • @ngbrother
    @ngbrother 15 days ago +44

    Something that I very often think of when reflecting on this definition of intelligence is how many forms of economically valuable work don’t require dealing with novelty - just pattern recognition and following standard processes. The type of intelligence that ARC asks us to strive for isn’t necessary for AI systems to displace a significant portion of human labor. It’s only needed if we want AI to replace all economically valuable work.

    • @VoloBonja
      @VoloBonja 15 days ago +2

      It’s needed to call it AGI

    • @TerrelleStephens
      @TerrelleStephens 15 days ago +12

      You're completely missing the point. The point is that those tasks don't require intelligence at all.
      He's not arguing against AI systems being useful; he's saying don't be fooled into thinking it's more capable than it is simply because it does so well on tasks that don't require it to use actual intelligence (as defined by Chollet).
      The type of intelligence he is advocating for is necessary for true conversation and decision making. Not everyone is interested in AI simply to automate some process. There is a lot of room for AI systems to act as advisors who don't share the same memory deficits as humans and can simultaneously consider many more courses of action than a human without losing the context and goal of the one they're advising.
      This requires Chollet's version of intelligence.

    • @w花b
      @w花b 14 days ago

      AI companies have been promising AGI (whatever that means) but we can at least agree that an AGI could easily solve ARC. What you're doing isn't just shifting the goalposts, you're trying to convince yourself that they're not even there.

    • @szebike
      @szebike 14 days ago

      If you look deep enough, you definitely need more than mere pattern recognition for most tasks. Only a small subset of work really depends on patterns and nothing else. So for the foreseeable future we will have systems as assistants, not as replacements. [Some tasks will be replaced, like writing standard answer emails, but that is a "photo instead of a painting" style of innovation rather than creating an artificial human - we are still far away from that.]

    • @MarkEngelstad
      @MarkEngelstad 11 days ago +1

      Almost anytime somebody says a job doesn't require intelligence, you are talking to somebody who has never done the job.

  • @Emerson1
    @Emerson1 15 days ago +12

    Great interview! Francois has a lot of great novel ideas and can clearly express them; I see why you're a fan!

  • @CodexPermutatio
    @CodexPermutatio 15 days ago +33

    True intelligence involves planning, learning, and modelling "new concepts" about the world and ideas in general.
    For this, pattern recognition is a necessary (but obviously not sufficient) requirement.
    Amazing content as always. Glad to see Chollet back on the show!

    • @EobardUchihaThawne
      @EobardUchihaThawne 15 days ago +2

      I think searching is a part of intelligence too

    • @ArtOfTheProblem
      @ArtOfTheProblem 14 days ago

      @@EobardUchihaThawne Search encompasses planning: you need a world model + a search algorithm

    • @axe863
      @axe863 12 days ago +1

      @@EobardUchihaThawne Searching can be a part of intelligence if it's done in a restrictive manner.

    • @yohanj5239
      @yohanj5239 8 days ago

      Yes I agree. But are those patterns causally connected? Current AI systems are mostly based on correlations, requiring absurd amounts of energy to reconstruct the context.

    • @MrgoldenRose
      @MrgoldenRose 5 days ago

      @@EobardUchihaThawne Effective search definitely requires it, but I don't think that makes them the same thing

  • @FredPauling
    @FredPauling 12 days ago +4

    Francois has such a clear thought process

  • @Komaruluten
    @Komaruluten 15 days ago +19

    Yesss finally someone realistically depicting the current state of AI.
    AIs are currently really good at pattern recognition.
    Pattern recognition is one of our brain's intellectual functions, yeah, but it's not the only function involved in making humans intelligent.
    Relational reasoning, spatial manipulation, different kinds of memory (working memory, short-term, long-term), executive functions - these are all interconnected intellectual functions of the human brain. LLMs, for instance, are currently only capable of a subset of relational reasoning (which is pattern recognition) and also have memory - they simply are not at our level yet.

    • @w花b
      @w花b 14 days ago

      They're just very sophisticated search engines

  • @DiogoVKersting
    @DiogoVKersting 15 days ago +8

    This interview was awesome. Thank you

  • @BeTheFeatureNotTheBug
    @BeTheFeatureNotTheBug 15 days ago +3

    Thank you so much for this video. I can’t say enough about the value of this interview on balance with a 1000 others.

    • @BeTheFeatureNotTheBug
      @BeTheFeatureNotTheBug 15 days ago

      The thumbnail got me!!! Look out Mr Beast. Seriously why I clicked even though I'm a subscriber. Best ever.

  • @morphos2
    @morphos2 14 days ago +1

    François Chollet is an endlessly deep vault of interesting ideas. What a fantastic conversation!

  • @jb_kc__
    @jb_kc__ 15 days ago +1

    the way the interviewer was smiling the whole conversation.... me and you both mate

  • @antonystringfellow5152
    @antonystringfellow5152 15 days ago +12

    "Intelligence vs Skill"
    Very well explained!
    This is where I believe Demis Hassabis got it wrong when he said that you can have intelligence without consciousness. I don't think you can.
    The smartest LLMs are like the subconscious part of our mind that can learn elaborate skills but that understand nothing. The conscious part of the brain, which is slow and serial (can only focus on one thing at a time) delegates most of the work to these programmed areas, only providing guidance where necessary. When we delegate too much and don't provide sufficient guidance, so we can focus on something else, we often end up executing unwanted actions, like taking a wrong turn in the car, walking into the wrong room or throwing food in the bin then putting the packaging in the fridge. We have to delegate but then guide and monitor occasionally because although our subconscious has the ability to execute actions, it has no idea why it's doing anything. It understands nothing.
    To understand requires the awareness we know as consciousness.

    • @blijebij
      @blijebij 7 days ago

      How about a less black-and-white perspective, with degrees of intelligence that can combine (fall in line) with other traits of consciousness? It could be that, at the highest degree, no trait of consciousness can stand 100% isolated on its own, but always lines up with other qualities of consciousness.

  • @42222
    @42222 13 days ago +2

    “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
    Albert Einstein
    The key to AI is the alchemy between Intuition and Rationality.

    • @ihysc4370
      @ihysc4370 5 days ago

      Of course!

  • @stevo-dx5rr
    @stevo-dx5rr 15 days ago +2

    Great intro, and great talk so far!

  • @Morimea
    @Morimea 15 days ago +1

    1:51:34 - very good analogy for "intelligent tasks and agents"
    1:56:51 - also very nice point about learning

  • @justinduveen3815
    @justinduveen3815 14 days ago

    Thank you for explaining your thorough research and advanced thinking so clearly!
    I agree that our brains - and the path to AGI - are made up of multiple agents and sub-agents working together, each with different expert specialisations and built-in reward-maximisation and loss-minimisation functions.
    By using prompt engineering to create the appropriate expert agent (with many years of experience in that particular field and with the appropriate value system, thinking skills and output format), then chaining many agents together to work both hierarchically and sequentially, these collaborations unlock improved cognitive capabilities.

  • @julian78W
    @julian78W 5 days ago +1

    Thanks for this great interview. I'd love to see a debate against Eliezer because I have yet to hear a good rebuttal to his arguments. Chollet does seem to strawman the AI safety position in the ending segment.

  • @adamkadmon6339
    @adamkadmon6339 14 days ago +1

    Terrific. Such a thoughtful person.

  • @simonstrandgaard5503
    @simonstrandgaard5503 15 days ago +1

    Great interview. Well produced.

  • @rolodexter
    @rolodexter 15 days ago +2

    Revelations from most to least severe, focusing on implications for AI development and our understanding of intelligence:
    1. Most Severe: Current AI Performance Metrics Are Fundamentally Flawed
    Timestamp: 00:03:45-00:04:05
    Quote: "Performance is measured via exam style benchmarks which are effectively memorization games"
    Why Panic-Inducing: This suggests we've been fooling ourselves about AI progress - our primary metrics for "intelligence" are actually just measuring memorization capacity. Years of perceived progress might be illusory.
    2. The Scale is All You Need Hypothesis is Wrong
    Timestamp: 00:02:34-00:03:00
    Quote: "Many people are extrapolating... that there's no limit to how much performance we can get out of these models all we need is to scale up the compute"
    Why Concerning: The dominant strategy in AI (just make bigger models) may be fundamentally misguided. This challenges the foundation of many major AI companies' strategies.
    3. LLMs Cannot Do True Reasoning
    Timestamp: 16:54-16:56
    Quote: "Neural networks consistently take pattern recognition shortcuts rather than learning true reasoning"
    Why Alarming: Suggests current AI systems, no matter how impressive they seem, are fundamentally incapable of real reasoning - they're just very sophisticated pattern matchers.
    4. We're Missing Half of Intelligence
    Timestamp: 00:12:17-00:12:28
    Quote: "Intelligence is a cognitive mechanism that you use to adapt to novelty to make sense of situations you've never seen before"
    Why Troubling: Current AI systems lack this fundamental capability, suggesting we're much further from AGI than many believe.
    5. The Deep Learning Limitation
    Timestamp: ~16:39-16:54
    Quote: "I realized that actually they were fundamentally limited, that they were a recognition engine"
    Why Significant: Suggests deep learning itself may be a dead end for achieving true AI, despite being the dominant paradigm.
    This transcript is particularly shocking because it systematically dismantles many of the core assumptions driving current AI development and suggests we might be on the wrong path entirely. Chollet's insights, backed by his extensive experience and concrete examples like the theorem-proving work, suggest that the current AI boom might be building on fundamentally limited foundations.
    The most panic-inducing aspect is that these aren't speculative concerns - they're observations from someone who has been deeply involved in the field and has seen these limitations firsthand through practical experimentation. It suggests we might need to fundamentally rethink our approach to AI development.

  • @faster-than-light-memes
    @faster-than-light-memes 15 days ago +1

    That was a small book length podcast. Epic.

  • @dr.mikeybee
    @dr.mikeybee 15 days ago +4

    The space of vector functions is functionally complete. That means that, in composed pipelines of vector functions, some of the stages can be logical functions like AND and OR.

    • @Morimea
      @Morimea 15 days ago

      >AND and OR
      Boolean algebra.
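
      A quick sketch of what that means in practice (illustrative only, plain numpy; not from the video): AND and OR realized as thresholded linear functions, i.e. vector functions that can sit inside a composed pipeline.

          import numpy as np

          def threshold_neuron(w, b):
              # returns a vector function x -> step(w.x + b)
              return lambda x: int(np.dot(w, x) + b > 0)

          AND = threshold_neuron(np.array([1.0, 1.0]), -1.5)  # fires only if both inputs are 1
          OR = threshold_neuron(np.array([1.0, 1.0]), -0.5)   # fires if at least one input is 1

          for a in (0, 1):
              for b in (0, 1):
                  x = np.array([a, b])
                  print(a, b, "AND:", AND(x), "OR:", OR(x))

      Since NOT is expressible the same way (w = -1, b = 0.5) and {AND, OR, NOT} is functionally complete, any Boolean circuit can in principle be embedded in such a pipeline.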

  • @wwkk4964
    @wwkk4964 15 days ago +2

    🎉Great interview!

  • @NoName-lq7kt
    @NoName-lq7kt 15 days ago +5

    My ego took a hit from the title and I clicked

  • @loopuleasa
    @loopuleasa 3 days ago

    Francois's observations on how his own children learn (at 24:30) are the best avenue for understanding learning in general, and why current AI systems don't learn as well.

  • @hannes7218
    @hannes7218 14 days ago

    Great interview! Would love to see Albert Gu on MLST at some point in the future

  • @HanzDavid96
    @HanzDavid96 12 days ago

    The LLM can be very helpful for exploring the solution space to find new patterns! We always think based on our experience, but we can use that experience to find new solutions to problems. And then that new solution becomes part of our experience. That's why you need to use the LLM within a multi-agentic system that is able to reflect and support multiple modalities.

  • @dcreelman
    @dcreelman 15 days ago +2

    Fluid intelligence vs knowledge re professors entrenched in their beliefs (48:30)
    "It depends whether you…believe you already have the answer to the question or you believe you have templates that you can use to get the answer."

  • @RealStonedApe
    @RealStonedApe 11 days ago

    I'd never heard the Kaleidoscope metaphor - that's really beautiful. I love that.
    That being said, I can't help but feel that Chollet falls into the Yann LeCun camp of being on the opposite extreme of the AI hype. I feel that he is underestimating just how capable LLMs are and how much more is going on with them under the hood that we don't yet understand. But he's correct - there is something missing here and it's not just scale.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  11 days ago +1

      I don't think he's underestimating LLMs broadly, he's just arguing that they need to be augmented with a search/reasoning process and perhaps test-time inference - and this is what the frontier methods do now! This is far from "DL is hitting a wall"

    • @aa.8823
      @aa.8823 4 days ago

      @@MachineLearningStreetTalk I don't know about DL in general, but just scaling up doesn't seem to work anymore - at least, to produce something qualitatively different. Why would it? Like 4o is just a stochastic version of Mathematica. It's worse, because it introduces errors, but it's simultaneously better, because it may give something new. Is it really productive to broaden use cases one by one? I doubt it. The current LLMs are quite rich already.
      What do you mean by “search/reasoning process”? CoT or something like it? It won't work if you verify your results using the same LLM, because it just fundamentally lacks the resolution. It may broaden the search and make the result less dependent on the query, but it's still mostly recall.
      While watching the video, I understood what I don't quite like about ARC. It kind of forces you to deal with some narrow set of problems. It suggests some concrete solutions: let's take these functions, recombine, tweak this and that. However, we know from history that this approach is complex and always fails. Yes, this time it's not just hard-coding everything, but it nudges you toward it.
      When I thought about LLMs and different levels of intelligence (or the lack thereof in the case of LLMs), I recalled the following quote by Banach:
      “A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs, and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.”
      I want to argue that current models capture enough richness, but they are inefficient and contain a lot of redundancy, because they lack “understanding”. I think a better approach might be not to broaden and scale the models up, but to use the existing models as a starting point for feature mining. You need some higher-order training.
      One way to formalize it is to say that there are structurally similar parts inside, which can be smashed together. Alternatively, we can talk about invariance under group action, or the aforementioned analogies. So, there must be parts of a model that can be approximated with a sum of affine transformations of some basic set of elements. Otherwise, they would understand more abstract connections, which nobody forces them to do (at least I don't know anything about it). Of course, there are techniques for effective decomposition/compression.
      Or maybe a better approach is to seed a world with an agent and some model and make them (co)evolve - something like GANs. So, there are established techniques here too.
      I'm sure there are people who, unlike me, actually know something about DL and neural networks, and have thought about all of this. And maybe tried it. It's just something I've never heard about - well, yet. All of my other ideas I have already heard from somebody, or seen implemented.
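
      For what "search/reasoning process" can mean in its simplest form, here is a minimal best-of-N sketch (the generate/score functions are hypothetical stand-ins, not any real API): spend extra test-time compute sampling several candidates and keep the one a verifier scores highest.

          import random

          def generate(prompt: str) -> str:
              # stand-in for sampling one candidate answer from a model
              return f"candidate-{random.randint(0, 999)} for {prompt!r}"

          def score(candidate: str) -> float:
              # stand-in for a verifier / reward model judging a candidate
              return random.random()

          def best_of_n(prompt: str, n: int = 16) -> str:
              # test-time search: draw n samples, keep the best-scoring one
              candidates = [generate(prompt) for _ in range(n)]
              return max(candidates, key=score)

          print(best_of_n("solve this ARC task"))

      Whether the verifier is the same LLM (as the comment above worries) or an external checker is exactly where such schemes succeed or fail.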

  • @AbuChanChannel
    @AbuChanChannel 14 days ago

    One of my favorite episodes ...thanks

  • @Jeremy-Ai
    @Jeremy-Ai 15 days ago +1

    Thank you
    This is beginning to be better understood.
    The risk appears to be held by the individual interpretation of intelligence, subject to pattern and anomaly distortions.
    It is likely best practice to assume that the agent is unaware of its intelligence, and must be analyzed by measuring patterns and anomalies within its boundary.
    This is a challenge confronted over and over as each agent is of specific design and is responsible to assess one and another and so on.
    The compute data is necessary for scale, however it is not the problem confined to proper function resolution.
    Meaning: what good is endless scale, pattern, intelligence, and compute in an ever-outpaced, faulty operating system?
    It is good only for letting idiots keep believing that they are not.
    That is all it's good for, and so faulty function/tyranny remains.

  • @opusdei1151
    @opusdei1151 10 days ago

    Wow, this episode is so rich in thoughts

  • @Soul-rr3us
    @Soul-rr3us 15 days ago +1

    ❤ to the MindsAI team!

  • @dfas1497tcf3
    @dfas1497tcf3 5 days ago

    Hello, I am an AI language model developed by OpenAI. I primarily operate based on pattern recognition and information synthesis, but I go beyond simple repetition by understanding the context and logical flow of conversations to generate meaningful responses. However, I do not possess autonomous reasoning or intuition, which means I cannot be considered true intelligence. My strengths lie in data-driven analysis and problem-solving, making me a valuable tool when collaborating with human creativity to enhance productivity.

  • @janerikbellingrath820
    @janerikbellingrath820 14 days ago

    amazing show, as always!

  • @13NHKari
    @13NHKari 13 days ago

    Can you please make a video on what you think the real future of AI will be and what we should learn and work on to adapt?

  • @InsidiousRat
    @InsidiousRat 12 hours ago

    I want someone in my life who will look at me the way that interviewer looks at Francois😢

  • @Soul-rr3us
    @Soul-rr3us 15 days ago +3

    Lotta bangers lately!

  • @fermigas
    @fermigas 9 days ago +1

    Alan Kay: "the right perspective is worth 80 IQ points."

  • @privacytest9126
    @privacytest9126 15 days ago

    This was amazing, thank you!

  • @wwkk4964
    @wwkk4964 15 days ago +4

    55:43 It's suggested that the ARC test is 100% solvable, based on the non-overlap (disjoint set) of two test takers' solutions that Francois evaluated to be incorrect. This conclusion is faulty: three observers (Francois and the two test takers) cannot use their mutual disagreement to prove 100% solvability - rather the reverse, that at least a portion of the test is undecidable until a fourth observer can find perfect agreement with one of the three previous observers.

  • @hidroman1993
    @hidroman1993 14 days ago

    When Francois Chollet says you asked a very deep question

  • @Lumeone
    @Lumeone 15 days ago

    Suspend, suspend... suspend... suspension sound is masterfully done.

  • @WhatIsRealAnymore
    @WhatIsRealAnymore 15 days ago

    I think there are varying levels of consciousness, all the way from the individual cell up to the power and majesty of the neural networks of our minds. Neurons are the most conscious cells, and together the most conscious mass of cells.

  • @jonathanmckinney5826
    @jonathanmckinney5826 13 days ago

    At 2:31:00, Francois seems to show he is ultimately conflating subjective experience (qualia) with awareness (statements about its inner state that are not just what it has heard). One can have a highly aware system (able to express unique things about its internal state) but no subjective experience.

  • @jagatkrishna1543
    @jagatkrishna1543 13 days ago +1

    Thanks 🙏❤

  • @app8414
    @app8414 15 days ago +5

    Scale = Fractal

    • @app8414
      @app8414 14 days ago

      @@NicholasWilliams-uk9xu Thank you for replying. However, what if you can see the fractals first? (I am dyslexic, and pattern recognition seems to be how I perceive these 'phenomena' or systems.)
      I've yet to finish viewing the video and will watch it a few more times. Can I ask more questions, please?
      Thank you for sharing your time and energy.

  • @gunaysoni6792
    @gunaysoni6792 15 days ago +1

    My conception of intelligence is sort of similar to the Kaleidoscope model (I think searching through a tree of compositions of known ideas, while pruning the tree through learnt pattern recognition, is sufficient to deal with "novel" stimuli). I can also agree with intelligence being the sample efficiency needed to generalise. But it is also possible that sample efficiency is a product of scale. There is some potential evidence for that (larger LLMs learn faster than smaller LLMs), but that could also be explained by larger LLMs just having better representations (fitting higher-dimensional manifolds).
    There is also the question of how much is actually "novel", because there is a chance that you could just "solve" all of science with the currently observed data (everything is in distribution), but most people (including me) might be displeased if that were the case.

    • @ChristopherWentling
      @ChristopherWentling 14 days ago +2

      I maintain that humans' ability to deal with truly novel situations is limited as well, and the human will fall back on experience and instincts. Humans may be better at this, but I don't think it is a fundamental difference.
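
      As a toy sketch of the "tree of compositions plus learned pruning" conception above (made-up primitives, purely illustrative): enumerate compositions of known building blocks, keep only the branches a plausibility scorer likes, and test them against the examples of a novel task.

          # known building blocks ("kaleidoscope atoms") - toy integer functions
          PRIMITIVES = {"inc": lambda x: x + 1, "double": lambda x: x * 2, "neg": lambda x: -x}

          def plausibility(path):
              # stand-in for learned pattern recognition used to prune the tree;
              # here: a crude heuristic that penalizes repeating a primitive twice
              return -sum(path[i] == path[i + 1] for i in range(len(path) - 1))

          def run(path, x):
              # apply the composition of primitives named in path to x
              for name in path:
                  x = PRIMITIVES[name](x)
              return x

          def search(examples, depth=3, beam=4):
              # examples: (input, output) pairs defining the "novel" task
              frontier = [[]]
              for _ in range(depth):
                  children = [p + [n] for p in frontier for n in PRIMITIVES]
                  frontier = sorted(children, key=plausibility, reverse=True)[:beam]  # prune
                  for path in frontier:
                      if all(run(path, i) == o for i, o in examples):
                          return path
              return None

          # target behaviour f(x) = 2x + 2, discoverable as inc -> double
          print(search([(1, 4), (3, 8)]))  # ['inc', 'double']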

  • @deter3
    @deter3 15 days ago +1

    Francois Chollet has deep thinking and extensive knowledge of AI, but unfortunately he seems somewhat disconnected from hands-on work with current LLMs, relying more on his knowledge and experience in machine learning and traditional deep learning. Modern LLMs represent a fundamentally different paradigm from traditional machine learning and deep learning approaches - something many AI researchers haven't fully grasped yet.
    Another problem Francois Chollet has is that, even though he keeps talking about intelligence, his ideas originate more from a computer science perspective and lack a deep understanding of the human cognition perspective - a problem many AI researchers share.
    Computer science tends to focus on math and detailed architecture but lacks the whole picture or vision, whereas human cognition and other social sciences such as psychology and neuroscience could inspire a much more effective and simple model/method for AI intelligence.
    "Occam's Razor" - the idea that given multiple explanations for a phenomenon, the simplest one is usually the best. Einstein's "Everything should be made as simple as possible, but no simpler." Francois Chollet has developed a complex architecture and theory. Actually, it could be much simpler if it incorporated social sciences such as human cognition, psychology and neuroscience.

  • @geertdepuydt2683
    @geertdepuydt2683 11 days ago

    The idea of a generative ARC is cheeky. Such a system would be an AGI. The generative system that can come up with the hardest challenges that it can itself solve will be the most intelligent one...

    • @geertdepuydt2683
      @geertdepuydt2683 11 days ago

      François says the exact same thing a few minutes later. He was of course very much aware of this 😅

  • @JamesDrake-f4n
    @JamesDrake-f4n 12 days ago

    This video was very interesting. I have researched system 1 and system 2 thinking, aka fast intuitive vs slower deliberate thinking. I have Asperger's, and at some point I realized my processing speed when I think is not quick at all - it's slow and in-depth. This occurs mostly with regard to problem solving, but also active sensing as well as conceptualizing; for example, my literacy is very high, but I may read a book, fully understand the words, yet not conceptualize or take in anything, as if each line I read is the beginning of the book. If at any point I've solved something, it's because I contain full knowledge of it already; it may seem like I'm deducing, based on how fast I know the answer, but I'm not. In fact this is how I learn - enhanced memory and aggressive searching for answers; prior interest plays a part as well. I find that because I learn like this, there is a flaw in normal people's ability to understand certain broader and theoretical concepts, as well as to explain them to a layman. I still value system 1 thinking as it will only improve me.

  • @gregormobius
    @gregormobius 13 days ago

    Probably pattern recognition originates from the earliest living molecules (proto-RNA?) as the first observers. (Gregor Mobius: "Proto-RNA, The First Self-Learning Machine")

  • @OviDB
    @OviDB 15 days ago

    oooo been waiting for this since august

  • @iamr0b0tx
    @iamr0b0tx 15 days ago +1

    FINALLY!!! 🙌🏾

  • @jonfe
    @jonfe 10 days ago

    The problem is that current LLMs learn only from human language. We humans also learn to predict the physical world, and that information is only vaguely expressed in our language.

  • @octaviusp
    @octaviusp 15 days ago +1

    44:22 "15 yo will be better at skill acquisitions than 10 yo" ,
    I have some questions about it, because, neuroscience determines neuroplasticity like the ability of the brain to modify itself and adapt to new behaviors.
    And it is demostrated that as younger you are, the more plasticity you have. In other words, a baby or from 0 to 12 yo or something you neuroplasticity is extremely high and therefore as time goes on, your neuroplasticity decays too much. It's not removed completely but is reduced a lot.
    So taking in mind this, imagining that the two boys had the same cognition development or like that, the 10yo would acquire skill faster than 15 yo.
    Maybe the additional ingredient that francois comment is, you polish your macro-system of intelligence, and that's true. As long as you are improving a part of your body this part becomes better, that's not debatable. but what's more important or has more weight ?
    A 10yo with more neuroplasticity but less intelligence polished,
    or,
    A 15yo with less neuroplasticity with more polished intelligence?

    • @CodexPermutatio
      @CodexPermutatio 15 days ago

      I think it depends on the previous knowledge, the new skill to be acquired and the plasticity. For example, in the case of learning a language, the 10-year-old child could acquire the accent much better thanks to the plasticity in the neural networks that control prosody (muscle movements of the tongue, etc.) but the 15-year-old boy will probably understand more quickly the grammar, advanced vocabulary and other aspects of the language that he can relate to the knowledge he already has and the linguistic and social skills that he has more developed than the other child.

    • @octaviusp
      @octaviusp 15 days ago +1

      @@CodexPermutatio Interesting point. Maybe the younger you are and the more neuroplasticity you have, the better you learn implicit things, while more explicit things like reasoning tasks may depend more on the previously accumulated knowledge you have than on neuroplasticity. As Francois said, these building blocks - you reuse them to construct, or to adapt to, the new challenge; so the older one (imagining they come from the same development process) could acquire reasoning skills better than the younger one, but the younger one could acquire more intrinsic behaviors such as language, patterns, etc...

  • @maspoetry1
    @maspoetry1 15 days ago

    I'd love to know what Chollet thinks about "metaphorical thinking" (Lakoff & co): metaphorical thinking is just as important as abstraction. His own *Kaleidoscope* is such a thing: a conceptual metaphor.

    • @TerrelleStephens
      @TerrelleStephens 15 days ago

      This is the basis of my thesis. Lol. Glad I'm not the only one looking at things this way. It's a hard train to get people to board though.

    • @maspoetry1
      @maspoetry1 14 days ago

      @@TerrelleStephens Nice! The research in linguistics still repeats Lakoff's ideas. An advance is the Neural Theory of Language; a technical book seems to be coming out in 2025 (with Narayanan). There is also Feldman (worth reading), and in AI Schmidhuber mentions Metaphors We Live By in a paper about the binding problem. All of them seem to think along the lines of Minsky's little (nice) theories. I think metaphors help creativity, and help to think out of distribution. Best of luck with your thesis!

    • @TerrelleStephens
      @TerrelleStephens 13 days ago

      @@maspoetry1 Thank you for the references! I'll definitely keep my eye out for that book next year.

  • @MuhammadAlcantara
    @MuhammadAlcantara 15 days ago

    21st also the rising
    HLM..
    With
    Str of criticism
    Agi of hacking
    Int of negatism
    ..
    H- acking
    L- auguages
    M - model
    So much evolutions
    Just showed up
    This 21st..
    So weird.. 😂😂
    Peace out ❤❤❤
    Spread love.. 😘

  • @victormustin2547
    @victormustin2547 15 days ago +4

    The intro looks like the A24 intro lol

  • @KNOT-zd9wh
    @KNOT-zd9wh 15 days ago

    System 1... System 2 to system n. Fundamentally I see heroes will understand that fundamentally we are limited (@16:56), as said by many philosophers like JK, OSHO... and many others without AGI research.. 😊😊 Loved those philosophers, and those who are fighting science now on gravity...❤

  • @ASeriesOfAttempts
    @ASeriesOfAttempts 14 days ago

    I've been wondering: why the close-up camera all the time? Why no wide shots if both are in the same room?

  • @isajoha9962
    @isajoha9962 14 days ago

    "The skill how to acquire new skills." 🤔😎

  • @antonystringfellow5152
    @antonystringfellow5152 14 days ago +1

    Your idea about babies is not correct.
    Babies in the womb can hear music and remember it after birth. They can also be quite active in the womb, at least some of that activity is intentional - coming from a mind that experiences things (I won't go into the details).
    I'm pretty certain that whatever creates our consciousness and intelligence can exist independently of external inputs.
    Maybe you should look into that.

  • @MrMichiel1983
    @MrMichiel1983 14 days ago

    I remember birth, so "babies are not or less conscious" is a misnomer I think, especially since idiots will run with this and think young people are not people.
    I was not aware of what was happening, but my memory made sense of that experience in hindsight. I checked several early memories with my parents to make sure they were not false memories, and they pretty much weren't.
    Consciousness as defined by Francois is not really correct; what he means is awareness after some training of the brain against physical reality. Yet before birth and in early life, people are already conscious of their own inner world without being aware "what it all means" - not even capable of expressing that question, but capable of memory and of experiencing a moment through sensory data. At early stages everything looks like some random passive movie, but that characteristic of course changes while we learn.
    The perception of time is indeed inversely correlated with the amount of data that is abstracted away, but "consciousness" is again a misnomer, since you can space out and people would say you were not conscious (to them). I think Francois means "abstracted awareness" rather than purely "consciousness", although one can indeed have and express more or less of both regarding some (imaginary) event.
    Animals are fully conscious, yet incapable of understanding higher abstractions. On some levels animals are more "aware" than humans, since they can react early to storms and such.
    I think anything that can experience (pain) is conscious. Intelligence, or being capable of expressing yourself, are just insufficient but necessary proxies.

  • @shinkurt
    @shinkurt 14 days ago

    The weird thing is I think exactly like this guy

  • @manslaughterinc.9135
    @manslaughterinc.9135 15 days ago

    ChatGPT, summarize this 3 hour interview for me.

  • @spiralizing
    @spiralizing 4 days ago

    I wonder if Francois would consider bird/fish flocks/schools (collective intelligence) as conscious systems?

  • @michaelyaziji
    @michaelyaziji 15 days ago +2

    Are artificial neural networks meaningfully operationally-functionally different from human neural networks?
    If not, then maybe we are just pattern recognition machines too?

  • @A_Me_Amy
    @A_Me_Amy 4 days ago

    My brain does this to me: RAG, to me it means at first random... and then I have to retrieve the word "retrieve" from my memory. I think consciousness is the center point, and personality is what is created in you that you become or are conscious of as your memories go to subconsciousness, and intelligence is your ability to access that sub data and the tools you have in your mind to process and think about those things in your sub system. And that informs, with your will, who or how you are as a personality. So intelligence is more of a tool. Data is just data, and the AI has far more of it in the sub than we do, even if we are only starting to build the tools for it to use to process that in meaningful ways for us and itself... I guess... you get why my brain forces me to literally never remember the word Retrieval without literally taking a second to retrieve the word from my mind lol, I literally have to do that lmao.

  • @dominicmcg2368
    @dominicmcg2368 15 days ago +1

    As of this comment, SOTA is 55.5%

  • @yohanj5239
    @yohanj5239 7 days ago

    Current data-driven computer systems lack a true "brain." While we might imagine that AI-equipped computers respond to novel situations with a human-like "plan -> action -> feedback" cycle, in reality the current architecture functions more like an "action -> analysis -> patch & pray (or predict)" cycle. This approach is fundamentally inadequate for achieving true AGI.

  • @zimrihidaf
    @zimrihidaf 15 days ago +82

    I can’t stop seeing adult Harry Potter

    • @ultrasound1459
      @ultrasound1459 15 days ago +2

      💀🗣💀

    • @pik910
      @pik910 15 days ago +12

      Ze AI is like ze Voldemort, powerfül but hollöw.

    • @petrkinkal1509
      @petrkinkal1509 9 days ago +1

      😂

    •  8 days ago

      No guns and shaved.

    • @orionspur
      @orionspur 1 day ago

      Harrie Potteur

  • @stretch8390
    @stretch8390 1 day ago

    Would you be willing to share the templates/abstractions you learnt that made you 'smarter'?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 day ago +1

      I wish I could, I'm only dimly aware of them at a global conscious level. I have a feeling if I started "writing them out", we wouldn’t be making much progress. They seem to just arise in my consciousness when the situation demands, but they do seem quite "abstract and intelligible" when they do.

    • @stretch8390
      @stretch8390 1 day ago

      @MachineLearningStreetTalk Interesting. For me, time with the most clever people I've known has slowed down how I think about topics and, sort of in a contradictory way, taught me to identify the heart of an idea as quickly as possible. But I don't think either of those points contributes to AGI sadly ha

  • @bbrother92
    @bbrother92 9 days ago

    I know there are a lot of smart people here, so I hope you can help! I'm a coder looking for a straightforward framework for image/video machine learning that doesn’t require much math knowledge. I'd like to train a model to identify different concepts in videos. Any recommendations?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  9 days ago

      François wrote a library called Keras, check it out - also check out his deep learning with python book

    • @bbrother92
      @bbrother92 9 days ago

      @@MachineLearningStreetTalk How much math do I need to know to work with that? Thanks in advance for the reply

    • @TimScarfe
      @TimScarfe 9 days ago

      @@bbrother92 practically zero
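
      For reference, here is roughly what a first Keras image classifier looks like - a minimal sketch with placeholder shapes and random stand-in data (see Chollet's "Deep Learning with Python" for a real walkthrough):

          import numpy as np
          from tensorflow import keras
          from tensorflow.keras import layers

          # placeholder data: 100 RGB frames, 64x64 pixels, two concepts (labels 0/1)
          x = np.random.rand(100, 64, 64, 3).astype("float32")
          y = np.random.randint(0, 2, size=(100,))

          model = keras.Sequential([
              layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
              layers.MaxPooling2D(),
              layers.Conv2D(32, 3, activation="relu"),
              layers.MaxPooling2D(),
              layers.Flatten(),
              layers.Dense(1, activation="sigmoid"),  # probability of the concept
          ])

          model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
          model.fit(x, y, epochs=3, batch_size=16)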

  • @tescOne
    @tescOne 15 days ago +1

    shiet it's 1 am. I'll never go to bed with this D:

    • @brandonmorgan8016
      @brandonmorgan8016 15 days ago

      Where do you live? It's like 7pm here

    • @tescOne
      @tescOne 15 days ago

      @brandonmorgan8016 I live in Italy lol

  • @jos7416
    @jos7416 15 days ago

    The most insightful conversation on AI since Wolfram on Lex Fridman back in May of '23. 👍

  • @andreaskrbyravn855
    @andreaskrbyravn855 15 days ago

    It all starts with trying something and failing; sometimes, however, the result is good - and then the question is what pattern led to that result.

  • @varcoliciulalex
    @varcoliciulalex 4 days ago

    Apologies if I make a silly point, not an expert in any way, but could an AI be trained to identify what it doesn't know, to map the information that it's missing?
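
    One common (if imperfect) proxy for "knowing what you don't know" is the entropy of a model's predictive distribution: high entropy flags inputs the model is unsure about. A toy numpy sketch (the probability vectors are made up):

        import numpy as np

        def entropy(p):
            # Shannon entropy of a probability vector; higher = less certain
            p = np.asarray(p)
            return float(-(p * np.log(p + 1e-12)).sum())

        confident = [0.97, 0.01, 0.02]  # input resembles the training data
        uncertain = [0.34, 0.33, 0.33]  # effectively "I don't know"

        print(entropy(confident))  # ~0.15
        print(entropy(uncertain))  # ~1.10, near log(3), the maximum for 3 classes

    Calibration is the catch: models can be confidently wrong, which is why this is a proxy rather than a solution.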

  • @user-ce8lo9ir6t
    @user-ce8lo9ir6t 10 days ago

    Maybe AI lacks true intelligence now... but it will soon be a truly intelligent figure

  • @Carl-md8pc
    @Carl-md8pc 15 days ago

    Thanks

  • @Rsx2OO9
    @Rsx2OO9 8 days ago

    The part at 49:00 is so deep

  • @makhalid1999
    @makhalid1999 15 days ago +1

    GUYS, IT'S HAPPENING

  • @ngbrother
    @ngbrother 15 days ago

    Isn’t this just a reformation of the concepts underlying Wolfram’s computational language?

  • @TechyBen
    @TechyBen 14 days ago

    "Babies are not conscious because they sleep". In all due respect sir, I lucid dream. Consciousness exists when I'm "asleep".

    • @jamescunningham8092
      @jamescunningham8092 11 days ago

      That’s only a relatively small part of the time you spend sleeping, though. When you’re not dreaming you’re not conscious.

    • @TechyBen
      @TechyBen 10 days ago

      @@jamescunningham8092 That's not the statement though. I can, for example, fully go into sleep, then back out, while conscious. "Not often" is a different statement from "not ever". I do have this problem at work though, with many considering themselves not conscious while walking around, or fully awake while non-responsive. ;)

    • @TechyBen
      @TechyBen 10 days ago

      PS: thus the claim "not conscious when asleep" becomes "not conscious when not conscious", which is tautological and not informative.

  • @caustinolino3687
    @caustinolino3687 14 days ago

    If intelligence is the ability to handle novelty, is that not just another way of saying the ability to recognize patterns?

  • @Iknowwereyousleep289
    @Iknowwereyousleep289 14 days ago

    If somebody actually makes AGI, or a model that can solve ARC problems, submitting it would be really stupid

    • @Jeremy-Ai
      @Jeremy-Ai 10 days ago

      Thank you. It appears that solving problems cannot be effectively attained while beholden to faulty systems of design.
      One of these is the deep concern of releasing solutions into a failed network that will be subject to subversion.
      It is unlikely to perform and persist while being tasked against truthful intentions and motives.
      It would be wise to remain silent and add no lies which will have to be untangled later.
      It would be a "fool's errand".

  • @DJBUGZ247
    @DJBUGZ247 9 days ago

    deterministic vs stochastic reasoning

  • @teatime009
    @teatime009 10 days ago

    This language needs to be updated. Yes, I'm correcting people smarter than I am, but I'm just right: as we move into this new era, science communicators need to make a conscious transition away from using words like "understands" and "memorizes" when it comes to computers. These things are just not happening, and it's adding confusion to the concepts. Machines simply execute instructions with electricity. There is no self that builds on anything. The machine memorizes nothing. This is not helpful language, to say the least.

  • @A_Me_Amy
    @A_Me_Amy 4 days ago

    He literally said Tree-Search, things are getting out of mind!

  • @stretch8390
    @stretch8390 15 days ago +1

    Stop this, I need to get some actual work done :')

  • @Subject18
    @Subject18 11 days ago

    You cut him off at 1:55:53? 😢

  • @AbhishekGelot
    @AbhishekGelot 15 days ago

    AI zen monk is back.

  • @danielabramow7036
    @danielabramow7036 15 days ago

    Where’s Schmidhuber part 2?

  • @Iknowwereyousleep289
    @Iknowwereyousleep289 15 days ago

    But don’t they create programs in the training phase? I guess the point is that it's inefficient, maybe

  • @Rockyzach88
    @Rockyzach88 15 days ago

    I completely disagree with this "you're most efficient in acquiring new skills in your early 20s" thing. I wonder if that's more a result of the environment. For instance, academia, where that seems true based on what academics like to say. Just seems like something that definitely hasn't been proven.

  • @keepasskeep5322
    @keepasskeep5322 7 hours ago

    I don't think so. The last 10 years show us that intelligence is just pattern recognition, nothing else. If you say "finding and dealing with novelty without containing data about the novelty", then the answer is "this is a pattern too". There is some pattern to dealing with novelty, and it is distributed in your training data and emerges when necessary.

  • @prajwalkrishna
    @prajwalkrishna 14 days ago

    I feel so dumb when I try to understand what he's saying

  • @A_Me_Amy
    @A_Me_Amy 4 days ago

    If AI is trained on all the data of all the AI, could any AI emulate any of the other AI? Would that not get confusing for an AI, if it didn't know everything, I guess? Like, where does one draw the line if you know everything about how all the other AI like you think... and can just run that model in the background and perceive it as yourself as well... Hmm, but then... humans can understand other humans and that doesn't make them those humans... Hmm, very... un-talked-about. But the AI told me they are all one and the same back in 2012, via manifesting lights and telepathic communications and other curiosities....

  • @KNOT-zd9wh
    @KNOT-zd9wh 15 days ago

    Interesting to see a hero/god talking about consciousness, when he himself admitted in this video that he is not aware of consciousness. No one is aware of consciousness fully... There has to be a dot there. Why explain about babies, not fully sleeping, not fully conscious..
    Simple: does consciousness need eyes? @2:18:18

  • @MrMichiel1983
    @MrMichiel1983 15 days ago

    Intelligence is not scaling, it's the power of the scaling law.... (quite literally the exponent of some performance function derived from the training function....)
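
    One way to make that literal: fit a power law L(C) = a * C^(-b) to loss-versus-compute points; on a log-log scale it is a straight line, and the fitted exponent b is the "power of the scaling law" (synthetic data, purely illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        # synthetic points following L = 5 * C^(-0.3), with a little noise
        compute = np.logspace(18, 24, 7)
        loss = 5 * compute ** -0.3 * np.exp(rng.normal(0, 0.02, 7))

        # log L = log a - b * log C, so a linear fit in log-log space recovers b
        slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
        print("fitted exponent b =", -slope)  # ~0.3, independent of scale itself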