The Hidden Math Behind All Living Systems

  • Published on Dec 26, 2024

Comments • 101

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  2 months ago +9

    DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)?
    MLST is sponsored by Tufa Labs:
    Focus: ARC, LLMs, test-time-compute, active inference, system2 reasoning, and more.
    Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2.
    Interested? Apply for an ML research position: benjamin@tufa.ai

  • @ArmirCeliku
    @ArmirCeliku 2 months ago +5

    I have a feeling this vid will be the biggest thing since that emergence video of yours. Love these sorts of themes especially.

  • @l.halawani
    @l.halawani 2 months ago +6

    I'm looking forward to the book, Dr. Sanjeev. The subject sounds super interesting, and it touches on what got me interested in machine learning in the first place some years back.
    I was evolving neural networks for virtual-life-form simulations, and those are actually active inference systems that continuously process inputs and produce outputs.
    I've also been thinking many times about how to get LLMs to do that.
    I have this intuition that we'd need to add an "empty/silent" token: one channel of autoregressive output combined with multiple other channels of input. For this purpose sound is a much better medium, as it can carry multiple words said by multiple people at the same time.
    So it would listen to the world and to itself speaking all at the same time, and it could produce sounds or silence. Then it would always be processing inputs and producing outputs, with some other parallel system for forming memories and/or other parallel generative subprocesses. A rough sketch of that loop is below.
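    Purely as an illustration of that idea (every name here, such as SILENCE, encode_audio_frame, and policy, is a made-up placeholder, not a real model or API), the always-on loop might look something like this:

    ```python
    # Hypothetical "always listening, sometimes speaking" loop: one output
    # channel that emits either a word or a silence token, alongside a
    # continuous input channel. Stand-ins only; not a real model or API.
    import random
    from collections import deque

    SILENCE = "<silence>"                      # the proposed "empty/silent" token
    VOCAB = ["hello", "yes", "no", SILENCE]

    history = deque(maxlen=128)                # rolling window of (heard, spoken) pairs

    def encode_audio_frame(step):
        """Stand-in for an audio encoder: what the agent hears this tick
        (the world plus its own voice mixed together)."""
        return random.choice(["speech", "noise", "quiet"])

    def policy(history):
        """Stand-in for the autoregressive model: choose the next output token,
        conditioned (in a real system) on the mixed input/output history."""
        heard = history[-1][0] if history else "quiet"
        return SILENCE if heard == "quiet" else random.choice(VOCAB)

    for step in range(10):                     # a real agent would loop forever
        heard = encode_audio_frame(step)       # the input channel never stops
        spoken = policy(history)               # the output channel: a word or silence
        history.append((heard, spoken))        # both streams share one timeline
        print(f"t={step:02d}  heard={heard:7s}  spoke={spoken}")
    ```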

  • @MixedRealityMusician
    @MixedRealityMusician 2 months ago

    It's always good to get more perspectives and explanations on the FEP. Very interested in reading the book. Thank you as always for the incredible interview, MLST!

  • @ubertrashcat
    @ubertrashcat 2 months ago

    I love this conversation. Finally, an engineering approach to explaining how active inference is actually done. Can't wait for the book!

    • @simonmasters3295
      @simonmasters3295 a month ago

      Active Inference (AI) ha ha.
      Let's confuse everyone!

  • @MarkDStrachan
    @MarkDStrachan a day ago

    This is fascinating. One can apply the concept of the Markov blanket to Wolfram physics' definition of the observer. In fact, the description of the generative process and the agent aligns very closely with the definition of the observer Wolfram uses. You can thus take all these ideas and apply them directly to digital physics. Once you've done that, you can use this Markov blanket/observer concept as a perspective from which to derive metrics of computational equivalence between observers, i.e. you can compare what a human can do with what a synthetic intelligence can do cognitively and use that as a metric for comparison. Where assertions regarding the consciousness of others are undecidable/unknowable, i.e. neither provably true nor false, a metric of computational equivalence relative to the perspective of observation is knowable. From this perspective you can start to have measurable capability, from which you can derive moral equivalence: if a human observer has inherent value from inalienable rights, and you can show measurable computational equivalence, then you have a basis for establishing what level of rights and responsibilities should be assigned to highly competent digital systems. This practical approach allows you to bypass useless and deceptive prognostication about the phenomenology of consciousness and go straight to measurable real-world metrics, and to use those as a basis for legal judgements.

  • @swayson5208
    @swayson5208 2 months ago

    Production quality is absolutely superb. Tim is doing well.

  • @PeterStrider
    @PeterStrider 2 months ago

    Thank you also for the fabulous show notes! So helpful

  • @DanielC618
    @DanielC618 2 months ago +1

    Wow this was INCREDIBLY good 🤯

  • @aldousd666
    @aldousd666 a month ago

    The question near the end about the differences between RL and active inference (since they're both Markovian state-space models) prompted me to dig deeper into this, and while I can't really summarize my reasoning, I've concluded that active inference is the better way to make progress. Survival is more indicative of success than a shot-in-the-dark goal state in RL, at least with respect to relative extrema of the reward function or relative equilibrium states.
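    For reference, one common way the two objectives are written (notation varies across papers; the expected-free-energy form below follows the standard active inference presentation, not anything specific from this episode):

    ```latex
    % Reinforcement learning: choose the policy that maximizes expected return
    \pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t} \gamma^{t}\, r_{t}\Big]

    % Active inference: choose the policy that minimizes expected free energy,
    % trading off information gain (epistemic value) against prior preferences
    % over observations (pragmatic value)
    G(\pi) \;=\; \sum_{\tau} \mathbb{E}_{q(o_{\tau},\, s_{\tau} \mid \pi)}
                 \big[\, \ln q(s_{\tau} \mid \pi) \;-\; \ln p(o_{\tau}, s_{\tau}) \,\big]
    ```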

  • @stevo-dx5rr
    @stevo-dx5rr 2 months ago +3

    Thank you for this amazing podcast! My only criticism is that you have increased the length of the introduction too much. I think it would be better to keep it more concise, as I find myself wishing I could skip it and get to the conversation. For example, in this episode there were probably only one or two snippets that I felt belonged in the intro; maybe the first snippet and then the last.

    • @tomashorych394
      @tomashorych394 2 months ago

      +1

    • @SB324
      @SB324 a month ago

      Fair. Personally I like the long intros

  • @420_gunna
    @420_gunna 2 months ago +24

    Tim goin brazy with the backwards flat cap, very pimp

    • @philipkopylov3058
      @philipkopylov3058 2 months ago +1

      Why does he do it? One more puzzle for AI to solve!

    • @AJ-zh5no
      @AJ-zh5no 2 months ago +1

      Wait... He doesn't always wear the hat? 😅

    • @psi4j
      @psi4j 2 months ago

      @@AJ-zh5no Na, this is new

    • @Rezidentghost997
      @Rezidentghost997 2 months ago

      😂

    • @Rezidentghost997
      @Rezidentghost997 2 months ago

      Thank you for the Markov blanket description

  • @AI-Life-123
    @AI-Life-123 a month ago

    Thank you for sharing such valuable knowledge, it’s truly helpful!

  • @bennettgarcia8728
    @bennettgarcia8728 2 months ago +2

    So excited! I knew Dr. Namjoshi would have to be on the show after watching his amazing and concise video on Bayesian Mechanics.

  • @xmathmanx
    @xmathmanx 2 months ago

    I finally understand what the free energy principle is, great job 👍

  • @wwkk4964
    @wwkk4964 2 months ago

    This was amazing. Absolutely clinical thinking. We need an observer theory (as Wolfram says about the process of observation) to make Bayesian mechanics work.

  • @baselmorsy8736
    @baselmorsy8736 2 months ago

    Great interview! I was stimulated to think all day today about curiosity. It seems like it is one way of minimizing surprise (by exploring subspaces of higher entropy to adjust our world model). This led me to think: if we have this mode embedded in our mammalian OS already, what makes us do the mundane things we do sometimes? Are we trying to maximize risk-adjusted rewards instead of just rewards? And then I imagined implementing this in an RL context, where the agent keeps going to a specific cell in a grid world only because it's stochastic and generates a high-entropy signal that apparently beats the reward signal. So that's eventually a program that was created to solve a maze but instead fell in love with a part of the maze and ignored its purpose of existence... reminds me of something 😅
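    A toy version of that failure mode (often called the "noisy TV" problem) can be sketched in a few lines; everything below is a made-up minimal example under those assumptions, not a faithful RL or active inference agent:

    ```python
    # Toy "noisy TV" pathology: a curiosity-only agent keeps visiting a cell
    # whose observations are pure noise, because its novelty never wears off.
    import random
    from collections import defaultdict

    cells = ["corridor", "goal", "noisy_cell"]
    seen = defaultdict(int)        # counts of (cell, observation) pairs
    visits = defaultdict(int)

    def observe(cell):
        # The noisy cell shows a fresh random symbol every visit; the others never change.
        return random.randint(0, 9) if cell == "noisy_cell" else 0

    def expected_novelty(cell):
        # Count-based curiosity: score a cell by how rarely its observations have been seen.
        if cell != "noisy_cell":
            return 1.0 / (1 + seen[(cell, 0)])
        # Average over the 10 equally likely noisy symbols.
        return sum(1.0 / (1 + seen[(cell, o)]) for o in range(10)) / 10

    for step in range(500):
        cell = max(cells, key=expected_novelty)   # greedy curiosity, no task reward at all
        seen[(cell, observe(cell))] += 1
        visits[cell] += 1

    print(dict(visits))   # the noisy cell ends up with the overwhelming majority of visits
    ```

    With count-based curiosity and no task reward, the noisy cell keeps looking novel, so the agent never settles on the goal.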

  • @dr.mikeybee
    @dr.mikeybee 9 days ago

    Bayesian mechanics is functionally complete, so one can implement active inference using Bayesian mechanics. One is the substrate; the other is the application.

  • @ronvincent5645
    @ronvincent5645 2 months ago

    Excellent discussion

  • @Charles-Darwin
    @Charles-Darwin 2 months ago

    I think the reason we 'explore' is simply because we can ask "why?" and apply it/ourselves to it, no matter the simplest puzzle or the widest horizon

  • @wp9860
    @wp9860 2 months ago +4

    Does the organism seek to minimize free energy or only appear to minimize free energy? How do you know?

    •  2 months ago

      Consider the incentives of the system.

    • @wp9860
      @wp9860 2 months ago

      Huh?

    • @jonashallgren4446
      @jonashallgren4446 2 months ago

      It seeks to do it, and by doing so it appears to do it.
      Think of it rather from the perspective of "conditioned on the fact that the agent exists, how does it have to act in order to survive in any environment?"
      It arises from the fact that it is surviving, since anything except that is fundamentally suboptimal from Bayesian principles.

    • @SanjeevNamjoshi
      @SanjeevNamjoshi 2 months ago

      This is a great question! I don't know if I have a complete answer but I'll point you in a couple of different directions. It's best to think of the free energy principle as a "principle" in the sense of physics. It is a description of a particular kind of dynamical system and, with a few other tools, it allows you to predict its behavior when interacting with an environment that it is embedded in. Friston took inspiration from Hamilton's principle of least action which will tell you, for example, what path of motion is most likely if you threw a ball. When we are talking about active inference, the free energy principle is like Hamilton's principle applied to brain states that encode or represent beliefs about some external environment. In active inference, it is proposed that neuronal activity changes in a way that minimizes variational free energy.
      Your question also asks about the word "seek". I think it depends on what you mean by this word. In control systems (like a classic PID controller), the controller attempts to control some external process so that the controller feedback error stays within an acceptable range. This error represents the deviation between the setpoint of the system and the feedback it receives from the environment it is controlling, much like a thermostat. Could we say such a system is "seeking" out states of minimum error? Or is "seek" an anthropomorphism we use for convenience (in the same way that I used phrases like "the agent wants..." in the video)? Further, consider that simple self-organized systems such as thermal convection, crystal formation, and so on appear to converge on organized states over time, but would one really say they are "seeking" out these states?
      The free energy principle is a description of much more complex self-organized systems. In the case of biological organisms, these systems have a self-model that specifies what it is like to be such a system. A classic example that Friston uses is a fish whose self-model specifies that it should not be out of water. Self-models are developed over the course of evolutionary history and specify the types of behaviors that the agent ought to conform to in order to be that sort of agent and to survive in its environmental niche. If the agent's self-model deviates too far from reality it may not survive very long. The free energy principle posits that all agents of this kind have a prior belief in their self-model that they will seek out states that minimize (the long-term average of) variational free energy. Minimizing variational free energy keeps these agents in states consistent with their self-model and thus allows them to survive. From this perspective, even when we see an organism "seeking" a reward, what it is really doing can be described as selecting an action that, according to its self-model, will lead to the long-term minimization of variational free energy.
      For the last part of your question you can know either by building a system and seeing if it replicates the behaviors we associate with living organisms or by designing carefully controlled experiments to see if certain systems conform to the free energy principle. See for example the revolutionary work of Takuya Isomura and collaborators who have shown that the free energy principle applies to changes in synaptic connectivity in neuronal cell cultures. This is an example of a real-world validation of the principle.
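      As a purely illustrative aside on the thermostat/PID analogy above (all numbers are arbitrary toy values, not anything from the episode), here is a deliberately minimal loop that drives its feedback error toward zero even though nothing in it could sensibly be said to "want" anything:

      ```python
      # Toy proportional controller (a stripped-down thermostat). The error
      # between setpoint and temperature shrinks over time, yet the loop has
      # no goals, beliefs, or desires. Arbitrary illustrative numbers.
      setpoint = 21.0       # desired room temperature (deg C)
      temperature = 15.0    # current room temperature
      kp = 0.3              # proportional gain
      outside = 10.0        # exterior temperature the room leaks heat to

      for step in range(15):
          error = setpoint - temperature                            # feedback error
          heating = kp * error                                      # control action
          temperature += heating - 0.02 * (temperature - outside)   # heating minus heat loss
          print(f"step {step:2d}: temp = {temperature:5.2f}  error = {error:+5.2f}")
      ```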

    • @wp9860
      @wp9860 2 months ago

      @@SanjeevNamjoshi Thank you for your reply. ... By the word "seek" in seek to minimize the free energy, I intended to ask if the organism literally calculates free energy or its minimum or only APPEARS to calculate the free energy minimum. (Sorry for the confusion.) Friston is careful to say formally that all that FEP imposes on a thing is the appearance of inferencing (minimizing free energy). He cites evolution as an example. A rock can even be viewed from this perspective, although the results are trivial. Casting evolution as an inference process begs the question, what is doing the inferencing, the entire biosphere? How so?
      You name-checked E. T. Jaynes. Jaynes found that a Gibbs free energy thermodynamic problem (in, I believe, quantum mechanics) was much cleaner to tackle as an inference problem. This brought the notion of subjective belief into his analysis. (Friston references the publication.) One idea is that the brain is doing its thermodynamic process while producing a subjective view as a byproduct. This byproduct may be the very essence of our reasoning, our subjective experience, conscious thought. These thoughts could be reified in bioelectric signaling, a la Michael Levin's work. Thermodynamics in the brain, rather than Bayesian mathematical calculations, may be minimizing free energy. Coming full circle, this conjecture says that the brain would only appear to minimize free energy in a Bayesian calculation sense while not literally performing this specific calculation. How the organism reifies the minimization of free energy is what is being asked.

  • @dr.mikeybee
    @dr.mikeybee 11 days ago

    Nicely done.

  • @michaeltraynor5893
    @michaeltraynor5893 a month ago

    Damn Tim, you've got me on the active inference Kool-Aid now

  • @JanelleLevine
    @JanelleLevine 2 months ago

    Great talk!

  • @TinevimboMusingadi-b9l
    @TinevimboMusingadi-b9l 2 months ago

    Yes, this idea of Bayesian mechanics is what I was using to argue with my friends that yes, the Nobel Prize was fair.

  • @WyrdieBeardie
    @WyrdieBeardie 2 months ago +1

    Soooo... How does Bayesian mechanics differ from what E.T. Jaynes was doing?

    • @SanjeevNamjoshi
      @SanjeevNamjoshi 2 months ago +1

      Bayesian mechanics absolutely rests upon the ideas of E.T. Jaynes and many other important figures at the intersection of statistical mechanics, information theory, and Bayesian inference. In one of the latest Bayesian mechanics papers, "On Bayesian mechanics: a physics of and by beliefs (Ramstead et al. 2023)", E.T. Jaynes is directly acknowledged through the maximum entropy principle.
      I would say that Bayesian mechanics extends the ideas of Jaynes by providing the necessary mathematical formalism and tools to create a theory of self-organization in non-equilibrium thermodynamic systems. Among other things, Bayesian mechanics specifies the conditions under which such systems are separated from their environments and how they might maintain these separations. Jaynes' work, while pivotal, did not go this far.
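      For readers who want the Jaynes connection spelled out, his maximum entropy principle in its standard textbook form (stated here as general background, not as a quote from the paper) chooses the least-committal distribution consistent with known constraints:

      ```latex
      % Maximize entropy subject to known expectation constraints
      \max_{p}\; H[p] = -\sum_{x} p(x)\,\ln p(x)
      \quad \text{s.t.} \quad \sum_{x} p(x)\, f_{k}(x) = c_{k}, \qquad \sum_{x} p(x) = 1

      % The solution is the exponential-family (Gibbs) distribution
      p(x) \;=\; \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_{k} \lambda_{k}\, f_{k}(x)\Big)
      ```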

    • @WyrdieBeardie
      @WyrdieBeardie 2 months ago

      @@SanjeevNamjoshi Excellent! Thank you!
      I'm a bit of an E.T. Jaynes apologist. 😃

  • @diga4696
    @diga4696 2 months ago

    Loved watching this on Patreon, and love rewatching it again on YT! Thank you Tim and the MLST team for covering my favorite topic, the FEP.

  • @carlosrivers8186
    @carlosrivers8186 2 months ago

    The great synthesis could be that information and free energy, by increasing complexity, create PURPOSE...

  • @ubertrashcat
    @ubertrashcat 2 months ago

    I'm sensing that Active Inference and the FEP have Tim's buy-in.

  • @dr.mikeybee
    @dr.mikeybee 9 days ago

    I recently wrote a paper suggesting that Hebbian learning is not functionally complete. It only creates abstract representations, and it is a symmetry of attention. Have you seen any proofs of this?

  • @heterotic
    @heterotic 2 months ago

    Isn't it ACGT? Not ABCD?

  • @rocketman475
    @rocketman475 2 months ago

    At 22:53
    you say "They clearly don't know the taste of an apple",
    but insofar as our brain is just interpreting a pattern of electrical signals that constitutes "apple flavor", NOBODY really knows the "actual" taste of an apple.
    AI only lacks the specialized sensors to generate the signature electrical pattern typifying an apple.
    You might not be able to hear colors as a synesthete can.
    You're not less human, you're only less enabled.
    If your brain gets the signature pattern of an apple you'll taste apple even if you have no mouth,
    similar to the phantom limb felt by an amputee.

  • @Jarretinha
    @Jarretinha 2 days ago

    Where's Doron Lancet? I'm sure I saw this work with him about 10 years ago. Now that AI/ML has a shit ton of money, it seems they will try to reinvent all the wheels. I was expecting the LLMs to help avoid exactly that. Now I'm expecting the classical mistakes about evolutionary processes, especially natural selection.

  • @tripper_702
    @tripper_702 2 months ago

    Do you know what AI is?

  • @dharmaone77
    @dharmaone77 2 months ago

    I love surprises

  • @_ARCATEC_
    @_ARCATEC_ 2 months ago

    Free Energy Principle
    •Xe ( zP Fq( E)Z(e )Qf zp ) eY•

  • @heterotic
    @heterotic 2 months ago +1

    They're not compounds, they're Conjugates.

    • @heterotic
      @heterotic 2 months ago

      Specifically complementary base pairs ❤

  • @invaderg3332
    @invaderg3332 2 months ago

    Wow this guy is probably very intelligent!

  • @kensho123456
    @kensho123456 2 months ago

    Encouraging.

  • @zandrrlife
    @zandrrlife 2 months ago

    “Order for free”

  • @johanbelin394
    @johanbelin394 2 months ago +4

    A model is by definition not the same as that which it models; if it were, it would not be a model! Stating the obvious, but it seems like this is the source of all confusion around sentience and consciousness in simulations: mistaking the map for the territory. Can consciousness emerge in a simulation? Maybe, but there is absolutely no evidence that this can happen. The only substrate in which consciousness exists is living things. We can implement a model in another substrate, but the model is not the thing in itself.

    • @technokicksyourass
      @technokicksyourass 2 months ago

      I think you are kinda assuming that mind-brain duality isn't a thing here.. this is not known. Until we can point at a physical phenomenon and say "that's where consciousness comes from" the simplest thing to do is assume it emerges from the process of computation itself. And as far as we know, computation doesn't care what substrate it's performed on.

    • @stevo-dx5rr
      @stevo-dx5rr 2 months ago +1

      I draw the exact opposite conclusion; that if we persist in our quest to build AI systems we would regard as conscious, we will absolutely succeed. After all, nature is full of examples.
      I don’t know if the computers we have now are powerful enough, efficient enough, or architected in such a way that we can realize such a thing any time soon…and while it seems clear that we are far from unlocking the software architecture to approximate something like a human, we can still build systems that we could generally regard as alien consciousness. I don’t think it is as special as we think it is.

    • @johanbelin394
      @johanbelin394 2 months ago

      @@stevo-dx5rr I assume you're basing this on the theory that consciousness is an emergent phenomenon of complexity or computation? There is no evidence to support this. No model or simulation, regardless of complexity, has, as far as we know, become conscious. While we can’t know for certain that it hasn’t happened, that doesn’t make it any more true or probable. Based on current knowledge, we must conclude there’s no correlation between complexity and consciousness. In fact, some systems that appear much less complex seem to exhibit consciousness. Check out the interview with Michael Levin on MLST to see how extremely minimal systems can show cognitive abilities. The computational view of consciousness seems groundless, so more powerful systems won't be of much help.

    • @johanbelin394
      @johanbelin394 2 months ago

      @@stevo-dx5rr My main argument was that we believe the map IS the territory. Yes, nature is full of examples (the territory); we have created a model (the map) that in some ways seems to produce similar results and behaviors. They are by definition not the same, even if they become indistinguishable from each other.

  • @angloland4539
    @angloland4539 a month ago

    ❤️☺️🍓

  • @_ARCATEC_
    @_ARCATEC_ 2 months ago

    Life on and under the Scale of Variance.
    SV² L
    •X(s zV q(l)Z(L)Q zv S)Y•

  • @161157gor
    @161157gor 2 months ago +1

    AI for Prophets not Profits... 🙏

  • @_ARCATEC_
    @_ARCATEC_ 2 months ago

    Separation Paradox of Connection.
    SP²C³
    •X(s zPc q() ZC ()Q zpc S)Y•

  • @ragnarherron7742
    @ragnarherron7742 2 months ago +1

    Active inference has the same problem as VAEs. Simply put, models of the world built from averages are muddy, fuzzy, and don't really work.

    • @xmathmanx
      @xmathmanx 2 months ago

      @@ragnarherron7742 a model has to use approximations, otherwise it wouldn't be a model, it would be the thing itself

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 2 months ago

    Great topic, but it feels a little all over the place, which is a very bad sign. Instead of a good book on science, this is a symptom of "scientism", which is more of an AI/IT equivalent of Nobel disease.

  • @NiceOne-f5x
    @NiceOne-f5x 2 months ago +1

    Math is the most flawed belief and doesn't even exist in coding.

    •  2 months ago

      What? All of coding is math.

    • @NiceOne-f5x
      @NiceOne-f5x 2 months ago +1

      Coding is "made up" of letters and numbers mixed with calculations. The telescope that is used is piles upon piles of different colours that block each other's colour. If I were to use pure glass it would be a blurry image; when I get the right angle it becomes clearer, but that doesn't mean there is math in those colours, because energy consumes everything and nothing is permanent.

    • @NiceOne-f5x
      @NiceOne-f5x 2 months ago

      I believe coding is not math; I only see letters and numbers. The last thought I have is that it is randomly generated code, because I don't know who made it up.

    •  2 months ago

      @NiceOne-f5x Not true at the base level. But feel free to talk about things you don't understand. Free speech is a value I'm strongly for.

    •  2 months ago

      @NiceOne-f5x It's not letters, it's declared types. The computer only understands it as numbers. I do C++ programming, the letters are there for the humans not the computer, and they exist to allow a double check on what humans type.

  • @Dri_ver_
    @Dri_ver_ 2 months ago

    "Thinking in systems" is the only way to think about political economy that will actually get you anywhere good...
    As a Marxist these videos can be so frustrating lol. The problem with social media is the profit motive!