Dr. THOMAS PARR - Active Inference

  • Published on 12 Jan 2025

Comments • 52

  • @rezamirkhani4747
    @rezamirkhani4747 8 months ago +36

    Thank you for making this video. I'm working through this book and I'm finding it very difficult and enjoyable! Difficult because there are so many gaps in my knowledge from Physics to Psychology; Enjoyable, because of the way they have built Active Inference on solid scientific foundations.

    • @eun-jaehwang3061
      @eun-jaehwang3061 2 months ago

      Hello, I was wondering how much calculus knowledge is necessary to fully understand the material in this book. Specifically, would a typical college calculus course be enough? Thank you!

  • @simonahrendt9069
    @simonahrendt9069 8 months ago +16

    I am deeply thankful for the magnificent and free content of this channel. Always interesting guests and topics, and thorough explorations from which I learn a lot. If I had more money, I would definitely consider backing you on Patreon, but as it is I just wanted to say a heartfelt thanks!

  • @induplicable
    @induplicable 5 months ago +3

    Hands down my favorite channel related to AI/ML. Aesthetically, the production values are solid. In terms of content, the nuanced discussion around the philosophical, scientific, and mathematical aspects is what the broader 'trending' discussions seem to sorely lack. Keep up the great work.

  • @redacted5035
    @redacted5035 8 months ago +1

    00:00:00 Intro
    00:05:10 When Thomas met Friston
    00:06:13 ChatGPT comparison
    00:08:40 Do NNs learn a world model?
    00:11:04 Book intro
    00:13:22 High road, low road of Active Inference
    00:17:16 Resisting entropic forces
    00:20:51 Agency vs free will
    00:26:01 Are agents real? Non-physical agents
    00:35:54 Mind is flat / predictive brain
    00:44:23 Volition
    00:50:26 Externalism
    00:51:57 Bridge with Enactivism
    00:53:27 Bayesian Surprise
    01:01:47 Variational inference
    01:05:47 Why Bayesian?
    01:12:04 Causality
    01:17:35 Hand crafted models
    01:26:45 Chapter 10 - bringing it together
    01:28:58 Consciousness
    01:33:10 Humans are incoherent
    01:35:25 Experience writing a book

  • @willywalter6366
    @willywalter6366 4 months ago +3

    What a brilliant and humble mind (and also a very good interviewer). It will be interesting to see what comes out of active inference in the future. It sounds like an extremely versatile and powerful framework for a lot of different applications, and perhaps also for how the brain might actually function!

  • @Quix-otic
    @Quix-otic 8 months ago +8

    Truly impressive. It was a spontaneous, information-rich conversation and the time flew by. Thank you as well for the wonderful subtitles.

  • @Robert_McGarry_Poems
    @Robert_McGarry_Poems 8 months ago +6

    Great work by everyone involved on both sides. Science needs communicators to bring it to the masses... 😊

  • @NeuroScientician
    @NeuroScientician 8 months ago +14

    You are making me spend quite a bit on books.
    EDIT: The book is excellent, very dense reading, but the book itself feels oddly light.

    • @magsterz123
      @magsterz123 a month ago +1

      Yes! Physically the book is so light - it's the first time a book has made me wonder about the quality of paper that it's printed on.
      And indeed it's a paradox, given how dense and significant the book actually is. I am slowly working my way through it and have only just started chapter 2!

  • @Shaunmcdonogh-shaunsurfing
    @Shaunmcdonogh-shaunsurfing 8 months ago +4

    Both exciting and thought-provoking. Excellent production too.

  • @laplace862
    @laplace862 8 months ago +2

    Getting more and more cinematic. Love the theme! (and the content too)

  • @Robert_McGarry_Poems
    @Robert_McGarry_Poems 8 months ago +2

    @ 1:10:00 I just thought about a GAN that uses two teachers, one for each side of the network. Isolate the teachers from the generative arena: they only teach their player and update after each iteration based on the outcome. That would be interesting.

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems 8 months ago

      Or a teacher-in-the-middle type setup. One GAN runs. A teacher is connected to one of the players. The teacher learns from the players as it teaches. The teacher is also hooked to a second player, external to the first GAN. The second player might be playing the current best model.

  • @Robert_McGarry_Poems
    @Robert_McGarry_Poems 8 months ago +1

    @ 45:00 Your question... would the act of imparting or inputting the dynamic portions of an autonomous system keep it from being its own active agent?
    My question to you is... how did you learn language and culture? Were those things imparted or input into/onto you at birth?

  • @anthonyfinbow9638
    @anthonyfinbow9638 8 months ago +3

    I think the quote that Dr. Parr might have been searching for, at around 45 minutes in, is the Schopenhauer aphorism: "I can do what I will, but I cannot will what I will."

  • @abby5493
    @abby5493 7 months ago +1

    Seeing a cute little dog right at the beginning, I could tell this was going to be a good watch.

  • @ArtOfTheProblem
    @ArtOfTheProblem 8 months ago +5

    fan of your work

  • @smicha15
    @smicha15 6 months ago

    Damn… that Arnold anecdote about shocking the muscles was just brilliant!!!

  • @jonashallgren4446
    @jonashallgren4446 8 months ago +1

    This was really nice; maybe I will have to make it past chapter 3 in the FEP book. I'm wondering if there's a way to apply structural learning for alignment. Hopefully, or else we're probably throwing dice with random objective learning.

  • @smicha15
    @smicha15 6 months ago +1

    It seems like discussing the idea of action in active inference is complicated, and it seems like discussing the idea of consciousness is complicated. Regarding action, just imagine you are in a desert and you will die without water. In order to learn where water is, you will have to physically walk to go find water. And if you are lucky enough, you will find some. So not only will you have walked to the water, each step informing the next yet verifying the first, you will drink the water, and in doing so you act on the world yet again. In terms of consciousness: unless you've experienced a full-on panic attack, where even the craziest ideas seem better than doing something that would increase your sense of panic, you couldn't possibly understand the idea that consciousness can be effortlessly hacked, where you will literally and willingly experience yourself doing something insane because you felt trapped… yeah… a smart ML researcher who has never experienced this could not understand how quickly one's world can turn upside down in one's own head, all the while one agrees it's both upside down, yet totally doable. I guess it's like Pearl's intervention theory… add a little panic and see how a world model adjusts on the fly… and then back again like a rain cloud coming and going… is there an academic discipline for active inference meteorology?

  • @dr.mikeybee
    @dr.mikeybee 8 months ago +1

    Surprise is an emotional error function. That's what makes it different from y - ŷ. I've had long arguments with LLMs about how it's still just a biological error function. If an artifact had our slow chemical messaging system (faster than our reasoning system), it would agree that it is just an error function.
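
    For readers comparing the two quantities mentioned here, a minimal, illustrative Python sketch (not from the video or the book) contrasting a plain prediction error y - ŷ with information-theoretic surprise, i.e. the negative log-probability of an observation under the model's predictive density. The function names and numbers are made up for illustration; a Gaussian predictive density is assumed.

    ```python
    import numpy as np

    # Plain prediction error: distance between observation and point prediction.
    def prediction_error(y, y_hat):
        return y - y_hat

    # Surprise (surprisal): -log p(y) under the model's predictive distribution,
    # here a Gaussian N(mu, sigma^2). This is the quantity active inference
    # frames agents as minimising in expectation (free energy upper-bounds it).
    def surprisal(y, mu, sigma):
        return 0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * ((y - mu) / sigma) ** 2

    y, mu = 3.0, 1.0
    print(prediction_error(y, mu))      # 2.0 regardless of confidence
    print(surprisal(y, mu, sigma=2.0))  # modest surprise: the model was unsure
    print(surprisal(y, mu, sigma=0.5))  # large surprise: the model was confident
    ```

    The point of the contrast: the same error of 2.0 carries very different surprise depending on how confident the predictive distribution was.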

  • @NER0IDE
    @NER0IDE 8 months ago

    Great conversation.
    I wish you had touched on the field of curiosity-based exploration in reinforcement learning, as it is an approach to implementing active inference using intrinsic (internal) rewards based on an agent's world model. There are plenty of works that discuss the issues with simple prediction-error minimization (such as the dark-room thought experiment or the noisy-TV problem). Schmidhuber brought up ideas back in his old papers about how compression improvement of an agent's world model solves some of these issues, but plenty of more recent works have further generalized these ideas to accommodate the disentangling of epistemic and aleatoric uncertainty, and even implemented them in RL terms.
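
    As a purely illustrative companion to this comment, here is a hypothetical numpy sketch of the simplest prediction-error curiosity bonus: a toy forward model is trained online, and its squared prediction error is used as the intrinsic reward. The class name, dynamics, and hyperparameters are invented for the example and are not taken from any specific paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model predicting the next state from (state, action) with a
    # linear map trained online. Its squared prediction error is used as the
    # intrinsic ("curiosity") reward, so the agent is paid for visiting
    # transitions its world model cannot yet predict.
    class ForwardModel:
        def __init__(self, state_dim, action_dim, lr=0.05):
            self.W = np.zeros((state_dim, state_dim + action_dim))
            self.lr = lr

        def update(self, s, a, s_next):
            x = np.concatenate([s, a])
            err = s_next - self.W @ x
            self.W += self.lr * np.outer(err, x)  # one SGD step on squared error
            return float(err @ err)               # intrinsic reward

    model = ForwardModel(state_dim=2, action_dim=1)
    s = rng.normal(size=2)
    for _ in range(500):
        a = rng.normal(size=1)
        s_next = 0.9 * s + 0.1 * a                # simple, learnable toy dynamics
        r_intrinsic = model.update(s, a, s_next)
        s = s_next

    # The bonus decays toward zero as the dynamics are learned; a purely
    # error-based bonus is fooled by irreducible noise (the "noisy TV"
    # problem), which is what the compression-progress and epistemic/aleatoric
    # refinements mentioned above try to address.
    print(r_intrinsic)
    ```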

    • @Daniel-Six
      @Daniel-Six 7 months ago

      Jurgen is pleased. More proof that he invented everything.

  • @smicha15
    @smicha15 8 months ago +1

    This is the most important video in the world today.

  • @muttch
    @muttch 8 months ago +1

    ❤ interesting views on world construction and modelling.

  • @fburton8
    @fburton8 8 months ago +6

    I palpate this channel with my undivided attention.

    • @schwajj
      @schwajj 8 months ago

      Palpate?

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems 8 months ago

      To explore, especially the human body, by touch, mainly in a medical setting. To learn from feeling with the fingers. To investigate by actively pressing against something.

    • @schwajj
      @schwajj 8 months ago +1

      @@Robert_McGarry_Poems Yes. How does one do that to a channel, with undivided attention?

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems 8 months ago +1

      @@schwajj You can press against the want to learn by moving your eyes across a page of writing. You can press against the want to learn by watching a video. Active inference is a kind of palpating.

    • @fburton8
      @fburton8 8 months ago

      @@schwajj 1:38 "You are palpating..." Did you miss that bit?

  • @abuomit
    @abuomit 8 months ago

    Hi, you have a tremendous amount of content on your channel. Can you make a playlist listing only your favorites? I see that there is one with the staff's favorites, but I am interested in your personal favorites… because I have a similar angle in how I approach AI, and I think I can relate to your list better than to the whole staff's favorites. Thank you.

  • @BsktImp
    @BsktImp 8 months ago +2

    Does AI have anything to say about complexity vs chaos vs randomness vs non-deterministic systems, particularly: does randomness even exist?

    • @normalhuman6260
      @normalhuman6260 8 months ago +3

      I don't think you get how AI works.

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems 8 months ago +1

      I interpret this question as asking: having learned from making and watching AI evolve, can we now answer your original point?
      Complexity, as used in the video, is an anthropomorphic construction of observable boundaries, or systems of dynamic interaction between objects that lead to stable outcomes. We then label those observations and outcomes with names that imply the understood parts of the dynamics that create them. Energy dynamically becomes mass, which clumps together into particles, which then clump together into atoms, and molecules, and so on...
      Chaos is the state of not knowing what individual particles are going to do next, and therefore not being able to determine the evolution of a system. We call it coherence and decoherence, or laminar and turbulent.
      Randomness is also related to not knowing. However, it differs in one key aspect: chaos can be mapped and turned into order; randomness cannot, by definition, ever be ordered.
      Non-deterministic systems are just systems whose outcome we can't know before running the problem through its algorithm. The halting problem posed by Alan Turing covers this in depth. Life is non-deterministic, but everything that goes into building a body, short of consciousness, is understood.
      Randomness does exist, in some sense, in reality. However, we can't build a system to do randomness. See the problem? If we build it, we understand how it works; if we understand how it works, it can't be randomness. Energy, in its purest form, is random, but we can't measure that.
      What you can do is set up an arbitrary system that measures something that can't be known beforehand. The best random number generator uses cosmic rays to create the closest approximation of randomness that we can get. But it doesn't just use every input; there is a convolution that takes place. First, the device constantly scrolls through pseudo-random numbers; second, it flips a coin; and then, if it gets heads, it publishes a number. A cosmic ray comes in, the device flips a coin, and sometimes it spits out a number. We understand exactly how it works, but because of the non-deterministic gap between when a cosmic ray shows up and whether the generator flips heads, we can't be certain of the order of the numbers. Now hook this up to a global network of similar devices and you have the closest thing we can get to random... It basically is random for all intents and purposes, even though it actually isn't. It's exactly like key-exchange encryption: the amount of energy and processing power it would take to time incoming cosmic rays is too great. Easy to do one way, hard to do the other...
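
      A hypothetical toy sketch, in Python, of the publish-on-cosmic-ray scheme described above: a PRNG keeps scrolling, and a value is only published when a detector event arrives and an independent coin flip lands heads. Everything here, including the function names and the arrival probability, is invented for illustration; the "cosmic ray" is simulated rather than read from real hardware.

      ```python
      import random

      # Stand-in for a detector hit: an unpredictable, hard-to-time event.
      def cosmic_ray_arrived(p=0.01):
          return random.random() < p

      # A constantly scrolling PRNG whose current value is only published
      # when an external event arrives AND a coin flip comes up heads.
      def entropy_stream(n_outputs=5):
          published = []
          while len(published) < n_outputs:
              candidate = random.getrandbits(32)              # scrolling pseudo-random value
              if cosmic_ray_arrived() and random.getrandbits(1):  # event + heads
                  published.append(candidate)
          return published

      print(entropy_stream())
      ```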

    • @kevinscales
      @kevinscales 8 months ago

      These concepts are all about what you don't know, or understand, or can know. Randomness and chaos exist because there are things we can't measure that have effects we can measure. No matter how things actually work, there will always be this human-centric concept of randomness, but it's also just a concept in people's minds. What do you mean by 'exist'?

  • @transquantrademarkquantumf8894
    @transquantrademarkquantumf8894 8 months ago +2

    If you believe in Newton and the philosopher's Stone, and his layout of the physics of certain types of spirituality and interaction, then understanding the way he coded and shielded the understanding brings about a rarity: there are those that would find rocks very important. 41 knows how to use them; they can be likened to the philosopher's Stone. According to Newton, almost anything will do.

  • @dr.mikeybee
    @dr.mikeybee 8 months ago +1

    Minimizing surprise is probably appropriate for maximizing machine intelligence, but human motivation is more complex. Adrenaline is a substitute for dopamine in that it works on the same receptors. While they are distinct neurotransmitters, they share some similarities. As you point out, humans seek out adventure. It's an important evolutionary survival mechanism. Successful hunters need to enjoy surprise. It's a kind of built-in cognitive dissonance. I call it a chemical bath of toxic content injections.

    • @Daniel-Six
      @Daniel-Six 7 months ago

      I call it beer.

  • @oncedidactic
    @oncedidactic 8 months ago +1

    this will be good.

  • @Darth_Bateman
    @Darth_Bateman 8 months ago

    Pazuzu?

  • @transquantrademarkquantumf8894
    @transquantrademarkquantumf8894 8 months ago +1

    Isaac Newton said almost anything will do, so Newton's viewpoint is that rocks were not trivial; that is, if you subscribe to Newton.

  • @palfers1
    @palfers1 7 months ago

    I've been trying to get Bard to discuss Trump's criminality. The filters they have set up are pretty intense. Even using the word "president" shuts down the dialogue with a boilerplate response.