Are Brain Organoids Conscious? AI? Christof Koch on Consciousness vs. Intelligence in IIT [Clip 4]

  • Published May 4, 2024
  • Christof Koch explains why brain organoids are conscious, but AI is not, according to Integrated Information Theory (IIT). Dr. Koch is a contributor to IIT and studies consciousness at the Allen Institute for Brain Science, which he used to run.
    Full interview: / ihmcurious
    More clips: • Christof Koch Intervie...
    For a more complete description of IIT, see the books and audiobooks below.
    Video about researchers calling IIT "pseudoscience": • Consciousness Theory D...
    Get Christof's books and audiobooks (I will get a small commission at no cost to you to support the channel)
    - BOOK ABOUT IIT: The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed amzn.to/3xiJvsd
    - UPCOMING BOOK: Then I Am Myself the World: What Consciousness Is and How to Expand It (Preorder for May 7, 2024) amzn.to/3vt45FX
    - EASY TO READ: Consciousness: Confessions of a Romantic Reductionist amzn.to/49oPBom
    Christof's pen (MUJI pen, writes like a dream, great colors): amzn.to/4aPGvCa
  • Science & Technology

Comments • 33

  • @superawesomegoku6512
    @superawesomegoku6512 16 days ago +9

    I feel like consciousness is an emergent quality of intelligence and the connections between neurons

    • @alkeryn1700
      @alkeryn1700 5 days ago +1

      I think the exact opposite: our neurons and the whole physical world are emergent from consciousness.

    • @BootyRealDreamMurMurs
      @BootyRealDreamMurMurs 1 day ago

      @@alkeryn1700 Why not both, as a biconditional relationship?

  • @braphog21
    @braphog21 15 days ago +4

    Christof is completely misguided at 11:05 when he's explaining the differences between neurons and LLMs.
    Yes, LLMs like GPT-4 run on transistors which only have 3 connections each, but that's looking too closely. Christof needs to zoom out and realise that there is a higher structure above these transistors which actually mimics neurons quite well.
    Biological neurons are analogue and have thousands of incoming connections and thousands of outputs.
    Artificial neurons can also have thousands of incoming connections and thousands of outputs (as in a feed-forward network). It's also important to note that these networks of artificial neurons are non-linear (the maths is done using floating-point arithmetic, which is non-linear, and the activation functions are also non-linear), so they cannot be reduced to a Perceptron network, which is very different from biological neurons.
    So if we can create artificial neurons that very closely mimic the structure of biological neurons, and we can create networks of these artificial neurons that behave similarly to networks of biological neurons (i.e. they both have the ability to gain intelligence by perceiving things), then what separates the two? The two neurons' functionality appears to be very similar, which only leaves the possibility of the network architecture being different. Why would current-day neural-network architectures contain no 'consciousness' at all but a lot of 'intelligence'?
    It's clear that consciousness and intelligence are NOT orthogonal concepts, but that consciousness is a property of intelligence that increases as intelligence increases. I think Christof is wrong to ascribe so much consciousness to a tiny brain organoid that has very low intelligence. I also think he's wrong to ascribe so much intelligence to GPT-4 and yet so little consciousness (that's not to say I think GPT-4 is very conscious; I don't think it's very smart. It definitely is a little intelligent, so I think it has a little bit of consciousness).
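To make the commenter's "zoom out" point concrete, here is a minimal, hypothetical numpy sketch (an editorial illustration, not anything from the video): one layer of artificial neurons, each receiving thousands of incoming connections and passing the weighted sum through a non-linear activation. The sizes and names (n_inputs, n_neurons, layer) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 4096    # thousands of incoming connections per artificial neuron
n_neurons = 1024   # artificial neurons in this layer

# Each column of W holds one neuron's incoming connection weights,
# loosely analogous to synaptic strengths.
W = rng.normal(scale=1.0 / np.sqrt(n_inputs), size=(n_inputs, n_neurons))
b = np.zeros(n_neurons)

def layer(x):
    # Weighted sum over thousands of inputs, then a non-linear activation.
    # Without the non-linearity, stacked layers would collapse into a single
    # linear map -- the commenter's point about non-reducibility.
    return np.tanh(x @ W + b)

x = rng.normal(size=n_inputs)   # one input pattern
y = layer(x)                    # 1024 activations, each depending on all 4096 inputs
print(y.shape, float(y.min()), float(y.max()))
```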

  • @mkp8176
    @mkp8176 16 days ago +1

    Love this series so much!

  • @cate01a
    @cate01a 16 days ago +1

    hope you eventually upload the full interview on yt for free after a bit

  • @spinningaround
    @spinningaround 16 days ago

    Is there a full version?

    • @ihmcurious
      @ihmcurious  16 days ago +2

      The full interview is up on patreon.com/IhmCurious, and more clips are on the way.

  • @sipper2136
    @sipper2136 15 days ago

    It doesn't seem obvious to me why the complexity of the causal relationships at or between each node (transistors or neurons) should scale consciousness more than a larger number of nodes able to generate equally complex transformations.
    Certainly the ability of nodes to interact is necessary, but I see no reason why you could not instead view the system at a higher level of abstraction and group nodes together into supernodes simply for the purposes of the consciousness calculation. You would have fewer nodes with more diverse connections, leading to a higher value in the calculation, which leads me to believe that placing the level of abstraction at the smallest information-processing unit, as opposed to anywhere else (including the system as a whole), is arbitrary.

    • @ihmcurious
      @ihmcurious  15 days ago +2

      In IIT, the system can be viewed at higher levels of abstraction. If there is more intrinsic causal power (higher phi) at a higher level, then that level would be where consciousness emerges. On the other hand, abstracting to a higher level reduces granularity, and higher-level things are generally causally dependent on lower-level things, so there's often more causal power to find at lower levels. For these and other reasons, phi is often lower at higher levels of abstraction.
      But maybe if you go down too low, e.g. to the level of quantum uncertainty, you won't find much intrinsic causal power, since higher-level structures are robust to these fluctuations (i.e. they may be differences that don't "make a difference" from the intrinsic perspective of the system).
      IIT is very complicated and probably wrong, but Giulio Tononi has anticipated many objections. See the books in the description, or the many Google-able and open-access papers on IIT, if you're interested in a more thorough discussion than I was able to have in this interview.
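As an editorial illustration of the "compare causal power across grains" idea (this is not IIT's phi, whose full definition is far more involved, but effective information, a simpler related quantity from the causal-emergence literature): a toy numpy sketch that coarse-grains a micro transition matrix into "supernodes" and compares the two levels. The coarse_grain helper and the 8-state example are assumptions, loosely following Hoel, Albantakis and Tononi's published examples.

```python
import numpy as np

def effective_information(tpm):
    """EI: mutual information between current and next state when the current
    state is forced to be uniformly random (a maximum-entropy intervention)."""
    n = tpm.shape[0]
    joint = tpm / n                      # p(current=i, next=j) under the intervention
    p_next = joint.sum(axis=0)           # marginal distribution of the next state
    ei = 0.0
    for i in range(n):
        for j in range(n):
            if joint[i, j] > 0:
                ei += joint[i, j] * np.log2(joint[i, j] / ((1.0 / n) * p_next[j]))
    return ei

def coarse_grain(tpm, groups):
    """Macro-level TPM for 'supernodes': for each pair of groups, average (over
    the micro states in the source group) the probability of landing anywhere
    in the target group."""
    m = len(groups)
    macro = np.zeros((m, m))
    for a, A in enumerate(groups):
        for b, B in enumerate(groups):
            macro[a, b] = np.mean([tpm[i, B].sum() for i in A])
    return macro

# Toy micro system (loosely after Hoel et al.'s causal-emergence examples):
# states 0-6 transition uniformly among themselves, state 7 maps to itself.
micro = np.zeros((8, 8))
micro[:7, :7] = 1.0 / 7.0
micro[7, 7] = 1.0

macro = coarse_grain(micro, [list(range(7)), [7]])

print("micro EI:", round(effective_information(micro), 3))  # ~0.544 bits
print("macro EI:", round(effective_information(macro), 3))  # 1.0 bit: the coarse grain wins here
```

In this toy case the coarse-grained level carries more effective information than the micro level; with other dynamics the micro level wins, and that comparison across grains is what the reply above is describing.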

  • @henrycardona2940
    @henrycardona2940 7 days ago

    We know how to radically boost our intelligence and create intelligent machines, but what does a radically conscious being look like?

  • @filippomilani9014
    @filippomilani9014 15 days ago +1

    The only doubt I have about the way Christof Koch places brain organoids on the graph is that he seems to automatically place the organoid at a higher level of consciousness than the jellyfish (partly due to the way the graph is made in the first place), even though he knows that the brain organoid has no input/perception and no output/action.
    I would think that without input or output it would be very difficult to have any conscious experience, since in nature consciousness seems to generally scale with intelligence (as in the capacity to execute different behaviors). So shouldn't we see brain organoids as just what they are, groups of neurons, whose potential for consciousness depends on their outputs and inputs, and maybe also on the way they reward themselves?
    I'm curious (lol) whether IIT or other consciousness theories have looked into the role of reinforcement and reward in explaining intelligence and consciousness.

  • @angellestat2730
    @angellestat2730 16 days ago +1

    Consciousness explained by Harrison Ford.

  • @Gome.o
    @Gome.o 15 days ago +2

    The argument that a dog seems less conscious than a human being seems dubious

    • @GhostSamaritan
      @GhostSamaritan 15 days ago +5

      They lack metacognition 🤷🏽‍♂️

    • @Gome.o
      @Gome.o 15 days ago

      @@GhostSamaritan How do you know? Are you a dog typing on a human keyboard?

    • @Gome.o
      @Gome.o 15 days ago

      @@GhostSamaritan How do you know? Are you a cute doggo who can communicate with dogs?

    • @alkeryn1700
      @alkeryn1700 5 days ago

      @@GhostSamaritan You are making a major mistake in thinking that intelligence and consciousness are related.
      An LLM is smarter than a dog by most of our metrics, yet the dog definitely has more qualia.

  • @doblo2670
    @doblo2670 11 days ago +1

    I am sorry, but this guy is so biased towards humans. His only explanation as to why a human-derived brain organoid is more conscious than animals is that it's human-derived. Yet what makes humans special is the number of neurons we possess and the incredibly efficient way their connections are constructed. Brain organoids are just neurons. They don't have that "human brain structure", because they don't have the rest of what makes humans human besides the brain - the rest of the body. Neurons only learn; their function is ultimately determined by input.
    Babies are unconscious because they are incredibly reflex-based. Once they get enough input they start learning to do more stuff, which I won't get into in a YouTube comment.
    Human neurons are no different from most other animals' neurons; we just have incredible brain structure, but that requires a lot of genetically set-in-stone input to teach those neurons to form those structures.

  • @joeybasile1572
    @joeybasile1572 16 days ago

    No.

  • @grivza
    @grivza 16 days ago

    3:01 that sounds like a very naive view, implying that the repertoire of "conscious" activities has anything to do with the strength of consciousness. Activities can be described as "conscious activities" only if there is already a developed consciousness to perceive them. But are those activities vital for the development of a "stronger" consciousness? Sex, drugs and rock n' roll? Doubtful, very much so.

    • @patrickl5290
      @patrickl5290 16 days ago +1

      What is a “developed consciousness” though? I think the idea of consciousness being a continuous quality makes more sense

    • @grivza
      @grivza 16 days ago

      @@patrickl5290 It may be a continuous quality; I am not ruling that out. I am simply talking about his example about the baby: you can bring it to a rock n' roll party and it isn't going to be much more conscious than it was before. It's something different that develops this capacity, maybe through experiences, but not from the experiences themselves, and certainly not just any experiences.

    • @carltongannett
      @carltongannett 16 days ago

      See I would think the dog has as much consciousness as a human but much less intelligence. I figure intelligence just happens to correlate with consciousness in mammals because brains are multipurpose.

    • @grivza
      @grivza 16 days ago

      @@carltongannett From seeing how Helen Keller describes her experience of learning her first word, in my understanding consciousness is directly related to the capacity for the symbolic, so I would actually say that no animal is as conscious as humans, but that's a bit of a different topic. Still don't buy the whole "experiences" interpretation.

    • @ihmcurious
      @ihmcurious  16 days ago +3

      When Christof talks about "more" consciousness, he's talking about expanding the repertoire of possible conscious states. For example, a human is theoretically capable of a wider range of possible conscious states than a bee or a dog, and the human's conscious states would tend to be richer in information.
      In the language of Integrated Information Theory (IIT), a system with more consciousness (higher phi) is capable of distinguishing more "differences that make a difference" from the perspective of the system itself. Read if you dare: architalbiol.org/index.php/aib/article/viewFile/15056/23165867
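As a rough editorial intuition for "richer in information" (this is not IIT's phi formula): a system that reliably settles into one of N mutually discriminable states specifies log2 N bits each time it does so, so a wider repertoire means each experienced state rules out more alternatives.

```latex
% Editorial intuition only, not IIT's phi: information specified by settling
% into one of N equally likely, mutually discriminable states.
\[
  I = \log_2 N \ \text{bits}, \qquad
  N = 4 \Rightarrow I = 2 \text{ bits}, \qquad
  N = 10^6 \Rightarrow I \approx 19.9 \text{ bits}.
\]
```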