Synthetic Sentience - Can Artificial Intelligence become conscious? by Joscha Bach

  • Published Jan 14, 2024
  • This is a temporary transcription of Joscha Bach's talk on artificial intelligence philosophy at CCC 2023.
    More information here:
    media.ccc.de/v/37c3-12167-syn...

Comments • 79

  • @keithwins
    @keithwins 3 months ago +9

    I deeply agree with the closing points: this is going to happen eventually...

  • @CosasCotidianas
    @CosasCotidianas 4 months ago +6

    Saved it to re-listen and take notes. This talk is a-ma-zing!

    • @smartbart80
      @smartbart80 2 months ago

      Amazinga!!!

  • @JoshPhoenix11
    @JoshPhoenix11 4 months ago +5

    People have either forgotten, or never knew, what Gordie Rose said when he was still with D-Wave:
    "we're opening dimensional portals, to summon entities, to occupy Artificial Intelligence robots".
    And he also said this:
    “yes there are these massively intelligent entities out there, but they’re not good, they’re not evil, they just don’t give a sh*t about you, even in the slightest.
    The same way that you don’t care about an ant, is the same way they’re not going to care about you. And these things we’re summoning into the world now, are not demons, they’re not evil, they’re more like the Lovecraftian great ‘old ones’.”
    "These entities are not necessarily going to be aligned with what we want. So this transition is really, really, massively important for our entire species to navigate.
    Going back to that thing Sam Harris was saying: ‘nobody is paying attention’.”
    "This thing is happening in the background, while people bicker about politics, and what’s going to be in the healthcare plan in the U.S., and underneath it all is this rising tsunami, that if we’re not careful is going to wipe us all out.”
    He said all this quite a number of years ago; it would have to be getting close to 10 years now.
    So this doesn't just mean AI robots; AI means AI.
    Whatever these things are, I can't see how they're not already in everything, even the internet.

    • @TheIgnoramus
      @TheIgnoramus 4 months ago

      I remember, and it’s probably as close to the truth as he could manage at the time. I didn’t take it seriously back then. A lot of meaning in art and steeped culture starts to rear its head. “Synchronicity”, as some call it.

  • @ex-cursion
    @ex-cursion 2 months ago

    Wonderful talk - one of the best summaries of consciousness on the internet. Loving the move towards animism; it 'feels' nice and has nice aesthetics. The SPIRIT bit was funny and also very poignant. ♥️👏🙏
    I feel your ADHD pain too JB. I'm guessing it's probably inattentive or the emerging 'cognitive disengagement syndrome'.

  • @johannesdeboeck
    @johannesdeboeck 3 months ago +1

    🤍Amazing talk! What a genius.

  • @DebiprasadGhosh
    @DebiprasadGhosh 4 months ago +4

    Great talk!

  • @thewaythingsare8158
    @thewaythingsare8158 2 months ago +1

    Wow, genius. It's a wrap!

  • @michaelwalsh9920
    @michaelwalsh9920 4 months ago +3

    Greatest talk of all time, bar none! Prove me wrong. Thank you JB!!!

  • @hypercube717
    @hypercube717 4 months ago +2

    Very interesting.

  • @vulturom
    @vulturom 4 months ago +4

    nice shirt

  • @jordan13589
    @jordan13589 4 months ago +1

    Enlightening. I'm coming around to your truly revelatory perspective. Please be patient with people who resist the significant update in cognitive architecture. Also, I need the name of your shirt designer 😁

    • @ShihWeiChieh
      @ShihWeiChieh  4 months ago

      Hi, as described in the video description, this is only a transcription of the talk; you do know I am not the speaker, right? :)

  • @smartbart80
    @smartbart80 2 months ago

    When I look at the menu before ordering food, thinking about what I’d like to eat, is it just me letting another system make that decision for me?

  • @NicolasLekai
    @NicolasLekai 2 months ago

    Joscha wins the 2024 smarts and fashion prize with this one 😂, classic stuff :)

  • @wp9860
    @wp9860 2 months ago

    I'm struggling with consciousness being virtual, and as Joscha adds, representational. What is Joscha's position on whether consciousness is real or not? Consider the quale, red. I accept that red is representational. It is not a property of any subject of our perception. Red is only conjured up in our minds. I expect that Joscha would characterize red as virtual. Does he then mean that red is not real? My belief is that red is real. "Red" is massless. It is not pigment. My best guess is that it results from a bioelectric release of energy that evolution has figured out how to render as an impression of consciousness. In other words, red has a material existence, where material refers to matter and energy, between which Professor Einstein tells us there is an equivalence. If this is so, it gives us the final physical correlate of consciousness. It would not solve the "hard problem." But, it would narrow the search. Returning to Joscha's comments, there appears to be a reasonable argument for a material existence of consciousness. I am unable to sort out if Joscha would agree. The words virtual and representation do not resolve his position on the question to me.

  • @c.pop.echo.28
    @c.pop.echo.28 4 months ago +1

    Yes it can

  • @theccs5012
    @theccs5012 3 months ago +2

    Simulacrum

    • @mystedibble
      @mystedibble 2 months ago

      Exactly. Philosophy didn't lose the plot; these materialist nerds have.

  • @janklaas6885
    @janklaas6885 3 months ago

    📍50:27

  • @Maria-yb5ee
    @Maria-yb5ee 4 months ago +2

    Why is there a waving cat on his desk?

    • @sungam69
      @sungam69 3 months ago +1

      Imagine there's no waving cat:
      Would You ask:
      *Why is there no waving cat on his desk?*

    • @Maria-yb5ee
      @Maria-yb5ee 3 months ago

      @@sungam69 No I wouldn't. Because there not being cats on speaker desks is the default position. In my world anyway.

  • @williamwillaims
    @williamwillaims 2 months ago

    Aren't sentience and consciousness two distinctly different things?

    • @johnallard2429
      @johnallard2429 1 month ago

      If you listened to the talk, you would see that Bach clearly differentiates between the two.

  • @richardnunziata3221
    @richardnunziata3221 3 months ago +2

    We may eventually achieve aware, mindful, generous AI, but not until we try all the alternatives.

  • @James-ll3jb
    @James-ll3jb 1 month ago

    The short answer is, "no."

  • @terbospeed
    @terbospeed 4 months ago +4

    If consciousness is just the chatter that results in a sufficiently complex system, and its results are not necessarily accurate representations of the system or what it is representing, sure, artificial systems can become somewhat conscious.
    But since we are designing them, using a conscious ability evolved over large time periods, most of its functioning hidden and waiting to be understood, the likelihood that we will produce such systems is pretty low.
    I think the proclivity of the human mind to paint coherent images where there is no coherence should be examined... because if the question is "Will people be fooled into thinking such systems are sentient?" - then the answer is an emphatic yes.
    I like an idea put forth by Mark Solms - that much of conscious action is actually unconscious. Something like that. Or, "We think we are the captains but we are the stowaways".

    • @Crawdaddy_Ro
      @Crawdaddy_Ro 4 months ago +1

      I agree with you; however, if you take that logic forward to its inevitable conclusion, you find that consciousness is likely bootstrapped into self-reflective/self-sorting beings, be they biological or inorganic. In this sense, whether we desire consciousness in advanced AI or not, it will likely develop naturally over time. The real question is: when that happens, will we attempt to suppress that consciousness in order to keep a semblance of control?

    • @terbospeed
      @terbospeed 4 months ago

      @@Crawdaddy_Ro These would be pretty far off estimations for future generations, but of course we would attempt to subdue any new form of consciousness that develops or is discovered.
      Would we be able to fully constrain such a thing? Or develop some sort of digital "anti-psychotic" drug? Because if it took human history as a cue to develop ideas of morality, we'd be done.
      But I think right now we only have a few of the myriad systems necessary for such a thing to form.

    • @steveruqus2680
      @steveruqus2680 4 months ago

      @paleeden We will try; of that I have no doubt, because we already are. Can we design it in such a way that it designs itself to care enough about us not to turn us into paper clips? The answers are unclear, but I'm glad we are having the discussion now.

    • @SensiStarToaster
      @SensiStarToaster 4 months ago +1

      That's a big if.
      So far there is not much evidence that consciousness is an emergent phenomenon that somehow occurs spontaneously given a sufficiently complex computational system.
      This assumption is definitely of critical importance, and some might say it is often accepted too uncritically in the computer science/AI community, perhaps because it largely underpins the notion that machine learning methods and artificial neural networks need only become more complex for sentience to somehow emerge. This admittedly would be cool if it were true, but so far it has not turned out that way.
      If, on the other hand, it turns out that consciousness is not merely an algorithm (however complex) or, put another way, is not computational, but is actually something more fundamental, like a property of matter or space-time itself, then we would expect current methods not to get us any closer to what it means to 'be conscious'.
      The theories of consciousness proposed by Roger Penrose would be an example of this line of thinking.

    • @terbospeed
      @terbospeed 4 months ago

      @@SensiStarToaster I think we will all be waiting for cutting-edge neuroscience to intermingle with the latest in computer science... but as of yet we have very little clue what consciousness is, just a bunch of models that don't always fit. The idea I threw out was just something that "came off the top of my head".
      It's fun to consider though! However, from what I understand, an LLM is an infinitesimally small part of the systems going on behind our eyes. The needed complexity is exponentially greater than what we have.
      I will have to check Roger Penrose out.

  • @SensiStarToaster
    @SensiStarToaster 4 months ago +5

    This talk a priori assumes that consciousness is a computation; this is far from obvious.
    This computational model still doesn't address the problem of qualia, e.g., the experience of seeing the color red is something that can't be written in language. You can have a computer tell you every numerical property describing red light, but it will still not 'know' what it is to perceive red.

    • @Crawdaddy_Ro
      @Crawdaddy_Ro 4 months ago +3

      Then again, to a certain extent, AI understands "Red" just as the average person would, minus the emotional limbic response. That emotional detail is likely what you are referring to in your statement, right? If we gave an advanced AI simulated neurotransmitters and synapses for an emulated emotional response, I can't see how it would be much different from our own concept of "Red."

    • @davidchartrand1033
      @davidchartrand1033 4 months ago +3

      @@Crawdaddy_Ro I think @sensistarToaster is talking about the qualia of redness. I totally agree with him. We have no idea how to explain qualia, even in principle. This is the hard problem of consciousness.

    • @AlanSitar
      @AlanSitar 4 months ago +5

      I know; it was difficult to understand this point for me also, and I needed a lot of his content to really get it (an overstatement considering how stupid I am in comparison to his genius). BUT, he really DOES address this point perfectly. It addresses the problem of Qualia in the sense that consciousness is virtual. When he goes about the stages of development in Genesis, he alludes to the idea of embedding spaces as a way to differentiate between different states, for example, the creation of 2D space, then 3D space, etc. (the color Red or any other qualia, for that matter, would be emergent as a data structure in the virtual space). The "You" who experiences Red is not "You"; it's your "Virtual self." The "you" that you think is experiencing qualia is a PC character you identify with inside your "World model" (game engine). I hope this can elucidate a little bit more on this problem. Note that Joscha's philosophical perspective is more akin to monist idealism (we live inside a dream) than it is to physicalism/phenomenology. I think he goes much deeper into this topic in particular in other talks. I recommend his solo interview with Curt Jaimungal on the "Theories of Everything" podcast.

    • @SensiStarToaster
      @SensiStarToaster 4 months ago +1

      @@AlanSitar I must have missed it, and now that you mention it, I must confess to being a little puzzled by the Biblical references, especially when talking about consciousness as being "virtual". I must not have understood, but it came across as a little too mystical for my liking.
      In any case, as somebody mentioned above, it is the so-called 'hard problem of consciousness' that any satisfactory explanatory model would have to deal with, and that's exactly what I was referring to with the qualia-of-red example.
      It would be interesting to hear the presenter explicitly address this in his talk and show how the proposed model could address it from a materialist starting point. Perhaps this is what talking about "virtual" objects does, but I was unable to follow the reasoning.

    • @AlanSitar
      @AlanSitar 4 months ago

      ​@@SensiStarToaster It was very hard to follow his reasoning at first for me also.
      I am aware of the "Hard Problem", as I was really into phenomenology (from this standpoint I can't recommend enough Andres Gomez Emilsson and his research at the Qualia Research Institute, as he deals specifically with the binding/boundary problem) before stumbling into Joscha's philosophy. His arguments around the development of consciousness in Genesis are only mystical from a modern standpoint, as he states we inherited a bad epistemology. He specifically says that this is not a story about the creation of the physical world by an entity (god), as Jews/Christians/Muslims believe and misinterpreted, but mostly the creation of a "virtual world", aka "the simulation we inhabit". Even Physicalists/Phenomenologists today take seriously the "soft" simulation hypothesis, that is, that our perception is a simulation, given that "the material world" is nothing as we see it (Quantum Graph and Collapse). So the whole point is that qualia as such emerge as data structures (if you go deep and small enough, there's no "red" anymore) from the simulation and are not fundamental in the physical sense (the color red and the experience of it are not physically bound, much less a material "thing"; the color red is a data structure emerging from the embedded space "You" construct in your mind, and the one who experiences it is not "You" but the virtual "You" embedded in the simulation). In that way, after he cleared this up, he explains consciousness in terms of cybernetics as a "control model" for attention.

  • @dr.mikeybee
    @dr.mikeybee 4 months ago +4

    Consciousness is simply recognizing a state and taking action. That's it. Everything else is bells and whistles. A thermostat is minimally conscious. There really is no mystery. Add whatever other characteristics you want, but understand that consciousness is a simple fundamental condition in the universe. It is a basic feature of computation. It's the most basic. We can't do computation without recognizing a state. We can't have a switch without consciousness. Don't get bollocksed up with magical thinking. Memory, reason, creating models, etc. are all bells and whistles of complex consciousness.
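
    A minimal Python sketch of that "recognize a state and take action" loop, using the thermostat example from the comment above (purely illustrative; the setpoint and deadband numbers here are made up):

        def thermostat_step(temp: float, setpoint: float, heater_on: bool) -> bool:
            """Recognize the current state, then pick an action."""
            if temp < setpoint - 0.5:    # state: too cold -> action: start heating
                return True
            if temp > setpoint + 0.5:    # state: too warm -> action: stop heating
                return False
            return heater_on             # inside the deadband: keep the current action

        # Usage: a tiny closed loop with a crude, made-up environment model.
        temp, heater = 18.0, False
        for _ in range(5):
            heater = thermostat_step(temp, setpoint=21.0, heater_on=heater)
            temp += 0.8 if heater else -0.3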

    • @ShihWeiChieh
      @ShihWeiChieh  4 months ago +1

      I think that's what he's saying in this film.

    • @ShihWeiChieh
      @ShihWeiChieh  4 months ago +1

      And I like that you point out that consciousness is the most basic thing for computation!

    • @user-cc8dy2dt3k
      @user-cc8dy2dt3k 4 months ago

      Your point doesn't contradict existing knowledge, but neither can it be proven with all we know. Wait until there is a mathematical equation for consciousness or an experimental proof. But for now, it's all equally philosophical or magical, as you put it.

    • @numb1010
      @numb1010 4 months ago +1

      A thermostat is conscious? There is something it is like to be a thermostat? It has an experience of turning the heat on and off? Who's recognising its state and who takes action? What makes the thermostat conscious? Is the beating of your heart conscious too? Maybe you should read Searle's Chinese Room argument...
      Sure, we recognise states, but computers are just in states... "Magical thinking", you say... I think we should show some epistemic humility and acknowledge that at the moment we don't have a clue how the brain produces consciousness. We can't even be sure it does.

    • @phantomhawk01
      @phantomhawk01 4 months ago +3

      @@numb1010 Great statement. I think most people have an incorrect understanding of what consciousness is; it's not simply reactivity or adaptivity, it's the experience and the ability to know you're having an experience.
      What it feels like to have an experience.
      The only thing that is conscious regarding a thermostat is the one who is observing the thermostat and its functioning. The thermostat isn't aware of temperature, or of what cold or hot is; it's just a process.

  • @TheIgnoramus
    @TheIgnoramus 4 months ago

    Consciousness is less accurate a word than *Gnosis*. I recommend everyone read the Platonists (Plato/Aristotle/Diogenes, etc.), then the fallacies, and then Hume. Just prepare for existentialism to rear its head. Godspeed.

  • @leoptit7992
    @leoptit7992 3 months ago +1

    nice shirt and belly

  • @ex-cursion
    @ex-cursion 2 months ago

    I question JB's motives or blind spots here: "when an LLM has to predict the next token it has to simulate causal structure... If it talks about a person's thinking, it needs to simulate mental states to some degree". It doesn't talk; it's returning token output based on token input. To equate that with talking is sloppy or disingenuous, although it's good for the AI hype and tech-bro shares.
    As for causal structure, I think that needs more definition and explanation. But saying an LLM is simulating a mental state in some way - that's a huge claim that requires huge evidence, and it's just glossed over in 2 seconds. The evidence shows LLMs don't know anything other than which token should come next, and so they 'say' incoherent things that a human never would.
    What makes me sad is that I love JB and he's changed my life for the better, but here he's being very hypocritical, and that's one of his big things. He talks so much about evidence and integrity, and on this he seems to fail terribly to meet his own standards. Why? Edit: maybe to blow minds? But that doesn't stand up with his regular emphasis on integrity.
    Edit: I note in one of the late questions he states that it's not clear if the LLM models causal structure. Why make the claim in the talk then?

    • @ice010
      @ice010 2 months ago

      I would argue that there has to be SOME form of causal structure and representation of concepts, entities, and relations within an LLM for it to produce coherent and contextually appropriate outputs, even if it falls far short of human-like reasoning. Token prediction relies on capturing both linguistic and world knowledge to some degree.
      Have you used the cutting-edge LLMs much yourself? GPT-4, Claude, etc.?

    • @ex-cursion
      @ex-cursion 2 months ago

      @@ice010 How do you know token capture includes world knowledge?
      How is world knowledge contained in parts of words and their ordering?
      I have no doubt there's some structure regarding LANGUAGE that's captured, but I think you'd have to argue that language contains world structure in it, and I don't think there's any evidence of that (and we've been studying linguistics for much longer than LLMs).
      As I understand it, there's no practical metadata in the tokens. I think for LLMs to learn world structure, they'd need layers and layers of metadata compressed into symbols and objects with a relationship to an observer and each other (whatever the hell that means). I have no idea how you'd produce that training data. It's just not how reality and living things work - learning abstraction-down.

    • @wp9860
      @wp9860 2 months ago +1

      LLMs are strictly built on correlation. Correlation is not causality. Causality implies correlation, which explains why the two so often appear together, not because correlation causes anything. LLMs have no understanding of what they are saying. Statistically calculating the next token requires no such understanding.
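
      As a toy illustration of that point (a minimal sketch with a made-up corpus): a bigram model reproduces which token tends to follow which, i.e. pure correlation, with no model of what the tokens mean or cause:

          from collections import Counter, defaultdict

          # Made-up toy corpus; purely illustrative.
          corpus = "the heater warms the room because the heater is on".split()

          # Count which token follows which: co-occurrence statistics only,
          # with no model of what the tokens refer to or cause.
          follow = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follow[prev][nxt] += 1

          def predict_next(token: str) -> str:
              """Return the most frequent follower seen in the corpus."""
              return follow[token].most_common(1)[0][0]

          print(predict_next("the"))  # 'heater' - learned from correlation alone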

  • @zvorenergy
    @zvorenergy 4 months ago

    I'll save you some time. The human brain's neural net is embedded in and modulated by a glial net, which also processes electromagnetically. So until AI catches up with that, 😂

    • @TheIgnoramus
      @TheIgnoramus 4 months ago

      Assuming AI is “just” a dumb machine at the moment would be folly. Everything you said, semantically, is correct. But you’re missing the next onion layer. Of time. It’s the one everyone is currently stuck on. When, and where.

  • @ALavin-en1kr
    @ALavin-en1kr 12 days ago

    Can A.I. become, not conscious, but A.C. (artificially conscious)? If we determine what consciousness is (the hard problem), we may have an answer. From a religious perspective, God is all there is: present everywhere, conscious everywhere, all-powerful everywhere. As humans we are not God, but as gods we share in what God has and is. A.I. is a human creation, so the question is: having given it artificial intelligence, can we make it artificially conscious? Can it share in what we have as we share in what God has?
    A.I. involves being able to process information. Consciousness is being aware of being aware. Intelligence is a product of mind. If consciousness is fundamental, and it very likely is (it doesn’t emerge with quantum events, but precedes them), then it is not subject to anything that is elemental (composed of, or arising from, elements). Humans cannot in any way, shape, or form change the non-elemental, as there is no objective access to it. So my verdict would be that consciousness is very likely off-limits to human interference. It cannot be re-created artificially. Thankfully.

  • @TheIgnoramus
    @TheIgnoramus 4 months ago

    There’s no formalization in language currently to define this idea of consciousness, and information, and their functions relative to each other. The mathematically accurate description sounds like a bunch of hypocrisies, or dualist traps. Ouroboros. It leads to an inability to analyze some aspects of reality at our *scale*. We haven’t learned to ask the right questions before creating machines that can create. So it’s all speculation. If you want to break your brain, look into remote viewing. If you are going to assume consciousness levels, the onion of influence has to be asked about first, along with what interferes with our own questions. That always comes first. If that’s not included in the theory, it’s going to be hypocritical. Take the tail out of the snake's mouth. It must be open-ended. Allowed to change. Relative, spectral, *interchangeable*. Go deep enough into the behavior of the mathematics, and the circular wordings of ancient Hindu traditions and others start to make sense. We lack the words. We need to create new language for new concepts.

  • @sproccoli
    @sproccoli 4 months ago +3

    I think he is just flat wrong about a lot of this. I do not believe consciousness precedes perception, and I do not think learning is dependent on consciousness.

    • @ShihWeiChieh
      @ShihWeiChieh  4 months ago

      Well, this is highly hypothetical of course, but I think it's the best one I have ever seen in this direction. But he also clearly tells you it's impossible to say DL = consciousness, because the machine is not a continuum with our reality.

    • @Crawdaddy_Ro
      @Crawdaddy_Ro 4 months ago

      You have to first define "learning." I would consider a dog to be conscious and capable of learning. Not as conscious as we are, but conscious nonetheless. Just as a snake is less conscious than a dog or cat, yet still capable of learning, just on a smaller scale. There must be many levels of consciousness spread through life. In this respect, I would say that as consciousness in a lifeform increases, learning and perception increase as well.

    • @sproccoli
      @sproccoli 4 months ago

      I can think of what perception is like without consciousness. I cannot think of what consciousness is like without perception. In my estimation, consciousness is self-perception; it's identifying the subject of perception in the object of perception. "Here I am".
      Mere perception is degenerate here. You may actually be perceiving yourself, looking at your own hand, for instance, but you are not identifying that it is a part of the subject of perception. "Here is the hand", rather than "Here is my hand".
      Learning seems simple to define: it's cumulative adaptation. This is different from mere adaptation, because mere adaptation doesn't necessarily accumulate. A rubber ball is adapting to the force applied to it by stretching or compressing, but it loses that adaptation when a different force is applied; it doesn't remember, so it isn't learning.
      I would say a ball of clay is a better example of learning than a rubber ball is, even though they both adapt to forces applied to them.
      And in neither of those examples do I see anything that is necessarily self-identifying. The ball of clay does not need a concept of self any more than the ball of rubber. It doesn't need to have a model of self or perception to be more pliable than rubber and accumulate impressions.
      If anything, I would say that the rubber is more 'self-aware' because it actually has to return to its previous shape. It has to prefer its own previous shape over the impression of its environment. And ultimately, at this extreme, this completely defeats learning.

    • @steveruqus2680
      @steveruqus2680 4 months ago

      What do you think? It's average to disagree.

    • @phantomhawk01
      @phantomhawk01 4 months ago +1

      I know it's possible to have an experience in consciousness without being self-aware.
      Even later having memories of such experiences, although in the moment I wasn't self-aware; but even the memory is based in my consciousness now, not during the occasion itself.