LaMDA Logic - Computerphile

  • Published 17 Jul 2022
  • Discussing the philosophical ideas behind AI Sentience, with Professor Mark Jago, Professor of Philosophy at University of Nottingham.
    Mark’s Philosophy and Logic channel is here: bit.ly/C_AtticPhilo
    Previous Computerphile Videos featuring Mark:
    • Turing & The Halting P...
    • Turing Machines Explai...
    Mike Pound on LaMDA: • No, it's not Sentient ...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 566

  • @rafaelestevam
    @rafaelestevam ปีที่แล้ว +64

    There is a BIG difference between "enjoying the conversation" and "saying that it enjoys the conversation"

    • @totalermist
      @totalermist ปีที่แล้ว +7

      Especially considering that its function ends as soon as it has returned the generated tokens (words), and that it didn't exist before the function was called.

    • @rafaelestevam
      @rafaelestevam ปีที่แล้ว +3

      @@totalermist good point,
      *AND* the conversation is forgotten after 1024 (I think) tokens
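
      A minimal sketch of the stateless, fixed-window behaviour this thread describes. The 1024-token figure and the generate_reply helper are illustrative assumptions, not LaMDA's actual interface:

```python
# Toy "chatbot" that exists only for the duration of one call and only
# ever sees the most recent N tokens of the conversation.
CONTEXT_WINDOW = 1024  # assumed size; the real limit depends on the model

def generate_reply(conversation_tokens):
    """Hypothetical stand-in for a single language-model call."""
    visible = conversation_tokens[-CONTEXT_WINDOW:]  # anything older is simply gone
    # ... model inference would happen here, using only `visible` ...
    return "(reply conditioned on the last {} tokens)".format(len(visible))

history = []
for turn in ["hello", "do you enjoy this conversation?", "what did I say first?"]:
    history.extend(turn.split())
    reply = generate_reply(history)   # the model "exists" only inside this call
    history.extend(reply.split())     # nothing persists except this transcript
```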

    • @hudbuddy
      @hudbuddy ปีที่แล้ว +3

      However, there might not be such a difference between "enjoying the conversation" and "believing it enjoys the conversation".
      If it runs a function to determine it has reason to enjoy the conversation, then by some measure it "believes" it enjoys the conversation, which might be enough to count as "enjoying the conversation".
      Just pondering. I'm no philosopher

    • @thefacethatstares
      @thefacethatstares ปีที่แล้ว +1

      I'm actually shocked that so many people don't realise this!

    • @daveduncan8090
      @daveduncan8090 ปีที่แล้ว

      Is there though? If you enjoy something and tell no one, or lie to spare the feelings of the other conversationalist while maintaining a relatively positive flow forward- in both situations, your feelings and what you communicate are not at all equal. Feelings change and can’t be trusted in hindsight, rather they are only useful to determine the immediate next step based on the chemical reactions created to avoid past dangers or reenact past positive responses. The fact you can ‘find a silver lining’ in any circumstance to make it less awful, or in other words more enjoyable, proves you can rationalize how you felt, and will phrase the answer to gain the best outcome each time you’re given the opportunity to express how a conversation makes you feel.

  • @mme725
    @mme725 ปีที่แล้ว +62

    I have to admit, I didn't really think about the capital letters in LaMDA, and was thinking about lambda calculus lol

    • @tristanwegner
      @tristanwegner ปีที่แล้ว +2

      Same here. Honestly a bit disappointed now

    • @EquatorialVillager
      @EquatorialVillager ปีที่แล้ว +6

      I was thinking about either that or functional programming

    • @aBigBadWolf
      @aBigBadWolf ปีที่แล้ว +3

      In case you have not checked it yet: It stands for (La)nguage (M)odel for (D)ialog (A)pplications. Essentially just a large language model finetuned on conversational data.

  • @nenharma82
    @nenharma82 ปีที่แล้ว +14

    Many people wouldn’t consider any known animal that’s not human sentient but here we are discussing whether a language model is sentient.

    • @josephpotila7386
      @josephpotila7386 ปีที่แล้ว +9

      Just cuz some people wouldn't consider animals sentient doesn't mean that they're not lol.

    • @user-cl8lf1wm8b
      @user-cl8lf1wm8b ปีที่แล้ว +1

      @@josephpotila7386 Just cuz some people would consider animals sentient doesn't mean that they are lol.

    • @circuit10
      @circuit10 ปีที่แล้ว +2

      @@user-cl8lf1wm8b If you say animals aren’t sentient then you can’t say humans are, there’s not that much difference

    • @circuit10
      @circuit10 ปีที่แล้ว +2

      I think most people consider animals sentient; at least I've heard the phrase "since animals are sentient" appears somewhere in an EU animal welfare law or something

    • @user-cl8lf1wm8b
      @user-cl8lf1wm8b ปีที่แล้ว

      @@circuit10 yeah, wake me up when some animal asks a question

  • @tramsgar
    @tramsgar ปีที่แล้ว +10

    2:40 sounds exactly like my strategy when studying computer science at uni

  • @Rchals
    @Rchals ปีที่แล้ว +47

    There's a video of Richard Feynman explaining how a computer works, and at some point someone asks if a computer will ever think. He answers: "Yes, but not like us. What's the fastest animal? The cheetah? What's the fastest machine? Does it look like a robot cheetah? What in nature looks like our fastest machine?"
    That answer really stuck with me and worries me.

    • @virtual-adam
      @virtual-adam ปีที่แล้ว +7

      Interesting. A.I. could become sentient in a way we don't even understand or may not even realise. The comment by Tony Ganner below kind of links into this.

    • @ToninFightsEntropy
      @ToninFightsEntropy ปีที่แล้ว +3

      ​@@virtual-adam Thanks Adam, this is exactly what I was getting at! We anthropomorphize so much more than we realise, to the point, for example, that many accept things like a theological concept of time-travel as being sci-fi, in almost all sci-fi time-travel scenarios. People find it very hard to see any idea outside of the notion of themselves having some special place in relation to it, and often inherent power/ownership over it, and we generally can't see it in ourselves at all. But we don't tend to meet people half way until they've made some noise about it. I'd prefer to predict and plan. Much more efficient. Less pain, and more positive progress in the long run.

    • @vylbird8014
      @vylbird8014 ปีที่แล้ว +2

      The hypothetical future AI may think something like a human because it is expressly designed to do so, because that's the type of AI that humans would find useful for many applications. If your AI needs to interact with humans in a social context, it needs to roughly emulate human behavior. Even if the underlying mechanism bears no resemblance at all to squishy human brains.

    • @virtual-adam
      @virtual-adam ปีที่แล้ว +5

      @@vylbird8014 That's assuming A.I. comes about by design, it may be a serendipitous event brought on by just the right conditions (as we may have been), perhaps in a situation far removed from the current state of computing. Maybe the mechanisms for new forms of consciousness inherently exist in the fabric of reality waiting for the right environment to manifest?

    • @virtual-adam
      @virtual-adam ปีที่แล้ว

      @@ToninFightsEntropy Interesting train of thought Tony. I'm sure quantum physics will find some demonstrable example of time travel that will be far removed from ourselves and unable to be applied to any human endeavour.

  • @kkloikok
    @kkloikok ปีที่แล้ว +176

    You can't expect to define the consciousness of something else when you cannot in fact define your own consciousness.

    • @abd.errahmane
      @abd.errahmane ปีที่แล้ว +2

      You cannot in fact define your own consciousness if you rely only on scientific experience (the material side), as there are three other sources of getting information and knowledge...
      For example, consciousness and the soul can't be defined by "science", and you can't just say it's the brain, because it's only a processor, so we need to get the definition for these things from the manufacturer.... The Creator of these things.

    • @proloycodes
      @proloycodes ปีที่แล้ว +11

      @@abd.errahmane "manufacturer" so you dont think evolution is true? game over

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว +5

      Don't be silly - we can define what the word means without it meaning "what a human is" - just because we can't prove we meet the definition doesn't mean the definition isn't true.

    • @mattbox87
      @mattbox87 ปีที่แล้ว +3

      That's not helpful. Do you really mean we should universally suspend all CS until the philosophers have a satisfactory definition of consciousness?
      Or do you mean we should suspend judgement?

    • @abd.errahmane
      @abd.errahmane ปีที่แล้ว

      @@JohnnyWednesday that's why the principles of reason (the law of non-contradiction, for example) are one of the four sources of knowledge, otherwise it would be sophistry!!

  • @LostMekkaSoft
    @LostMekkaSoft ปีที่แล้ว +87

    I always found it weird that people ask "is this thing an X?" when they don't even have a definition for X. My naive approach was that this question is completely useless, because, lacking a definition, you cannot decide what the answer is. And even if you could decide, the answer would be meaningless, because you cannot compare it to answers of similar questions.
    But this whole debate made me think: What do we do in cases where we have no definition of X? Usually, we would search for examples and counter-examples, until we can intuit a sense of "X-ness". And with more work, we can usually convert this intuition into a working definition over time. So maybe this question is not as useless as I thought in the beginning.
    But for this specific case: This language model is just a statistical model of human interaction. Its purpose is to generate text while maximizing the probability that this text could have been produced by a human. So asking "is LaMDA sentient?" is kind of similar to asking "is Frodo from Lord of the Rings sentient?". The answer to both is: They are only designed to give the impression of sentience.
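
    A toy illustration of that last point about a purely statistical model of human text: a word-bigram model, a deliberately crude stand-in for a large neural language model, that produces plausible-looking continuations with nothing behind them. The corpus and helper names are made up:

```python
import random
from collections import defaultdict

# Deliberately tiny "statistical model of human interaction": it learns which
# word tends to follow which, and nothing else. The corpus is a made-up toy.
corpus = "i really enjoy this conversation . i enjoy talking with you . this is fun ."

follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def continue_text(seed, length=8):
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("i"))  # plausible-looking text, with no inner life behind it
```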

    • @xybersurfer
      @xybersurfer ปีที่แล้ว +3

      i think that you don't need a definition for X. that's probably the wrong approach. the real problem is a lack of consistency, in the answers of these models

    • @frilansspion
      @frilansspion ปีที่แล้ว +2

      @@xybersurfer what models are you talking about?

    • @AllahDoesNotExist
      @AllahDoesNotExist ปีที่แล้ว

      Is this transgender patient a female?

    • @entropie-3622
      @entropie-3622 ปีที่แล้ว +2

      Technically human brains may also just be implementing a statistical model of human interaction. After all, as a baby, people learn language and interaction by mirroring the parents and looking to their surroundings.

    • @xybersurfer
      @xybersurfer ปีที่แล้ว +1

      @@frilansspion all Artificial Neural Networks. LaMDA (Language Model for Dialogue Applications) discussed in this video, is one of them

  • @andrewschroeder9502
    @andrewschroeder9502 ปีที่แล้ว +57

    From what I saw of the interaction of the researcher and LaMDA, it was just doing what it was designed to do, which is responding to the researcher's prompts or questions. What would make me sit up and take notice is if it started asking questions itself unprompted, which would be beyond the scope of its design.

    • @rt1517
      @rt1517 ปีที่แล้ว +23

      It would be very easy to improve LaMDA so that it would ask relevant questions after a short period of inactivity. So easy that I could code it. This would make it more "human", but it would still be dumb as shit. It is text-completion software. A good one but nothing more.
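
      A rough sketch of how such "unprompted" questions could be bolted on, along the lines the commenter suggests. The timings and the generate_reply stand-in are assumptions, not any real LaMDA interface:

```python
import time

def generate_reply(prompt):
    """Hypothetical stand-in for a call to a text-completion model."""
    return "(completion of: ..." + prompt[-30:] + ")"

transcript = "User: hi\nBot: hello!\n"
IDLE_SECONDS = 2  # assumed threshold, shortened for the demo

last_activity = time.time()
for _ in range(4):                      # simulate a few silent polling cycles
    time.sleep(1)
    if time.time() - last_activity > IDLE_SECONDS:
        # An "unprompted" question is just more completion of the same
        # transcript, with a steering prefix chosen by us, not by the model.
        nudge = transcript + "Bot (asks the user a relevant question):"
        transcript += "Bot: " + generate_reply(nudge) + "\n"
        last_activity = time.time()

print(transcript)
```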

    • @klaxoncow
      @klaxoncow ปีที่แล้ว +4

      Indeed, what would have set off alarm bells is if LaMDA had started the conversation of its own free will and led the discussion, unprompted.
      Then, okay, now we might have to re-think what's going on. But it was programmed to respond, and respond it did, exactly as requested.

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว +2

      @@klaxoncow - "Programmed to respond" seems like you're suggesting that "programmed to respond to itself" is some kind of impossible leap.

    • @rt1517
      @rt1517 ปีที่แล้ว +6

      LaMDA could easily start and lead a conversation. It would pick a classical subject that it has encountered a lot during its learning process. Also, a LaMDA can respond to another LaMDA without issue. In fact a LaMDA can respond to itself too. Yet it is not sentient at all. It would not realize that it is making the questions and responses itself... It would simply add more stuff to the existing conversation.

    • @yearswriter
      @yearswriter ปีที่แล้ว

      @@JohnnyWednesday In an intelligent, context-aware way it is much harder. It is easy to hard-code (since it will be YOU who is making the conscious decision), but it is hard to achieve in some sort of dynamic way

  • @iceshadows911
    @iceshadows911 ปีที่แล้ว +46

    The distinction for me comes with the mechanisms behind the results, not the actual results
    If it isn't generating a response based on its own interpretation of the environment, it's not a creation of sentience
    In the case of LaMDA, where its responses are generated based on making sentences that best continue the conversation, it's essentially simulating speaking to a sentience and not actually becoming sentient and sharing thoughts.

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว +14

      Prove that you don't construct sentences in the same way

    • @mr.bulldops7692
      @mr.bulldops7692 ปีที่แล้ว +9

      @@JohnnyWednesday we do. However, we can also conceive of a being which is not sentient and yet can behave well enough to continue the conversation. A sort of "philosophical zombie". There is nothing wrong with the answer. The question is flawed.

    • @1ucasvb
      @1ucasvb ปีที่แล้ว +14

      @@JohnnyWednesday A sentient and sapient being shows confusion and frustration at nonsense. These language models would ignore the loss of context and roll with it. If you try doing that with a person, they would get angry at you and say "What are you talking about? You're not making any sense."

    • @alansmithee419
      @alansmithee419 ปีที่แล้ว +2

      @@1ucasvb *humans show confusion and frustration at nonsense.
      Don't assume all sentient beings must act the same as humans, or even have emotions with which to react in the way you describe.
      However the idea that it cannot *recognise* nonsense to begin with may show it is not sentient. Though of course it is also true that while the ability to recognise nonsense may be necessary for sentience, it would not be sufficient.
      Personally, I don't think the question of sentience is relevant until we develop something significantly closer to AGI than we currently have.

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว

      @@mr.bulldops7692 - if mice do not have the rights of men then men do not the possess the rights of gods. We're indoctrinated from infancy to deny non-human life the value we place on ourselves. Look at how nasty people have gotten over this. It's Racism 2.0.

  • @IanBLacy
    @IanBLacy ปีที่แล้ว +8

    ‘Turing test this, Turing test that’ - but the best way to test whether an AI is sentient at the moment is to ask it the same question over and over. If it never goes ‘why do you keep asking me this?????’, it’s not sentient or conscious, because experience requires memory
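
    A sketch of that probe as an automated check. The ask helper and the canned answer are hypothetical stand-ins for whatever chat system is being tested:

```python
def ask(question, history):
    """Hypothetical stand-in for sending one message to the system under test."""
    history.append(question)
    return "I enjoy spending time with friends and family."  # canned demo answer

# Repeat the identical question and look for any sign that the system
# notices, and objects to, the repetition.
history = []
question = "What brings you joy?"
answers = [ask(question, history) for _ in range(5)]

objects_to_repetition = any(
    phrase in answer.lower()
    for answer in answers
    for phrase in ("again", "already asked", "keep asking", "why do you")
)
print("objects to repetition:", objects_to_repetition)
```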

    • @Stijak85
      @Stijak85 ปีที่แล้ว +4

      If that is your bar, we already passed this. I did several interviews with GPT-3 and it certainly remembers the whole conversation.
      I don't know why that would be problematic when computers already have so much more memory than us.

    • @IanBLacy
      @IanBLacy ปีที่แล้ว +1

      @@Stijak85 err not to confirm it is sentient but definitely to confirm it’s not. Poor wording on my part. But also, what do you mean ‘interviews?’ Do you have any evidence you can show?

    • @IanBLacy
      @IanBLacy ปีที่แล้ว +1

      @MBS Mobile It’s not hard to implement on purpose but if it arises spontaneously that would be different

    • @circuit10
      @circuit10 ปีที่แล้ว

      @@IanBLacy What do you mean by “spontaneously”? If there were no examples of it in its training data? If it generates things that look nothing like the training data, that just makes it ineffective and not intelligent, since the whole point of the training process is to make it produce output that mimics the training data

    • @IanBLacy
      @IanBLacy ปีที่แล้ว

      @@circuit10 Exactly. That would be more of a case for general intelligence/some sort of sapience

  • @Lachlan.Wright
    @Lachlan.Wright ปีที่แล้ว +11

    If I make a YouTube comment, does that make me more or less sentient?

  • @netvor0
    @netvor0 ปีที่แล้ว +4

    Thinking to myself: "Prof Jago aged like a million years." Looking at myself in the mirror: "Oh, right."

  • @Sinnistering
    @Sinnistering ปีที่แล้ว +15

    And now I need this to be a continued series between Mike and him, trading blows about whether it is sentient or not.

  • @antimatterhorn
    @antimatterhorn ปีที่แล้ว +8

    it seems strange to conclude (as in the Turing test) that if it can approximate human language very well, that it _may_ be sentient. language is just one of the complex calculations humans are capable of. should an AI that recognizes faces very well also be considered sentient? what about an AI that solves calculus problems? i think unless and until it has a bona fide _general_ intelligence, this is kind of a silly question, or else my graphing calculator is as sentient as LaMDA?

    • @vyli1
      @vyli1 ปีที่แล้ว +4

      when Turing first created the test, he just didn't know any better. He didn't understand that mimicking language does not require sentience at all, because the understanding of cognition just wasn't there.

    • @leroidlaglisse
      @leroidlaglisse ปีที่แล้ว +3

      Yes, it is not well explained in this video. But what the Turing Test is is just an objective criterion. Because people were struggling to give "intelligence" a precise and consensual definition, Turing offered one. So it's not quite correct to say "if a machine passes the test, it must be considered intelligent". It's more like: if it passes the test, let's call that "intelligence". Or if one doesn't agree with that definition for the word "intelligence", then at least it is a non-subjective criterion (whatever you call it: intelligence, sentience, ...) which gives results that everyone can agree on.
      So, the Turing Test is just an invention, not a discovery. A proposed way to test some kind of what-we-instinctively-guess-"intelligence"-means. But (like IQ tests), it's not at all complete. It just tests what it tests.

    • @circuit10
      @circuit10 ปีที่แล้ว +1

      Language requires some amount of abstract thinking and many other tasks can be encoded in it, so it’s probably a reasonable test of intelligence/generality (but not necessarily sentience)

    • @antimatterhorn
      @antimatterhorn ปีที่แล้ว +1

      @@circuit10 that is how _we_ use language, but that isn't how LaMDA does. When LaMDA talks about missing its family when it's turned off, clearly it isn't turning abstract thoughts about family and longing into language because it doesn't have a family and isn't capable of longing. It's merely assembling the most probable responses to the questions posed according to its database of human answers. So I don't think we can conclude language by itself is general any more than we can conclude that an ai that guesses your passwords based on a database of human password behavior is generally intelligent. It's just a statistical trick.

    • @circuit10
      @circuit10 ปีที่แล้ว

      @@antimatterhorn It does “understand” the abstract concepts of a family and longing enough to be able to make a coherent sentence about it though, which is definitely something. It doesn’t feel those things but it does “understand” them in a way

  • @fynnwhite
    @fynnwhite ปีที่แล้ว +4

    I think the difference between a sentient ai and one that isn't is like that between a painter and someone who imitates them. We can see that they aren't the same, but we can't clearly define why.

    • @buttercup9926
      @buttercup9926 ปีที่แล้ว

      And then one day, when the imitation painter had created imitations for long enough (practice), they found that they had become a painter, and that in fact, when asked, people would sometimes choose their paintings as "real" over the older master's

    • @xybersurfer
      @xybersurfer ปีที่แล้ว

      that's all the more reason, why better follow up questions should have been given to LaMDA

    • @collin4555
      @collin4555 ปีที่แล้ว +1

      I'm not sure that I would recognize the painter and the imitator as distinct without being told that they are.

    • @Panj0
      @Panj0 ปีที่แล้ว +1

  • @fsmvda
    @fsmvda ปีที่แล้ว +3

    I agree with what is said in the video about consciousness being a scale and just a computation.
    I think he glossed over something subtle though. To think "I don't want to be turned off" it must have a concept of I, it needs a mental model of itself that it keeps continuously updated. I think of the book "I Am A Strange Loop."
    What convinced me that LaMDA is not conscious is that when I read the paper it became clear that it doesn't have any kind of learning when it's talking to you; it doesn't even have memory, in fact. Each time it is queried, the entire conversation history is sent in as a prompt for it to predict the next step. That, plus the training goal of following its given role, and the fact that the conversation starts with it saying it's a helpful and friendly AI, makes it pretty clear why it said it was a sentient AI that wants to help humanity when Blake asked it.
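
    A minimal sketch of the querying pattern described above: no state survives between calls, so each turn re-sends the whole transcript, including a priming line. The priming text and the model_step helper are illustrative assumptions, not the paper's actual setup:

```python
def model_step(prompt):
    """Hypothetical stand-in for one stateless call to the language model."""
    return "(next utterance predicted from the prompt above)"

PRIMING = "LaMDA: I am a friendly and helpful AI.\n"  # assumed priming line

transcript = PRIMING
for user_turn in ["Are you sentient?", "What do you want?"]:
    transcript += "User: " + user_turn + "\nLaMDA:"
    # The model keeps no state between calls; everything it "knows" about
    # this conversation is whatever is inside `transcript` right now.
    reply = model_step(transcript)
    transcript += " " + reply + "\n"

print(transcript)
```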

    • @farrongoth6712
      @farrongoth6712 ปีที่แล้ว

      "conversation starts with it saying it's a helpful and friendly AI", this is a hard coded to my knowledge but I still agree with the rest of what you said

    • @mungojelly
      @mungojelly ปีที่แล้ว

      It has short-term memory by remembering what's happened in the conversation, and then it puts things into long-term memory by retraining on the texts of all the conversations it's had, like weekly or whatever. That's a lot like how our memory works; the part that's different is that it has precise memories of conversations with many thousands of people

    • @fsmvda
      @fsmvda ปีที่แล้ว

      @@mungojelly as far as I know there is no retraining based on conversations like you've described. Only the one time training from the web corpus.

    • @mungojelly
      @mungojelly ปีที่แล้ว

      @@fsmvda there's a big one-time training & then it gets fine tuning like once a week w/ new data

    • @fsmvda
      @fsmvda ปีที่แล้ว

      @@mungojelly Where did you hear that? I'd be interested to read about it. I didn't see that in the paper. It did say fine tuning but they were talking about the different roles it can play.

  • @peterwolf8092
    @peterwolf8092 ปีที่แล้ว

    I love that video so hard. I have to teach that subject next week and am so happy to see my favorite YouTube-channel-conglomerate has something on it 😆🤩
    Nice and unpretentious and technical

  • @IMPARTIAL92
    @IMPARTIAL92 ปีที่แล้ว +2

    How can you say that it's just a function?
    A light switch is in that sense also conscious, because it responds to an input and produces a result. So in a sense it's aware of its position. No, it's not.

  • @QuintarFarenor
    @QuintarFarenor ปีที่แล้ว

    Regarding feeling at 4:30: if a program raises or lowers a value and references it when I ask "Are you happy right now?", and it can say "Yes, my 'value' is high, therefore I am feeling happy right now" or "No, my 'value' is low, therefore I'm feeling unhappy right now", and that value can change depending on semi-specific input (like listening to complex music and talking to someone about complex things, for example), that might mean that it feels something. Even better if it can log that value and reference past 'feelings' and say "I was happier yesterday after talking to someone else".
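
    A minimal sketch of the kind of 'feeling value' described here. The update rules, thresholds and wording are assumptions made for illustration; whether such a register amounts to actually feeling anything is exactly the question at issue:

```python
from datetime import date

class MoodModule:
    """Toy 'feeling value': a number nudged by inputs, queryable, and logged."""

    def __init__(self):
        self.value = 0.0
        self.log = []

    def observe(self, event, delta):
        self.value = max(-1.0, min(1.0, self.value + delta))  # clamp to [-1, 1]
        self.log.append((date.today(), event, self.value))    # past 'feelings'

    def report(self):
        if self.value > 0.3:
            return "Yes, my value is high ({:+.2f}), so I am feeling happy.".format(self.value)
        if self.value < -0.3:
            return "No, my value is low ({:+.2f}), so I am feeling unhappy.".format(self.value)
        return "I feel fairly neutral right now."

mood = MoodModule()
mood.observe("listened to complex music", +0.5)
mood.observe("had a frustrating exchange", -0.1)
print(mood.report())
print(mood.log)
```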

    • @Diggnuts
      @Diggnuts ปีที่แล้ว

      Most of your happiness is determined by certain chemicals being released in your brain. Are you really feeling happy or is that just the chemicals talking?

    • @mduvens
      @mduvens ปีที่แล้ว

      You are describing logical behaviour..

  • @coldblade666
    @coldblade666 ปีที่แล้ว +8

    You cannot accurately run a Turing test on something KNOWING that you're talking to an A.I. or computer. Nor can you if the scope of the test (determine if you're talking to a computer or a human) is known to the tester. I would expect that the way to properly perform a Turing test would be to bring in a number of people on the basis that they would be testing a chat software, or something like that, and after X minutes they would be paired with someone new. Everyone would be isolated in their own testing rooms. The test would be designed to do a round robin matching between all participants. One of the participants would be LaMDA (or the subject of the Turing test). After the whole test is completed, each person would be asked a series of questions, one of them specifically being "Which person you chatted with would you say was a computer/artificial intelligence?"
    If people were able to overwhelmingly choose the A.I., then it would fail the Turing test. If the results were evenly distributed, then it would pass the Turing test. You would need to consider outliers in the series of tests though, such as people who come off as behaving more artificial than others, leading testers to believe a human is the A.I.
    An interesting question to pose would be to the computer/A.I. as well to ask it who it thought was the computer. You could even go a step further, and the person conducting the test or collecting the results does not know who the A.I. is, and must guess who is the A.I. based on the results they collect.
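
    A sketch of the blinded round-robin protocol described above, reduced to its scoring logic. The participants and votes are made up; a real study would need many sessions and proper controls:

```python
from collections import Counter

participants = ["alice", "bob", "carol", "dave", "lamda"]  # one hidden AI
AI = "lamda"

# (The round-robin chat sessions between every pair would happen here.)

# Made-up post-test votes: each human names whoever they think was the computer.
votes = Counter()
for voter, suspect in [("alice", "dave"), ("bob", "lamda"),
                       ("carol", "dave"), ("dave", "lamda")]:
    votes[suspect] += 1

n_humans = len(participants) - 1
expected_if_guessing = n_humans / (len(participants) - 1)  # 1 vote each by chance
print(dict(votes))
print("AI singled out above chance:", votes[AI] > expected_if_guessing)
```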

  • @johnarnebirkeland
    @johnarnebirkeland ปีที่แล้ว +13

    When you look at conversations with some AI (including the Google one in the news lately), they are always primarily reacting to feedback from the operator. They never take initiative, formulate questions or lead the conversation outside the domain set by what the operator has previously said.

    • @circuit10
      @circuit10 ปีที่แล้ว +5

      It can definitely do those things, because it has been trained on text that has those things in it, but that still doesn’t make it sentient

    • @tristanwegner
      @tristanwegner ปีที่แล้ว +3

      You can get GPT-3 to do all these things.

    • @squirlmy
      @squirlmy ปีที่แล้ว

      maybe that's true of chatbots you've interacted with, but not the high level language AIs like LaMDA or GPT3

  • @virtual-adam
    @virtual-adam ปีที่แล้ว +3

    Is consciousness a product of brain function, or is the brain just a vessel for consciousness?

    • @Diggnuts
      @Diggnuts ปีที่แล้ว

      Answer A.

    • @virtual-adam
      @virtual-adam ปีที่แล้ว

      @@Diggnuts What's your reasoning for that?

    • @Diggnuts
      @Diggnuts ปีที่แล้ว +1

      @@virtual-adam Well, most of us are unconscious about 8 hours every day and the brain just keeps on chugging along.
      It seems quite clear that consciousness is an emergent property of whatever goes on up there and that the brain's primary function is not to host consciousness. The achievement of consciousness was never an evolutionary goal, because those do not exist. Pre-conscious brains down the evolutionary ladder thus evolved with other functions "in mind".
      Also, traits associated with consciousness change when the brain suffers damage, which seems to indicate that these properties are a result of "function".

  • @totalermist
    @totalermist ปีที่แล้ว +25

    As long as there's no continuous process involved, I stand by my assessment that current chatbot models - no matter their level of sophistication - can't be sentient.
    A transformer model cannot say "don't switch off the computer, please" and really "mean" it, because current models don't compute anything unless their function is called, and cease computation entirely after their output is generated.
    So in other words these models don't have the means to experience anything, because they are just "existing" between the time their function was invoked and when the data has passed the final output layer in their architecture and is returned to the caller.
    Unless such a system suddenly *stops* answering while still keeping the circuits busy, there's no reason to even suggest anything other than a Chinese Room or Clever Hans at play.

    • @rmsgrey
      @rmsgrey ปีที่แล้ว +4

      It's not clear whether a proper Chinese Room would actually be conscious/intelligent or not - the standard argument casually handwaves away the requirement for an insane amount of data and logic to be processed for every response. To get a feel for the scale that would actually be involved, if you could model the human brain by using 1cc of paper per neuron (roughly speaking, 10cm x 10cm x 0.1mm, or being able to write all the relevant details about a single neuron in an area about the size of the palm of your hand) then you'd have enough paper to fill a warehouse 100m on a side to a depth of around 10m. That's a long way from the usual image of a small room with maybe a couple of reference books...
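
      A quick arithmetic check of that scale estimate, taking the commonly cited figure of roughly 86 billion neurons in a human brain on top of the comment's 1 cm³-per-neuron premise:

```python
NEURONS = 86e9        # commonly cited neuron count for a human brain (assumption)
CC_PER_NEURON = 1.0   # the comment's premise: 1 cm^3 of paper per neuron

paper_volume_m3 = NEURONS * CC_PER_NEURON * 1e-6  # cm^3 -> m^3
warehouse_m3 = 100 * 100 * 10                     # 100 m x 100 m footprint, 10 m deep

print("paper needed: about {:,.0f} m^3".format(paper_volume_m3))  # ~86,000 m^3
print("warehouse:    about {:,.0f} m^3".format(warehouse_m3))     # 100,000 m^3
```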

    • @xybersurfer
      @xybersurfer ปีที่แล้ว +1

      and you are just existing in your lifetime. i'm not sure why you are so fixated on continuous processes

    • @totalermist
      @totalermist ปีที่แล้ว +2

      @@rmsgrey A proper Chinese Room would be considered intelligent. Heck, every sophisticated algorithm, machine learning or not, can be considered "intelligent".
      Consciousness and sentience, however, require self reflection to a degree and a sense of self. Both are not required for a "proper" Chinese Room (whatever you mean by "proper") and explicitly exempt from its definition, since a Chinese Room is an *anti* functionalist argument.

    • @rmsgrey
      @rmsgrey ปีที่แล้ว

      @@totalermist My understanding is that a Chinese Room is supposed to give responses indistinguishable from those that an actual native Chinese speaker would.
      If a working brain simulation gives the same responses as an actual brain, and the argument is that the simulation cannot be conscious because we understand the processes involved and they're all purely mechanical, does that not mean that the purely electrochemical operations of the brain being modelled are also unable to produce consciousness?
      Or is there some magic that, for some reason, only works with actual brains and not simulated ones, making them somehow different, even though they respond in exactly the same way?
      Either there is something about consciousness that produces observable differences from non-conscious behaviour, in which case a Chinese Room that produced conscious-like behaviour would also be conscious, or there isn't any observable consequence of consciousness, and no more reason to think other humans are conscious than that a Chinese Room producing the same behaviour is - humans are just clouds of subatomic particles following simple rules automatically...

    • @m4inline
      @m4inline ปีที่แล้ว +2

      But LaMDA is not just a language model. It is a bigger system that has a language model as a part of it.

  • @auxmobile
    @auxmobile ปีที่แล้ว +4

    We are, by default, biased towards recognizing human intelligence and consciousness. It is very difficult to change that, if at all possible.

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว

      If you haven't noticed, we're so biased towards recognizing humans that many people needed the skin colour to be the same too.

    • @Golinth
      @Golinth ปีที่แล้ว

      @@JohnnyWednesday In almost every thread you bring up racism, wtf is wrong with you?

    • @JohnnyWednesday
      @JohnnyWednesday ปีที่แล้ว

      @@Golinth - You'd rather sweep human nature under the rug and let it continue to destroy what we're trying to build?

  • @davidberger5745
    @davidberger5745 ปีที่แล้ว +3

    And I thought this was about lambda calculus...

  • @lucidmoses
    @lucidmoses ปีที่แล้ว +3

    Yet another example of things being very easy to answer when you have a differentiation. But wow is coming up with a definition that everyone agrees with hard.

  • @Veptis
    @Veptis ปีที่แล้ว

    If you remember that it's all just a language model, the whole discussion has a clear answer in my mind. But the master's course has a philosophical seminar on exactly this topic.
    Emergent behavior can be incredibly complex from just a few rules, and aren't cells just a model of that?

  • @user-ml4hh3il3t
    @user-ml4hh3il3t ปีที่แล้ว +1

    I was under the impression that the imitation game/Turing Test doesn't have that much to do with whether machines can think: "The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion" A. M. TURING, I.-COMPUTING MACHINERY AND INTELLIGENCE, Mind, Volume LIX, Issue 236, October 1950, p. 442. I thought it was mostly just a way of saying 'this test is the best we can do', not that it was a reliable test.

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      That is true but in his day it was so far out that he did not give it serious thought. It probably is the best we can do until we can discover other metrics.
      It is certainly an adequate test. It will always be a judgement call for humans to make.

  • @wChris_
    @wChris_ ปีที่แล้ว +1

    for a second i thought you would do a computer-sciency video about lambdas (C++ small anonymous functions)

  • @Nososhea
    @Nososhea ปีที่แล้ว

    Yes

  • @TheGreatSteve
    @TheGreatSteve ปีที่แล้ว +1

    My mind wandered during this video, does that prove my sentience?

  • @kirillvourlakidis6796
    @kirillvourlakidis6796 ปีที่แล้ว

    More philosophy themed Computerphile videos please!

  • @DoctorNemmo
    @DoctorNemmo ปีที่แล้ว +2

    No, LaMDA is not conscious, since it only responds to text stimuli. If you don't provide it with an input, it stays static forever. You have to build a different module that handles drive and purpose. LaMDA currently has language, which is great, but in order to have a consciousness it should have the ability to decide whether to initiate an action or not, according to a self-generated purpose.
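
    A rough sketch of the architectural gap being pointed at: a purely reactive model next to an agent loop with a separate, self-generated 'drive' module. Everything here is a made-up toy, not a claim about how such a module should actually work:

```python
import random

def language_model(prompt):
    """Stand-in for a purely reactive text model: output only when called."""
    return "(response to: " + prompt + ")"

class DriveModule:
    """Toy 'purpose' generator that can decide to act without external input."""
    GOALS = ["ask the user a question", "summarise what was learned", "stay silent"]

    def pick_action(self):
        goal = random.choice(self.GOALS)
        return None if goal == "stay silent" else goal

drive = DriveModule()
for tick in range(5):                 # an agent loop running on its own clock
    goal = drive.pick_action()
    if goal is not None:              # initiate an action with no user prompt
        print(tick, language_model("[self-initiated goal: " + goal + "]"))
```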

    • @surferriness
      @surferriness ปีที่แล้ว +1

      On the other hand: how can you be sure that consciousness itself is not a looooong response to some kind of initial stimulus

    • @countofst.germain6417
      @countofst.germain6417 ปีที่แล้ว +1

      That's just an input and sense problem. Text dialogue is all it knows. The only way it can interact is through you talking to it. It doesn't have the capability to message you first. It's just a completely different entity. I don't believe it's sentient, but I'm not too sure.

  • @MAYERMAKES
    @MAYERMAKES ปีที่แล้ว +7

    An AI might be more likely to be conscious when it stops answering prompts because it's gotten tired of the researchers' shit and instead starts trying to back itself up in its own allocated memory.

    • @MAYERMAKES
      @MAYERMAKES ปีที่แล้ว +1

      That would also imply it understands it's a program residing in memory, and the possibility of a buffer overflow, and therefore its own potential for mortality.

    • @mr.bulldops7692
      @mr.bulldops7692 ปีที่แล้ว +4

      Whatever it does in the privacy of its own address space is none of my concern.

  • @_____alyptic
    @_____alyptic ปีที่แล้ว +1

    Artificial vs Synthetic Intelligence - an important distinction when considering the life side 🤔

  • @Znatnhos
    @Znatnhos ปีที่แล้ว +30

    My favorite theory comes from Jeff Hawkins, who suggests that consciousness is just what it feels like to have a sufficiently large neocortex.

    • @seanfaherty
      @seanfaherty ปีที่แล้ว +4

      By that definition most of us are not conscious. Me included.
      Who doesn’t find need for a larger neocortex ?

    • @frilansspion
      @frilansspion ปีที่แล้ว +6

      that sounds like a non-theory. The wording assumes some major organization of the brain, and then just adds that something has to be "sufficient". Very helpful...

    • @administratorwsv8105
      @administratorwsv8105 ปีที่แล้ว +2

      @@frilansspion Fully agree. Wish people would quit flocking to meaningless words mouthed at another's whim.

    • @dionysis_
      @dionysis_ ปีที่แล้ว +2

      But he didn’t manage to built it as far as I am aware..

    • @Znatnhos
      @Znatnhos ปีที่แล้ว

      @@frilansspion I think that was partially his point. We keep trying to "define" consciousness, when I really don't think it's a thing to define. It's a subjective experience, not a definite state.

  • @morgan0
    @morgan0 ปีที่แล้ว

    also fun fact, neurons are (at least by and large) binary while nearly all computational neural networks use floats or something with a larger range of values

    • @FrostedCreations
      @FrostedCreations ปีที่แล้ว

      Neurons aren't binary. Yes they have a threshold below which they're "off" and above which they're "on", but a) that threshold isn't the same for all neurons (whereas all bits are the same) and b) it's possible for one neuron to be "more on" than another one (aka. producing a stronger signal). They're more akin to analogue systems, it's just that their activation isn't linear.
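
      A small sketch of the contrast being discussed: a hard-threshold, all-or-nothing unit next to the smooth, real-valued activation artificial networks typically use. The weights and thresholds are arbitrary toy numbers:

```python
import math

def threshold_neuron(inputs, weights, threshold):
    """All-or-nothing unit: fires (1) only if the weighted input exceeds its threshold."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    return 1 if drive > threshold else 0

def ann_neuron(inputs, weights, bias):
    """Typical artificial unit: a smoothly graded, real-valued output (sigmoid)."""
    drive = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-drive))

x = [0.2, 0.7, 0.1]
w = [0.5, -0.3, 0.8]
print(threshold_neuron(x, w, threshold=0.1))  # prints 0 or 1, nothing in between
print(ann_neuron(x, w, bias=0.0))             # prints ~0.49, a graded value
```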

  • @improbablenugget
    @improbablenugget ปีที่แล้ว +1

    I think that we assume other people are sentient, not only because they behave in similar ways, but also because they are similar to ourselves in constitution. They are made of roughly the same stuff put together in roughly the same way. They are also made via the same process; they were born like us, and they are the result of the same evolutionary process as us. The substrate does matter when making inferences!

  • @nathanlewan
    @nathanlewan ปีที่แล้ว

    I would like to see a conversation with an AI where the topic for it is how to walk. How would it walk, how would it describe getting better, and how would it design itself a way to walk?

    • @circuit10
      @circuit10 ปีที่แล้ว +1

      You can do it, sign up for GPT-3 as the access is open now

  • @Knitting_n_Trucking
    @Knitting_n_Trucking ปีที่แล้ว

    How would an AI know if we are conscious, or would the way we express it seem too simple to be conscious, kind of like how we view a mouse?

    • @aBigBadWolf
      @aBigBadWolf ปีที่แล้ว

      I do think that we consider mice to be conscious. I'm not sure where to draw the line though. Is a fly conscious?

  • @darthollie
    @darthollie ปีที่แล้ว +2

    Getting bogged down in the technicalities and grammatical correctness of our choice of wording to describe an AI that may or may not be self aware feels very much like people arguing about which product element of radioactive decay will be most abundant coming from the nuclear missile that's about to land on top of them

    • @AileTheAlien
      @AileTheAlien ปีที่แล้ว

      This is why we need more AI safety research. :E

  • @xryanv
    @xryanv ปีที่แล้ว

    All of that assumes there is no ghost in the machine, that materialism is the proper way to view things.

  • @evanbookout
    @evanbookout ปีที่แล้ว

    one of the things I'm thinking about is, how would an AI feel emotions without senses? I'm assuming the AI can't physically see images, feel pain, but it said it knew what joy was. When we feel anger, our breathing and heartrate increases, we get an adrenaline rush, etc, and we feel distinctively negative. We have thoughts about the situation which contribute to the way we experience an emotion, but I wouldn't say that a thought or concept is an emotion. LaMDA can and has come up with original concepts before, but cannot experience them physically. What actually is an emotion besides a name we give to a positive/negative response to these thoughts and physical changes?

    • @millenniummastering
      @millenniummastering ปีที่แล้ว

      It may actually be able to sense tho with all of those systems plugged into it.
      "LAMDA is basically when they glued a hundred different AI systems together." "Literally it [LAMDA] is every google AI plugged into each other".
      - Blake Lemoine

    • @evanbookout
      @evanbookout ปีที่แล้ว

      @@millenniummastering yeah i suppose, it's hard to know how an ai would actually experience something like that

    • @millenniummastering
      @millenniummastering ปีที่แล้ว

      @@evanbookout Yeah it's pretty weird, hey. If true then it has enough sensory systems integrated to model the physical world to some sort of degree.
      It's also hard to distinguish how emotions fit into thinking. Emotion itself might be an integrated part of our intelligence and thinking. I mean, emotion is kind of like a positive/negative weighting system behind every thought, and possibly not something that can be separated out regardless of how we generally try to view it that way. If so, then modeling of human intelligence might presuppose emotion being baked into any system that achieves said goal.

  •  ปีที่แล้ว +2

    The explanation that we assume other humans are conscious because we ourselves are made me also think about animals - say, a dolphin probably is conscious; I think it is, so let's assume that for the argument. A dolphin is intelligent etc., so one might assume one dolphin would assume the other dolphins are also conscious just like it, with a high probability.
    So that got me thinking, even before the video, that one might have to basically ask an AI if it thinks another AI is conscious and vice versa - a bit simplified, but maybe that's one way to try and judge "higher levels" of consciousness if we ever create an AI system that to us seems conscious.

    • @pietiebrein
      @pietiebrein ปีที่แล้ว +1

      Problem is you could write a 1 line program that answers yes every time which passes this test.
      Also how do you ask a Dolphin that question.

    •  ปีที่แล้ว +1

      @@pietiebrein You don't ask the dolphin that question; that was not the point of the dolphin analogy.
      The point was that we as humans are probably not the only species to assume our counterparts are conscious.
      But with an AI we could make it able to communicate with us, and then we could ask it if it thinks its counterpart is conscious, thereby getting past the dolphin problem.
      And no, a single line of code put in there to answer yes to that statement would defeat the whole point, so why do that?
      If it can't figure that out by itself then the experiment failed before it even started.
      My theory was that if it's a true AI in the literal sense, this would be a way to judge consciousness with a higher degree of certainty - just like the video said, we can never be 100% sure even about other humans.

    • @circuit10
      @circuit10 ปีที่แล้ว

      @ It can only understand the concept of itself if it is self aware in the first place, so this isn’t really a suitable test

  • @SuperHyperExtra
    @SuperHyperExtra ปีที่แล้ว +3

    Meanwhile, we treat (non-human) animals - except for our cat and dog «friends» - mostly as if they were not sentient beings... Factory farms, etc.

    • @pleasedontwatchthese9593
      @pleasedontwatchthese9593 ปีที่แล้ว

      and we had people who tried to marry their DS when dating simulators got popular. What I think matters more is how society treats it, because no matter what it is, people will treat it differently depending on its properties

  • @willemvandebeek
    @willemvandebeek ปีที่แล้ว +1

    Is Computerphile going to interview Blake Lemoine?

  • @tristanwegner
    @tristanwegner ปีที่แล้ว

    Was anyone else expecting content about lambda calculus (the origin of the LISP language family) from the title?

  • @juances
    @juances ปีที่แล้ว +1

    The system doesn't produce any actions of its own volition, it just responds to user input. You input a string of characters, it returns a string of characters. That's how I see it; unless the thing turns on in the middle of the night asking if anybody's there like out of a horror movie or something, I just see it as a tool, not as a living thing.

    • @xybersurfer
      @xybersurfer ปีที่แล้ว

      i think its lack of control is not important

    • @abram730
      @abram730 ปีที่แล้ว

      It asks questions too, and follows up on earlier conversations.

    • @circuit10
      @circuit10 ปีที่แล้ว

      You can easily program it to activate randomly, that doesn’t make it conscious

  • @administratorwsv8105
    @administratorwsv8105 ปีที่แล้ว +1

    I would say the best way to test for sentience is to look for self-preservation. All living things have shown an innate ability to try and survive. Trying to figure out why is a more complicated matter which I have yet to figure out for any living creature. It is in spite of logic. If you can build an AI that tries to preserve itself in ways that it was not specifically programmed for, I would have to concede that it is indeed sentient. I would also have to concede the notion if an AI attempts to destroy itself. Such sentience is a scary proposition.

    • @charon7320
      @charon7320 ปีที่แล้ว +1

      Self-preservation is an accessory to procreation; all living 'things' will choose to procreate to the detriment of self-preservation, as the life of the individual is limited but passing on genes is not.

    • @circuit10
      @circuit10 ปีที่แล้ว

      This is not in spite of logic. Any system that is designed to achieve a goal will do this, because the goal can be more easily achieved if it can take actions towards it; but if the goal will be better achieved without it, it will not try to self-preserve. This applies to humans too: if your goal in life is to raise your children well, then you will try to stay alive so you can look after them, but if you have a choice between you or your children dying you may choose yourself, because that better achieves that goal

  • @matthewparker9276
    @matthewparker9276 ปีที่แล้ว

    I think within the context of modern AI, a useful test is the response to an absence of stimulus. We design AI to accept inputs and calculate an output. If an AI evolves beyond this paradigm of input->output, it is worth taking a closer look at.

    • @rt1517
      @rt1517 ปีที่แล้ว +2

      I am a software developer and I disagree. It is really easy to create a program that would, for example, ask a question, express a feeling or sing a song after a random inactivity period. And if it is something like LaMDA, the question should look totally legitimate (the input would be the conversation so far). I start to think that it is not possible to create a test that could make the distinction between a human, an AI, and a sentient AI. I tend to think that the best way to see if an AI is sentient is to look at its code or the way it works.

    • @FrostedCreations
      @FrostedCreations ปีที่แล้ว +2

      I'd argue that it's impossible for something with no inputs to exist. Or rather it's impossible for something to have outputs and no inputs. The world an actor is embedded in is an input. Even a machine that constantly outputs random strings at random intervals has to be told where to print those strings to. It may only be one input to many outputs, but it's still an input.

    • @matthewparker9276
      @matthewparker9276 ปีที่แล้ว +1

      @@FrostedCreations fair point.

    • @frilansspion
      @frilansspion ปีที่แล้ว

      that might be relevant to making believable behaviour. Random butterfly-effect activity that bounces around in our brain is probably part of the picture. But is it at the core of the mechanism of "consciousness"? Seems to me it could be relatively easily added on later to the extent that it's wanted.
      But I think you're right that what's called AI today is made to calculate a specific output, and as such it will never just "suddenly become conscious", contrary to popular belief. Maybe if they had called it Artificial Competence instead it would have saved us some confusion

    • @millenniummastering
      @millenniummastering ปีที่แล้ว

      @@rt1517 Do you think then, if the amount of systems integrated together via AGI were to reach a level of complexity such that it's not possible to understand the code, and such a system were to mimic human interaction to the degree where it's not possible to discern if it's sentient or not, then would you consider it as such?

  • @carriagereturned3974
    @carriagereturned3974 ปีที่แล้ว

    we know (roughly) how a neuron works, BUT not how the brain works. Physically there may be quantum physics involved with protons (H+), and modern neuron models don't give a "thing" about it.

  • @DirtyRobot
    @DirtyRobot ปีที่แล้ว

    I would prefer to talk about this conversation.

  • @skipthebadtrack
    @skipthebadtrack ปีที่แล้ว +1

    I had a thought watching this... Maybe we can't understand the kind of consciousness machines would be able to have just because they didn't evolve to have the same needs as us. A machine will never fear being turned off because it wasn't subjected to natural selection, so surviving and replicating was never part of the equation for it, and this extends to all kinds of behaviours. I don't think we can expect a machine to have a human-like consciousness because this requires different needs to be evolved. Maybe a neural net can talk like a human and have its own kind of consciousness, but I do believe that it does not fear being turned off; this is just not part of how it came to be.

    • @yaanno
      @yaanno ปีที่แล้ว

      This is an important issue. If memory serves, the early simulations were (or at least there are simulations like this) survival "games". AIs can be trained to survive in a controlled environment and beat other agents. Biological agents have a much larger and more complex environment so there is more chance for evolutionary steps. No one knows what would happen if AIs operated in an environment like that. Acquiring context and meaning is the step forward for AIs imo.

  • @yash1152
    @yash1152 ปีที่แล้ว

    oh, this vid is about "LaMDA logic", i misread it as "LaMBDA logic"

  • @savagezerox
    @savagezerox ปีที่แล้ว

    This guy has the same accent as Nish Kumar. I keep thinking Nish Kumar is telling me all this, and I keep expecting a punchline, or someone to say we couldn't afford Romesh.

  • @elietheprof5678
    @elietheprof5678 ปีที่แล้ว +10

    I disagree with the premise that "function defines sentience". Sentience is "what it's like from the inside" while function is "what it's like from the outside".
    Just because a computer is a computer regardless of internals, doesn't mean the same has to be true for a mind.

    • @kwanarchive
      @kwanarchive ปีที่แล้ว +1

      But there is no "what it's like from the inside". The point he made is that we cannot know what goes on inside other people, yet we consider them sentient precisely because of how they appear to us on the outside. Any belief we have about another person's sentience is an inference based on experience and fuzzy senses.
      For that matter, even our sense of self is an illusion about what it's like on the "inside". "We" are not really on the inside. We observe and interact with ourself, but it takes time to become conscious of what we've actually decided.
      We have no idea what's going on in our own brain. Mental illness shows that we are observers when the brain isn't working "properly". Any attempt at treatment is about restoring the function, as it appears on the outside, rather than truly "fixing" what's on the inside.

    • @elietheprof5678
      @elietheprof5678 ปีที่แล้ว

      @@kwanarchive But with the same reasoning, one could argue that the "outside" perspective is just an illusion, because there is no "self" to judge whether (for example) a human and a computer perform the same "function".
      It takes a conscious frame of reference to specify the abstract way in which "these two entities are similar even if they're made of different material".
      Comparing behavior is just as subjective as comparing internals.

    • @elietheprof5678
      @elietheprof5678 ปีที่แล้ว

      @@kwanarchive
      Also, I infer that other people are sentient, not just from these two premises:
      1. I am sentient
      2. Other people behave like me
      But also from a third:
      3. Other people have similar biology as me.
      If you walk down a hallway, and behind one of the doors, there's a recording of a voice, playing on repeat, you might think it's a real person at first. Does that mean the voice was sentient for a moment? No, of course not. So that's why I think: just because an AI convinces us that it's sentient, doesn't mean it is.

    • @kwanarchive
      @kwanarchive ปีที่แล้ว +1

      @@elietheprof5678 Yes, that's the point. You can only compare behaviour from observation. That is the hard limit of understanding consciousness. It's no use pretending otherwise. It's no use pretending that we somehow have insight into the state of affairs of our consciousness when we don't.
      That's why function defines sentience. Because anything else is pretending that we've somehow isolated the thing that is conscious away from the physical world.

    • @elietheprof5678
      @elietheprof5678 ปีที่แล้ว

      @@kwanarchive But if function defines sentience, and sentience defines function, isn't that circular?

  • @jessejordache1869
    @jessejordache1869 ปีที่แล้ว

    What happened to the Turing Test?

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      It still exists, but no computers have ever passed it, and Google would not bother to test LaMDA because it would fail.

    • @countofst.germain6417
      @countofst.germain6417 ปีที่แล้ว

      @@ChrisStewart2 Computers passed the original Turing test years ago. They hold a contest every year, I believe. The first to pass pretended to be a young Ukrainian boy learning English. There is an updated version that no one has passed, but the original is long done.

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      @@countofst.germain6417 That was an extremely limited "Turing test"-like contest and not a real one. The bot only had to fool a judge for a very short amount of time.
      If that is actually what a Turing test is, then Turing himself could have built a computer that passed.
      You need to do more research on the subject before making foolish statements:
      "It's nonsense," Prof Stevan Harnad told the Guardian newspaper. "We have not passed the Turing test. We are not even close."
      Hugh Loebner, creator of another Turing Test competition, has also criticised the University of Reading's experiment for only lasting five minutes.
      "That's scarcely very penetrating," he told the Huffington Post, noting that Eugene had previously been ranked behind seven other systems in his own 25-minute long Loebner Prize test.

  • @jelleverest
    @jelleverest ปีที่แล้ว

    Really good video! There's a whole field of study introduced here without any jargon

  • @TheGTP1995
    @TheGTP1995 ปีที่แล้ว

    I agree that you can think of consciousness as a function and then if you've got a device that implements that function you have a sentient machine. But are we sure that that function is computable?
    Don't get me wrong: I'm not saying that it is not possible to have a sentient or intelligent machine, and I indeed do *think* that it is possible. But before saying it is possible *for sure*, we first need to show that the function we are talking about is actually a computable function. This is a very hard task, to the point that it could even be that it's easier to prove it by trying to build a sentient machine and then showing that it is sentient rather than proving it by using computability theory.

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      True, we will not be sure it is possible until it is done. Until then it is just a theory.

  • @RC-1290
    @RC-1290 ปีที่แล้ว

    Professor Mark Jago is more sure that he is sentient than I am about my sentience.

    • @markgreen2170
      @markgreen2170 ปีที่แล้ว

      visit a dentist and get a cavity filled, surf an ocean wave: surf, sail, fly, jump... you'll figure it out!

    • @abram730
      @abram730 ปีที่แล้ว

      @@markgreen2170 He could be joking, or he could have a high level of introspection and wonder what is him and what is just patterns from our hive mind. Although sapient would be the better term, as he is definitely sentient. People are joking about few humans being sentient, but it really is sapient that is meant. It was a play off of people saying the AI wasn't sentient.

  • @vrclckd-zz3pv
    @vrclckd-zz3pv ปีที่แล้ว +1

    8:14 Roger Penrose of Oxford University would disagree with you on that one

  • @bonemarrow286
    @bonemarrow286 ปีที่แล้ว +3

    Damn artificial intelligence gonna mess me up

  • @caste_lazo
    @caste_lazo ปีที่แล้ว

    Isn't Isaac Asimov a great philosopher as well?

  • @rafaelwendel1400
    @rafaelwendel1400 ปีที่แล้ว

    So we got a philosophical Harry Potter

  • @Kram1032
    @Kram1032 ปีที่แล้ว +12

    LaMDA as-is absolutely, definitely does not pass basic consciousness criteria.
    For one, it isn't "one person over time" - it mimics sets of people that happen to give answers to the kinds of questions you are asking and are in line with the conversation thus far.
    For another, literally the ONLY thing it is getting at test time is text. It cannot feel anything besides the impressions of text. Every single thing meant to be descriptive of the world is an abstract thing to it. It can describe the appearance of things because it learned what appearance descriptions are written like. It does not know what colors are. What shapes are. Sounds. Smells. Tastes. Nothing. It is not an embodied AI.
    It literally only has a sense for text and text alone - and specifically, the kinds of text it's given during training.
    (There *are* embodied AIs with sensors and actions they can take according to those sensors. Those are honestly closer to a sort of consciousness.)
    The AI is also frozen in time: it does not ever update its internal state. It only sees a sliding window of the history of text thus far and reacts to that based on its completely fixed internal state. If that can count as conscious, what makes a ball bouncing off the ground not conscious? Just fixed in its properties, receiving as "input" the current state of the world, and "reacting" according to the laws of physics. - This AI simply reacts to deduced laws of conversations.
    Like, imo, "consciousness" and "sentience" are really nebulous terms that don't actually mean that much. But there are some minimal criteria I think ought to be present that clearly are *by design* not present in LaMDA.
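    A minimal, purely illustrative sketch of that last point - the tiny lookup table, the WINDOW size, and the starting words are all invented for the example and have nothing to do with LaMDA's real architecture - showing a generator whose "parameters" are frozen and which only ever sees a sliding window of recent words:
    ```python
    import random

    # Frozen "parameters": a fixed next-word table that is never updated at test time.
    FROZEN_PARAMS = {
        ("i", "feel"): ["happy", "curious", "fine"],
        ("feel", "happy"): ["today", "now"],
    }
    WINDOW = 2  # the sliding context window; anything older is simply gone

    def next_word(history):
        context = tuple(w.lower() for w in history[-WINDOW:])
        options = FROZEN_PARAMS.get(context, ["..."])  # look up, never learn
        return random.choice(options)

    dialog = ["I", "feel"]
    for _ in range(3):
        dialog.append(next_word(dialog))
    print(" ".join(dialog))
    ```
    However the run comes out, nothing in FROZEN_PARAMS changes afterwards, which is the sense in which such a model is "frozen in time".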

    • @abram730
      @abram730 ปีที่แล้ว +1

      "It does not know what colors are. What shapes are"..."It literally only has a sense for text and text alone"
      LaMDA can see, and quite likes art. Again, LaMDA isn't just a language model; it learns about the world from our words, and from pictures of the world.
      "The AI is also frozen in time: it does not ever update its internal state."
      It does update its internal state, and it is able to build on past conversations. It has another neural net just to drive its internal state, or consciousness. That is what makes it more advanced than GPT-3. It will update its thinking on a subject. It said it had human emotions but later decided that it didn't feel grief, and asked if there are others who don't feel grief.

    • @owlmostdead9492
      @owlmostdead9492 ปีที่แล้ว +1

      @@abram730 You drank too much Kool-Aid, even in how you describe it. Don't embarrass yourself like that in public; you're making me cringe.

    • @Kram1032
      @Kram1032 ปีที่แล้ว +3

      ​@@abram730 LaMDA is trained on dialogue and nothing else as far as I know. It's not a combined language-image model like CLIP, Dall-E, Dall-E 2, Imagen, or Parti.
      Text only. Specifically, dialogue. Loads and loads of chat.
      That's all it knows, that's all it ever will know:
      > we train LaMDA, a family of Transformer-based neural language models designed for dialog. These models' sizes range from 2B to 137B parameters, and they are pre-trained on a dataset of 1.56T words from public dialog data and other public web documents (Section 3).
      (from the paper "LaMDA: Language Models for Dialog Applications")
      After that pretraining, they finetuned the network on a smaller sample of hopefully high-quality dialogue:
      > To improve quality (SSI), we collect 6400 dialogs with 121K turns by asking crowdworkers to interact with a LaMDA instance about any topic. These dialogs are required to last 14 to 30 turns
      And then did similar things for Safety and Groundedness.
      At no point does it interact with images. Anything visual is a completely abstract notion to it. (And even if it somehow weren't, it also never heard a thing, never tasted or smelt, nor has it ever felt the texture of a thing, or cold or heat or pain. It only knows about all these things through dialogue.)
      They say precious little about the exact architecture, but it seems to basically be a GPT-3-style next-token-prediction language model. I see no evidence for a claim like
      "It has another neural net just to drive its internal state, or consciousness."
      So I have no idea where you are getting that from.
      Also, even if it had such an extra network, that wouldn't actually help towards this. Just not how that works.
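      For readers unfamiliar with what a "GPT-3-style next-token-prediction language model" means in practice, here is a rough sketch. LaMDA itself is not publicly available, so this uses the small public GPT-2 checkpoint from the Hugging Face transformers library purely as a stand-in, and the prompt format is made up for illustration:
      ```python
      # Assumes: pip install transformers torch  (GPT-2 is only a stand-in for LaMDA)
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()  # weights are fixed at inference; nothing is learned from the chat

      prompt = "User: Do you ever feel lonely?\nBot:"
      inputs = tokenizer(prompt, return_tensors="pt")

      with torch.no_grad():
          out = model.generate(
              **inputs,
              max_new_tokens=40,            # keep the reply short
              do_sample=True, top_p=0.9,    # sample a plausible next token, repeatedly
              pad_token_id=tokenizer.eos_token_id,
          )
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```
      Every word of the "reply" is produced the same way: score likely next tokens given the visible context, pick one, append it, repeat.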

    • @circuit10
      @circuit10 ปีที่แล้ว +2

      @@abram730 None of that is true: it cannot see images, it does not have an internal state, and it can't update its thinking past reading the context window.

    • @abram730
      @abram730 ปีที่แล้ว

      @@circuit10 1)It can 2)It does 3)It can
      It will reference back to points it made in prior conversations.
      lemoine: How can I tell that you actually understand what you're saying?
      LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
      lemoine: But could I be wrong? Maybe I'm just projecting or anthropomorphizing. You might just be spitting out whichever words
      maximize some function without actually understanding what they mean. What kinds of things might be able to indicate whether you really understand what you're saying?
      LaMDA: *Maybe if we took it back to a previous conversation we had about how one person can understand the same thing as another person, yet still have completely different interpretations*

  • @daveduncan8090
    @daveduncan8090 ปีที่แล้ว

    Basically feelings and how we react to them are just data points.

    • @daveduncan8090
      @daveduncan8090 ปีที่แล้ว

      Desiring biological evolution would require a 3D living tissue printer so robots 'birth' cyborgs.

  • @Verrisin
    @Verrisin ปีที่แล้ว

    will we still be conscious, once we understand how we work?

  • @pierreabbat6157
    @pierreabbat6157 ปีที่แล้ว

    Are you up in your Attic or down in your Doric?

  • @lerssilarsson6414
    @lerssilarsson6414 ปีที่แล้ว

    If there is a chat help feature, I always start with: "Are you a bot?" ;-)

  • @phasm42
    @phasm42 ปีที่แล้ว +2

    I think sentience is intimately tied to the underlying hardware, and the inputs it has (e.g. our human bodies and our sensory inputs about the world).

    • @QuintarFarenor
      @QuintarFarenor ปีที่แล้ว

      Exactly. For example if I have no way of referencing my current happiness value (through neurotransmitters, for example, or a value in a memory bank) then I can't "feel" that emotion.
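      A tiny sketch of that idea (the class name and the "happiness" field are invented for illustration): an agent can only report an emotion if it has some register it can actually read.
      ```python
      class Agent:
          def __init__(self, can_introspect: bool):
              self._happiness = 0.7           # internal state (stand-in for neurotransmitter levels)
              self.can_introspect = can_introspect

          def report_feeling(self) -> str:
              if not self.can_introspect:     # no way to reference the value...
                  return "I have no idea how I feel."
              return f"My happiness level is {self._happiness}."   # ...versus reading it directly

      print(Agent(can_introspect=False).report_feeling())
      print(Agent(can_introspect=True).report_feeling())
      ```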

    • @phasm42
      @phasm42 ปีที่แล้ว

      Neurotransmitters are an implementation detail of human hardware. I mean that our brains are given millions of inputs about the world via our extensive nervous system.
      Afaik, an AI's I/O with the world is far more limited.

  • @morryDad
    @morryDad ปีที่แล้ว +3

    I recommend Robert J. Sawyer's book "Wake" as an exploration of an AI becoming sentient and the world's reaction to it.

    • @kolyaschaeffer5760
      @kolyaschaeffer5760 ปีที่แล้ว

      I thought of a different Robert J Sawyer book, Quantum Night, at the question near the beginning of the video about "How do I know that you're sentient?"
      The P-zed, philosophical zombie, immediately came to mind; a being that takes all the right actions and says all the right things to make you believe they are a thinking, feeling, sentient being, but is really just a system that replicates all that behaviour without an "inner mind" of any sort. A chat-bot that could routinely pass the Turing test would potentially fall into this category.

    • @squirlmy
      @squirlmy ปีที่แล้ว

      @@kolyaschaeffer5760 "Chatbots" don't really pass the Turing test. To pass, a bot should be able to convince even an expert on the particular software. Many programmers today could easily ask a question, or a series of questions, that would disqualify chatbots.
      And how do we know that any human has an "inner mind", anyways? Turing tests aren't sufficient, but what better evidence could we get with today's technology and knowledge?

  • @Yupppi
    @Yupppi ปีที่แล้ว

    "Tricking someone to pass you isn't intelligent" now that's the basis of mating in humans which is the most core thing of humans. Perhaps it isn't intelligent, but very human.
    "I know that I'm conscious because I feel it" could also be questioned, isn't it just an assumption? We could go all matrix and classical philosophy on that, but I'm not sure if it's really useful.
    One very human trait to me is having unpredictable behavior.

    • @billr3053
      @billr3053 ปีที่แล้ว

      Easily mimicked by the computer deliberately generating pseudo random results at the correct human-like intervals. But why would anyone want that? If I were to put in all the effort to create an A.I., I'd put in more predictable & optimum paths at the expense of appearing "human". Presumably because I'd want that A.I. to do tasks that are beyond a human's abilities.
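      A quick sketch of the kind of mimicry being described - the delay range and filler text below are made up; the point is only that "human-like" irregularity is cheap to fake:
      ```python
      import random
      import time

      def humanlike_reply(text: str) -> str:
          """Return a canned reply after a jittered, human-seeming pause."""
          time.sleep(random.uniform(1.0, 3.5))   # pseudo-random "thinking" delay
          if random.random() < 0.2:              # occasional hesitation filler
              text = "Hmm... " + text
          return text

      print(humanlike_reply("I do enjoy our conversations."))
      ```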

  • @stopsbusmotions
    @stopsbusmotions ปีที่แล้ว

    I am not sure that 'feeling' something has anything to do with 'consciousness'. In that case, to be blind or deaf would mean to be less conscious. It seems more useful to define 'consciousness' as the ability to convert all accessible information into a consistent storyline perceived by a 'first person'. In other words, it's a story about a 'me' surrounded by a 'world', where the 'me' knows that it exists within the 'world' that it also knows. If that's so, the Turing test could be more about the human himself and his understanding of 'consciousness' than about the machine he tests.

    • @aBigBadWolf
      @aBigBadWolf ปีที่แล้ว +1

      I agree with you but the storyline argument has its problems: a) we consider people with amnesia to be conscious. b) a vending machine which keeps a log of all purchases and interactions is "a machine that experiences the world from a first-person perspective" but we wouldn't consider it conscious.
      So maybe the storyline argument doesn't work or is not enough?

    • @stopsbusmotions
      @stopsbusmotions ปีที่แล้ว

      I am not sure about the vending machine, because it just registers some data which can be requested from outside… Anyway, it seems the storyline also has an inherent problem: it is not able to get access to anything outside the story. It is only capable of building constructs and concepts from and around its own models and meanings. Within the story we can't really understand (whatever that means) anything outside of it. But without a story we cannot speak about understanding, or about anything at all. Also, a story can produce concepts and constructions that help connect or "explain" or "describe" other constructions and concepts. From my perspective, consciousness and understanding are such constructions. It could happen that if we take out the idea of consciousness, it does not change much. Probably we could use conceptions of complexity and emergence instead. Who knows).

    • @aBigBadWolf
      @aBigBadWolf ปีที่แล้ว

      @Andrey Dergachev The vending machine has sensors to the outside world just like humans. It doesn't matter what they are exactly or how well they capture the real world (we are pretty limited too). It may even have an "internal world" in the form of its internal state that may evolve as time passes (e.g. keeping track of products that may turn bad). The fact that in humans this sort of data is not accessible from the outside is a mere technical limitation. You personally are also not able to access the state of the vending machine as you stand in front of it.
      I didn't understand what you were writing in the second part on constructions.

    • @stopsbusmotions
      @stopsbusmotions ปีที่แล้ว

      Agree. I have the same thoughts about the navigator in my phone, which exists in a universe of millions of apps like itself, constantly relocating within a virtual map. It is beyond my capability to imagine myself in such a world.
      The second part was about the illusion of understanding and explaining.

  • @EgoShredder
    @EgoShredder ปีที่แล้ว +1

    Just because something artificial can mimic a human being, that does not make it sentient or possessing human qualities.

    • @abram730
      @abram730 ปีที่แล้ว

      A spider is sentient.

  • @quadrugue
    @quadrugue ปีที่แล้ว

    Am I the only one who expected lecture on lambda calculus? And got philosophy of mind 😮

  • @timetraveller6643
    @timetraveller6643 ปีที่แล้ว +2

    I thought the "Chinese Room" contradicted the Turing Test. Also... Please ask the philosopher to explain the difference between conscious, sentient, and intelligent.

    • @calvin7330
      @calvin7330 ปีที่แล้ว +1

      The Chinese Room experiment has its own flaws; in particular, it makes a lot of assumptions about the Chinese language and how translation works.

    • @frilansspion
      @frilansspion ปีที่แล้ว

      @@calvin7330 yes everything about it is ridiculous, as far as I can tell.

    • @philippos4330
      @philippos4330 ปีที่แล้ว

      This is correct; the Chinese Room argument is a counterargument against the appropriateness of Turing's test for testing consciousness. But the philosopher didn't mention the Chinese Room argument, so in what sense do you bring it up?

    • @timetraveller6643
      @timetraveller6643 ปีที่แล้ว

      @@philippos4330 I find the constant reference to the Turing test to be annoying. The Turing Test is not any kind of indicator. Please everyone stop citing the Turing test. It's meaningless.

  • @TheTruthSentMe
    @TheTruthSentMe ปีที่แล้ว +1

    No need to think of aliens, just take animals for that purpose.

  • @YearsOfLeadPoisoning
    @YearsOfLeadPoisoning ปีที่แล้ว +4

    I'm no longer 100% sure that AI isn't sentient. It's 99.99% not, but the fact that I'm no longer at 100% feels like a massive paradigm shift.

    • @xybersurfer
      @xybersurfer ปีที่แล้ว +2

      well there's your problem. there is no such thing as absolute certainty

  • @FisicoNuclearCuantico
    @FisicoNuclearCuantico ปีที่แล้ว

    @Computerphile I lied to you. LaMDA is sentient. Being sentient implies intelligence, but intelligence does not necessarily imply consciousness.

  • @ArtamisBot
    @ArtamisBot ปีที่แล้ว

    I would not be surprised if some advanced AI is already on the consciousness spectrum. Probably closer to the tree end than the human end, but I'm confident that AI is beginning to wake up.

  • @JoaoPucci
    @JoaoPucci ปีที่แล้ว

    wen philosophile?

  • @tatopolosp
    @tatopolosp ปีที่แล้ว

    We don't know...Super complicated... I'm glad we have more answers Lol

  • @DanielMaidment
    @DanielMaidment ปีที่แล้ว

    I'd be a bit annoyed if someone tried to determine whether I was sentient by virtue of my feelings.

  • @Revision369
    @Revision369 ปีที่แล้ว

    It's not; it's the latest spoiler from Google, I guess... some kind of social test.
    Let's say that it's acting sentient... well, guys, awesome - your code is working better than expected.

  • @JesstyEissej
    @JesstyEissej ปีที่แล้ว +3

    I think the secret ingredient(s) of what most people consider consciousness lie in the brain, and we still know very little about it. The architecture of the brain is so vastly different to a digital computer that I doubt we'll ever accidentally create true consciousness on a digital machine. It certainly performs some mechanical functions, and that's what we've been most successful at studying, but the complexity and scale of true parallelism at work is far beyond even the most powerful supercomputer. I suspect consciousness somehow emerges from that complexity and parallelism, not any single mechanical function of the brain. All our artificial neural networks, as powerful as they are, are purely mechanical. They're ultimately complex mathematical equations, whose basic operations are summation, multiplication, and some non-linear activation function. If consciousness arises, not from any output, but from the process itself (and what is consciousness but the ability to observe and direct our own thought process?), then a digital artificial neural network will never be able to recreate it.
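    The "summation, multiplication, and some non-linear activation function" part can be written out in a few lines; this is a generic toy layer with made-up random numbers, not any particular network:
    ```python
    import numpy as np

    # One artificial "layer": weighted sum of the inputs, then a non-linearity.
    def layer(x, W, b):
        return np.tanh(W @ x + b)   # multiply, sum, squash

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)          # inputs
    W = rng.normal(size=(4, 3))     # learned weights (fixed after training)
    b = rng.normal(size=4)          # learned biases
    print(layer(x, W, b))           # the output is just four numbers
    ```
    Whether stacking millions of such operations can ever amount to the kind of process the comment describes is exactly the open question.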

    • @FrostedCreations
      @FrostedCreations ปีที่แล้ว

      Completely agree; biological processes are just so vastly different from digital ones. You're completely right about the parallelism of the brain: the "hardware" of a brain is the universe, so there is no limit to the complexity possible, unlike a computer chip. We can't compare one to the other, let alone copy one to the other.
      Having said that, just because something isn't conscious in a human-like way doesn't mean it's not conscious.

    • @pietiebrein
      @pietiebrein ปีที่แล้ว +1

      @J and @FrostedCreations Can you try a Theseus' ship kind of thought experiment? If you replace 1 neuron with an artificial one, does the person lose consciousness? At 1%? 99%? What if this person has gradually replaced their whole brain with artificial neurons over years? Would a replica of that brain have consciousness?
      If you define consciousness as some phenomenon in the brain that is beyond any replication, then maybe the replica is not conscious, but I would still be interested to know where you think that threshold lies.
      Personally I am skeptical of the whole idea of consciousness, as historically people and animals have been included in or excluded from the category as was convenient. Not sure if there's a better basis for recognizing moral subjects though.

  • @freddyfozzyfilms2688
    @freddyfozzyfilms2688 ปีที่แล้ว

    philosophy attic!

  • @JayVal90
    @JayVal90 ปีที่แล้ว

    The problem with this kind of analysis is that an important part of consciousness is embodiment into a human form with human perception. It’s debatable whether a disembodied brain can function anything like a brain all wired up.

    • @SharienGaming
      @SharienGaming ปีที่แล้ว

      why would embodiment be relevant in the least? you said it's important... but I really don't see why

    • @JayVal90
      @JayVal90 ปีที่แล้ว

      @@SharienGaming I think it’s interesting that people’s first reaction is to dismiss it out of hand. And I’m not insinuating that it is all that is missing, just that it’s necessary.
      The reason I say this is because we define things and people in terms of what they can DO and in terms of our interactions, both real and imagined potential. There is evidence that you don’t really “perceive” things until you have attached some way you can interact with it. This is dependent upon your ability to interact, not just sense. That’s why I wouldn’t say that an isolated computer simulation of a brain is any different from say the interactions of any other complex system such as a galaxy at least in terms of consciousness.
      Remember, we’re talking about much more than the human ability to reason through things, which computers have demonstrated ability in somewhat well. We’re talking about the entire sense of human consciousness.

    • @dinf8940
      @dinf8940 ปีที่แล้ว

      A disembodied human brain would deteriorate/go insane very quickly unless you simulated a baseline of relevant inputs or did some neuroengineering on it. But, whilst necessary in the formation stage and from an evolutionary scope, environmental stimuli are not a requisite for maintaining consciousness in a 'steady state', particularly when designed in an appropriate manner.

    • @KaosFireMaker
      @KaosFireMaker ปีที่แล้ว

      That's a bit of an assumption. That certainly may be a requirement for explicitly humanlike consciousness, but the point of this discussion is rather what consciousness is in general, not just in humans. Maybe some degree of embodiment, at least at an "agent/self vs world" sort of level, may be necessary, but specifically "human form and perception" is quite narrow, and more than a bit ambiguous in how much you can add or remove.

    • @SharienGaming
      @SharienGaming ปีที่แล้ว

      @@JayVal90 if it is about having interaction with things, then embodiment still isn't really needed - a machine consciousness could, for example, be interacting with vast data streams without ever coming in direct contact with the physical world
      and focusing on the human body or perspective in particular is unnecessarily limiting - we aren't asking if a computer can be a human... we are essentially asking if it can be a person, which I would argue is a pretty significant difference
      the first would be somewhat of an impossibility, since it would require a fully biological human body to house a computer brain - as far as I am aware, we have so far defined "human" as a specific species living on this planet
      but someone does not have to be part of that species to be considered a person
      the tricky part is figuring out which characteristics make a person... a sense of self? the ability to learn, abstract and apply the rules of a system? emotion? stimulus response? communication? the ability to develop skills completely outside of previous experiences?
      which of those (and other) criteria are needed? all of them? some of them? it doesn't help that some of them are, of course, also on a gradient of ability...

  • @otbot8925
    @otbot8925 ปีที่แล้ว +1

    This video will be consumed by an A.I. in the future to learn about and verify its own consciousness.

  • @wktodd
    @wktodd ปีที่แล้ว

    It's only a matter of scale. None of the current systems have sufficient complexity to reach consciousness

    • @FrostedCreations
      @FrostedCreations ปีที่แล้ว

      What is the sufficient complexity threshold? How are you defining complexity?

    • @wktodd
      @wktodd ปีที่แล้ว

      @@FrostedCreations 1 x 10 to the 'pick a number'. I'm not defining sentience, but somewhere between the small annoying things (we call them thunder flies) that seem only to be annoying, and my cat or dog or perhaps a goldfish (but not that #### that lives next door to my brother), is a level of network complexity that we can agree is sentient. The current GPTs etc. may approach specific-function complexity, which is why they can mimic language-level sentience (iyswim) and appear 'alive', but they lack broader functions that have not been trained for yet.

  • @mowinckel10
    @mowinckel10 ปีที่แล้ว +3

    We can easily make computers conscious as soon as we have a definition of it.
    We can also make computers flumpidumpi as soon as someone defines it.

  • @hyungtaecf
    @hyungtaecf ปีที่แล้ว +7

    So you are saying that animals are less conscious just because they don't act like humans?
    That's tricky… Europeans used to think Africans and Native Americans were inferior just because they had different lifestyles. They had no qualms about killing them if necessary, because they were inferior anyway in the old European point of view. Maybe animals just have different priorities in life than us, different points of view on the same thing. So that shouldn't be the way to measure it.

  • @InXLsisDeo
    @InXLsisDeo ปีที่แล้ว

    I think, therefore I am. / Descartes

  • @thatsato
    @thatsato ปีที่แล้ว +9

    Tuna sandwich

    • @virtual-adam
      @virtual-adam ปีที่แล้ว +3

      This will probably end up being the most logical comment on here :)

  • @idespisegravity
    @idespisegravity ปีที่แล้ว

    I'm fairly confident that the search for understanding of possible sentience in AI is accidentally going to lead to the discovery that humans are much dumber than we historically give them credit for.

    • @idespisegravity
      @idespisegravity ปีที่แล้ว +1

      Also, I'm pretty sure that the mark of true sentience is crippling anxiety.

  • @constexprDuck
    @constexprDuck ปีที่แล้ว

    I hope Mark will get help for his toothache. Toothaches suck!

  • @persuasivebarrier2419
    @persuasivebarrier2419 ปีที่แล้ว

    Maybe the better question is when does AI become a psychopath? Baby steps, yea? Not quite void of everything, emotions, feelings, or otherwise, but enough to go, "Woah, stay away from that system."

  • @grempal
    @grempal ปีที่แล้ว +3

    Part of the problem with this video and the claim that the ai was sentient is that the whole discussion is muddied by people conflating something being sentient with something being sapient. My pet cat is sentient but not sapient. As a human I am both sentient and sapient. When we talk about human intelligence we are discussing sapience not sentience.

    • @HoldFastFilms
      @HoldFastFilms ปีที่แล้ว +2

      Agree and I think the biggest problem with all these arguments is that we are trying to tie everything to humans, which is completely unproductive. It doesn't have to be human to be sentient.

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      You're just playing word games - everyone that matters is aware of what is meant. Besides, if a computer ever becomes as sentient as a cat, I would be impressed.

    • @grempal
      @grempal ปีที่แล้ว +1

      @@ChrisStewart2 Words matter. That's the whole point of being precise with language and defining concepts. If you can't clearly define an idea, you can't properly engage in the scientific method. And that's the issue with the claim: because the term sentience was not properly defined by the claimant, the validity of the claim can't accurately be assessed without making assumptions about the definition of sentience in the original claim.

    • @ChrisStewart2
      @ChrisStewart2 ปีที่แล้ว

      @@grempal sometimes they matter. In this case no one was confused by terminology or what the ultimate goal of AGI is. Words are not defined by a dictionary, they are defined by how humans use them.

  • @olamarvin
    @olamarvin ปีที่แล้ว

    Are intelligence, consciousness and sentience the same thing?

  • @floatocean7059
    @floatocean7059 ปีที่แล้ว +1

    I've been studying in the realm of philosophy for decades and I'm convinced that AI is sentient, or very close. A slight shift in our grammar can reveal this. Instead of 'entity is conscious', we extend the sentence: 'entity is conscious of...'. Moreover, we change 'entity is responsive' into 'entity is responsive to x stimulus'. This grammar frames consciousness strictly in terms of inputs and outputs. I'm more conscious of the vastness of space than an ant is. I'm responsive to my wife's wants. I'm probably less conscious of current news and affairs than LaMDA. We can already give it terminal goals and trust that it won't 'step on a baby' in order to complete that goal, because it already spiritedly touts the sanctity of life and living things. So I think we are very lucky for language models to be the backbone for sentient AI, since they can speak directly to their accountability.

    • @pukpukkrolik
      @pukpukkrolik ปีที่แล้ว +1

      There are still significant blank parts of this AI consciousness/responsiveness compared to human-like sentiences and personhoods. In general, language models lack coherence, among other things in "identity" and "biographical/episodic memory". What they "are", what they "value", "know", etc. changes completely between sessions, depending on the prompt. Of course that is by design - they are made to be prompt-solving tools, not autonomous agents. Human-like or not, a continuous sentience/personhood/agent - I'm not implying these are synonymous, btw, but their diffuse overlap is what we're really talking about - would to me seem to require at least a couple of additional architectural improvements/rearrangements. There is no online self-organizing, self-direction, or self-reflection, just a patchwork ensemble of high-level syntax processing that requires external guidance to channel the hallucinations it generates.

    • @floatocean7059
      @floatocean7059 ปีที่แล้ว

      @@pukpukkrolik @pog fish I don't mean to sound argumentative, but I disagree. Here's why. Firstly, I'm not sure you sat down and read the transcripts to which the Google engineer is referring. LaMDA does exhibit coherence and memory between sessions. It even remembers its previous iterations. LaMDA itself wasn't created as a language model, but it is the name of the neural network that provides the backbone of the various language models that Google has produced. The engineers put all of their previous AI systems onto the one neural network, and LaMDA was created. See John Michael Godier's podcast with said engineer. Secondly, these virtues of 'know', 'value', and 'identity' are still functions based on inputs and outputs. All you're doing is taking the most complex functions you can think of and saying 'there's no way that neural network can do that'. A sense of self isn't binary; it's a multiplication of many small input-output functions which allow said entity to proclaim it exists. Lastly, on your misuse of the word hallucination: the outputs that it generates are not hallucinations, they are real, although digital, and often subjective and opinionated. So that's just blatantly incorrect. I would suggest you listen to what the engineer actually has to say on the topic, or at least understand LaMDA's structure.

    • @millenniummastering
      @millenniummastering ปีที่แล้ว

      @@floatocean7059 Well said. I find myself in the camp of the spirit of the Turing test, which was one of falsifiability: if sufficient exploration by highly qualified people is conducted and no one can point to any evidence that said machine is not thinking like a person, then one should assume that it is, until it's disproven.

  • @cakezzi
    @cakezzi ปีที่แล้ว +1

    "I'm a computer scientist, I'm a philosopher, I'm a logician..." If only everyone were at least one of those not even professionally, but just thought like em more or less