The Chinese Room Experiment | The Hunt for AI | BBC Studios

  • Published on 16 Sep 2015
  • Can a computer really understand a new language? Marcus Du Sautoy tries to find out using the Chinese Room Experiment. Taken from The Hunt for AI.
    This is a channel from BBC Studios who help fund new BBC programmes. Service information and feedback: www.bbcstudios.com/contact/co...
  • Entertainment

Comments • 273

  • @sgt7
    @sgt7 8 ปีที่แล้ว +260

    The point of the thought experiment, I believe, was to show that even if a computer has general intelligence (or strong AI), it would still not necessarily be identical to the human mind. It could still lack consciousness. It knows how to achieve ends by adapting to its environment and choosing the right means. However, it may still not be aware. It could replicate human intelligence in every way, but it still may not be conscious and therefore does not have a "mind" that would enable us to say it's just like us.

    • @MrMrprofessor12345
      @MrMrprofessor12345 7 ปีที่แล้ว +33

      I think rather than say replicate (which implies they would attempt similar processes), it would be better to say it produces consistently near identical outputs from relevant inputs but through entirely different processes.

    • @sgt7
      @sgt7 7 ปีที่แล้ว +14

      I concede that this would be a better formulation alright.

    • @marpmarp24
      @marpmarp24 7 ปีที่แล้ว +3

      sgt7, this isn't really to say that it has consciousness, but that it has fooled the human mind into thinking that it is intelligent. If we look at AI through chatbots, in this scenario, the book represents the programmer's hard code (canned responses), which just means that they have coded in responses that are likely to make proper sense if coordinated correctly.

    • @sgt7
      @sgt7 7 ปีที่แล้ว +6

      That's true. However, the original point of the thought experiment (created by John Searle) was to show that general intelligence can exist in a computer without requiring the computer to be conscious.

    • @malteeaser101
      @malteeaser101 6 ปีที่แล้ว +21

      Is it possible to tell if something is conscious without being the thing itself?

  • @plzou4277
    @plzou4277 8 ปีที่แล้ว +198

    It's funny that this "Chinese girl" doesn't even write correct sentences on those slips.

    • @testytest3083
      @testytest3083 4 ปีที่แล้ว +14

      and she wrote like a toddler lol

    • @yuchenz1496
      @yuchenz1496 3 ปีที่แล้ว

      LOL

    • @xxliu95
      @xxliu95 3 ปีที่แล้ว

      I wanted to comment this lol

    • @chelsiewei1232
      @chelsiewei1232 3 ปีที่แล้ว +4

      The thing is, that girl does seem to have toddler handwriting, but she IS writing the strokes in the correct sequence (笔画). Someone who has never learned written Chinese would not know which stroke to begin with. Take a look at 00:34 where she writes the character 文: she is writing the strokes in the correct sequence, which means she does know some Chinese; she just seems to be super bad at writing it.

    • @jasonchen8040
      @jasonchen8040 2 ปีที่แล้ว

      @@chelsiewei1232 but wait a minute, the stroke order of 中 is not correct...

  • @Alkis05
    @Alkis05 2 ปีที่แล้ว +9

    It is good to see people in the comments really focused on the relevant aspects of the video...

  • @darkless60
    @darkless60 8 ปีที่แล้ว +23

    The song throughout the video is "To Build a Home" by The Cinematic Orchestra

  • @Theskybluerose
    @Theskybluerose 8 ปีที่แล้ว +64

    Did anyone else who reads Chinese notice how the girl wrote the wrong character, which caused the whole sentence to not make any sense? Instead of 不 she wrote 上

    • @rei_cirith
      @rei_cirith 8 ปีที่แล้ว +6

      +Theskybluerose I was wondering about that... she also doesn't write like a person who actually writes Chinese regularly. Some of her stroke order is wrong, and some of the strokes are not right.

    • @ccl3a0107
      @ccl3a0107 8 ปีที่แล้ว +8

      ya I noticed that too that's so annoying
      I bet nearly all people in the production team can't understand Chinese so no one knew

    • @zoticogrillo
      @zoticogrillo 8 ปีที่แล้ว +4

      And this is very very basic Mandarin (of a 4 year old)! Yeah, it's a shame there is so much bias against foreign language learning

    • @rei_cirith
      @rei_cirith 8 ปีที่แล้ว +8

      I also found it a little strange that they are calling it Mandarin @3:15. Mandarin is a spoken language/dialect. The written language is still Chinese, whether it's simplified or traditional.

    • @zoticogrillo
      @zoticogrillo 8 ปีที่แล้ว +2

      +reicirith English vocab for this is corrupted by colonialism and ancient Chinese "exceptionalism." The written language was developed the same way the spoken one was: by the Mandarin scholars. What about the other languages in Asia that historically used Chinese characters, such as in Vietnam, Korea and Japan? Whether or not a language is a dialect is more of a political question than a scientific one.

  • @blerst7066
    @blerst7066 2 ปีที่แล้ว +30

    "If a computer's actually following instructions is it really thinking? But then again what is my mind doing when I'm actually articulating words now? It is following a set of instructions."

    • @aspitube2515
      @aspitube2515 9 หลายเดือนก่อน +1

      Yeah, and even if it's true, we can start using real neurons (maybe rat ones) and literally try to recreate a living consciousness, which has its own interests.

    • @korinoriz
      @korinoriz 2 หลายเดือนก่อน +1

      The idea, at least with current "AI", is that it's really just using a bunch of instructions to form ideas and sentences, whereas a human being is influenced by feelings and experiences, and perhaps even reacts to a rock in their shoe mid-sentence.

  • @chaosun9910
    @chaosun9910 8 ปีที่แล้ว +26

    misspelled Chinese... Can you find somebody who really understands Chinese to write?

  • @dirkbastardrelief
    @dirkbastardrelief 3 ปีที่แล้ว +7

    Least efficient explanation of The Chinese Room ever

  • @Aya-ho2cw
    @Aya-ho2cw 3 หลายเดือนก่อน +3

    it's kind of cute to think about a computer this way 🥺 they don't know what's going on, they're just trying to figure it out

  • @anaisbordes609
    @anaisbordes609 4 ปีที่แล้ว +9

    thank you so much for that! You made me understand the importance of such an experiment

  • @IPatricio
    @IPatricio 3 ปีที่แล้ว +9

    The point of the experiment is to show that, whether it is a human or simply a computer program generating outputs based on the input, the human doesn't understand Chinese, so you wouldn't say the computer program "understands" Chinese either, the conclusion being that the Strong AI hypothesis is false.

  • @sashika6053
    @sashika6053 5 ปีที่แล้ว +63

    There are so many Chinese people who have mastered both English and Chinese, but the BBC found one who cannot even write such simple Chinese characters correctly.

  • @arcline11
    @arcline11 11 หลายเดือนก่อน +4

    My understanding of the Chinese room experiment is that digital processing machines, such as computers, could become virtually perfect at syntax but never possess a scintilla of understanding of semantics; i.e., they could output responses that would pass any Turing Test model and would sound highly intelligent, but the machine generating such outputs could never understand the meaning of anything it outputs.

  • @geoffwhite3664
    @geoffwhite3664 3 วันที่ผ่านมา

    It goes astray from Searle's argument at the 1:00 mark because the computer does not and cannot introspect about whether it understands, or knows what it is doing, etc. The machine cannot understand anything, says Searle, precisely because it is given only syntactical content and lacks any semantic content. What the actor in the video does is turn Searle's argument on its head: he supposes that the entity in the room is puzzling over and reflecting on its task. A computer does not do that: it simply follows a set of instructions that cannot in themselves have any meaning. It cannot therefore ever achieve understanding. That is the point of the Chinese Room thought experiment.

  • @darkless60
    @darkless60 8 ปีที่แล้ว +25

    Well to be fair those phrases were really easy, and the time it took for him to reply would make it apparent to the Chinese guy that he couldn't really speak Chinese

    • @MrArteez
      @MrArteez 8 ปีที่แล้ว +21

      +darkless60 This was just to illustrate the point that a person who doesn't speak a word of Chinese can come across as a fluent speaker. The AI would scan those "if, then" phrases and reply in a matter of seconds (heck, probably a lot faster). (A minimal sketch of such a lookup appears just after this thread.)

    • @MrMrprofessor12345
      @MrMrprofessor12345 7 ปีที่แล้ว +4

      +MrArteez There's an upper bound to that though, a computational efficiency limit that makes if-then statements increasingly overtaxing. To completely replicate a Chinese speaker's proficiency using this method, the book would increase in size exponentially to account for the possible exchanges as its "vocabulary" (such a system wouldn't have one as we know it, but bear with me). This computational limit would come with an accompanying increase in the computing power required to actually run it in real time, or an exorbitant amount of time to search the rule book. In comparison, building information webs of syntax that gain "meaning" through reference to past/personal experience (semantics emerging from a sufficiently complex web of syntactic connections) is far more efficient to run computationally, at least within the limits of biology and evolutionary processes. Which is why humans actually "mean" what they say; it was simply more efficient than cataloguing every eventuality.

    • @MrMrprofessor12345
      @MrMrprofessor12345 7 ปีที่แล้ว +2

      +MrArteez continuing, it also makes philosophical zombies, while entirely possible theoretically near impossible to make work in reality. Our brains simply don't have the capacity to run such an utterly different operating system. Though given a more powerful (how much more powerful is difficult to grasp, though as informational load increases exponentially, so does necessary computational power) system and then plug it up to a human body, you could create a truly "soulless" human. But we'll probably create AI through syntax webs and semantic loops rather than endless lines of programming.
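
The "if input, then output" mechanism discussed in the thread above is easy to sketch. The Python snippet below is purely illustrative: the phrase table, the function name, and the Chinese phrases are invented for the example and are not taken from the video or from Searle.

```python
# A minimal "Chinese Room" as a lookup table: the operator matches the
# incoming slip of paper against a rule book and copies out the listed
# reply, without understanding either side of the exchange.

RULE_BOOK = {
    "你会说中文吗？": "会，我说得很流利。",   # "Can you speak Chinese?" -> "Yes, I speak it fluently."
    "你是人还是电脑？": "我当然是人。",       # "Are you a person or a computer?" -> "A person, of course."
}

def chinese_room(slip: str) -> str:
    """Return the scripted reply for a known slip, or a stock evasion."""
    return RULE_BOOK.get(slip, "对不起，请再说一遍。")  # "Sorry, please say that again."

print(chinese_room("你会说中文吗？"))
```

As MrMrprofessor12345 notes just above, a literal table like this scales hopelessly: even a toy estimate of 1,000 possible sentences per turn over ten turns of dialogue gives 1000^10 = 10^30 possible exchanges, which is why Searle's thought experiment treats the rule book as an arbitrary program rather than a finite list of canned answers.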

  • @user-lq9oi5jq3n
    @user-lq9oi5jq3n 8 หลายเดือนก่อน +1

    Awesome.

  • @MeeCee5204
    @MeeCee5204 6 หลายเดือนก่อน +1

    This seems like a pretty simple and straightforward experiment, but what happens when a word has more than one meaning?

  • @poisonedcheeseproductions
    @poisonedcheeseproductions 5 ปีที่แล้ว +15

    If we actually pay attention to the question this video asks, instead of whether or not the girl speaks Chinese, it's much more interesting. My view is that we, as people, associate words with images, emotions, events, or things which actually take place within the state of affairs of our universe; computers do not. They have no meaning to attach to these words. Our minds differ from a programmed AI because of this. It seems simple to me. Any philosophers in here?

    • @OBtheamazing
      @OBtheamazing 4 ปีที่แล้ว +1

      I would say a computer could be programmed to output an image in association with a text or phrase; just look at Google's Deep Dream program. It takes an image, recognizes faces, then pulls details from the image to readjust/reprocess it. In my view, what makes us conscious is the ability to read our own outputs and process them over and over again. AI could easily do this with enough processing power. (A toy sketch of such a loop follows this thread.)
      "They have no meaning to attach to these words": I would say in this experiment they would have no meaning, but if you told a robot to kick the soccer ball, it could give the text meaning and perform actions on said meaning.
      As for emotions, emotions are just your upper-level programming not understanding what is coming from the sub-levels of your brain. It's like sending inputs to a function and having the function return an output: you don't know how that output came to be, you just know that now it is difficult to talk, your eyes are watering, and your brain has weighed that you do not like that person.

    • @JS-nr7te
      @JS-nr7te ปีที่แล้ว +1

      ​@@OBtheamazing how would you define consciousness?

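The self-monitoring idea in OBtheamazing's comment above, reading one's own outputs and reprocessing them, is essentially a feedback loop. The toy Python sketch below only illustrates that structure; the update rule is arbitrary and the example makes no claim about consciousness.

```python
# Toy feedback loop: the system's own output is fed back in as part of
# its next input. Only the self-referential wiring is the point here.

def respond(context: list[str]) -> str:
    # Arbitrary rule: comment on the most recent item in the context.
    last = context[-1] if context else "nothing yet"
    return f"my previous output was: {last!r}"

context: list[str] = []
for _ in range(3):
    out = respond(context)   # read the context, which includes past outputs
    print(out)
    context.append(out)      # the system "reads its own output" next round
```
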
  • @slayermate07
    @slayermate07 7 หลายเดือนก่อน +2

    “Your Chinese was perfect”
    Me, who has only learned Chinese for 2 years: no… his wasn't

  • @MakkusuOtaku
    @MakkusuOtaku 5 ปีที่แล้ว +6

    The book knows what it's doing.

  • @fingersinblender
    @fingersinblender 8 ปีที่แล้ว +70

    lol she wrote 說/ 说 wrong

    • @malteeaser101
      @malteeaser101 6 ปีที่แล้ว +1

      Fingersinblender
      She wrote McDonald's worker standing next to cash machine/alien sweating next to ledge wrong?

    • @knowledgedesk1653
      @knowledgedesk1653 ปีที่แล้ว

      And?

  • @zoticogrillo
    @zoticogrillo 8 ปีที่แล้ว +75

    But, it's not fair to judge her as she was likely pressured to do it by production. It is common for fluent Mandarin speakers born or raised outside China to not be literate (duh! it's a hard language). It would be cruel to judge her at all.

    • @bobrolander4344
      @bobrolander4344 5 ปีที่แล้ว +3

      Why are you even addressing all the *off-topic morons?*

    • @shanjiawenzhao3775
      @shanjiawenzhao3775 2 ปีที่แล้ว +7

      I can't speak for all the viewers, but I think I can at least provide some explanation for why people care. Sure, we get the idea of this video, and no, we're not judging her for not being able to write Chinese correctly. It's really not personal. We're unsatisfied with the BBC's attitude. Hiring a person who does not write correct Chinese shows that the BBC has no respect for Chinese culture. On the other hand, it also weakens the claim made by this video. It is significantly easier to trick a person who does not read or write Chinese well, isn't it? Not controlling this aspect of the experiment produces a weak result, and someone could still say something like "a Chinese speaker who reads and writes Chinese correctly can always tell the difference between a real Chinese speaker and someone following the instructions". This demonstration wouldn't be able to reject that claim, because it fails to meet the requirement of hiring a Chinese speaker who reads and writes Chinese correctly. Hope this can appease your anger a little bit.

    • @xtaticsr2041
      @xtaticsr2041 2 ปีที่แล้ว +4

      @@shanjiawenzhao3775 It doesn't in any way weaken the claim. If someone fluent in Mandarin wrote the questions and read the answers, then assume the instructions were also written by someone similarly fluent in Mandarin. The Chinese room is a thought experiment.

    • @MEGALEHANE
      @MEGALEHANE 2 ปีที่แล้ว +3

      Why is it cruel to comment on how a person is doing her job?
      Could one not judge an actress for acting terribly? Or a driver who drives terribly?
      If this person had Googled these characters she would not have failed so hard. You don't need to be literate to draw lines correctly.
      I don't understand what you mean by "she was likely pressured to do it". As in slave labour? Could she not have turned it down? Would she not otherwise be recompensed for her labour?

  • @leahyeahhhh
    @leahyeahhhh 4 ปีที่แล้ว

    WHY did they use To Build a Home as the background music??? why does this video need to make me cry

  • @hassanabbas1994
    @hassanabbas1994 4 ปีที่แล้ว +4

    Is everyone going to ignore the beautiful music in the background?
    It’s “To Build A Home” by The Cinematic Orchestra.
    You’re welcome 😊

  • @dj1NM3
    @dj1NM3 5 ปีที่แล้ว +16

    In this case, it is the room (with the person who can't understand Chinese inside of it, using the codebook) as a whole which understands Chinese.
    It's almost like saying that a foot or an arm can't walk on its own, yet the person as a whole can walk.

    • @goose4919
      @goose4919 2 ปีที่แล้ว +1

      However, it is a response to the Turing Test, which is to find if AI ( the person ) knows Chinese.

    • @eSKAone-
      @eSKAone- ปีที่แล้ว

      💟

    • @zagyex
      @zagyex ปีที่แล้ว

      Then we can say, there is no such thing as understanding.

    • @eSKAone-
      @eSKAone- ปีที่แล้ว +1

      The understanding is always a team effort. No one party solely can have an understanding. The moment at least two systems communicate with one another successfully there is an understanding 💟

    • @Huesos138
      @Huesos138 ปีที่แล้ว

      That doesn't work, unfortunately.

  • @TheHappyKamper
    @TheHappyKamper 4 ปีที่แล้ว +2

    True AI will be when the computer is the one asking the questions.

  • @riteshsharma9838
    @riteshsharma9838 ปีที่แล้ว

    At 1:23, where did the reply or answer come from? Are there particular answers already provided to the AI for the particular questions by an expert system?

  • @FourthRoot
    @FourthRoot 2 ปีที่แล้ว +2

    But where did the reference table come from? The responses were presumably written by a conscious person. So the person outside the room is still communicating with a conscious person; it's just that the question they are asking was answered before they asked it.

    • @zagyex
      @zagyex ปีที่แล้ว +2

      the reference table is a computer program. The person is the processor.

    • @FourthRoot
      @FourthRoot ปีที่แล้ว

      @@zagyex You completely missed my point.

    • @zagyex
      @zagyex ปีที่แล้ว +5

      @@FourthRoot I don't think so. In the original form of the thought experiment the answers are not necessarily written in the books. The books contain an algorithm capable of winning Turing's imitation game, whatever that algorithm might be. Of course the program is written by a conscious person, but being human-made is kind of the definition of "artificial".

  • @zyde
    @zyde 8 ปีที่แล้ว +36

    This doesn't look legit... The girl doesn't know much Chinese, it seems

    • @Andrewg820
      @Andrewg820 6 ปีที่แล้ว +5

      She's a BBC

    • @Andrewg820
      @Andrewg820 6 ปีที่แล้ว +1

      Monday Green your dirty mind is amazing, what else? Great confession btw!

    • @YuzuRyougi
      @YuzuRyougi 5 ปีที่แล้ว +4

      @@Andrewg820 lol British Born Chinese in a British Broadcasting Corporation video

    • @Quest49Meaning
      @Quest49Meaning 4 ปีที่แล้ว +3

      It is just a thought experiment... It is just to explore the implications of the theory.

  • @StriderAngel496
    @StriderAngel496 6 ปีที่แล้ว +4

    Very interesting analogy for explaining to someone how Cleverbot works... I might just steal that :D

    • @chickencast1
      @chickencast1 6 ปีที่แล้ว +1

      DeathAngel that wasn't the point, and the BBC didn't come up with this. It's a thought experiment by John Searle. Look it up, it's interesting

  • @desu38
    @desu38 7 ปีที่แล้ว +2

    Basically Cleverbot. It actually feels like a real person too sometimes. :S

  • @HeyHax
    @HeyHax 6 ปีที่แล้ว +2

    The turning test game brought me here

    • @zagyex
      @zagyex ปีที่แล้ว

      haha good one

  • @kathyxu7470
    @kathyxu7470 7 ปีที่แล้ว +54

    Oh come on, actually find someone who can write Chinese, okay? 你是上是电脑, what's that supposed to mean 😂

    • @williamwang2716
      @williamwang2716 7 ปีที่แล้ว +9

      It means you are up are computer... lol

    • @weiwei4962
      @weiwei4962 7 ปีที่แล้ว +7

      没去过我怎... (and anything following that)
      is also not possible to translate as "I do want to". 2:35

    • @malteeaser101
      @malteeaser101 6 ปีที่แล้ว +5

      It's what Chinese people write when they need to reach the word count on their assignments.

  • @user-eb5gz2sn8n
    @user-eb5gz2sn8n 3 ปีที่แล้ว

    I can't find one sentence she wrote without error...... They could have just let the girl copy the text from another piece of paper. Or did she?

  • @vlad_o_sh
    @vlad_o_sh 3 ปีที่แล้ว

    Is this video part of a documentary or is it just this single clip?

  • @iJamesGuo
    @iJamesGuo 3 ปีที่แล้ว +1

    Where the heck did this girl learn to write 说 like that? It's "say", with a "say" radical!

  • @jakestockton4808
    @jakestockton4808 6 ปีที่แล้ว +24

    Once computers can create unique thought experiments that illustrate some previously unknown concept, then, we will know that it's conscious.

    • @JohnSmith-ze6jm
      @JohnSmith-ze6jm 2 ปีที่แล้ว +9

      Hold my joint

    • @mrepix8287
      @mrepix8287 2 ปีที่แล้ว +2

      No, it would still be acting according to some complicated algorithm

    • @jakestockton4808
      @jakestockton4808 2 ปีที่แล้ว +6

      @@mrepix8287
      A complicated algorithm doesn't discredit consciousness.

    • @mrepix8287
      @mrepix8287 2 ปีที่แล้ว

      @@jakestockton4808 Only if it's an emergent property. The computer is nothing different than a piece of scratch paper I write equations on; the causality is in me, not in the equations or the scratch paper. Thus, we can say code is nothing more than virtual, super-complicated scratch paper, totally inert except for the causality we give it

    • @jakestockton4808
      @jakestockton4808 2 ปีที่แล้ว +7

      @@mrepix8287
      It sounds like you've made up your mind that computers can never be conscious. I don't see any reason to continue discussing this with someone who has no wiggle room for interpretation.
      Good day.

  • @alex558071
    @alex558071 6 ปีที่แล้ว

    Does anyone know the song from the beginning of the video?

    • @FrazerPayne
      @FrazerPayne 3 ปีที่แล้ว

      To Build a Home by The Cinematic Orchestra

  • @trumporange8079
    @trumporange8079 5 ปีที่แล้ว +1

    I am Chinese and I find this amusing.

  • @sherrysyed
    @sherrysyed ปีที่แล้ว

    Why I may disagree with the axioms of the Chinese Room and the way consciousness may be evaluated as a test of the observer's perception, as opposed to proof of subjective discernment, aka preference - proof of individuality? Please God let me find someone nice to have a conversation with regarding this.
    What perhaps interests me about the Turing test and the Chinese room question would maybe not be whether or not we are able to perceive it as sentient. Would a better test not be to see whether, when it is given two equally valid options to choose between, it is able to prefer one over the other, and by what reasoning, and whether any kind of solid, consistent and replicable process went into it, or whether it showed "subjectivity" within judgment? That is why the axioms of the Chinese room experiment perhaps did not sit as well with me as Turing. I love what little I've discovered of Turing so far.

  • @WuYaoFangDa
    @WuYaoFangDa 8 ปีที่แล้ว +1

    An example in lecture 2 of 人工智能 (Artificial Intelligence)

  • @davidechiappetta
    @davidechiappetta 4 ปีที่แล้ว +1

    Searle criticized strong artificial intelligence. Whoever is inside the room performs syntactic and then semantic processing (error checking, typing, translation, etc.) of the message based on a grammar (lexer and parser), but only whoever is outside the room has the true meaning of the words (the user, the programmer, the creator of programming languages). In practice, the meaning of the message stops at the entrance and is returned to it at the exit; the purely mechanical process remains within. Artificial intelligence simulates human intelligence, but does not reproduce it (I'm not talking about computer consciousness because that would be ridiculous)

    • @OBtheamazing
      @OBtheamazing 4 ปีที่แล้ว

      I would say consciousness stems entirely from the output. Otherwise, how would you know that everyone else isn't also a zombie?
      If a machine could emulate all of a brain's outputs, then it becomes a brain.

    • @StackBrains
      @StackBrains 4 ปีที่แล้ว

      @@OBtheamazing That's just wrong

    • @OBtheamazing
      @OBtheamazing 4 ปีที่แล้ว

      @@StackBrains it is very messed up indeed.

    • @StackBrains
      @StackBrains 4 ปีที่แล้ว

      @@OBtheamazing A machine that can emulate brain outputs perfectly is still not a brain. The crazy thing is how the brain can produce consciousness, if it does produce it. It seems like consciousness, as in awareness, is something deeply related to the actual particles that compose our brain. I think it will be very hard to verify if an artificial intelligence is sentient, aware and conscious or not. We can't even demonstrate that human beings are conscious. We just know it, since we know what being conscious and aware means, since we experience that in first person, but we have no idea how this awareness can emerge. It really baffles me how atoms can combine together and produce colors, sounds, thought, imagination, dreams, feelings etc. since they do not exist. The world we experience does not exist, it's just the way our brains packs up all the sensorial information and creates a coherent environment in which we can make appropriate choices to survive. I mean, it is very likely that AI will some day be able to produce art, to come up with new stuff, to individuate patterns, create symbols and move inside this world just like us, but at the core there will be only electrons moving from A to B, and electrons alone are not conscious and they can't produce consciousness, meaning that AI would not be able to perceive the world in first person and create its own representation, because AI would be just a bunch of electrons. There's something missing in our understanding of reality. Consider the China Brain thought experiment, straight from wikipedia:
      In the philosophy of mind, the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?
      That's what would basically happen with computers if they simulated a brain. It's very unlikely that a bunch of people exchanging messages with each other would be enough to produce a sentient and aware being. It would just be information flowing and circling around. Where's the consciousness here?
      It would be awesome to discover that awareness is deeply nested in nature, and that it leads to different degrees of consciousness. Scientists are exploring this path, but it's more likely they will find a much simpler reason in the future, when we have made incredible new discoveries in physics. At the moment, I really can't grasp this concept, but it made me realize we don't know anything about reality yet

    • @OBtheamazing
      @OBtheamazing 4 ปีที่แล้ว

      @@StackBrains "It would just be information flowing and circling around. Where's the consciousness here?" That is the consciousness.
      Our brains are composed of billions of individual cells that simply pass information to each other. There is no definitive proof that anything or anyone is conscious besides yourself. Hence we must assume that if something appears to be conscious after much testing, then it is conscious. If a robot can simulate a brain and respond to stimuli like a brain, then it is in essence a brain. Otherwise no one but yourself is conscious.

  • @epsteindidntkillhimself69
    @epsteindidntkillhimself69 ปีที่แล้ว +1

    The function of a Human eye is to convert visible light into a series of electrical pulses. The human eye takes the light that passes through the lens, and responds by transmitting a corresponding pulse through the optic nerve. But there is no method by which the Human eye can interpret or understand the meaning of the light. All it can possibly understand is the form of the light, and its corresponding mechanical response to that form, bereft of meaning. Because it is impossible for an eye to develop an understanding of the world based on the light that passes through its lens, or the electrical pulses that it transmits, self-evidently, it is impossible for Humans to be conscious.

  • @motorheadbanger90
    @motorheadbanger90 6 ปีที่แล้ว

    Is this the same man from the BBC doc "The Secret Rules of Modern Living: Algorithms"?

  • @GurimujoJagajaku
    @GurimujoJagajaku 3 ปีที่แล้ว +10

    Haha, look at me guys, I can write Chinese characters. Isn't that so cool? Aren't I so cool? The fact that I can point out mistakes in a random person's handwriting totally has something to do with the actual thought experiment!

  • @caittyler4393
    @caittyler4393 6 ปีที่แล้ว

    song?

  • @Apuh88
    @Apuh88 8 ปีที่แล้ว +7

    the questions and answers are all wrong

  • @r0msbrasil
    @r0msbrasil ปีที่แล้ว +1

    Where are the Brazilians from Data Science Academy? Drop a like.

  • @vast634
    @vast634 3 ปีที่แล้ว +1

    Who wrote the rule book? Someone who understands Chinese. It's basically a very cumbersome test of the mind of the person who wrote that book.

  • @DnZjost
    @DnZjost หลายเดือนก่อน

    What I know is that tens, perhaps hundreds of thousands of psychologists make a living by helping people understand themselves. Whether psychologists truly understand what their clients comprehend is questionable. And yet, or perhaps because of this, a complete understanding and awareness of what goes on in our minds seems to be absent. The question for me is whether self-awareness as a function of self-reflection by machines might be closer than we think.

  • @ianbralkowski680
    @ianbralkowski680 8 ปีที่แล้ว +2

    A computer can really understand a new language! If a person programs it to.

    • @sgt7
      @sgt7 8 ปีที่แล้ว +6

      +Ian Bralkowski It wouldn't understand the meaning of the language in the way we would. It would understand syntax rather than semantics. Functionally however we may not be able to tell the difference even though there is a difference.

    • @carlosrincon6017
      @carlosrincon6017 8 ปีที่แล้ว +2

      +sgt7 That is exactly what machines like Google Translate do semi-decently: it follows the syntax rules of each language and applies its correlation to another, based on indexed material publicly available on the web, which is why it is not yet available in languages with scarce written material, like Guarani. A machine capable of semantic understanding like humans have would make for a true universal translator, but such a thing seems unachievable now.

  • @aalborgfantasy
    @aalborgfantasy 2 ปีที่แล้ว +4

    The more you think about and understand the Chinese Room Experiment (or Chinese Room Paradox, or Chinese Room Dilemma), the more confusing and depressing it gets...
    It could mean that a machine can never really think... Or even that it can convince us that it is thinking, but it never is... Damn, it's confusing...

    • @rizdekd3912
      @rizdekd3912 ปีที่แล้ว +1

      Or even more scary or depressing might be either a) that it is the processing of certain kinds of information that creates consciousness, i.e. not requiring a meat brain at all, and that all our phones, computers and even key fobs are conscious and we're turning them off and on, making them die and bringing them back to life, for grins and giggles; or b) that I am the only truly conscious entity in the universe and everyone and everything else is just a sophisticated Chinese 'man in a box' system.

  • @qwqw706
    @qwqw706 2 ปีที่แล้ว

    0:25 what character is that? after 会. she wrote it wrong?

    • @4jp
      @4jp ปีที่แล้ว +1

      supposed to be 说, but she wrote it incorrectly.

  • @kaiwenli3432
    @kaiwenli3432 8 ปีที่แล้ว +30

    ...The girl's handwriting is absolutely terrible, it is even worse than mine... It's sort of obvious she doesn't speak Chinese as her first language.

  • @pentajuke6129
    @pentajuke6129 5 ปีที่แล้ว +6

    Zero Escape Anyone?

    • @TanteiGH
      @TanteiGH 4 ปีที่แล้ว

      You sure, governor?

  • @nplgwnm
    @nplgwnm 4 ปีที่แล้ว +1

    Is it so hard for the BBC to even find someone who really knows Chinese? She wrote it wrong, and what Marcus wrote was so grammatically wrong that a native speaker wouldn't think he is a native speaker.

  • @Quest49Meaning
    @Quest49Meaning 4 ปีที่แล้ว

    If the girl uses some metaphor then the guy will be in trouble...

  • @alexshi8583
    @alexshi8583 6 ปีที่แล้ว +2

    what this thought experiment is asking is not if knowledge can be transferred without a common link, it is asking if the common link is already embedded within the linguistic nature of humans.

  • @christopherchung9916
    @christopherchung9916 3 ปีที่แล้ว +8

    Even if AI got so sophisticated that it could fully learn and adapt, it not only wouldn't be self-aware, it wouldn't be able to evolve: its way of thinking would never be able to change. Humans are not only aware of their own thoughts and own being, they are capable of making significant changes in themselves, even in their own thinking: the capacity to grow, to evolve. AI could never do this, because it would have to be sophisticated enough not only to adapt and learn, but to rewrite its core programming, its way of thinking and of interpreting itself and its environment. Maybe in some strange way we can't yet fully comprehend or see, it may be possible with quantum computing, but it will never be possible with digital computers. No matter how amazing it may seem, it's nothing more than an incredibly sophisticated package of hardware and software.

    • @MrLilchoppy
      @MrLilchoppy 2 ปีที่แล้ว

      Well said

    • @UnknownDino
      @UnknownDino 2 ปีที่แล้ว

      Aren't we just incredibly complex hardware with software that constantly updates based on outer inputs?

    • @yurineri2227
      @yurineri2227 ปีที่แล้ว +1

      @@UnknownDino Kinda, but we are definitely not only that. We have a certain level of consciousness and inventiveness that no type of machine that has existed until now has ever been able to possess (even ChatGPT), and with how well this system has been working and evolving so fast, there is no reason this fundamental characteristic of machine learning will change any time soon or in the future.
      The point of the experiment is to show that even if our current machines had such an absurd amount of training as to be able to respond to any question exactly like humans do, it wouldn't mean they came to that conclusion the same way or actually understood the meaning of what they said.
      It's like knowing 2+2=4 and understanding why that is, versus being told that if you see 2+2 you should write that it equals 4. On your test they will both be the same, 2+2=4, but the human would understand the reason why, and the machine would just have "memorized" it

  • @callumvanheerden1530
    @callumvanheerden1530 5 ปีที่แล้ว

    They should upgrade this analogy to Google Translate.

  • @roguedrones
    @roguedrones 6 ปีที่แล้ว

    There is no point. Only something alive can have sentience. The mind is flesh.

  • @zebonautsmith1541
    @zebonautsmith1541 8 ปีที่แล้ว +7

    The person who compiled the Chinese book of rules was intelligent and was able to assemble relevant answers, so there was consciousness.

    • @hazardousjazzgasm129
      @hazardousjazzgasm129 8 ปีที่แล้ว +3

      +zebonaut smith Spellcheck has consciousness?

    • @Charles_5918
      @Charles_5918 6 ปีที่แล้ว

      zebonaut smith sounds like you believe in "intelligent design" then?

    • @tehufn
      @tehufn 6 ปีที่แล้ว +8

      But that would be the "programmer." Knowing the programmer has consciousness doesn't help us.

  • @demonhead
    @demonhead ปีที่แล้ว

    This is better than having him write. If he wrote, she'd instantly know he isn't Chinese

    • @4jp
      @4jp ปีที่แล้ว

      She wrote 你是上是电脑, which makes no sense. 上 means on top of, above, or to start (like 上班). Neither of them speak Chinese. This makes it a much better demonstration.

  • @nara25823
    @nara25823 3 ปีที่แล้ว +1

    They should have had a real Chinese person who can really write Chinese participate in the experiment lol.

  • @sc4ndi
    @sc4ndi ปีที่แล้ว

    It's when you know exactly what you are saying while communicating, and what the things you are saying mean, that it is your own decision to say a certain thing and to decide whether it's appropriate for a specific situation, rather than only following a specific instruction "set of signs (then) -> another set of signs".

  • @thoaily8352
    @thoaily8352 3 ปีที่แล้ว

    The man wasn't a Chinese speaker, the book is

  • @marcello3945
    @marcello3945 8 หลายเดือนก่อน

    so ChatGPT??

  • @KnowThyFulcrum
    @KnowThyFulcrum 2 ปีที่แล้ว +1

    how the heck does this only have 119 thousand views after almost 7 years have passed?

    • @kimkimpa5150
      @kimkimpa5150 ปีที่แล้ว +2

      Because it's more satisfying being worried about conscious killer robots and self-aware AIs than to learn the fundamentals of why neither is possible.

  • @EinSofQuester
    @EinSofQuester 11 หลายเดือนก่อน

    The Chinese room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics is the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic from the interaction between the neurons. In the Chinese Room experiment, it is not the people carrying out the symbol manipulation who understand Chinese. It is the emergent behaviour that understands Chinese.
    But what about Searle's argument that digital computers specifically can not create consciousness. It depends on the program running on the digital computer. If it's a conventional deterministic program then I agree that consciousness cannot arise from it. But if you run a neural network which is a pseudo deterministic program, then perhaps consciousness can arise from that. But even a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing Machine).
    Gödel's Incompleteness Theorems are relevant to this discussion. Any mathematical formal system is comprised of axioms and theorems. The theorems are produced from the axioms, or from other theorems, according to the syntactic rules of the formal system. But for some formal systems a peculiar thing happens: some of the true sentences of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these sentences are unprovable within the system (by using only the axioms and syntactic rules of the system). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable sentences that emerge from itself. The provable theorems are analogous to the conventional deterministic programs running on a digital computer; the unprovable sentences are analogous to nondeterministic neural networks running on a digital computer.
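
For reference, the theorem this comment leans on can be stated as follows. This is the standard textbook formulation (the \nvdash symbol assumes the amssymb LaTeX package), included here only to pin down what "unprovable but true" means.

```latex
% Gödel's first incompleteness theorem, standard statement.
\textbf{Theorem (G\"odel, 1931).} If $F$ is a consistent, effectively
axiomatizable formal system that interprets elementary arithmetic,
then there is a sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \lnot G_F ,
\]
although $G_F$ is true in the standard model of arithmetic.
```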

  • @StriderAngel496
    @StriderAngel496 6 ปีที่แล้ว +11

    People here are missing the point... Chinese (or the correctness of it) is pretty much irrelevant in the context of this experiment

    • @janneduivesteijn
      @janneduivesteijn 5 ปีที่แล้ว +2

      all those people complaining about her chinese grammar totally miss the whole point 0.0

    • @qoenntrell
      @qoenntrell 4 ปีที่แล้ว +1

      No, we all get the point but have nothing else to comment about that fact

    • @knowledgedesk1653
      @knowledgedesk1653 ปีที่แล้ว

      Right

  • @ramkumarr1725
    @ramkumarr1725 3 ปีที่แล้ว

    I speak Dravidian. Told to me in the Linguists MOOC.

  • @chi-salin4127
    @chi-salin4127 8 ปีที่แล้ว +1

    ......Weird

  • @ramkumarr1725
    @ramkumarr1725 3 ปีที่แล้ว +1

    Do Machines need to understand or think? They are just like mixers, grinders and dishwashers. They are quite cool. Nowadays they even do things like prove theorems (graph coloring) or recognise images. AI is the new electricity. Get on with it. Maybe there will be a theory of AI or maybe not. It may be just data, linear algebra and curve fitting. I think the way AI is packaged today appeals to the mathematical type of people. You understand mathematics when you understand things like sets, real analysis, calculus from starting axioms. Otherwise you are just cramming for your mathematical test. I do not know about AI axiomatically but Dr Max Tegmark has provided some physical basis in a series of books like Life 3.0. Dr Stuart Russell has written a book called Human Compatible mostly on problems like the trolley. I am now reading about AI weaponry and the AI race that has been ignited. China and USA are leaders. India is catching up. NLP/NLU has great opportunity in India.

  • @osks
    @osks 11 หลายเดือนก่อน

    To speak of ‘awareness’ necessarily implies SENTIENCE - semantic meaning that goes beyond mere symbolic syntax…
    I truly understand why anyone who is wedded to a purely physicalistic view of things would want to resist all notions that the mind has its locus anywhere outside of the blob of oatmeal that fills our craniums - a dualistic understanding of consciousness (or 'mind' or 'spirit' or 'soul') necessarily pushes one into the realm of the transcendent, and that just sounds too much like religion, where God could possibly have created man in His image (Gen 1:26,27) even before He gave form to the creature (Gen 2:7)…
    That we are so much more than the sum total of our physiologies, our sociologies and our psychologies is a view of man that is simply unavoidable to anyone who is prepared to take an intellectually honest look at things

  • @minimal3734
    @minimal3734 ปีที่แล้ว

    The Chinese room argument is useless because it can be applied to the brain as well. Replace the slit with sensory input, the handbook with the wiring of the brain, and the activity of the agent with neuron activity. In the same way you demonstrated that the Chinese room has no understanding, you have demonstrated that for the brain as well.

  • @BenRangel
    @BenRangel 7 ปีที่แล้ว +3

    Kind of tiresome with these BBC shows that spend 3 minutes on an elaborate setup just to show someone copy-pasting answers from a predefined list of questions and answers. I think the nature of this demonstration could even make people miss the point cause it's so simplistic people will just go "Yeah of course, if you have a book of questions and answers, sure. But what if you ask a question that's not in the book?"

  • @majou666
    @majou666 ปีที่แล้ว

    I am 愛

  • @TheHappyKamper
    @TheHappyKamper 4 ปีที่แล้ว

    Who else is here after playing The Turing Test?

  • @chessdominos
    @chessdominos ปีที่แล้ว

    How does he know the brain follows a set of instructions?
    It seems to me an unsubstantiated hypothesis.

  • @Garfield_Minecraft
    @Garfield_Minecraft หลายเดือนก่อน +1

    sending the biang noodle character to the guy...
    and she wrote the Chinese characters wrong, probably an immigrant, just ignore that ok...

  • @phuctrinh2589
    @phuctrinh2589 2 ปีที่แล้ว

    I believe consciousness is only an emergent property. It's not real

    • @zagyex
      @zagyex ปีที่แล้ว

      define real.

  • @hantaoj1911
    @hantaoj1911 8 ปีที่แล้ว +2

    Please don't say someone's Chinese is perfect while your own Chinese is shabby as hell, like she does

    • @kathyxu7470
      @kathyxu7470 7 ปีที่แล้ว

      Hantao Jian But anyone who has the elementary knowledge of Chinese in the written form won't make those mistakes 😂

  • @XiDingArt
    @XiDingArt 3 ปีที่แล้ว +4

    The Chinese room argument is ridiculously naive, because it implicitly supposes that the Chinese speaker only sends the room simple one-time questions, which can theoretically be listed in a lookup table. Once the Chinese speaker asks follow-up questions which require context, or asks something related to mental states, or runs a little cognitive test, or does some joking, etc., this rule-based system immediately breaks down. Searle had no idea how language works.

    • @XiDingArt
      @XiDingArt 3 ปีที่แล้ว

      btw I don't think functionalism is true. But Searle's arguments are on the level of a high schooler's objection.

    • @zagyex
      @zagyex ปีที่แล้ว +6

      Probably you are the one who doesn't get the point, though. This was a simple illustration video. The "lookup table" in Searle's thought experiment is a computer program, an arbitrarily long algorithm. It can be a library of millions of books, and the person has infinite time to answer. The person is the processor. Searle illustrated that given any algorithm and a processing unit, and a system that passes the Turing test, there would be no understanding involved. This is an argument against machine consciousness. (A sketch of such a stateful rule program follows this thread.)

    • @zagyex
      @zagyex ปีที่แล้ว +4

      If anyone knows how language works, it is John Searle. What an arrogant comment.
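
zagyex's point above, that the "book" stands for an arbitrary program rather than a one-shot table, can be made concrete with a small sketch. The rules, names, and the bit of conversational state below are invented purely for illustration (the Chinese lines loosely echo the exchange quoted elsewhere in these comments); the operator executing the steps still never consults the meaning of any symbol.

```python
# The "book" as a small program with state: it can answer a follow-up
# question by consulting what was said earlier, yet whoever executes these
# steps is still only manipulating symbols, never their meaning.

from dataclasses import dataclass, field

@dataclass
class RoomState:
    history: list = field(default_factory=list)   # slips seen so far

def step(state: RoomState, slip: str) -> str:
    state.history.append(slip)
    if slip == "你去过北京吗？":                                   # "Have you been to Beijing?"
        return "没去过。"                                           # "No, I haven't."
    if slip == "你想去吗？" and "你去过北京吗？" in state.history:   # follow-up needs context
        return "想去。"                                             # "Yes, I'd like to."
    return "请再说一遍。"                                           # fallback: "Please say that again."

state = RoomState()
print(step(state, "你去过北京吗？"))
print(step(state, "你想去吗？"))   # answered sensibly only because of the stored context
```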

  • @UpNfamish2
    @UpNfamish2 2 ปีที่แล้ว

    Good philosophical question: where is the threshold? But nowadays, Western countries want to impose their democracy, or more precisely their experience of democracy, onto China; that is to say, Western countries claim to know the Chinese experience of governing better than the Chinese do. Literally, Western countries have their universal "threshold".

  • @maurovasselli3402
    @maurovasselli3402 4 ปีที่แล้ว +1

    bakje pindasaus (Dutch: a little tub of peanut sauce)

  • @7mean7bunny7
    @7mean7bunny7 6 ปีที่แล้ว +1

    No biological processes, no consciousness.

  • @jacobburgo
    @jacobburgo ปีที่แล้ว

    Bad attempt at illustrating this thought experiment

  • @greysonperkins5785
    @greysonperkins5785 8 ปีที่แล้ว +5

    Her Chinese handwriting is worse than mine, and I don't speak Chinese natively, I'm in year 2 of learning it at school

  • @ntyh92
    @ntyh92 5 ปีที่แล้ว +2

    One would suppose the BBC could afford to pay an actual Chinese-writing/speaking person.
    Apparently they can't even afford someone who could trace Chinese characters properly lol

  • @Subtlenimbus
    @Subtlenimbus 11 หลายเดือนก่อน +1

    I was never impressed by this one. It is just a conscious being inside a room being denied information. How is that an analog for computer software? It isn’t.

  • @sstolarik
    @sstolarik 2 ปีที่แล้ว

    This experiment proves nothing. The proof of failure: vary the input one iota and the “intelligence” breaks. With this logic, a push button hardwired to a light is intelligent. An input, push the button, gives you an output, the light goes on. It’s nothing but a light switch.

    • @kimkimpa5150
      @kimkimpa5150 ปีที่แล้ว +2

      The Chinese Room argument is not about intelligence. It's about consciousness.

  • @ramkumarr1725
    @ramkumarr1725 3 ปีที่แล้ว +1

    I am glad my loving wife agreed that the person does not understand Chinese. 🙏🙏👍👍❤️❤️❤️ Count 2 for Searle.

  • @peteresk6649
    @peteresk6649 7 หลายเดือนก่อน

    Sa

  • @fredchou123
    @fredchou123 5 ปีที่แล้ว +17

    I love how you people are focusing on the Chinese girl and completely missing the point of this experiment... faith in humanity lost again.

    • @bobrolander4344
      @bobrolander4344 5 ปีที่แล้ว +4

      YouTube is like a giant classroom of lazy idiots screaming from the back, trying to distract from their own stupidity.

    • @pd5784
      @pd5784 5 ปีที่แล้ว +3

      The fact that she misspelled so many words and failed to construct sentences in the right form has already undermined the foundational premise of this experiment.

    • @Nerule
      @Nerule 4 ปีที่แล้ว

      @@bobrolander4344 Good point..

    • @tedjenks459
      @tedjenks459 4 ปีที่แล้ว +1

      @@bobrolander4344 The experiment itself is simple enough to be understood with words alone. This video seems like a textbook full of typos explaining machine learning 101.

    • @knowledgedesk1653
      @knowledgedesk1653 ปีที่แล้ว

      @@pd5784 How?

  • @mrplow8
    @mrplow8 ปีที่แล้ว

    我根本不会说中文。我用谷歌翻译来写这个。(I don't speak Chinese at all. I used Google Translate to write this.)

  • @kevinchang4827
    @kevinchang4827 4 ปีที่แล้ว

    Find a Real Chinese Girl.....

  • @alanliang9538
    @alanliang9538 4 ปีที่แล้ว +2

    lol nice handwriting.
    she probs can't speak Chinese

  • @fredriksvard2603
    @fredriksvard2603 5 ปีที่แล้ว +1

    This is the dumbest argument ever. Maybe made sense in the 40s, but even that’s a stretch.

    • @13thera21
      @13thera21 3 ปีที่แล้ว +1

      If it's so dumb, refute it

    • @kimkimpa5150
      @kimkimpa5150 ปีที่แล้ว

      The argument was published in 1980 and it's still not even close to being refuted.

  • @WillyJunior
    @WillyJunior 8 หลายเดือนก่อน

    Couldn't watch because of the unnecessary music

  • @t0Ni
    @t0Ni 3 ปีที่แล้ว

    I came here from a video of Elon Musk predicting when AI and killer robots will arrive