The point of the thought experiment, I believe, was to show that even if a computer has general intelligence (or strong AI), it would still not necessarily be identical to the human mind. It could still lack consciousness. It knows how to achieve ends by adapting to its environment and choosing the right means. However, it may still not be aware. It could replicate human intelligence in every respect, but it still may not be conscious and therefore does not have a "mind" that would enable us to say it's just like us.
I think rather than say "replicate" (which implies they would attempt similar processes), it would be better to say it produces consistently near-identical outputs from relevant inputs, but through entirely different processes.
sgt7 This isn't really to say that it has consciousness, but that it can fool the human mind into thinking it is intelligent. If we looked at AI through chatterbots, in this scenario the book represents the programmer's hard code (canned responses), which just means they have coded in responses that are likely to make proper sense if coordinated correctly.
That's true. However, the original point of the thought experiment (created by John Searle) was to show that general intelligence can exist in a computer without requiring the computer to be conscious.
The thing is, that girl does seem to have toddler handwriting, but she IS writing the strokes in the correct sequence(笔画). One who has never learned written Chinese would not know which stroke to begin with. Take a look at 00:34 where she writes the character 文 -- she is writing in the correct sequence of strokes which means she does know some sort of Chinese, just seems to be super bad at writing it.
Did anyone else who reads Chinese notice how the girl wrote the wrong character, which caused the whole sentence to not make any sense? Instead of 不 she wrote 上.
+Theskybluerose I was wondering about that... she also doesn't write like a person who actually writes Chinese regularly. Some of her stroke orders are wrong, and some of the strokes themselves are not right.
I also found it a little strange that they are calling it Mandarin @3:15. Mandarin is a spoken language/dialect. The written language is still Chinese, whether it's simplified or traditional.
+reicirith English vocab for this is corrupted by colonialism and ancient Chinese "exceptionalism." The written language was developed the same way the spoken one was: by the Mandarin scholars. What about the other languages that used Chinese characters in Asia throughout history, such as in Vietnam, Korea and Japan? Whether or not a language is a dialect is more a political question than a scientific one.
"If a computer's actually following instructions is it really thinking? But then again what is my mind doing when I'm actually articulating words now? It is following a set of instructions."
Yeah, and even if it's true, we could start using real neurons (maybe rat ones) and literally try to recreate a living consciousness that has its own interests.
The idea, at least with current "AI", is that it's really just using a bunch of instructions to form ideas and sentences, whereas a human being is influenced by feelings and experiences, and perhaps even reacts to a rock in their shoe mid-sentence.
The point of the experiment is to show that whether it was a human or simply a computer program generating outputs based on the input, the human doesn't understand Chinese, so you wouldn't say the computer program "understands" Chinese either, concluding that the Strong AI hypothesis is false.
It goes astray from Searle's argument at the 1:00 mark b/c the computer does not and cannot introspect about whether it understands, or knows what it is doing, etc. The machine cannot understand anything, says Searle, precisely b/c it is given only syntactical content and lacks any semantical content. What the actor in the video does is turn Searle's argument on its head: he supposes that the entity in the room is puzzling over and reflecting on its task. A computer does not do that: it simply follows a set of instructions that cannot in themselves have any meaning. It cannot therefore ever achieve understanding. That is the point of the Chinese Room thought experiment.
Yes. As soon as he shifted from "thinking" (the machine is not doing that) to "understanding" (the machine is definitely not doing that), he showed himself up
If we actually pay attention to the question this video asks instead of whether or not the girl speaks Chinese, it's much more interesting. My view is that we, as people, associate words with images, emotions, events, or things which actually take place within the state of affairs of our universe; computers do not. They have no meaning to attach to these words. Our minds differ from a programmed AI because of this. It seems simple to me. Any philosophers in here?
I would say a computer could be programmed to output an image in association with a text or phrase; just look at Google's Deep Dream program. It takes an image, recognizes faces, then pulls details from the image to readjust/reprocess it. In my view, what makes us conscious is the ability to read our own outputs and process them over and over again. AI could easily do this with enough processing power. "They have no meaning to attach to these words": I would say in this experiment they would have no meaning, but if you told a robot to kick the soccer ball, it could give the text meaning and perform actions on said meaning. As for emotions, emotions are just your upper-level programming not understanding what is coming from the sub-levels of your brain. It's like sending inputs to a function and having the function return an output: you don't know how that output came to be, you just know that now it is difficult to talk, your eyes are watering, and your brain has weighed that you do not like that person.
But it's not fair to judge her, as she was likely pressured to do it by production. It is common for fluent Mandarin speakers born or raised outside China to not be literate (duh! it's a hard language). It would be cruel to judge her at all.
I can't speak for all the viewers, but I think I can at least provide some explanation for why people care. Sure, we get the idea of this video, and no, we're not judging her for not being able to write Chinese correctly. It's really not personal. We're unsatisfied with the BBC's attitude. Hiring a person who does not write correct Chinese shows that the BBC has no respect for Chinese culture. On the other hand, it also weakens the claim shown in this video. It is significantly easier to trick a person who does not read (or write) Chinese well, isn't it? Not controlling this aspect of the experiment would produce a weak result, and we can certainly say something like "a Chinese speaker who reads and writes Chinese correctly can always tell the difference between a real Chinese speaker and someone following the instructions." This experiment wouldn't be able to reject that claim, because it fails to meet the requirement of hiring a Chinese speaker who reads and writes Chinese correctly. Hope this can appease your anger a little bit.
@@shanjiawenzhao3775 It doesn't in any way weaken the claim. If someone fluent in Mandarin wrote the questions and read the answers, then assume the instructions were also written by someone similarly fluent in Mandarin. The Chinese room is a thought experiment.
Why is it cruel to comment on how a person is doing her job? Could one not judge an actress for acting terribly? Or a driver who drives terribly? If this person had Googled these characters she would not have failed so hard. You don't need to be literate to draw lines correctly. I don't understand what you mean by "she was likely pressured to do it." As in slave labour? She could not have turned it down? Would she not otherwise be recompensed for her labour?
Well to be fair those phrases were really easy, and the time it took for him to reply would make it apparent to the Chinese guy that he couldn't really speak Chinese
+darkless60 This was just to illustrate the point that a person who doesn't speak a word of Chinese can come out as a fluent speaker. The AI would scan those "if, then" phrases and reply in a matter of seconds (heck, probably a lot faster).
+MrArteez There's an upper bound to that though: a computational efficiency limit that makes if-then statements increasingly overtaxing. To completely replicate a Chinese speaker's proficiency using this method, the book would increase in size exponentially to account for the possible exchanges as its "vocabulary" (such a system wouldn't have one as we know it, but bear with me). This computational limit would carry an accompanying increase in the computing power required to actually run it in real time, or an exorbitant amount of time to search the rule book. In comparison, building information webs of syntax that gain "meaning" through reference to past/personal experience (semantics emerging from a sufficiently complex web of syntactical connections) is far more efficient to run computationally, at least within the limits of biology and evolutionary processes. Which is why humans actually "mean" what they say: it was simply more efficient than cataloguing every eventuality.
+MrArteez Continuing: it also makes philosophical zombies, while entirely possible theoretically, near impossible to make work in reality. Our brains simply don't have the capacity to run such an utterly different operating system. Though given a much more powerful system (how much more powerful is difficult to grasp; as informational load increases exponentially, so does the necessary computational power) plugged into a human body, you could create a truly "soulless" human. But we'll probably create AI through syntax webs and semantic loops rather than endless lines of programming.
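To put rough numbers on that exponential blow-up (the vocabulary size and turn counts here are made up purely for illustration):

```python
# If a speaker can produce roughly V distinct utterances per turn, a lookup
# table covering every possible N-turn exchange needs on the order of
# V**N entries -- exponential in conversation length.
V = 1000  # assumed distinct utterances per turn (an illustrative guess)

for turns in (1, 2, 3, 4):
    print(f"{turns} turn(s): ~{V ** turns:,} rule-book entries")
```

Even at this toy scale, four turns already demand a trillion entries, which is the comment's point: brute-force cataloguing loses badly to building reusable structure.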
My understanding of the Chinese room experiment is that digital processing machines, such as computers, could become virtually perfect at syntax, but never possess a scintilla of understanding of semantics; i.e could output responses that would pass any Turing Test model, would sound highly intelligent, but that same machine generating such outputs could never understand the meaning of anything it output.
But where did the reference table come from? The responses were presumably written by a conscious person. So the person outside the room is still communicating with a conscious person; it's just that the question they are asking was answered before they asked it.
@@FourthRoot I don't think so. In the original form of the thought experiment, the answers are not necessarily written in the books. The books contain an algorithm capable of winning Turing's imitation game, whatever it might be. Of course the program is written by a conscious person, but being human-made is kind of the definition of "artificial".
"But where did the reference table come from?" Machine learning. Statistics calculations about what token follows another. Still, zero understanding of what the words mean.
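A bare-bones version of that "which token follows which" statistic is a bigram model (the training sentence is invented for the example; real systems train on vastly more text and longer contexts):

```python
from collections import Counter, defaultdict

# Count, for each word, which word follows it in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    # Emit the statistically most frequent successor; no meaning is involved.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" followed "the" twice, "mat" only once
```

The model "speaks" plausibly without any notion of what a cat or a mat is, which is exactly the zero-understanding point being made.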
@ChristianIce But why assume that machine learning is an unconscious process? And you can't say that the machine doesn't understand what the words mean. If you asked the system whether it felt it was conscious, wouldn't it require just as much processing as a human brain and produce the same answer? You are arbitrarily assuming silicon processing (or, in this case, unimaginably complex on-paper processing) is fundamentally different from biological computers. No argument has actually been made demonstrating that to be the case.
Understanding is perceiving things and responding accordingly, which again can be following rules. What this experiment tells me is that sometimes free will can be illogical. What goes beyond following rules is Will and Choice, and they don't have to be the result of understanding.
Searle criticized strong artificial intelligence: whoever is inside the room has performed syntactic and then semantic processing (error checking, typing, translation, etc.) of the message based on a grammar (lexer and parser), but whoever is outside the room holds the true meaning of the words (the user, the programmer, the creator of programming languages). In practice, the meaning of the message stops at the entry and is returned to it upon exit; the purely mechanical process remains within. Artificial intelligence simulates human intelligence, but does not reproduce it (I'm not talking about computer consciousness because that would be ridiculous).
I would say that consciousness stems entirely from the output. Otherwise, how would you know that everyone else isn't also a zombie? If a machine could emulate all of a brain's outputs, then it becomes a brain.
@@OBtheamazing A machine that can emulate brain outputs perfectly is still not a brain. The crazy thing is how the brain can produce consciousness, if it does produce it. It seems like consciousness, as in awareness, is something deeply related to the actual particles that compose our brain. I think it will be very hard to verify whether an artificial intelligence is sentient, aware and conscious or not. We can't even demonstrate that human beings are conscious. We just know it, since we know what being conscious and aware means, since we experience it in first person, but we have no idea how this awareness can emerge.

It really baffles me how atoms can combine and produce colors, sounds, thought, imagination, dreams, feelings, etc., since those do not exist. The world we experience does not exist; it's just the way our brain packs up all the sensory information and creates a coherent environment in which we can make appropriate choices to survive. I mean, it is very likely that AI will some day be able to produce art, to come up with new stuff, to individuate patterns, create symbols and move inside this world just like us, but at the core there will be only electrons moving from A to B, and electrons alone are not conscious and can't produce consciousness, meaning that AI would not be able to perceive the world in first person and create its own representation, because AI would be just a bunch of electrons. There's something missing in our understanding of reality.

Consider the China Brain thought experiment, straight from Wikipedia: "In the philosophy of mind, the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?"
That's basically what would happen with computers if they simulated a brain. It's very unlikely that a bunch of people exchanging messages with each other would be enough to produce a sentient and aware being. It would just be information flowing and circling around. Where's the consciousness here? It would be awesome to discover that awareness is deeply nested in nature, and that it leads to different degrees of consciousness. Scientists are exploring this path, but it's more likely they will find a much simpler reason in the future, when we will have made incredible new physics discoveries. At the moment I really can't grasp this concept, but it made me realize we don't know anything about reality yet.
@@StackBrains "It would just be information flowing and circling around. Where's the consciousness here?" That IS the consciousness. Our brains are composed of billions of individual cells that simply pass information to each other. There is no definitive proof that anything or anyone is conscious besides yourself. Hence we must assume that if something appears to be conscious after much testing, then it is conscious. If a robot can simulate a brain and respond to stimuli like a brain, then it is in essence a brain. Otherwise no one but yourself is conscious.
In this case, it is the room (with the person who can't understand Chinese inside of it, using the codebook) as a whole which understands Chinese. It's almost like saying that a foot or an arm can't walk by itself, but the person as a whole can.
Understanding is always a team effort. No one party can have an understanding on its own. The moment at least two systems communicate with one another successfully, there is an understanding 💟
@@jakestockton4808 Only if it's an emergent property. The computer is nothing different from a piece of scratch paper I write equations on; the causality is in me, not in the equations or the scratch paper. Thus, we can say code is nothing more than a virtual, super-complicated scratch paper, totally inert except for the causality we give it.
@@mrepix8287 It sounds like you've made up your mind that computers can never be conscious. I don't see any reason to continue discussing this with someone who has no wiggle room for interpretation. Good day.
Why I may disagree with the axioms of the Chinese Room, and with the way consciousness is evaluated as a test of the observer's perception as opposed to proof of subjective discernment, aka preference, proof of individuality. (Please, God, let me find someone nice to have a conversation with regarding this.) What interests me about the Turing test and the Chinese room question is perhaps not whether we are able to perceive the machine as sentient. Would a better test not be to see whether, when it is given two equally valid options to choose between, it is able to prefer one over the other, and by what reasoning, and whether any solid, consistent and replicable process went into it, or whether it proved "subjectivity" within judgment? That is why the axioms of the Chinese room experiment did not sit as well with me as Turing's. I love what little I've discovered of Turing so far.
Is it so hard for the BBC to find someone who really knows Chinese? She wrote it wrong, and what Marcus wrote was grammatically wrong enough that a native speaker wouldn't think he is a native speaker.
What I know is that tens, perhaps hundreds of thousands of psychologists make a living by helping people understand themselves. Whether psychologists truly understand what their clients comprehend is questionable. And yet, or perhaps because of this, a complete understanding and awareness of what goes on in our minds seems to be absent. The question for me is whether self-awareness as a function of self-reflection by machines might be closer than we think.
+Ian Bralkowski It wouldn't understand the meaning of the language in the way we would. It would understand syntax rather than semantics. Functionally however we may not be able to tell the difference even though there is a difference.
+sgt7 That is exactly what machines like Google Translate do semi-decently: they follow the syntax rules of each language and apply its correlation to another, based on indexed material publicly available on the web, which is why it is not yet available in languages with scarce written material, like Guarani. A machine capable of semantic understanding like humans have would make for a true universal translator, but such a thing seems unachievable now.
She wrote 你是上是电脑, which makes no sense. 上 means on top of, above, or to start (like 上班). Neither of them speak Chinese. This makes it a much better demonstration.
… and? Ending the video right where it gets interesting. How about exploring the existing publications on the Chinese Room argument, for example the exchanges between Dennett, Searle and Hofstadter? The video would have much more value that way.
The more you think about and understand the Chinese Room experiment (or Chinese Room paradox, or Chinese Room dilemma), the more confusing and depressing it gets... It could mean that a machine can never really think... or even that it can convince us that it is thinking, but it never is... Damn, it's confusing...
Or even more scary or depressing might be that either a) it is the processing of certain kinds of information that creates consciousness, i.e. not requiring a meat brain at all, and all our phones, computers and even key fobs are conscious and we're turning them off and on, making them die and bringing them back to life, for grins and giggles. Or b) that I am the only truly existing entity in the universe and everyone and everything else is just a sophisticated Chinese 'man in a box' system.
Even if AI got so sophisticated that it could fully learn and adapt, it not only wouldn't be self-aware, it wouldn't be able to evolve: its way of thinking would never be able to change. Humans are not only aware of their own thoughts and own being, they are capable of making significant changes in themselves, even in their own thinking: the capacity to grow, to evolve. AI could never do this, because it would have to be sophisticated enough not only to adapt and learn, but to rewrite its core programming, its way of thinking and of interpreting itself and its environment. Maybe, in some strange way we can't yet fully comprehend, it may be possible with quantum computing, but it will never be possible with digital computers. No matter how amazing it may seem, it's nothing more than an incredibly sophisticated package of hardware and software.
@@UnknownDino Kinda, but we are definitely not only that; we have a certain level of consciousness and inventiveness that no type of machine that has existed until now has ever been able to possess (even ChatGPT), and with how well this system has been working and evolving so fast, there is no reason this fundamental characteristic of machine learning will change any time soon, or in the future. The point of the experiment is to show that even if our current machines had such an absurd amount of training that they could respond to any question exactly like humans do, it wouldn't mean they came to that conclusion the same way or actually understood the meaning of what they said. It's like knowing 2+2=4 and understanding why that is, versus being told that if you see 2+2 you should write that it =4. On a test they will both write the same 2+2=4, but the human would understand the reason why, and the machine would just have "memorized" it.
Haha, look at me guys, I can write Chinese characters. Isn't that so cool? Aren't I so cool? The fact that I can point out mistakes in a random person's handwriting totally has something to do with the actual thought experiment!
Kind of tiresome with these BBC shows that spend 3 minutes on an elaborate setup just to show someone copy-pasting answers from a predefined list of questions and answers. I think the nature of this demonstration could even make people miss the point, because it's so simplistic people will just go, "Yeah of course, if you have a book of questions and answers, sure. But what if you ask a question that's not in the book?"
what this thought experiment is asking is not if knowledge can be transferred without a common link, it is asking if the common link is already embedded within the linguistic nature of humans.
To speak of 'awareness' necessarily implies SENTIENCE: semantic meaning that goes beyond mere symbolic syntax… I truly understand why anyone who is wedded to a purely physicalistic view of things would want to resist all notions that the mind has its locus anywhere outside of the blob of oatmeal that fills our craniums. A dualistic understanding of consciousness (or 'mind' or 'spirit' or 'soul') necessarily pushes one into the realm of the transcendent, and that just sounds too much like religion, where God could possibly have created man in His image (Gen 1:26,27) even before He gave form to the creature (Gen 2:7)… That we are so much more than the sum total of our physiologies, our sociologies and our psychologies is a view of man that is simply unavoidable to anyone who is prepared to take an intellectually honest look at things.
It's at the point when you know exactly what you are saying while communicating, and what the meaning is of the things you are saying. Then it's your own decision to say a certain thing, to decide whether it's appropriate for a specific situation, and not only to follow a specific instruction: "set of signs (then) -> another set of signs".
The Chinese room argument is useless, because it can be applied to the brain as well. Replace the slit with sensory input, the handbook with the wiring of the brain, and the activity of the agent with neuron activity. In the same way you demonstrated that the Chinese room has no understanding, you have demonstrated it for the brain as well.
Good philosophical question: where is the threshold? But nowadays, Western countries want to impose their democracy, or more precisely their experience of democracy, onto China; that is to say, Western countries claim to know the Chinese experience of governing better than the Chinese do. Literally, Western countries have their universal "threshold".
The Chinese room argument is ridiculously naive, because it implicitly supposes that the Chinese speaker only sends the room simple one-time questions, which can theoretically be listed in a lookup table. Once the Chinese speaker asks follow-up questions that require context, or asks something related to mental states, or poses a little cognitive test, or does some joking, etc., this rule-based system immediately breaks down. Searle had no idea how language works.
Probably you are the one who doesn't get the point, though. This was a simple illustration video. The "lookup table" in Searle's thought experiment is a computer program, an arbitrarily long algorithm. It can be a library of millions of books, and the person has infinite time to answer. The person is the processor. Searle illustrated that given any algorithm, a processing unit, and a system that passes the Turing test, there would be no understanding involved. This is an argument against machine consciousness.
The fact that she misspelled so many words and failed to construct sentences in the right form had already undermined the foundations of this experiment.
@@bobrolander4344 The experiment itself is simple enough to be understood with words alone. This video seems like a textbook full of typos explaining machine learning 101.
Do machines need to understand or think? They are just like mixers, grinders and dishwashers. They are quite cool. Nowadays they even do things like prove theorems (graph coloring) or recognise images. AI is the new electricity. Get on with it. Maybe there will be a theory of AI, or maybe not. It may be just data, linear algebra and curve fitting. I think the way AI is packaged today appeals to the mathematical type of people. You understand mathematics when you understand things like sets, real analysis and calculus from starting axioms; otherwise you are just cramming for your mathematics test. I do not know about AI axiomatically, but Dr Max Tegmark has provided some physical basis in books like Life 3.0. Dr Stuart Russell has written a book called Human Compatible, mostly on problems like the trolley problem. I am now reading about AI weaponry and the AI race that has been ignited. China and the USA are leaders. India is catching up. NLP/NLU has great opportunity in India.
I was never impressed by this one. It is just a conscious being inside a room being denied information. How is that an analog for computer software? It isn’t.
It is. Text prediction is just statistics: the computer memorized which token statistically follows which, but it has no idea what words mean. It's not even supposed to; the program works fine without that knowledge.
@@ChristianIce My issue is that there is nothing in the thought experiment indicating that the system couldn't know what the words mean, only that it doesn't in Searle's telling of it. It is a weak experiment.
@@ChristianIce The Chinese Room doesn't understand Chinese. My contention is that there is nothing in the design that says it CAN'T EVER understand Chinese. If the inputs were better, the person inside COULD come to understand Chinese. That is the weakness of the thought experiment. Searle is trying to convince people that machines could never think like humans, but that inability isn't simply due to lack of input.
@@Subtlenimbus It can't ever understand Chinese, because there's no *new* element you can add to the equation at a later time. The conditions are static. If there were a way to learn Chinese, it would have known Chinese in the first place.
LOL, imagine someone 50 years ago giving an analogy for why we can't catalog our DNA, or why a future "smart phone" could never exist. The "Chinese room problem" is a creative excuse for a problem that hasn't been conquered yet. Of course, I know nothing lol.
The Chinese Room is only a reframing of the fundamental question: how can information processing give rise to consciousness? Replacing the CPU with a man accomplishes NOTHING. No information has been added and no claims have been debunked. We can assume that information processing gave rise to consciousness in the case of human beings, since consciousness is correlated with brain function (though still only roughly), so some alternate explanation or mechanism would be required to argue impossibility, and you would have to PROVE that computers are incapable of said function. No such mechanism has been discovered, and the computational theory of mind is our best guess at the moment. The Chinese Room is the equivalent of aerodynamic arguments against the possibility of airplanes, despite the obvious observation of birds flapping overhead. If human consciousness is possible, then Strong AI IS possible; we just don't understand how yet. This is a silly argument and a waste of time. Even granting the possibility of a human-equivalent intelligence without consciousness, how would you demonstrate the lack thereof? Completely unfalsifiable. Only the AI itself would be capable of knowing. This whole thought experiment is pure sophistry. An argument from ignorance. Chinese Room: "Strong AI is impossible!" Reasonable person: "Why?" Chinese Room: "Because we don't understand how to create it!" Reasonable person: "That's a dumb argument."
I find it ridiculous as well. The "following instructions" part highlights the misunderstanding: WHAT WROTE THE INSTRUCTIONS? In the case of strong AI, the instructions would be generated by the AI, which implies that the person in the room just following the instructions is not the strong AI but is instead the processor. It is, however, a useful thought experiment for delineating between strong AI and complex machine learning. But it certainly isn't a refutation of strong AI, and I don't understand why otherwise intelligent people act like it is.
One would suppose the BBC could afford to pay an actual Chinese-writing/speaking person. Apparently they can't even afford someone who could trace Chinese characters properly lol
The Chinese room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics is the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic of the interaction between the neurons. In the Chinese Room experiment, it is not the people carrying out the symbol manipulation who understand Chinese. It is the emergent behaviour that understands Chinese.

But what about Searle's argument that digital computers specifically cannot create consciousness? It depends on the program running on the digital computer. If it's a conventional deterministic program, then I agree that consciousness cannot arise from it. But if you run a neural network, which is a pseudo-deterministic program, then perhaps consciousness can arise from that. But even a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing Machine).

Gödel's Incompleteness Theorems are relevant to this discussion. Any mathematical formal system is comprised of axioms and theorems. The theorems are produced from the axioms, or from other theorems, according to the syntactic rules of the formal system. But for some formal systems, a peculiar thing happens: some of the true theorems of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these theorems are unprovable within the system (by using only the axioms and syntactic rules of the system). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable theorems that emerge from itself.
The provable theorems are analogous to the conventional deterministic programs running on a digital computer. The unprovable theorems are analogous to nondeterministic neural networks running on a digital computer.
The point of the thought experiment, I believe, was to show that even if a computer has general intelligence (or strong AI) it would still not necessarily be identical to the human mind. It could still lack consciousness. It knows how to achieve ends by adapting to its environment and choosing the right means. However, it may still not be aware. It could replicate human intelligence in every respect, but it still may not be conscious and therefore does not have a "mind" that would enable us to say it's just like us.
I think rather than say replicate (which implies they would attempt similar processes), it would be better to say it produces consistently near identical outputs from relevant inputs but through entirely different processes.
I concede that this would be a better formulation alright.
sgt7 this isn't really saying that it has consciousness, but that it can fool the human mind into thinking it is intelligent. If we look at AI through chatterbots, in this scenario the book represents the programmer's hard-coded canned responses, which just means they have coded in responses that are likely to make proper sense if matched correctly.
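For what it's worth, that "canned responses" idea is easy to sketch. Here's a toy illustration of my own (not how any real production chatterbot is built):

```python
# A minimal "canned response" chatterbot: the rule book is just a
# pattern -> response table, with no understanding involved.
RULES = {
    "how are you?": "I'm fine, thank you!",
    "what's your name?": "My name is Eliza.",
}
DEFAULT = "Interesting. Tell me more."

def reply(message: str) -> str:
    # Look the message up; the "speaker" never interprets its meaning.
    return RULES.get(message.strip().lower(), DEFAULT)

print(reply("How are you?"))          # matched canned response
print(reply("Why is the sky blue?"))  # falls back to the default
```

The table only ever matches surface symbols, which is exactly the point: coherent-looking output, zero comprehension.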
That's true. However, the original point of the thought experiment (created by John Searle) was to show that general intelligence can exist in a computer without requiring the computer to be conscious.
Is it possible to tell if something is conscious without being the thing itself?
It is good to see people in the comments really focused on the relevant aspects of the video...
It's funny this "Chinese girl" doesn't even write correct sentences on those slips.
and she wrote like a toddler lol
LOL
I wanted to comment this lol
The thing is, that girl does seem to have toddler handwriting, but she IS writing the strokes in the correct sequence(笔画). One who has never learned written Chinese would not know which stroke to begin with. Take a look at 00:34 where she writes the character 文 -- she is writing in the correct sequence of strokes which means she does know some sort of Chinese, just seems to be super bad at writing it.
@@chelsiewei1232 but wait a minute, the stroke order of 中 is not correct...
it's kind of cute to think about a computer this way🥺 they don't know what's going on, they're just trying to figure it out
The song throughout the video is: To build a home by A Cinematic Orchestra
Did anyone else who reads Chinese notice how the girl wrote the wrong character, which caused the whole sentence to not make any sense? Instead of 不 she wrote 上
+Theskybluerose I was wondering about that... she also doesn't write like a person who actually writes Chinese regularly. Some of her stroke orders are all wrong, and some of the strokes are not right.
ya I noticed that too that's so annoying
I bet nearly all the people on the production team can't understand Chinese, so no one knew
And this is very very basic Mandarin (of a 4 year old)! Yeah, it's a shame there is so much bias against foreign language learning
I also found it a little strange that they are calling it Mandarin @3:15. Mandarin is a spoken language/dialect. The written language is still Chinese, whether it's simplified or traditional.
+reicirith English vocab for this is corrupted by colonialism and ancient Chinese "exceptionalism." The written language was developed the same way the spoken one was: By the Mandarin scholars. What about other languages using Chinese characters in Asia in history, such as in Vietnam, Korea and Japan? Whether or not a language is a dialect is more of a political question rather than a scientific one.
“Your Chinese was perfect”
Me, who has only learned Chinese for 2 years: no… his wasn't
thank you so much for that! You made me understand the importance of such an experiment
"If a computer's actually following instructions is it really thinking? But then again what is my mind doing when I'm actually articulating words now? It is following a set of instructions."
Yeah, and even if it's true, we can start using real neurons (maybe rat ones) and literally try to recreate a living consciousness, which has its own interests.
The idea, at least with current "AI", is that it's really just using a bunch of instructions to form ideas and sentences, whereas a human being is influenced by feelings and experiences. Perhaps they even react to a rock in their shoe mid-sentence.
@@korinorizfeelings and experiences are still just input. There’s no reason they, or a form of them, can’t be fed into an AI.
The point of the experiment is to show that whether it was a human or simply a computer program generating outputs based on the input, the human doesn't understand Chinese, so you wouldn't say the computer program "understands" Chinese either. Concluding the Strong AI hypothesis is false.
Misspelled Chinese... Can you find somebody who really understands Chinese to do the writing?
It goes astray from Searle's argument at the 1:00 mark b/c the computer does not and cannot introspect about whether it understands, or knows what it is doing, etc. The machine cannot understand anything, says Searle, precisely b/c it is given only syntactic content and lacks any semantic content. What the actor in the video does is turn Searle's argument on its head: he supposes that the entity in the room is puzzling over and reflecting on its task. A computer does not do that; it simply follows a set of instructions that cannot in themselves have any meaning. It cannot therefore ever achieve understanding. That is the point of the Chinese Room thought experiment.
Yes. As soon as he shifted from "thinking" (the machine is not doing that) to "understanding" (the machine is definitely not doing that), he showed himself up
The book knows what it's doing.
True AI will be when the computer is the one asking the questions.
Are you a human? That's a question 😂
If we actually pay attention to the question this video asks, instead of whether or not the girl speaks Chinese, it's much more interesting. My view is that we, as people, associate words with images, emotions, events, or things which actually take place within the state of affairs of our universe; computers do not. They have no meaning to attach to these words. Our minds differ from a programmed AI because of this. It seems simple to me. Any philosophers in here?
I would say a computer could be programmed to output an image in association with a text or phrase; just look at Google's Deep Dream program. It takes an image and recognizes faces, then pulls details from the image to readjust/reprocess it. In my view, what makes us conscious is the ability to read our own outputs and process them over and over again. AI could easily do this with enough processing power.
"they have no meaning to attach to these words" I would say in this experiment they would have no meaning, but if you told a robot to kick the soccer ball, it could give the text meaning and perform actions on said meaning.
As for emotions, emotions are just your upper-level programming not understanding what is coming from the sub-levels of your brain. It's like sending inputs to a function and having the function return an output: you don't know how that output came to be, you just know that now it is difficult to talk, your eyes are watering, and your brain has weighed that you do not like that person.
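If it helps, the "read our own outputs and process them over and over" idea can be sketched as a trivial feedback loop (a made-up illustration of mine, nothing more):

```python
# Toy self-monitoring loop: the system's own output is fed back in as
# its next input, so it "reads" what it just produced.
def step(state: int) -> int:
    # Some arbitrary processing rule standing in for "the brain".
    return (state * 3 + 1) % 10

state = 7
history = []
for _ in range(5):
    state = step(state)     # the output becomes the next input
    history.append(state)

print(history)
```

Of course, nothing here is conscious; it only shows that re-consuming one's own output is mechanically cheap, which is the commenter's point about processing power.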
@@OBtheamazing how would you define consciousness?
@@JS-nr7te seeing what you see. Best example I can produce
But it's not fair to judge her, as she was likely pressured to do it by production. It is common for fluent Mandarin speakers born or raised outside China not to be literate (duh! it's a hard language). It would be cruel to judge her at all.
Why are you even addressing all the *off-topic morons?*
I can't speak for all the viewers, but I think I can at least offer some explanation for why people care. Sure, we get the idea of this video, and no, we're not judging her for not being able to write Chinese correctly. It's really not personal. We're unsatisfied with the BBC's attitude. Hiring a person who does not write correct Chinese shows that the BBC has little respect for Chinese culture. On the other hand, it also weakens the claim made by this video. It is significantly easier to trick a person who does not read (or write) Chinese well, isn't it? Not controlling this aspect of the experiment produces a weak result, and one could still say something like "a Chinese speaker who reads and writes Chinese correctly can always tell the difference between a real Chinese speaker and someone following the instructions". This demonstration can't reject that claim because it fails to meet the requirement of hiring such a speaker. Hope this can appease your anger a little bit.
@@shanjiawenzhao3775 It doesn't in any way weaken the claim. If someone fluent in Mandarin wrote the questions and read the answers, then assume the instructions were also written by someone similarly fluent in Mandarin. The Chinese room is a thought experiment.
Why is it cruel to comment on how a person is doing her job?
Could one not judge an actress for acting terribly? Or a driver who drives terribly?
If this person had Googled these characters she would not have failed so hard. You don't need to be literate to draw lines correctly.
I don't understand what you mean by saying she was likely pressured to do it. As in slave labour? She could not have turned it down? Would she not otherwise be recompensed for her labour?
Well to be fair those phrases were really easy, and the time it took for him to reply would make it apparent to the Chinese guy that he couldn't really speak Chinese
+darkless60 This was just to illustrate the point that a person who doesn't speak a word of Chinese can come out as a fluent speaker. The AI would scan those "if, then" phrases and reply in a matter of seconds (heck, probably a lot faster).
+MrArteez There's an upper bound to that though, a computational efficiency limit that makes if-then statements increasingly overtaxing. To completely replicate a Chinese speaker's proficiency using this method, the book would increase in size exponentially to account for all possible exchanges in its "vocabulary" (such a system wouldn't have one as we know it, but bear with me). This computational limit would come with an accompanying increase in the computing power required to actually run it in real time, or an exorbitant amount of time to search the rule book. In comparison, building information webs of syntax that gain "meaning" through reference to past/personal experience (semantics emerging from a sufficiently complex web of syntactic connections) is far more efficient to run computationally, at least within the limits of biology and evolutionary processes. Which is why humans actually "mean" what they say; it was simply more efficient than cataloguing every eventuality.
+MrArteez Continuing: it also makes philosophical zombies, while theoretically possible, near impossible to make work in reality. Our brains simply don't have the capacity to run such an utterly different operating system. Though given a vastly more powerful system (how much more powerful is difficult to grasp, since as informational load increases exponentially, so does the necessary computational power) plugged into a human body, you could create a truly "soulless" human. But we'll probably create AI through syntax webs and semantic loops rather than endless lines of programming.
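A quick back-of-envelope sketch of that exponential blow-up (all numbers invented for illustration only):

```python
# With V possible words and inputs up to n words long, a pure
# "if input, then output" rule book needs on the order of V**n entries.
V = 5000  # assumed vocabulary size (a made-up figure)

for n in (1, 2, 3, 5, 10):
    print(f"inputs of length {n}: ~{V ** n:.1e} book entries")
```

Even at modest sentence lengths the table dwarfs any physical book, which is why the lookup-table reading of the room is only a thought experiment, not an engineering proposal.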
My understanding of the Chinese Room experiment is that digital processing machines, such as computers, could become virtually perfect at syntax but never possess a scintilla of understanding of semantics; i.e. they could output responses that would pass any Turing Test and would sound highly intelligent, but the machine generating such outputs could never understand the meaning of anything it output.
Exactly!
This seems like a pretty simple and straightforward experiment, but what happens when a word has more than one meaning?
There are so many Chinese people who have mastered both English and Chinese, but the BBC found the one who cannot write such simple Chinese characters right.
But where did the reference table come from? The responses were presumably written by a conscious person. So the person outside the room is still communicating with a conscious person; it's just that the question they are asking was answered before they asked it.
the reference table is a computer program. The person is the processor.
@@zagyex You completely missed my point.
@@FourthRoot I don't think so. In the original form of the thought experiment the answers are not necessarily written in the books. The books contain such an algorithm that is capable of winning Turing's imitation game whatever it might be. Of course the program is written by a conscious person, but being human-made is kind of the definition of "artificial".
"But where did the reference table come from?"
Machine learning.
Statistics calculations about what token follows another.
Still, zero understanding of what the words mean.
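A minimal sketch of "statistics about what token follows another" (a toy bigram counter of my own, nowhere near a real language model):

```python
from collections import Counter, defaultdict

# Count which token follows which in a tiny corpus, then "predict"
# the statistically most common successor. No meaning involved.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(token: str) -> str:
    # Pure frequency lookup: "cat" follows "the" most often here.
    return follows[token].most_common(1)[0][0]

print(predict("the"))
```

The counts produce fluent-looking continuations while the program manipulates nothing but symbol frequencies, which is exactly the "zero understanding" claim above.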
@ChristianIce But why assume that machine learning is an unconscious process? And you can't say that the machine doesn't understand what the words mean. If you asked the system whether it felt it was conscious, wouldn't it require just as much processing as a human brain and produce the same answer? You are arbitrarily assuming silicon processing (or, in this case, unimaginably complex on-paper processing) is fundamentally different from biological computers. No argument has actually been made demonstrating that to be the case.
Understanding is perceiving things and responding accordingly, which again can be following rules. What this experiment tells me is that sometimes free will can be illogical. What goes beyond following rules is will and choice. And it doesn't have to be the result of understanding.
lol she wrote 說/ 说 wrong
Fingersinblender
She wrote McDonald's worker standing next to cash machine/alien sweating next to ledge wrong?
And?
Awesome.
I am Chinese and I find this amusing.
Basically Cleverbot. It actually feels like a real person too sometimes. :S
Oh come on, actually find someone who can write Chinese, okay? 你是上是电脑, what's that supposed to mean 😂
It means you are up are computer... lol
没去过我怎... (and anything following it)
also cannot be translated as "I do want to." 2:35
It's what Chinese people write when they need to reach the word count on their assignments.
Very interesting analogy for explaining to someone how Cleverbot works... I might just steal that :D
DeathAngel that wasn't the point, and the BBC didn't come up with this. It's a thought experiment by John Searle. Look it up, it's interesting.
WHY did they use To Build a Home as the background music??? Why does this video need to make me cry
This doesn't look legit... The girl doesn't know much Chinese, it seems
Shes a BBC
Monday Green your dirty mind is amazing, what else? Great confession btw!
@@Andrewg820 lol British Born Chinese in a British Broadcasting Corporation video
It is just a thought experiment... It is just to explore the implications of the theory.
...The girl's handwriting is absolutely terrible, it is even worse than mine... It's sort of obvious she doesn't speak Chinese as her first language.
Ok?
This doesn't disprove the video
The Turing Test game brought me here
haha good one
Searle criticized strong artificial intelligence: whoever is inside the room has performed syntactic and then semantic processing (error checking, typing, translation, etc.) of the message based on a grammar (lexer and parser), but whoever is outside the room holds the true meaning of the words (the user, the programmer, the creator of the programming languages). In practice, the meaning of the message stops at the entry and is returned to it upon exit; the purely mechanical process remains within. Artificial intelligence simulates human intelligence but does not reproduce it (I'm not talking about computer consciousness because that would be ridiculous).
I would say consciousness stems entirely from the output; otherwise, how would you know that everyone else is not also a zombie?
If a machine could emulate all of a brain's outputs, then it becomes a brain.
@@OBtheamazing That's just wrong
@@StackBrains it is very messed up indeed.
@@OBtheamazing A machine that can emulate brain outputs perfectly is still not a brain. The crazy thing is how the brain can produce consciousness, if it does produce it. It seems like consciousness, as in awareness, is something deeply related to the actual particles that compose our brain. I think it will be very hard to verify if an artificial intelligence is sentient, aware and conscious or not. We can't even demonstrate that human beings are conscious. We just know it, since we know what being conscious and aware means, since we experience that in first person, but we have no idea how this awareness can emerge. It really baffles me how atoms can combine together and produce colors, sounds, thought, imagination, dreams, feelings etc. since they do not exist. The world we experience does not exist, it's just the way our brains packs up all the sensorial information and creates a coherent environment in which we can make appropriate choices to survive. I mean, it is very likely that AI will some day be able to produce art, to come up with new stuff, to individuate patterns, create symbols and move inside this world just like us, but at the core there will be only electrons moving from A to B, and electrons alone are not conscious and they can't produce consciousness, meaning that AI would not be able to perceive the world in first person and create its own representation, because AI would be just a bunch of electrons. There's something missing in our understanding of reality. Consider the China Brain thought experiment, straight from wikipedia:
In the philosophy of mind, the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?
That's what would basically happen with computers if they simulated a brain. It's very unlikely that a bunch of people exchanging messages with each other would be enough to produce a sentient and aware being. It would just be information flowing and circling around. Where's the consciousness here?
It would be awesome to discover that awareness is deeply nested in nature, and that it leads to different degrees of consciousness. Scientists are exploring this path, but it's more likely they will find a much simpler answer in the future, when we have made incredible new discoveries in physics. At the moment, I really can't grasp this concept, but it made me realize we don't know anything about reality yet.
@@StackBrains " It would just be information flowing and circling around. Where's the consciousness here?" that is the consciousness
our brains are composed of billions of individual cells that simply pass information to each other. There is no definitive proof that anything or anyone is conscious besides yourself. Hence we must assume that if something appears to be conscious after much testing, then it is conscious. If a robot can simulate a brain and respond to stimuli like a brain then it is in essence a brain. Otherwise no one but yourself is conscious.
At 1:23, where did the reply or answer come from? Were particular answers already provided to the AI for particular questions by an expert system?
I can't find one sentence she wrote without error...... They could have just let the girl copy the text from another piece of paper. Or did she?
In this case, it is the room (with the person who can't understand Chinese inside of it, using the codebook) as a whole which understands Chinese.
It's almost like saying that a foot or arm can't walk, but not that the person as a whole can walk.
However, it is a response to the Turing Test, which is to find if AI ( the person ) knows Chinese.
💟
Then we can say, there is no such thing as understanding.
Understanding is always a team effort; no single party can have understanding on its own. The moment at least two systems communicate with one another successfully, there is understanding 💟
That doesn't work, unfortunately.
Where the heck did this girl learn to write 说 like that? It's "say", with a "say" radical!
The person who compiled the Chinese book of rules was intelligent and was able to assemble relevant answers, so there was consciousness.
+zebonaut smith Spellcheck has consciousness?
zebonaut smith sounds like you believe in "intelligent design" then?
But that would be the "programmer." Knowing the programmer has consciousness doesn't help us.
Is everyone going to ignore the beautiful music in the background?
It’s “To Build A Home” by The Cinematic Orchestra.
You’re welcome 😊
Once computers can create unique thought experiments that illustrate some previously unknown concept, then we will know they're conscious.
Hold my joint
No, it would still be acting according to some complicated algorithm
@@mrepix8287
A complicated algorithm doesn't discredit consciousness.
@@jakestockton4808 Only if it's an emergent property. The computer is no different from a piece of scratch paper I write equations on; the causality is in me, not in the equations or the scratch paper. Thus we can say code is nothing more than virtual, super-complicated scratch paper, totally inert except for the causality we give it.
@@mrepix8287
It sounds like you've made up your mind that computers can never be conscious. I don't see any reason to continue discussing this with someone who has no wiggle room for interpretation.
Good day.
0:25 what character is that, after 会? She wrote it wrong?
It's supposed to be 说, but she wrote it incorrectly.
Who wrote the rule book? Someone who understands Chinese. It's basically a very cumbersome test of the mind of the person who wrote that book.
Why I may disagree with the axioms of the Chinese Room, and with the way consciousness may be evaluated as a test of the observer's perception as opposed to proof of subjective discernment, aka preference, proof of individuality? Please God let me find someone nice to have a conversation with regarding this.
What perhaps interests me about the Turing test and the Chinese room question would maybe not be whether or not we are able to perceive it as sentient. Would a better test not be to see whether, when it is given two equally valid options to choose between, it is able to prefer one over the other and as per what reasoning, and whether any kind of solid consistent and replicable process went into it or whether it proved “subjectivity” within judgment, which was why the axioms of the Chinese room experiment perhaps did not sit as well with me as Turing. I love what little I’ve discovered of Turing so far.
I got an ai ad before this
Is this video part of a documentary or is it just this single clip?
Is it so hard for the BBC to find someone who really knows Chinese? She wrote it wrong, and what Marcus wrote was also grammatically wrong enough that a native speaker wouldn't think he is a native speaker.
What I know is that tens, perhaps hundreds of thousands of psychologists make a living by helping people understand themselves. Whether psychologists truly understand what their clients comprehend is questionable. And yet, or perhaps because of this, a complete understanding and awareness of what goes on in our minds seems to be absent. The question for me is whether self-awareness as a function of self-reflection by machines might be closer than we think.
A computer can really understand a new language! If a person programs it to.
+Ian Bralkowski It wouldn't understand the meaning of the language in the way we would. It would understand syntax rather than semantics. Functionally however we may not be able to tell the difference even though there is a difference.
+sgt7 That is exactly what machines like Google Translate do semi-decently: they follow the syntax rules of each language and apply its correlation to another based on indexed material publicly available on the web, which is why it is not yet available in languages with scarce written material, like Guarani. A machine capable of semantic understanding like humans have would make for a true universal translator, but such a thing seems unachievable now.
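As a toy illustration of syntax-only processing (this is NOT how Google Translate works; real systems are vastly more sophisticated), consider word-for-word dictionary substitution:

```python
# Naive "translator": substitute each word via a lookup table, with no
# grasp of meaning, grammar, or word order.
DICTIONARY = {"i": "我", "love": "爱", "you": "你"}

def translate(sentence: str) -> str:
    return "".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

print(translate("I love you"))
```

It happens to produce correct Chinese here only because the word order coincides; the moment semantics or reordering matters, pure substitution falls apart, which is the syntax-vs-semantics gap the comment describes.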
This is better than having him write. If he wrote, she'd instantly know he isn't Chinese
She wrote 你是上是电脑, which makes no sense. 上 means on top of, above, or to start (like 上班). Neither of them speak Chinese. This makes it a much better demonstration.
… and? Ending the video where it gets interesting.
How about exploring existing publications about the Chinese-Room argument, for example the ones between Bennett, Searle and Hofstadter?
The video would have much more value that way.
The more you think about and understand the Chinese Room experiment (or Chinese Room paradox, or Chinese Room dilemma), the more confusing and depressing it gets...
It could mean that a machine can never really think... Or even that it can convince us that it is thinking, but it never is... Damn, it's confusing...
Or even more scary or depressing might be that either a) it is the processing of certain kinds of information that creates consciousness, i.e. not requiring a meat brain at all, and all our phones, computers, and even key fobs are conscious and we're turning them off and on, making them die and bringing them back to life, for grins and giggles. Or b) that I am the only truly conscious entity in the universe and everyone and everything else is just a sophisticated Chinese 'man in a box' system.
Even if AI got so sophisticated that it could fully learn and adapt, it not only wouldn't be self-aware, it wouldn't be able to evolve; its way of thinking would never be able to change. Humans are not only aware of their own thoughts and own being, they are capable of making significant changes in themselves, even in their own thinking: the capacity to grow, to evolve. AI could never do this because it would have to be not only sophisticated enough to adapt and learn, but sophisticated enough to rewrite its core programming, its way of thinking and interpreting itself and its environment. Maybe in some strange way we can't yet fully comprehend it may be possible with quantum computing, but it will never be possible with digital computers. No matter how amazing it may seem, it's nothing more than an incredibly sophisticated package of hardware and software.
Well said
Aren't we just incredibly complex hardware with software that constantly updates based on outer inputs?
@@UnknownDino Kinda, but we are definitely not only that. We have a certain level of consciousness and inventiveness that no type of machine that has existed until now has ever been able to possess (even ChatGPT), and with how this system has been working and evolving so far, there is no reason this fundamental characteristic of machine learning will change any time soon or in the future.
The point of the experiment is to show that even if our current machines had such an absurd amount of training that they could respond to any question exactly like humans do, it wouldn't mean they came to that conclusion the same way or actually understood the meaning of what they said.
It's like knowing 2+2=4 and understanding why that is, versus being told that if you see 2+2 you should write that it equals 4. On the test they will both write the same 2+2=4, but the human understands the reason why, while the machine has just "memorized" it.
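That 2+2 point can be made literal in code (a sketch of my own): a memorized table and real computation are indistinguishable on the test questions, but only one generalizes:

```python
# Rote rule book: "if you see 2+2, write 4". No arithmetic performed.
memorized = {"2+2": "4", "3+3": "6"}

def computed(expr: str) -> str:
    # Actually performs the addition.
    a, b = expr.split("+")
    return str(int(a) + int(b))

# On the "test", their outputs are identical...
for q in ("2+2", "3+3"):
    assert memorized[q] == computed(q)

# ...but only computation handles a question outside the book.
print(computed("17+25"))
```

Asking something outside the table is exactly the "question that's not in the book" objection raised elsewhere in this thread.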
Haha, look at me guys, I can write Chinese characters. Isn't that so cool? Aren't I so cool? The fact that I can point out mistakes in a random person's handwriting totally has something to do with the actual thought experiment!
I have the same concern about living people
the questions and answers are all wrong
Kind of tiresome with these BBC shows that spend 3 minutes on an elaborate setup just to show someone copy-pasting answers from a predefined list of questions and answers. I think the nature of this demonstration could even make people miss the point, because it's so simplistic people will just go, "Yeah, of course, if you have a book of questions and answers, sure. But what if you ask a question that's not in the book?"
An example in lecture 2 of 人工智能 (Artificial Intelligence)
Does anyone know the song from the beginning of the video?
To Build a Home by The Cinematic Orchestra
If the girl uses some metaphor then the guy will be in trouble...
They should have had a real Chinese speaker who can actually write Chinese participate in the experiment lol.
There is no point. Only something alive can have sentience. The mind is flesh.
Zero Escape Anyone?
You sure, governor?
People here are missing the point... Chinese (or the correctness of it) is pretty much irrelevant in the context of this experiment
all those people complaining about her chinese grammar totally miss the whole point 0.0
No, we all get the point but have nothing else to comment about that fact
Right
sending the biang noodle character to the guy..
and she wrote the Chinese characters wrong, probably an immigrant, just ignore that ok..
The man wasn't a Chinese speaker, the book is
song?
Is this the same man from the BBC doc "The Secret Rules of Modern Living: Algorithms"?
Where are the Brazilians from Data Science Academy? Drop a like.
What this thought experiment is asking is not whether knowledge can be transferred without a common link; it is asking whether the common link is already embedded within the linguistic nature of humans.
They should upgrade this analogy to Google Translate.
To speak of ‘awareness’ necessarily implies SENTIENCE - semantic meaning that goes beyond mere symbolic syntax…
I truly understand why anyone who is wedded to a purely physicalistic view of things would want to resist all notions that the mind has its locus anywhere outside of the blob of oatmeal that fills our craniums - a dualistic understanding of consciousness (or 'mind' or 'spirit' or 'soul') necessarily pushes one into the realm of the transcendent, and that just sounds too much like religion, where God could possibly have created man in His image (Gen 1:26,27) even before He gave form to the creature (Gen 2:7)…
That we are so much more than the sum total of our physiologies, our sociologies, and our psychologies is a view of man that is simply unavoidable to anyone prepared to take an intellectually honest look at things
how the heck does this only have 119 thousand views after almost 7 years have passed?
Because it's more satisfying being worried about conscious killer robots and self-aware AIs than to learn the fundamentals of why neither is possible.
At the point when you know exactly what you are saying while communicating, and what the things you are saying mean. Then it's your own decision to say a certain thing and to decide whether it's appropriate for a specific situation, and not only to follow a specific instruction "set of signs -> another set of signs".
The Chinese room argument is useless because it can be applied to the brain as well. Replace the slit with sensory input, the handbook with the wiring of the brain, and the activity of the agent with neuron activity. In the same way you demonstrate that the Chinese room has no understanding, you demonstrate that for the brain as well.
So, you are using words without knowing what those words mean?
Cool.
So ChatGPT??
Same thing
Good philosophical question: where is the threshold? But nowadays, Western countries want to impose their democracy, or more precisely their experience of democracy, onto China; that is to say, Western countries claim to know the Chinese experience of governing better than the Chinese do. Literally, Western countries have their universal "threshold".
The Chinese room argument is ridiculously naive because it implicitly supposes that the Chinese speaker only sends the room simple one-time questions, which can theoretically be listed in a lookup table. Once the Chinese speaker asks follow-up questions that require context, or asks something related to mental states, or runs a little cognitive test, or does some joking, etc., this rule-based system immediately breaks down. Searle had no idea how language works.
Btw, I don't think functionalism is true. But Searle's arguments are on the level of a high-schooler's objection.
Probably you are the one who doesn't get the point, though. This was a simple illustration video. The "lookup table" in Searle's thought experiment is a computer program, an arbitrarily long algorithm. It can be a library of millions of books, and the person has infinite time to answer. The person is the processor. Searle illustrated that given any algorithm and a processing unit, in a system that passes the Turing test, there would still be no understanding involved. This is an argument against machine consciousness.
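The "rule book plus processor" setup described above can be sketched in a few lines. This is a minimal toy illustration, not Searle's actual formulation: the rules and replies are invented for this example, and the point is only that the lookup step involves no knowledge of what the symbols mean.

```python
# An invented fragment of a Chinese Room "rule book": the person in
# the room blindly matches incoming symbol strings to outgoing ones.
# Nothing in this lookup requires knowing what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你是不是电脑？": "当然不是。",  # "Are you a computer?" -> "Of course not."
}

def process(slip_of_paper: str) -> str:
    """The person in the room: match the symbols, copy out the answer."""
    # Fallback symbol string, also copied blindly: "Please say that again."
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")

print(process("你好吗？"))  # a fluent-looking reply, zero understanding
```

The processor here (the `process` function, standing in for the person) could be swapped for any other interpreter and the room's behaviour would be identical, which is the intuition the thought experiment trades on.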
If someone knows how language works it is John Searle. What an arrogant comment.
I love how you people are focusing on the Chinese girl and completely missing the point of this experiment... faith in humanity lost again.
YouTube is like a giant classroom of lazy idiots screaming from the back, trying to distract from their own stupidity.
The fact that she miswrote so many characters and failed to construct sentences in the right form already undermines the foundational premise of this experiment.
@@bobrolander4344 Good point..
@@bobrolander4344 The experiment itself is simple enough to be understood with words alone. This video seems like a textbook full of typos explaining machine learning 101.
@@pd5784 How?
I speak a Dravidian language. I was told this in the Linguistics MOOC.
Who else is here after playing The Turing Test?
Do machines need to understand or think? They are just like mixers, grinders and dishwashers. They are quite cool. Nowadays they even do things like prove theorems (graph coloring) or recognise images. AI is the new electricity; get on with it. Maybe there will be a theory of AI, or maybe not. It may be just data, linear algebra and curve fitting. I think the way AI is packaged today appeals to mathematically minded people. You understand mathematics when you understand things like sets, real analysis and calculus from starting axioms; otherwise you are just cramming for your maths test. I do not know about AI axiomatically, but Dr Max Tegmark has provided some physical basis in a series of books like Life 3.0. Dr Stuart Russell has written a book called Human Compatible, mostly on problems like the trolley problem. I am now reading about AI weaponry and the AI race that has been ignited. China and the USA are leaders; India is catching up. NLP/NLU has great opportunity in India.
How does he know the brain follows a set of instructions?
It seems to me an unsubstantiated hypothesis.
What the heck, that girl wrote a sentence that makes zero sense....
I was never impressed by this one. It is just a conscious being inside a room being denied information. How is that an analog for computer software? It isn’t.
It is.
Text prediction is just statistics: the computer memorized which token statistically follows which, but it has no idea what the words mean.
It's not even supposed to, the program works ok without that knowledge.
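The "just statistics" claim can be made concrete with a toy next-token predictor. This is a minimal sketch with an invented corpus, not how any particular product works: it counts which token follows which and predicts the most frequent successor, with no representation of meaning anywhere.

```python
# A toy bigram predictor: "text prediction is just statistics".
# The model only counts successor frequencies; it never knows
# what any word means.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each other token.
successors: defaultdict = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the statistically most common next token, or <unk>."""
    counts = successors[token]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scaling the same idea up (longer contexts, learned weights instead of raw counts) changes the statistics, not the nature of the operation, which is exactly the disagreement in this thread.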
@@ChristianIce my issue is that there is nothing in the thought experiment indicating that the system couldn't know what the words mean, only that it doesn't in Searle's telling of it. It is a weak experiment.
@@Subtlenimbus
It doesn't, that's the premise.
That's how thought experiments work
The premise can't be a lie, otherwise the exercise would be futile.
@@ChristianIce the Chinese Room doesn't understand Chinese. My contention is that there is nothing in the design that says it CAN'T EVER understand Chinese. If the inputs were better, the person inside COULD come to understand Chinese. That is the weakness of the thought experiment. Searle is trying to convince people that machines could never think like humans, but that inability isn't simply due to lack of input.
@@Subtlenimbus
It can't ever understand Chinese because there's no *new* element you can add to the equation at a later time.
The conditions are static.
If there were a way to learn Chinese, it would have known Chinese in the first place.
......Weird
你是不是电脑? (Are you a computer?)
This is more interesting because neither of them 会说 (can speak) Chinese.
I believe consciousness is only an emergent property. It's not real.
define real.
A bad attempt at illustrating this thought experiment.
No biological processes, no consciousness.
LOL, imagine someone 50 years ago giving an analogy for why we can't catalog our DNA, or why a future "smart phone" could never exist. The "Chinese room problem" is a creative excuse for a problem that hasn't been conquered yet.
of course I know nothing lol.
The Chinese Room is only a reframing of the fundamental question: how can information processing give rise to consciousness? Replacing the CPU with a man accomplishes NOTHING. No information has been added and no claims have been debunked. We can assume that information processing gave rise to consciousness in the case of human beings, since consciousness is correlated with brain function (though still only roughly), so some alternate explanation or mechanism would be required to argue impossibility, and you would have to PROVE that computers are incapable of said function. No such mechanism has been discovered, and the computational theory of mind is our best guess at the moment. The Chinese Room is the equivalent of aerodynamic arguments against the possibility of airplanes, despite the obvious observation of birds flapping overhead. If human consciousness is possible, then Strong AI IS possible; we just don't understand how yet. This is a silly argument and a waste of time.
Even granting the possibility of a human equivalent intelligence without consciousness, how would you demonstrate the lack thereof? Completely unfalsifiable. Only the AI itself would be capable of knowing. This whole thought experiment is pure sophistry. An argument from ignorance.
Chinese Room: "Strong AI is impossible!"
Reasonable person: "Why?"
Chinese Room: "Because we don't understand how to create it!"
Reasonable person: "That's a dumb argument."
I find it ridiculous as well. The "following instructions" part highlights the misunderstanding: WHAT WROTE THE INSTRUCTIONS? In the case of strong AI, the instructions would be generated by the AI itself, which implies that the person in the room, just following the instructions, is not the strong AI but is instead the processor.
It is however a useful thought experiment to delineate between strong AI and complex machine learning.
But it certainly isn't a refutation of strong AI and I don't understand why otherwise intelligent people act like it is.
@@AvNotasian If that's what you think the Chinese Room argues, then you don't even understand the argument.
Her Chinese handwriting is worse than mine, and I don't speak Chinese natively, I'm in year 2 of learning it at school
Please don't say your Chinese is perfect while your own Chinese is as shabby as hers.
Hantao Jian But anyone who has elementary knowledge of written Chinese wouldn't make those mistakes 😂
lol nice handwriting.
she probably can't speak Chinese
Ok? That's not the point of the video
Couldn't watch because of the unnecessary music
One would suppose the BBC could afford to pay an actual Chinese-writing/speaking person.
Apparently they can't even afford someone who could trace Chinese characters properly lol
This is the dumbest argument ever. Maybe it made sense in the '40s, but even that's a stretch.
If it's so dumb, refute it.
The argument was published in 1980 and it's still not even close to being refuted.
The Chinese room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics is the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic from the interaction between the neurons. In the Chinese Room experiment, it is not the people carrying out the symbol manipulation who understand Chinese. It is the emergent behaviour that understands Chinese.
But what about Searle's argument that digital computers specifically cannot create consciousness? It depends on the program running on the digital computer. If it's a conventional deterministic program, then I agree that consciousness cannot arise from it. But if you run a neural network, which is a pseudo-deterministic program, then perhaps consciousness can arise from that. But even a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing machine).
Gödel's Incompleteness Theorems are relevant to this discussion. Any mathematical formal system is comprised of axioms and theorems. The theorems are produced from the axioms, or from other theorems, according to the syntactic rules of the formal system. But for some formal systems, a peculiar thing happens: some true statements of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these statements are unprovable within the system (using only the axioms and syntactic rules of the system). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable statements that emerge from itself. The provable theorems are analogous to conventional deterministic programs running on a digital computer; the unprovable statements are analogous to nondeterministic neural networks running on a digital computer.
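For reference, the result being leaned on here is Gödel's first incompleteness theorem, which can be stated (informally, in the arithmetical setting) as:

```latex
% Gödel's first incompleteness theorem (informal statement)
Let $F$ be a consistent, effectively axiomatized formal system capable of
expressing elementary arithmetic. Then there exists a sentence $G_F$ in the
language of $F$ such that $G_F$ is true in the standard model of arithmetic,
but neither $G_F$ nor $\neg G_F$ is provable in $F$.
```

Whether this licenses the analogy to neural networks is contested; the theorem itself is about provability in formal systems, not about determinism or learning.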
I am 愛 (love)
I am glad my loving wife agreed that the person does not understand Chinese. 🙏🙏👍👍❤️❤️❤️ Count 2 for Searle.