The famous Chinese Room thought experiment - John Searle (1980)

  • Published Jan 4, 2025

Comments • 2.1K

  • @BrianWilcox1976
    @BrianWilcox1976 ปีที่แล้ว +628

    For me it’s like saying, “one water molecule is not wet, so no matter how many you put together you’ll never get wetness” (it’s an emergent property)

    • @Bronco541
      @Bronco541 ปีที่แล้ว +42

      That's what I was thinking. Do we know, or are we able to know, to what degree (if any) being aware is an emergent property of just a "simple" algorithm?

    • @Bronco541
      @Bronco541 ปีที่แล้ว +27

      Or I wonder if Searle is right about form never truly being enough to get meaning... What if meaning is nothing more than form somehow... Don't ask me to elaborate, I'm just spitballing dumb ideas

    • @REDPUMPERNICKEL
      @REDPUMPERNICKEL ปีที่แล้ว +18

      @@Bronco541 'Meaning' is not 'form' but
      they are related.
      I mean,
      just look at the 'form' of this sentence.
      The meaning of that sentence is *'encoded'* in its form.
      When that sentence got inside you it became a process.
      Actually, I think you'll agree, that sentence became
      a sub process of your being conscious process.
      In consequence your thoughts are now somewhat different.
      If you understand thinking to be behavior then
      you can see that its meaning has influenced your behavior.
      This is close to what 'meaning' means.
      The above is not written as well as it deserves but
      I believe it will affect your thoughts such that
      your thoughts may become more accurately reflective
      of actuality,
      imho naturally.

    • @franzmuller235
      @franzmuller235 ปีที่แล้ว +31

      @@Bronco541 That's what came to my mind also. How does a newborn learn? It learns to recognize forms, first his mother's head and his mother's breast, and then other forms. No one ever teaches a baby meaning to start with. The child first learns meaning by recognizing forms of all kinds, and by recognizing how the forms interact with him and with each other.

    • @franzmuller235
      @franzmuller235 ปีที่แล้ว +3

      @@yongkim3333 No, of course you can construct a sensor that senses wetness. You don't need a human, not even an animal.

  • @chadcurtis1531
    @chadcurtis1531 ปีที่แล้ว +24

    Douglas Hofstadter gave a great illustration of the systems argument in "Gödel, Escher, Bach" in one of his dialogues. One character, "Aunt Hillary", is an anthill that can communicate in English with an anteater. The anteater can "read" the anthill and deduce its meaning. While none of the individual ants can understand language, the system as a whole can. The dialogue is quite profound, and I think it illustrates quite well how semantics can arise out of syntax.

  • @AlejandroPiad
    @AlejandroPiad ปีที่แล้ว +128

    As a college professor of Computability Theory, let me just say how brilliant your exposition of the Turing Machine and the Chinese Room experiment is, within the short time you had and of course taking into consideration the target audience. I spend the better part of 3 full lectures talking about Turing machines just to be able to formulate the Chinese Room experiment at the end.

    • @sirrealism7300
      @sirrealism7300 ปีที่แล้ว

      What college do you teach at?

    • @sentinel2.064
      @sentinel2.064 ปีที่แล้ว +3

      @@sirrealism7300 he’s definitely not a professor, his starting sentence is “As a” 🤣

    • @vytasffbismarck7001
      @vytasffbismarck7001 ปีที่แล้ว

      @@sentinel2.064 he's*, pot calling the kettle N word cause its high

    • @Juan_lauda
      @Juan_lauda ปีที่แล้ว +1

      As Aristotle informed his students, an opinion based on Kudos has some value, but it is the lowest value of all.
      But a college professor would know that.

    • @pauls3075
      @pauls3075 ปีที่แล้ว

      @@sentinel2.064 He definitely IS a 'professor', but your narrow minded view of the world doesn't allow for the fact that in Cuba the word professor means 'teacher'. If you'd bother to check his youtube homepage you would have been more informed. I'm guessing YOU are a Turing machine because you clearly don't understand what is going on.

  • @ericfolkers4317
    @ericfolkers4317 ปีที่แล้ว +194

    One problem I have with the Chinese Room is that you could create a similar metaphor for the machine that is the human mind. You have lots of people instead of one, each representing a neuron. They have a list of rules where if one of their neighbors hits them with a tennis ball, there is a rule of who you should or should not throw your own tennis ball at. Some people will have levers that will fill the room with chemicals people can detect (maybe by smell or sight or mechanical detectors) and people's rule books will have different rules depending on what chemicals are in the room. There might be plenty of fuzzy rules like, "if there's not much of chemical X in the room and you get hit with Sarah's tennis ball you can throw a ball at Tom or not, but if there is a whole lot of chemical X, you really need to throw that ball," or, "if chemical Y is filling the room pretty much all of the time, you can ignore it unless there's really a whole lot more than normal." Some people would have access to input information in some way and some people would be able to perform outputs. Is there any reason to think that a human brain couldn't be modeled this way, if we had enough people with enough tools (like the tennis balls and chemicals) and detailed enough instructions?
    Obviously none of the people working in the model brain would need to understand the meaning of any of the inputs; they might not even be able to tell the difference between an input from the outside world and something that another worker has done. But the system as a whole could take inputs and give outputs that seem to demonstrate understanding. If we reject the systems response as Searle does for his Chinese room, then we can't say the system understands any of the inputs. Since the system works the same way as our brain, how can Searle say that our brains can understand any semantic meaning? Wouldn't he require some kind of magic stuff that makes our brains work somehow differently from the model with people throwing tennis balls?
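    A minimal sketch of the "workers with rule books" model described above (purely illustrative; the weights, thresholds, and numbers are made up for the toy example, not anything from Searle or the video): each unit follows only a local numeric rule, yet the network as a whole maps input patterns to output patterns.

    ```python
    # Toy network of threshold "workers": each worker sums the balls thrown
    # at it and throws its own ball only if the total crosses its threshold.
    # No rule anywhere refers to meaning, only to counts and thresholds.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))             # how hard worker i's ball hits worker j (made-up numbers)
    thresholds = np.array([0.5, 0.0, 1.0])  # each receiving worker's personal rule

    def step(balls_in):
        """One round of rule-following: weighted sum, then compare to threshold."""
        return (balls_in @ W > thresholds).astype(float)

    print(step(np.array([1.0, 0.0, 1.0, 0.0])))  # an input pattern mapped to an output pattern
    ```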

    • @donaldb1
      @donaldb1 ปีที่แล้ว +54

      Well, yeah. Roughly, Searle thinks his thought experiment shows that brains don't exactly work like that. There must be something else about them, which we haven't discovered yet, which produces "real meaning", or original intentionality, as Searle calls it.

    • @mottykadosh
      @mottykadosh ปีที่แล้ว +8

      Brilliant, just nailed it; the whole room experiment is just a joke.

    • @jimjimmy2179
      @jimjimmy2179 ปีที่แล้ว +56

      Except that you are making one very important assumption when writing this comment, which is:
      That human intelligence is a "program" telling neurons how to manipulate themselves and that's all there is. So basically you are making a circular argument, whereby you start by assuming such a "program" exists and cycle back by stating that that's how it can work. I.e. your claim doesn't have any logical proof, as opposed to the Chinese Room, which shows the difference between intelligence (i.e. a capacity to understand meaning) and following a rule book without understanding. It shows it by drawing logical conclusions from well-known definitions OUTSIDE of the Chinese Room, as opposed to your argument, which justifies the brain working that way by simply assuming that that's what it does.
      Besides, the majority of brain function is chemical and we know very little about it. The reason people are obsessed with neurons is that they communicate using electrical impulses that are easily measurable.
      There's one very important distinction as well:
      In the Chinese Room story one knows exactly where the man takes his decisions from: it's the rule book.
      In a real human being we can measure all sorts of brain activities, either induced or decided by the person. However, we are not able to measure the actual act of the very decision. E.g. we (or our neuroscience) have no clue where your decision to write that comment physically comes from :) even though you can mentally explain it.

    • @ericfolkers4317
      @ericfolkers4317 ปีที่แล้ว +30

      @@jimjimmy2179 Thanks for the well thought out reply. I will point out that my example does take into account the chemical aspects of the brain, though that isn't central to our discussion.
      I'm not exactly saying that human intelligence is a program, but I am saying it can be modeled by a program. But if that's not the case, what alternative would there be? Keep in mind that my model allows for fuzzy and probabilistic instructions. We can keep expanding my model to be more complex as needed, the only thing we can't add is some worker in the system that, by himself, does understand the inputs. But then how does that one worker understand? If he understands then wouldn't he need a mind? Is there an aspect of his own mind that is capable of understanding on its own? Either we need some kind of "final understander" or we end up with an infinite regress.
      What could this "final understander" be? If we take it to be a system of cognitive processing parts then we have to accept the systems response. Is there some glob of brain stuff that does the understanding for us which isn't made up of some sort of system by which it does the understanding? Perhaps this is my failure of imagination but that sounds completely absurd to me. This glob would have to be made up of smaller particles right? If you scrape away enough of those particles, wouldn't it at some point lose its ability to understand? Unless the glob was actually just a single atom or single quark.
      So if the "final understander" isn't physical what could it be? A non-physical mind perhaps. If we take a mind to be a the emergent property of a brain, or other collection of physical bits then the mind is just another system. So if we take a mind to be an emergent property of physical things, and a mind understands, then we are back to accepting the systems response.
      If there is some part of the mind that is somehow more than just the processes and systems of physical things, then perhaps we are getting somewhere. But what would this part of the mind be? A soul or other sort of "real magic" as Daniel Dennett would call it? What else could it be?
      Unless I'm missing something we have reached a sort of dichotomy: either we believe in magic or we accept the systems response. If you need to posit some kind of magic to reject an argument, that's an indication that the argument is very strong.
      I suppose other possibilities are that there really are those single quarks that can understand, which is too ridiculous to consider, or that "understanding" is not something that anyone or anything is actually capable of. If that's the case, we still seem to understand things and talk about the universe as if we understand things, and if the Chinese room (as a system) seems to understand too, then we can treat it as if it understands things the same as us without worrying about whether it actually understands anything (since actual understanding is impossible anyway).

    • @arturoidoyagamolina5109
      @arturoidoyagamolina5109 ปีที่แล้ว +7

      @@ericfolkers4317 You formulated my thoughts in a way I'm totally incapable of doing lol. Assuming that's the answer, the systems response, then I guess it takes a lot of the mystery and superiority out of human existence; it liberates us in a sense. Idk. We would stop looking at animals, or at any AI in the near future, as inferior beings, or "not quite human", "just not really sentient". It would open up a lot of ethical questions about how we treat future advanced AI(s) as well.

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 ปีที่แล้ว +6

    Your expository style is energetic and inspiring, and I applaud you and your channel. Thank you.

    • @chrism.1131
      @chrism.1131 4 หลายเดือนก่อน

      25:10 what if there is no such thing as semantics? We are all just good enough at syntax to appear from the outside as though we are actually conveying semantics. Machines may eventually get just as good at this as we are.😜

  • @peves-
    @peves- ปีที่แล้ว +78

    I don't think squiggle and squaggle are racist in the way he was using them. I think that he was trying to make sense of two symbols that are foreign to him by giving them names. To him they are scribbles and squiggles on paper. He can't differentiate what they mean, but for his purposes he needed to call them something.

    • @peterkiedron8949
      @peterkiedron8949 ปีที่แล้ว +8

      This proves that Kaplan is a machine that does not know the meaning of the words it is using.

    • @stickman1742
      @stickman1742 ปีที่แล้ว +22

      Of course it wasn't. I guess this guy is just another person in fear of being cancelled. What sad environments some people are forced to work in, living in constant fear.

    • @spanglestein66
      @spanglestein66 ปีที่แล้ว +7

      My sentiments exactly... anything can be turned into an issue of race these days.
      We can thank CRT for that

    • @moto.machine1
      @moto.machine1 ปีที่แล้ว +4

      It’s not racist.
      It could be construed by some as disrespectful to their culture.
      I’m sure the Chinese have some less than stellar words for how our writing system looks to them.
      Others would say it’s a remnant of colonial thought. That any other writing system outside of the one you are familiar with in the English-speaking world is Squiggle Squaggle, a colonial-era contemptuous term showing a disdain for foreign languages/writing systems, and therefore it’s racist because colonialism was racist.
      Let’s consider the time when this thought experiment was published in a 1980 article by American philosopher John Searle. Born in 1932 he was obviously trying to use what would have been the most indecipherable, exotic, and probably the least studied language in the United States at that time. At least compared to European languages. The example was to show a language that was so different to the average student with a writing system that was unlike anything they were ordinarily used to.
      (Except maybe Chinese students).
      I’m sure we can come up with a name more fitting today's social climate:
      The foreign language room?
      The alien room?
      The alien language room?

    • @vdanger7669
      @vdanger7669 ปีที่แล้ว +7

      Love Kaplan but disappointed he couldn't pass up some good virtue signaling. We live in peak woke times though and I suspect he is a product of his academic herd environment.

  • @brandyraccoon1473
    @brandyraccoon1473 ปีที่แล้ว +8

    At 18:45, you mention the squiggles seem a little racist. And perhaps you think that's just an opinion. As a Chinese person, I can tell you it is not the least bit racist. And I can prove it objectively. Note that "squiggle" is syntax and "racist" requires semantics. So by your very own argument, syntax is not semantics, therefore squiggles cannot be racist. If that doesn't convince you, I have a second argument. Something that is "racist" is directed at a particular group. It cannot be general to all groups, or it stops being racist. Describing something as squiggles is a general descriptor. It could describe Chinese or any number of written languages. Therefore, describing a written language as squiggly is not racist.

  • @dooleyfan
    @dooleyfan ปีที่แล้ว +9

    Speaking of Turing, what I found interesting is that the huts at Bletchley Park were each essentially an isolated Chinese room, where the codebreakers were responsible for different steps in the decryption process, following syntactical rules but not aware of the meanings behind their outputs, with the exception of the people at the end of the process.

  • @magellan500
    @magellan500 ปีที่แล้ว +67

    This reminds me of Chomsky’s famous example of how syntax and semantics are separate, and that you can create grammatically correct sentences that are meaningless, which was “Colorless green ideas sleep furiously.”

    • @justifiedhomicide5730
      @justifiedhomicide5730 ปีที่แล้ว +6

      Quite frankly, good point.
      Just because transistors do perfect syntax doesn't mean by default that they can or can't do the 'correct meaning'.
      To a transistor there are two meanings, yes and no.
      To a neuron there is a range of meanings, almost like any number between -1 and 1.
      Even though neurons have no goddamn clue what the emergent simulation of the brain is, (despite the lack of ""semantics""), we still exist.
      Even though transistors have no goddamn clue what gravity is, they can still correctly simulate a falling object.

    • @JohnDlugosz
      @JohnDlugosz ปีที่แล้ว

      Wolfram's hour-long livestream about how ChatGPT works included examples of this; he gave one example of "The chair is happy". I thought that his examples, this one in particular, is _evocative_ of meaning, and could in fact be meaningful in context. So I offered it as a writing prompt to ChatGPT, asking to write in the style of various authors. I recall many of Hans Christian Anderson's stories give a point-of-view and cognition to some object, and ChatGPT(3.5) was able to channel this. For some other writers, it was more straightforward magical fantasy. For Isaac Asimov, the chair was cybernetic, filled with sensors and microprocessors so it could adapt to the needs of its users.
      Another time, I asked ChatGPT to generate 10 lines of nonsense. Interestingly, it was not gibberish but kept a syntactic correctness that only doesn't make sense when you consider the meaning overall, as with your (Chomsky's) examples. But, several of them sounded very poetic, and I directed ChatGPT to write a child's bedtime story using one of those "nonsense" lines as a starting point. Every night, the sun sang lullabies to the stars... hearing the line, we craft an entire context to _make_ it make sense.

    • @Stopcolonizinglebanon
      @Stopcolonizinglebanon ปีที่แล้ว +1

      That's an 🎉inadvertent definition of woke

    • @kevinscales
      @kevinscales ปีที่แล้ว +1

      It's grammatically correct but doesn't get all of the forms/patterns of the language correct. If we look at syntax as all that is formalizable about the language, then you can only get meaningful sentences from a machine that accurately manipulates those forms. I think meaning IS in the form, it's just difficult to grasp what that form is. Computers are getting pretty good at it though.

    • @pumkin610
      @pumkin610 ปีที่แล้ว

      Luminous diagonal virtues eat destiny? Formless brave trees talk geometrically?
      Or as Aizen would say
      "Seeping crest of turbidity. Arrogant vessel of lunacy! Boil forth and deny! Grow numb and flicker! Disrupt sleep! Crawling queen of iron! Eternally self-destructing doll of mud! Unite! Repulse! Fill with soil and know your own powerlessness!"

  • @jamesoakes4842
    @jamesoakes4842 ปีที่แล้ว +38

    I find that one of the things I keep coming back to when processing the Chinese Room experiment is that there's a big similarity to some debates between Creationists and Atheists. With Creationists, they will often challenge Atheists to explain what was the thing that touched off the beginning of the Universe, the "unmoved mover", which they reason must be outside of the universe as we know it to not violate known physics, therefore the existence of God, or some other supernatural entity, is proven. Similarly, with the existence of the Chinese Room, you can point to one element that needs more explanation: the instruction manual. If it can truly teach the individual to understand how to respond to a Chinese symbol well enough that it duplicates the responses from someone with a semantic understanding of Chinese, then I think it's impossible that the manual was created without a semantic understanding of Chinese being involved. If said understanding is inserted into the room in the form of the manual, then it's not really a closed system reliant solely on syntax.
    ...of course questioning the premise of a thought experiment isn't exactly revolutionary thought.

    • @charlesreid9337
      @charlesreid9337 ปีที่แล้ว

      The problem with creationist... and radical atheist "logic" is that all their arguments require strawmen.
      Let's consider the big bang... per a creationist that should prove god exists. Someone had to make it happen, so god must exist, right? No, we do not know. There are many possible explanations, including god; science has no opinion on what it doesn't know.

    • @adriansmith6124
      @adriansmith6124 ปีที่แล้ว

      But I think what the experiment tries to show is not that understanding doesn't exist outside the room, or in the manual, but that the computer using Turing calculations cannot understand it.

    • @AndyCampbellMusic
      @AndyCampbellMusic ปีที่แล้ว

      There are only atheists? Nobody can or does believe in all possible imagined gods?
      If there was nothing then there would be no one to ask... Why is there nothing.
      If the claim is something always existed. Then so can something else?
      The universe, is sufficient unto itself, to explain itself and everything within it. 🤷‍♂️ If it wasn't there, there would be nobody to ask why it wasn't.

    • @cosmictreason2242
      @cosmictreason2242 ปีที่แล้ว +2

      @@AndyCampbellMusic no it's not and no they can't. Only uncaused things can exist acausally. The universe is caused. Therefore it isn't self-existent. You claim the universe is self-existent, but you simultaneously assert that the whole is the sum of its parts, and you can't point to any part of the universe that's confirmed to be self-existent. This isn't even the cosmological argument, it's just a refutation of your denial.

    • @ronald3836
      @ronald3836 ปีที่แล้ว

      Not even Chinese babies are born with knowledge of Chinese, but with nothing more than syntactic exposure they acquire real "understanding" of the Chinese language.

  • @Cloudbutfloating
    @Cloudbutfloating 3 ปีที่แล้ว +26

    @Jeffrey Kaplan I have already stumbled few times on your video lectures which i must say helped me allot trough writing the paper about Philosophy of Mind. You transfer the knowledge so fluently and yet don forget to mention important details. Thank you for excellent guidance in this discipline that catches my interest in whole.

    • @annaclarafenyo8185
      @annaclarafenyo8185 ปีที่แล้ว

      He explains it correctly, it is just a form of academic fraud.

    • @xbzq
      @xbzq ปีที่แล้ว +4

      A lot. Allot is to assign or appoint a resource.

    • @notanemoprog
      @notanemoprog ปีที่แล้ว +1

      @@xbzq Yeah but that's second-guessing the OP's spelling prowess and also leaving the following word out of the analysis. Plain reading is clearly that Kaplan's lectures helped to apportion a long shallow often V-shaped receptacle for the drinking water or feed of domestic animals

    • @xbzq
      @xbzq ปีที่แล้ว +1

      @@notanemoprog You got it trough and trough. I like it allot.

    • @xbzq
      @xbzq ปีที่แล้ว

      @@notanemoprog I was thinking the same thing about you humans! More lifelike every day!

  • @antonnie5177
    @antonnie5177 4 ปีที่แล้ว +17

    you are saving my exam of next week

  • @bojens865
    @bojens865 ปีที่แล้ว +65

    I met Searle a few years ago. I had had two car accidents resulting in concussions, and regaining consciousness in the hospital, as if awakening from a dream. The third time I was hit, but walked away; I waited to wake up in the emergency room again, but I never did.
    As it happened, Searle was speaking at the University the next day and I attended his lecture. He spoke of the Chinese room, which I had read in his book years before. After the talk, there were snacks and coffee in an adjoining room. Searle and his wife were sitting at a table by themselves and I asked to join them. I told them my experiences with loss and regaining of consciousness. Searle said the same thing happened to him. He hit his head skiing and made it back to the lodge with no memory of having done so. He was treated for concussion, after skiing for a mile while unconscious. At this point, philosophy students and professors showed up and started bouncing jargon off Searle, and I left.
    I'd just had a private conversation with one of the world's foremost philosophers; I wonder if I had in fact regained consciousness!

    • @JohnDlugosz
      @JohnDlugosz ปีที่แล้ว

      Try discussing philosophy with ChatGPT using the GPT-4 model. Just avoiding the strong mental blocks put in by the developers on top of the actual model is interesting in itself.
      It's also a surprise that _creativity_ emerges long before consciousness, with many of the building blocks of sapience and sentience still missing entirely.
      I've asked ChatGPT to output in Chinese. Is it an actual Chinese Room running in the Azure data center?
      But when I asked it to write a short poem for my wife incorporating some kind of pun or wordplay for her name, it generated a pair of couplets in Chinese and translated to English, and both versions rhyme but in different ways. I don't see filing cabinets full of instructions processing symbols, I experience the presence of a creative mind. Nothing like this task was pre-programmed and left as instructions to follow. But, a program processing tokens is _exactly_ what it is! But the instructions for the tokens are on a very primitive level, not directly relating to any high-level task. The activation patterns in the huge number of "parameters" (neurons) form a new, distinct, way of representing algorithms and high-level tasks. We can literally see now how that higher level emerges from the neural net, and is separate from the code that drives the individual neurons.
      BTW, lack of long-term memory later does not mean lack of immediate and short-term memory during, and does not imply he was not conscious when he was returning to the lodge. I experienced something similar recently during a minor medical procedure: the anesthesiologist explained that one of the ingredients was to "make me forget" and indeed I have no memory of the procedure. But when I had the same thing done once before, I remember everything about it.

    • @frontiervirtcharter
      @frontiervirtcharter ปีที่แล้ว +1

      Was Searle unconscious, or conscious but not retaining longterm memory of the events in the hours after the concussion?

    • @bojens865
      @bojens865 ปีที่แล้ว +2

      @@frontiervirtcharter This was about 10 years ago. I remember he and his wife telling me he was unconscious

    • @brotherjongrey9375
      @brotherjongrey9375 ปีที่แล้ว +4

      You still haven't regained consciousness

    • @starfishsystems
      @starfishsystems ปีที่แล้ว +3

      ​@@bojens865
      Great story! And that is one of the real pleasures of living in the small academic world. We do get to meet with some very fine minds from time to time.
      I have somewhat the same story concerning Dan Dennett, just because I happened to attend a cognitive science conference where he was giving a talk. More to the point, here was a philosopher who thought it would be worthwhile to talk with cognitive scientists.
      -*-
      On the subject of consciousness, we know that we typically perform many of our routine tasks "on autopilot" while maintaining a kind of casual conscious situational awareness in case the need arises to step out of the task. Depending on choice of terminology, those tasks can reasonably be called "unconscious."
      And should the conscious supervisor - the part of the mind also most commonly responsible for conscious memory and recall - happen to become distracted, intoxicated, medicated, spaced out, or otherwise go offline for some reason, the unconscious processes may be able to continue unsupervised. It's the same brain, the same sensorium, the same accumulated body of knowledge, substantially the same mind, after all. I can well believe that Searle made it back while "unconscious" in this sense, and moreover not remembering any of the journey.
      An interesting question would be whether he has retained any "unconscious" memories of the experience. It would be hard to test for it, but assuming there were certain distinctive events along the way that might be memorable, the idea would be to look for markers of (possibly unconscious) recognition when some facsimile of the event were to be replayed for Searle to see.
      Perhaps he would become explicitly conscious of the event when reminded. Or it might produce a distinctive spike in neural activity, a slight time difference when responding to questions, a change in eye saccade rate, et cetera.
      These slight clues won't tell the whole story of such a complex system, but they are slowly helping us to frame the investigation. I started out in computer science in the 1970s, hoping to learn something about the nature of intelligence. At the time, I formed the impression that we'd need about a century to get there. That's a long time to contemplate, yet here we are halfway there already. And it feels about right. It's starting to come together. I think another fifty years will do it.

  • @Inzomniac9
    @Inzomniac9 ปีที่แล้ว +1

    The background information you gave was perfect for understanding the experiment. Thanks!

  • @henrijames7337
    @henrijames7337 ปีที่แล้ว +38

    As someone who is on the autistic spectrum, I'm fascinated by the idea that the experience of the person inside the Chinese Room would be similar to my own when dealing with neurotypical interactions (social gatherings etc.). I often have no true understanding of what the purpose or need for some of the elements is, but do my best to mimic them or provide responses from a learned set of rules.
    I've read that some researchers have suggested that individuals with autism may have a "mind-blindness" or "theory of mind" deficit. In the context of the Chinese Room thought experiment, the idea of a person who manipulates symbols without truly understanding their meaning could be seen as a metaphor for individuals with autism who may have difficulty with understanding the meaning of language and social communication.

    • @bdwon
      @bdwon ปีที่แล้ว +4

      Neurotypical folks do not "truly understand" either. Their responses to social stimuli are simply more "typical," i.e., in accord with socially disseminated practices

    • @henrijames7337
      @henrijames7337 ปีที่แล้ว +5

      @@bdwon I take it that by 'responses' you mean the observable behaviour of neurotypical individuals in social situations (in general) and that they may be more in line with what is expected or considered "normal" within their social context, even if they may not fully understand the purpose or need for those social interactions.

    • @14drumstix
      @14drumstix ปีที่แล้ว

      @@henrijames7337 I really like your take on this, very well put

    • @ajd6708
      @ajd6708 ปีที่แล้ว

      @@henrijames7337 While I’m not the guy you responded to, that is what he meant.

    • @henrijames7337
      @henrijames7337 ปีที่แล้ว

      @@ajd6708 Thanks, I sometimes have difficulty in 'getting' what people mean.

  • @dwinsemius
    @dwinsemius ปีที่แล้ว +141

    Well done. Thank you. I suffered through Searle's "Philosophy of Mind" course at Berkeley in 1970. It was mostly reading and considering Hume's writings. It was definitely NOT what I had been hoping for. My final paper in the course, heavily influenced by reading Julian Huxley, was my version of functionalism and an attack on Hume's rejection of induction as a sensible basis of knowledge. I was a physics major at the time so abandoning induction as a path to knowledge was unthinkable. (Also Hume's use of self-reflection as his primary data gathering tool is easily as fallible as induction.) I only got a B+ which I found annoying but totally understandable, given the distaste I had for Hume and by association Searle at that point. Then 10 years later Searle reappeared on my radar screen because his attack on Strong AI appeared in Scientific American. I found his arguments entirely unconvincing. I had already accepted the Turing test as a reasonable basis for assessing the expression of language as "intelligent" output of a process. A few years ago I found a tome from the late 1800's by Huxley on Hume, and I periodically pick it up and enjoy random bits of it.

    • @matswessling6600
      @matswessling6600 ปีที่แล้ว +8

      induction isn't a path to knowledge, but that is not a problem since science isn't based on induction.

    • @hinteregions
      @hinteregions ปีที่แล้ว +12

      Yeah me too. He seems not to understand what Dennett, for example, is doing, because he isn't able to see all the implications of his own thought experiment. If we simply take his main thesis to the extreme, as we must and as he for some reason does not, with every single neuron replicated and whatever neurochemical signal it's about to transmit too - not really so different to the cause and effect that is the basis for Determinism, if it is not the very essence of such - then yes, indeed this would necessarily be a perfect simulacrum of his mind and his memories and thoughts and feelings too, as he takes his own for the purpose of the experiment. We might have to hook it up to some 'sensory inputs' and give it some way of communicating, but I have to assume that's a trivial matter in this context. IF we could make such a marvellously complete copy of that human organ to Searle's very own specifications, properly and fully as opposed to his convenient 'partially,' THEN unfortunately Searle is hoist on his own petard. The fact that we cannot is irrelevant.

    • @nosuchthing8
      @nosuchthing8 ปีที่แล้ว +2

      Thank you. I agree with your assessment, I read that article in SA too.
      What do we do with chat GPT? It seems close to passing the Turing test. Please try it if you have not already.

    • @dwinsemius
      @dwinsemius ปีที่แล้ว

      @@nosuchthing8 I have "chatted" with chatGPT 3.5. It's like a sociopathic college student. A bit like a Donald Trump but unlike that particular human actually 'speaks' in complete sentences with minimal digressions. Makes up stuff and cites non-existent citations to fill in and support syntactically correct slots in its explanations. It is built to sound good but has limited learning capacity. It also denies being human so perhaps close to Turing-passing but not yet convincing to me.

    • @nosuchthing8
      @nosuchthing8 ปีที่แล้ว +1

      @@dwinsemius yes, I agree with your assessment.
      Close but no cigar.
      But let me give you an example. I asked for its interpretation of the fable "The Emperor's New Clothes", which as you know has the emperor parading around in his birthday suit because he's gaslighted by some villains.
      ChatGPT gave a very good assessment, and then I asked if there is a connection to burden of proof, and it readily explained how burden of proof was key to the story.
      So it's certainly close to passing the Turing test.

  • @finald1316
    @finald1316 ปีที่แล้ว +47

    Aside: there is a small nuance with the Chinese language. The symbols are tied to meanings, not phonetics, so just as you can decrypt messages using letter frequencies, it is plausible that you could infer the meaning of some symbols (although never be certain of it) from their expected frequency.
    The symbol for moon is tied to the month, which relates to the feminine due to the menstrual cycle. Not that you couldn't try the same approach in other languages, but they have a layer of indirectness due to being tied to phonetics.
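    A rough sketch of the frequency idea (my own illustration; the symbol stream is hypothetical): the man in the room could at least tally how often each incoming symbol appears, the same move used in letter-frequency attacks on simple ciphers. It narrows the candidates for common words without ever yielding certainty about meaning.

    ```python
    # Count symbol frequencies in a hypothetical stream slipped under the door.
    from collections import Counter

    incoming = "月月日水月火日月水月"   # made-up sequence of Chinese characters
    for symbol, count in Counter(incoming).most_common():
        print(symbol, count)          # 月 tops the list, so it likely stands for something common
    ```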

    • @koenth2359
      @koenth2359 ปีที่แล้ว +7

      That was what I was thinking. Ironically, the problem of decoding Enigma was therefore much more complex than this task.

    • @leslierhorer1412
      @leslierhorer1412 ปีที่แล้ว +3

      Not only frequency, but more importantly, context. If the inputs to the system allow it to assess when certain syntaxes are encountered, i.e. context, then the system can begin to make certain inferences about the syntax itself. This is the emergence of a realization of semantics from syntax in an empirical framework. I submit such an ability to scrutinize the syntax in such a way is critical to the development of a semantic organization, but that it is indeed possible if the coding is also malleable. In addition to his questionable assumptions, Searle seems to be completely ignoring these factors. He is asserting learning must be limited only to efficiency in the translation mechanism.

    • @koenth2359
      @koenth2359 ปีที่แล้ว +3

      @@leslierhorer1412 It is not all too different from what an infant accomplishes when he/she is trying to make sense of all sensory inputs, and finally manages to learn a language. And the infant manages! (#Chomsky #Language Acquisition Device).

    • @finald1316
      @finald1316 ปีที่แล้ว

      @@koenth2359 I am not aware how the enigma machine worked, but there are more words than letters. From a data standpoint, if the enigma works over letters, it should be easier to crack.

    • @finald1316
      @finald1316 ปีที่แล้ว +1

      @@leslierhorer1412 reminds me of IQ tests, but can only be sure if you check the solutions. I guess that is another discussion related to how language is constructed.
      There is some correctness in ignoring that if you account for "older" AI solutions which are not generic in nature and are just algorithms for computing something very specific. However, when the system learns using an arbitrary number of layers, the "interpretation" of the input is not an algorithm set in stone, rather we have implemented a sort of meta algorithm, i.e. the instructions that will give us the instructions to return the correct chinese character.

  • @jmiki89
    @jmiki89 ปีที่แล้ว +49

    Actually, if you think about it, that's almost exactly how infants learn their native languages in the first place, except they don't even have a rulebook; they have to figure that out for themselves. True, they get not only symbolic input, but for them the audial sequence for "mum" doesn't have any more meaning than for you or me hearing the made-up word "blarduk". They can differentiate between different frequencies and whatnot and try to mimic it via trial and error (the difference between them and the blarduk example is that we have much more experience making sounds with our vocal organs, so we would make far fewer errors and hence need far fewer attempts to repeat this new word).
    And yes, babies have feedback loops to help them through the learning process, but those are basically just another bunch of input. Yeah, there might be some genetically imprinted social behavior patterns guiding which of these feedback inputs should be considered as positive and which as negative, but all together those can still be paralleled with a deeper-level rulebook from the Chinese room experiment.

    • @erikmagnusson5713
      @erikmagnusson5713 ปีที่แล้ว +16

      Good point. The feedback loop is what is missing in the Chinese Room. The rule book is never updated. The system never learns anything.
      So if the rule book doesn’t contain understanding/semantics and there is no mechanism for learning then the system will never understand semantics…
      …I now find the Chinese Room uninteresting…
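      A tiny sketch of what an updatable rule book could look like (purely illustrative; the symbols, replies, and update rule are made up, not anything Searle proposed): the table starts empty, guesses, and overwrites its rule whenever feedback arrives, which is exactly the learning mechanism the original room lacks.

      ```python
      # A "rule book" that is revised by feedback instead of staying fixed.
      import random

      rulebook = {}                # starts empty, like the infant in the comment above
      replies = ["A", "B", "C"]    # hypothetical reply symbols

      def respond(symbol):
          # No rule yet? Guess, and record the guess as the current rule.
          if symbol not in rulebook:
              rulebook[symbol] = random.choice(replies)
          return rulebook[symbol]

      def feedback(symbol, correct_reply):
          # The only "learning": overwrite the rule when told the right answer.
          rulebook[symbol] = correct_reply

      print(respond("月"))     # an unprincipled guess
      feedback("月", "B")      # the outside world corrects it
      print(respond("月"))     # now it follows the updated rule: B
      ```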

    • @brotherpaul963
      @brotherpaul963 ปีที่แล้ว

      @@erikmagnusson5713 Funny!!!

    • @sandornyemcsok4168
      @sandornyemcsok4168 ปีที่แล้ว +5

      I agree. The Chinese room is nothing else than a good presentation of how a computer works nowadays. That's all. Does it prove that a computer cannot be made to behave like a human? Absolutely not.
      Additionally, think about how 'eternally' defined semantics is. Only simple things, like bread, wind, etc. do not change. But let's take something complex, for example "pious". How much has its content changed in the past centuries? In this case the semantics is dependent on the historical age and social context, above the individual.

    • @rickgilbert7460
      @rickgilbert7460 ปีที่แล้ว +9

      I don't know that I agree. The infant learns that the sound "mum" is associated with the idea of a specific person by repeating it in the context of that person. Later, someone points to an actual tree and says "tree" and keeps doing that until the child learns the *semantic* understanding that the object in the yard "is a tree." So children learn the syntax by repetition of the syntactic rules, but they *also* learn the semantics by being taught them specifically, and separately from the syntax, right?

    • @jmiki89
      @jmiki89 ปีที่แล้ว +1

      @@rickgilbert7460 but the face of their mother or the sight of a tree is nothing but just another kind of sensory input without any kind of intrinsic semantic meaning. True, one may argue that humans are genetically hardwired for facial recognition to a fault (we can even see faces in places where clearly there aren't any), but the point is that the semantics is created inside the infant's mind via (as you pointed out) repetition and feedback.
      But in the thought experiment, the person in the room was given a supposedly complete and exhaustive but static rulebook of the Chinese language with which the room as a whole can imitate a full conversation, which begs the question: can such a rulebook exist? From the perspective of a single human life it may seem that the semantics of words are permanent and unchanging, but (especially in the age of the internet and smart devices) concepts are evolving, too. We call both a smartphone and Bell's original invention a "telephone", but those are clearly different things connected only by the vaguest of similarities. So the rulebook in the room needs a way of being updated, and the only entity capable of doing that is the person in the room, and to do that, he needs some kind of feedback, which immediately leads us back to learning.

  • @therealzilch
    @therealzilch ปีที่แล้ว +25

    It's definitely the tied together beer cans connected to a windmill that understands Chinese.
    Searle is guilty of planting a red herring here. By having a human in the room who doesn't do any of the thinking, but merely looks up the rule and applies it, we are focussed on the human as the only thing in the room theoretically capable of understanding anything semantically. The depth and complexity and reflectivity of the "rule book" is passed off as "bits of paper".
    Nice explanation, a good complement to Douglas Hofstadter's classic fisking. Cheers from windy Vienna, Scott

    • @docbailey3265
      @docbailey3265 ปีที่แล้ว

      Hmm. A new version of the ghost in the machine, only now it’s the machine in the machine.
      Simply replace the human in the room with a supercomputer that can instantly scan Chinese characters and has been programmed with the “Chinese rule book”. There’s no need to drag some nasty sentient being into the mix. The Chinese text is fed into the room, or rather, supercomputer. The supercomputer then spits out the answer BASED ON SYNTAX AND PATTERN RECOGNITION ALONE.
      Have we created sentience, much less consciousness?
      Dennett would dismiss the whole endeavor as wrong BY DEFINITION ALONE, or at best “counter-intuitive”.
      I’m not yelling, BTW, I just don’t know how to post italics here. Cheers.

    • @therealzilch
      @therealzilch ปีที่แล้ว

      @@docbailey3265 Italics are done on youtube by bracketing your text with underscore characters. And I'll gladly respond to your comment as soon as I get on a real keyboard, as my swiping is painfully slow.

    • @undercoveragent9889
      @undercoveragent9889 ปีที่แล้ว +1

      I sort of agree, because in humans, language 'evolves' along the lines of 'utility', if I can put it that way, and the assessment of 'utility' is subjective and requires introspection. In other words, and I have yet to see the whole video, the 'interpreter' is not analogous to a 'mind' but rather, he is analogous to an 'algorithm' _utilized_ by self-interested organisms in order to advance in the world successfully.

    • @anxez
      @anxez ปีที่แล้ว +5

      Searle does a few intellectually dishonest things in this argument, honestly.

    • @docbailey3265
      @docbailey3265 ปีที่แล้ว

      @@anxez Such as?

  • @Raoul684
    @Raoul684 ปีที่แล้ว +5

    Great explanation, again. I love these videos, so thought provoking. My addition against strong AI is to ask: what is the computer doing, absent any questions or inputs? That, to me, seems equally, if not more, relevant for consciousness.

    • @DAG_42
      @DAG_42 ปีที่แล้ว +1

      If an anaesthesiologist stops your flow of thoughts by chemicals, you go unconscious. That's just taking the symbol shuffler guy out of the Chinese room.

  • @john_hind
    @john_hind ปีที่แล้ว +5

    'A simulation of a hurricane is not a hurricane', no, but a simulation of a word processor is a word processor, a simulation of a dictionary is a dictionary. I once wrote a dialog called 'John Searle Meets Another Intelligent Machine'. At the climax, the machine seems to get frustrated by Searle's obstinate refusal to accept its consciousness, thumps Searle, admits it was simulating anger to make a point and asks if knowing that makes the pain any less painful!

    • @jdscouch
      @jdscouch 10 หลายเดือนก่อน +1

      and then did searle thump the machine back and say, “stop pretending that hurt”?

    • @john_hind
      @john_hind 10 หลายเดือนก่อน

      @@jdscouch Nope, he accepted he was insisting on a distinction that makes no difference and retired from philosophy! But your ending works too, with Searle the first casualty of a simulated war!

  • @stevefoster6047
    @stevefoster6047 ปีที่แล้ว +34

    I was privileged to take Dr. Searle in college and to hear his thought experiment from his lips; he was an excellent lecturer, and the class remains one of my favorites. However, I was no more persuaded by Dr. Searle back then than I am by @Jeffrey Kaplan's excellent explanation of it. There are, in my opinion, at least two glaring holes in his argument. The weaker of my two objections is this: 1) His claim that you could never come to learn the semantics of Chinese from reading what we all have to agree must be an incredibly long and complex list of identification and manipulation rules is highly suspect. He certainly never tested that hypothesis, and I assert that he has no logical basis other than his opinion for making that claim. For all we know, given many thousands of pages of manipulation rules, and thousands of years of following them, a human being may well indeed be able to piece together Chinese semantics. After all, we are "designed" to do just that, and as babies we learn our native language with much less data and time. 2) The stronger of my two objections is that Searle used sleight of hand in how he has defined the "computer", which he wants us to believe is just the human in the box, not the entire room and ALL of its contents. I assert that is not the case. Rather, the "computer" is the entire system including the man, the instruction set, the input and output devices (baskets), and the room itself and all of its other necessary contents that enable it to function. Consider if you take the man out of the box and just sit him in front of a Chinese speaker: no rule book, no organizing components etc., JUST the man is not a "functioning computer". We know that "computer system" is very clearly capable of understanding Chinese. It is central to Searle's argument that it can. He describes the room, and its contents, from the point of view of Chinese observers, as indistinguishable from a native Chinese speaker. So it is patently obvious that the entire computer is capable of understanding Chinese, and in my opinion the fact that no subcomponent of it (the man, a basket, the rule book, you name any part you like) understands Chinese is simply irrelevant! Consider the man in the room: like me he can read and understand English, but my left eyelid cannot, nor my tongue, nor can any of my individual neurons. The fact that my parts cannot understand English does not prove that humans cannot understand English. Likewise, the fact that any part of a computer cannot understand Chinese does not prove that a computer cannot understand Chinese!
    (Edit: I had forgotten Searle's response to the systems objection. It's been 40 years since I heard his lecture, but what he fails to explain is how strong AI can successfully answer Chinese questions with accuracy indistinguishable from a native speaker - per his thought experiment - and yet completely lack semantic understanding. Likewise, he fails to explain why, with humans, if you consider dividing us up into smaller and smaller subcomponents, you will at some point suddenly have a complete set of subcomponents none of which can understand English - unless his claim is that one of our atoms is the one that learns English semantics; it's not, I presume - which seemingly proves that semantic understanding MUST arise as a system property, and therefore there is no logical reason to assert that it's impossible for strong AI, as a system, to exhibit that property.)

    • @skoosharama
      @skoosharama ปีที่แล้ว +1

      26:58 if anyone wants Searle's response to the systems objection: the entire system is nothing but a symbol-manipulating machine, and knowledge of syntax alone - the symbols and the rules for manipulating them - is not enough to understand the semantic content of the symbols. The claim that an interlocutor that can pass the Turing Test of being externally indistinguishable from a Chinese-speaking person is therefore *necessarily* a person is difficult to justify; a person is not merely an entity that can perfectly imitate a person such as oneself, but an entity that one can reasonably suppose to have an internal life, as oneself does. I definitely don't believe that such a claim is "patently obvious".

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว

      do your cells and neurons understand English? or does your system as a whole, including your consciousness, understand it? what physical properties of your consciousness can we separate from your brain that show the physical "input -> output" that gives meaning to the words?

    • @skoosharama
      @skoosharama ปีที่แล้ว +6

      @@theconiferoust9598 Sure, we can agree that consciousness is an emergent property of certain complex systems. My contention is that we should not suppose that a text manipulation device has an interior life, or any awareness even approaching sentience, even if it is very, very good at text manipulation. The key here, I think, is that, while language could be thought of as a self-contained system, an entity without any perception of the tangible world in which humans live cannot possibly understand what the symbols refer to, i.e. what they mean. Our text manipulation program, unlike Searle's Chinese room (which at least includes a homunculus who might get this), most likely does not even understand that the characters are symbols at all, rather than mere characters and character strings with no extrinsic meaning outside of the rules of the language. It doesn't really matter how good ChatGPT gets at text prediction and mimicking human linguistic responses; it is still just a glorified version of Autocorrect that is incapable of understanding its own output. I would submit that it is incapable of understanding that its output even could mean something outside of itself and its statistical models, or what it would mean for its output to have such meaning.
      Let's put it this way: just because the human brain is a complex system out of which consciousness arises and that is also capable of complicated linguistic output, doesn't mean that *any* system that is capable of complicated linguistic output is the kind of complex system out of which consciousness arises.

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว

      @@skoosharama agreed. my response was mostly aimed at the op's objections.

    • @aaronmarchand999
      @aaronmarchand999 ปีที่แล้ว

      ​@@skoosharama"The human brain is a complex system out of which consciousness arises"... Who says consciousness arises out of the brain.... Judging by the way you talk, perhaps you are less conscious than you think

  • @mattmiller4233
    @mattmiller4233 ปีที่แล้ว +8

    Great video! Very well explained. I would add two points, though - the first you mentioned very briefly, but it is worth stressing that the Chinese Room serves only as a refutation of functionalism in the purely *digital* sense, not in totality. The second is that Searle seems to lack (though I may have missed it in the text; please correct me if I did) any formalized definition of what, exactly, constitutes this "understanding" that he says the room lacks, or what sets such "understanding" apart from a sufficiently complex system of inputs and outputs. He seems to work from a fairly generalized sense of what *feels* like understanding, but fails to specify or quantify it (again, let me know if I missed something).
    Again, awesome work!

    • @ben_clifford
      @ben_clifford ปีที่แล้ว

      To address your first point: I think it's actually sufficient for Searle to contrive a highly-constrained, obtuse situation and say that he's refuted functionalism, and here's why...
      The core argument of functionalism is that only function matters, and not the system or substrate. So, to disprove such a theory, we only need to show a single counter-example. There's a more formal way to show this with predicate logic, but I think you get the idea.

  • @jollyroger105
    @jollyroger105 ปีที่แล้ว +1

    Thank you. I really appreciate you having put so much heart and soul into making your videos. I am truly enlightened.

  • @СергейВоробьёв-ы5л8б
    @СергейВоробьёв-ы5л8б ปีที่แล้ว +3

    It is like unconsciously grinding some skill without realising or feeling what you are doing: playing the piano, walking, writing, typing on the keyboard with 10 fingers, learning anything. You just repeat something endlessly and boom, you are a master at it. I remember grinding my pronunciation in my second language very hard using the shadowing technique, and at some point I started to produce sounds that were very natural without any understanding of what I was talking about. Later I understood that feeling sits, somehow, on top of the computations and gives you additional power to compute something or to auto-correct errors. But the question is whether the function, converting a bunch of inputs into outputs in a specific order, can see dreams. I believe it depends on the architecture of the hardware that runs the process, not on the output signals that we can interpret as numbers (or vice versa) that have meaning for us. In other words, a function that performs some computation and produces ideal output does not relate to feeling and being alive in any way; it is like saying that a smartphone is alive just because it can produce a perfect screaming sound.

  • @ND-im1wn
    @ND-im1wn ปีที่แล้ว +3

    Amazing how much more relevant this problem, video and explanation are today with ChatGPT. I understood this concept in an intuitive way, but now I have the language to explain and understand it explicitly. Thanks!

  • @nixedgaming
    @nixedgaming ปีที่แล้ว +9

    I am desperate to see how Searle would respond to the idea of a neural net matrix transformer, assuming he legitimately understood the math of it. My question is basically, why can't "semantics" be an *emergent* property of a sufficient understanding of syntax? The paper "Attention Is All You Need" basically demonstrates that a machine *kind of* grasps semantics from a type of mathematical transformation of language through encoder/decoder processes.
    Very fascinating, thanks for the lecture!
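    For readers curious what the "purely syntactic" core of a transformer looks like, here is a minimal scaled dot-product attention sketch (my own illustration; the toy vectors are made up, and this is only the attention step, not a full model): every operation is arithmetic on vector representations of symbols, which is exactly where the question of emergent semantics gets interesting.

    ```python
    # Scaled dot-product attention: each output row is a softmax-weighted mix
    # of the value rows, weighted by query-key similarity. Pure number-pushing.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])                 # query-key similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)          # softmax over keys
        return weights @ V

    tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])     # three toy token vectors
    print(attention(tokens, tokens, tokens))                    # self-attention over the toy tokens
    ```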

    • @ronald3836
      @ronald3836 ปีที่แล้ว +1

      One possible answer is that Searle lacks the imagination for believing that semantics can arise out of sufficiently complex syntax.
      However, Searle seems to accept that a rule book can make you appear fluent in Chinese, so he seems to accept that syntax can do everything you need. But apparently that does not let him doubt that a human does not perceive semantics through complex syntactic processing...
      (Sorry for the double negation, haha.)

  • @izansengun
    @izansengun 3 ปีที่แล้ว +20

    What a wonderful way of teaching! Great content sir. Great job!

  • @Rob02138
    @Rob02138 ปีที่แล้ว +2

    Brilliant presentation! So glad I stumbled upon this channel. Thank you! Subscribed.

  • @sehbanomer8151
    @sehbanomer8151 ปีที่แล้ว +1

    Semantics CAN be learned from pure form. Look into distributional semantics, which states that the meaning of a word is embedded in the distribution of the contexts in which that word has appeared. Meaning can emerge from the spatial or temporal relationship between symbols/forms. This is why GPT/transformers can seemingly learn semantics.
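    As a toy illustration of that (counting pure form only - which symbols occur near which - with no meanings ever given to the program), here is a minimal sketch; the made-up corpus and window size are just assumptions for the example:

        from collections import Counter
        import math

        corpus = "the cat drinks milk . the dog drinks water . the cat chases the dog".split()
        vocab = sorted(set(corpus))

        def context_vector(word, window=2):
            # count which words appear near `word` anywhere in the corpus
            counts = Counter()
            for i, w in enumerate(corpus):
                if w == word:
                    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                        if j != i:
                            counts[corpus[j]] += 1
            return [counts[v] for v in vocab]

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # "cat" and "dog" show up in similar contexts, so their vectors come out more alike
        print(cosine(context_vector("cat"), context_vector("dog")))
        print(cosine(context_vector("cat"), context_vector("drinks")))

    Real embedding models do something far more elaborate, but the principle - meaning from distribution - is the same.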

    • @sehbanomer8151
      @sehbanomer8151 ปีที่แล้ว

      we just don't know how to make compositionality work on these distributional representations. GPT models may have figured that out, but we don't know how they're doing it because they're uninterpretable black boxes.

  • @perfectionbox
    @perfectionbox ปีที่แล้ว +15

    The fact that mere symbolic processing can produce fairly good output is helpful in brain development. As a child correlates experiences to language symbols, the job is made easier by powerful symbol processing where guesses/estimates/predictions are often useful - and even afterwards, because much sensory input is garbled or missing, intelligent subconscious guesswork fills in the gaps. We haven't created true general AI, but we have uncovered an important piece.

  • @micry8167
    @micry8167 ปีที่แล้ว +9

    Excellent comments here. Can't help assuming that Searle was motivated more by his distaste for some ideas than by a desire for hard truth - namely, distaste for the idea that a vast enough system of limited machines could be an analog for the human mind.

  • @dowunda
    @dowunda ปีที่แล้ว +43

    How does Searle define what it means to understand something? Viewed exclusively in the physical world, people can be seen as a kind of computer: the brain itself being a kind of Chinese Room.

    • @recompile
      @recompile ปีที่แล้ว +11

      Searle makes a convincing case that whatever it is that brains do, it is not mere computation. You might think you're a computer, but that's just because that's the most advanced technology we have at the moment. 100 years ago, you might have thought the brain was like clockwork. The idea that brains are like computers will seem just as silly in the not too distant future.

    • @dowunda
      @dowunda ปีที่แล้ว +3

      "Brain: an apparatus with which we think we think.“ - Ambrose Bierce

    • @bombmk2590
      @bombmk2590 ปีที่แล้ว +13

      @@recompile I have yet to see a convincing argument that it is anything but computation.
      How could it be more?

    • @costadev8970
      @costadev8970 ปีที่แล้ว +1

      ​@@bombmk2590 you have subjective experiences, a computer (symbol manipulator) does not.

    • @calorion
      @calorion ปีที่แล้ว +9

      @@recompile "The brain is like clockwork" is not really a silly idea. Unsophisticated, sure. As we get better computers, we'll get a better understanding of how brains work. But a physicalist determinist basically does think that the brain is like clockwork on a theoretical level.

  • @xMrJanuaryx
    @xMrJanuaryx ปีที่แล้ว +1

    I asked GPT-4 what it thinks about Searle's argument: 'As an AI language model, I don't have personal beliefs or feelings, but I can provide an analysis of the Chinese Room Experiment and its implications.
    Searle's argument raises important questions about the nature of understanding and consciousness in artificial intelligence. It highlights the distinction between syntactic processing (manipulating symbols according to rules) and semantic understanding (grasping the meaning behind those symbols). The Chinese Room Experiment emphasizes that simply processing symbols according to rules does not necessarily imply understanding.
    Critics of Searle's argument point out that the thought experiment assumes a specific type of AI architecture, one based on symbol manipulation. They argue that alternative AI architectures, such as neural networks, might be better suited to exhibit true understanding or consciousness. Others suggest that the whole system, including the person in the room and the instructions, could be considered as possessing understanding, rather than just the person alone.
    As an AI language model based on the GPT-4 architecture, I am designed to process and generate human-like text based on the patterns in the data I have been trained on. While I can generate contextually relevant and coherent responses, it is important to note that I do not possess personal experiences, emotions, or consciousness like a human. In that sense, I can be seen as an example of the kind of syntactic processing Searle refers to in the Chinese Room Experiment.'

  • @quokka_11
    @quokka_11 ปีที่แล้ว +1

    20:19 "You're never going to be able to figure out semantics from syntax."
    Except we're talking about human language, and you already have your own experience of that. With exposure to enough earnest material (not nonsense), you would eventually make meaningful connections and at least some of the semantics would emerge.

  • @wirewyrm01
    @wirewyrm01 ปีที่แล้ว +28

    There is a paradox in the thought experiment. The person in the room is tasked with manipulating symbols, not with trying to figure out what they mean. Therefore, it follows naturally that the person in the room cannot (or more accurately, will not) figure out what the symbols mean. Indeed, the meaning of the symbols is completely irrelevant, so positing that the person could never understand the semantic meaning of the symbols is also irrelevant, because that was never part of the design.
    On the other hand, I would propose that Searle's assertion, that the person in the room can never gain an understanding of the symbols even if they tried, is false. Perhaps the person cannot gain much insight from studying the symbols alone, but if the person studied the *instructions*, surely they would be able to glean some information about the symbols and their contextual use. Patterns will emerge from the frequency of use of certain symbols, the association of certain symbols with each other, symbols occurring in question-answer pairs, and so on. Furthermore, from the frequency and sequence of "questions" received, the person can also start to observe patterns and eventually triangulate the semantic meanings of the symbols. In fact, many of these techniques are used in the study and research of dead languages.
    There are other problems that I can see with the thought experiment, but these are the most easily defined ones.
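    As a rough sketch of the kind of bookkeeping that person could do (the exchanges below are hypothetical, and nobody claims this amounts to real understanding - only that the regularities are detectable from inside the room):

        from collections import Counter, defaultdict

        # hypothetical log of (incoming message, outgoing reply) pairs the person has processed
        exchanges = [
            ("你好吗", "我很好"),
            ("你好吗", "我很好"),
            ("你喜欢茶吗", "我喜欢茶"),
            ("你喜欢狗吗", "我喜欢狗"),
        ]

        symbol_freq = Counter(ch for q, a in exchanges for ch in q + a)
        follows = defaultdict(Counter)             # which reply symbols pair with which question symbols
        for q, a in exchanges:
            for cq in q:
                for ca in a:
                    follows[cq][ca] += 1

        print(symbol_freq.most_common(3))
        print(follows["茶"].most_common(2))         # reply symbols that co-occur with "茶" in questions

    Whether such statistics ever add up to semantics is, of course, the whole dispute.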

    • @LoraxChannel
      @LoraxChannel ปีที่แล้ว +4

      Yes. This is exactly why modern AI is fed huge language bases and tasked with creating context and relationships and distinctions, just as we do in language. They are no longer limited to manipulating digits. I mean, that is the whole point of designing AGI, so it can learn and assign "meaning" independently.

    • @stefans.5043
      @stefans.5043 ปีที่แล้ว +6

      The person inside the room will never know the meaning of the symbols when he doesn't know the question he is asked or the answer he gives. In this experiment he only acts on given instructions and not on human behavior like observing or recognizing patterns. And even when he sees patterns, he still doesn't know their semantic meaning.
      thts th dffrnts btwn hmns nd cmptrs: you probably can read this last part only by knowing the meaning of the words and not the meaning of the symbols. A computer cannot.

    • @fang_xianfu
      @fang_xianfu ปีที่แล้ว

      Yes - the other part is, where does the book come from? Real minds write their own book of rules to manipulate the Chinese symbols, and they edit their own book as they try new things and they succeed or fail.

    • @SatanDynastyKiller
      @SatanDynastyKiller ปีที่แล้ว +2

      I knew someone in here would be as smart as me lol - saves me typing it, thank you.
      The easiest way to cover it all: until we understand everything, we understand nothing. I genuinely think some of these "intellectuals" are not exactly what they claim to be…

    • @LoraxChannel
      @LoraxChannel ปีที่แล้ว

      @@SatanDynastyKiller It's smart as I. When you are claiming smartness, it matters.

  • @presto709
    @presto709 ปีที่แล้ว +9

    This was great. I think I first learned of the Chinese Room from a book by Martin Gardner. I think I come down on the system response. The system does understand Chinese because looking at the input and giving the correct output is what understanding means. It's the same test we would give to a person who claims to understand Chinese.

    • @kid5Media
      @kid5Media ปีที่แล้ว

      No. Or, to tweak things a little: instead of passing in a question, the person outside the room passes in an instruction to order a glass of tea. The person inside the room will die of thirst (unless rescued by the Infinite Monkey Typewriter Brigade).

    • @presto709
      @presto709 ปีที่แล้ว

      @@kid5Media Interesting. We aren't told what the book inside the room will do when a nonquestion is the input. Postulate that the instruction book translates nonquestions into his language which he recognizes and reads. Interesting but I'm not sure how it changes anything.

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว

      The «system» includes humans with consciousness to interpret and glean meaning. In other words, you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      It's like saying meaning is self-evident in physical matter, symbols, or mathematics, as if a computer could take the word «love» and output every single iteration of the meaning that has ever been conceived, felt, lived by every human ever. There is no correct output.
      Conversely, it seems there is no meaning without a conscious experience, and the «systems» response only affirms this.

    • @presto709
      @presto709 ปีที่แล้ว

      @@theconiferoust9598
      YOU WROTE
      you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      REPLY
      The test wouldn't be if it gives the correct answer. It would be if it gives a convincingly human answer. Like the Turing test.
      If you ask "How is the weather" and the answer comes back "27" that would not make you think there was a mind at work.
      If you asked, "Is marriage a good thing?" You would not be looking for a correct answer, you would be looking for an answer that indicates an understanding of the question.

    • @presto709
      @presto709 ปีที่แล้ว

      @@theconiferoust9598
      YOU WROTE
      You are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      REPLY
      I'm not saying that at all. When interacting with another person I do not require that all of his answers match my opinion of what is correct, only that they generally seem to be responsive. I might interact with a person who gives absolutely no "correct" answers in my opinion but who still seems to clearly be a "mind".

  • @konstantinlozev2272
    @konstantinlozev2272 ปีที่แล้ว +23

    I would have liked to see a discussion on the design of the rulebook and how that design embodies semantics

    • @jasemo388
      @jasemo388 ปีที่แล้ว +3

      Yes. It's almost like Searle took the interpretation of the incoming symbols - the Semantics - and made it separate in the rule-book just to remove agency from the person in the room and structure his analogy to prove his point.

    • @konstantinlozev2272
      @konstantinlozev2272 ปีที่แล้ว +3

      @@jasemo388 Yeah, modern-day neural networks actually build and amend the rulebook as they get trained. And as the rulebook is construed to embody the semantics in this thought experiment, one cannot argue that neural networks represent the Chinese Room example.

    • @cybersandoval
      @cybersandoval ปีที่แล้ว +2

      Right, is the system writing the rulebook?

    • @jeff__w
      @jeff__w ปีที่แล้ว

      @@konstantinlozev2272 “…as the rulebook is construed to embody the semantics in this though[t] experiment…”
      I think in Searle’s example, the rulebook is meant to be construed as embodying the _syntax._ It’s an instruction manual that tells the person _how_ to put the various Chinese characters together, i.e., the form, but says nothing about the meaning. In that sense, to the extent that these latest chatbots can predict words, one at a time, to construct flawless sentences in English, they might represent Chinese rooms “perfected.”

    • @ErikHaugen
      @ErikHaugen ปีที่แล้ว

      @@konstantinlozev2272 Searle's room has paper, pencils, erasers, etc.; it can store data from past questions. This doesn't really matter much to Searle's point, I think, although it would be impossible to pass the Turing Test without memory of some kind.

  • @chrischristenson4547
    @chrischristenson4547 ปีที่แล้ว

    I do enjoy your talks greatly; I will continue listening to them.

  • @marcroy5089
    @marcroy5089 ปีที่แล้ว

    Explanation starts at 16:00 if you don't need background info (which you probably do need anyway).

  • @mattmerc8513
    @mattmerc8513 3 ปีที่แล้ว +5

    Thank you so much for your vids, you've explained it far better than any other paper, research, or teacher that I've come across

    • @xbzq
      @xbzq ปีที่แล้ว +2

      That goes to show you don't come across many papers, research, or teachers.

    • @ozymandiasultor9480
      @ozymandiasultor9480 ปีที่แล้ว

      @@xbzq well said.

    • @ozymandiasultor9480
      @ozymandiasultor9480 ปีที่แล้ว

      Where have you studied philosophy and logic, at which university are professors so bad that a mediocre channel with simplistic explanations is so much better? I am not saying that this channel is bad, but it is for laymen, those are not exactly top-notch explanations.

    • @hb-robo
      @hb-robo ปีที่แล้ว

      @@xbzq why is everyone in this comment section such a brusque asshole? Perspective is relative, leave them alone

  • @lindyl4257
    @lindyl4257 2 ปีที่แล้ว +6

    This helped a lot, thank you, you're a great teacher

  • @amaarquadri
    @amaarquadri 2 ปีที่แล้ว +17

    I would push back against the idea that you can never learn semantics from syntax alone. I think given enough time in the Chinese room, you would eventually learn Chinese just as well as a native speaker. Consider the GPT-3 language model discussed in these videos th-cam.com/video/_8yVOC4ciXc/w-d-xo.html, th-cam.com/video/PqbB07n_uQ4/w-d-xo.html. Despite only learning based on the syntax of a large corpus of English language, it is able to construct coherent well thought out sentences. For all intents and purposes, it (or at least a future more advanced version of it) does "understand" the semantics of language.
    In a certain sense, if you zoom in enough, human brains are just manipulating electrical inputs and producing electrical outputs with no understanding of what they mean semantically. It's just a set of particles following the laws of physics. Nonetheless, the system as a whole can develop understanding.

    • @guillecorte
      @guillecorte 2 ปีที่แล้ว +4

      The point is that you have to put the quotation marks there, because you know it isn't real "understanding". Besides, you missed the point: it isn't whether, after years in the room, you could learn Chinese or not, but that you could answer "in Chinese" without really understanding it.

    • @MuantanamoMobile
      @MuantanamoMobile 2 ปีที่แล้ว +10

      "Colorless green ideas sleep furiously" is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically well-formed, but semantically nonsensical. GPT--3 often makes Noam Chomskyesque statements that are syntactically correct but nonsensical, because it doesn't understand.

    • @perfectionbox
      @perfectionbox ปีที่แล้ว

      An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

    • @danwylie-sears1134
      @danwylie-sears1134 ปีที่แล้ว

      @@perfectionbox Programs are data. The absolute distinction you're appealing to, between the impossible-to-update rule book and the constantly-updated arrangement of papers in baskets, does not exist. It's an actual theorem that a suitably designed Turing machine can receive its program as part of the initial content of its tape, and the arbitrary input as the rest of the initial content, and no matter what other Turing machine you pick, there's an appropriate program part that can make the program-reader Turing machine match the output that the other one would give if it received just the arbitrary-input part on its tape. And with real-world computers, it's literally true that programs are data, stored on the hard drive or RAM or whatever, same as any other data.
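      A toy version of "programs are data" (not a full Turing machine, just the idea that the rule table is handed to a fixed interpreter as input, like any other data):

          def run(rules, state, tape):
              # rules: {(state, symbol): (new_symbol, move, new_state)} - supplied as data
              pos = 0
              while state != "halt":
                  symbol = tape[pos] if 0 <= pos < len(tape) else "_"
                  new_symbol, move, state = rules[(state, symbol)]
                  if 0 <= pos < len(tape):
                      tape[pos] = new_symbol
                  pos += 1 if move == "R" else -1
              return tape

          # a tiny "program": flip bits until hitting a blank
          flip = {
              ("s", "0"): ("1", "R", "s"),
              ("s", "1"): ("0", "R", "s"),
              ("s", "_"): ("_", "R", "halt"),
          }
          print(run(flip, "s", list("0110_")))   # ['1', '0', '0', '1', '_']

      Swap in a different rule dictionary and the same interpreter behaves like a different machine, which is the point: the "book" and the "papers in baskets" are the same kind of stuff.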

    • @recompile
      @recompile ปีที่แล้ว +5

      Nonsense. Meaning is completely irrelevant here. ChatGPT no more understands the meaning of the word 'hamburger' than you understand the meaning of 汉堡包. ChatGPT very clearly makes Searle's case. There is no understanding at all, just symbol manipulation, yet it can produce coherent replies. You can even see its complete lack of understanding in its many spectacular failures. (For example, it will happily produce a program to compute the length of the fourth side of a triangle given the lengths of the other three sides!)
      Try this: Consider a complete and comprehensive unilingual Chinese dictionary. You could study that for 1000 years, noting every detail about the symbols themselves and the relationships between the symbols. At best, you could create a grammar. At no point in your study could you identify the symbols that represent the word 'hamburger'. The meaning simply isn't in the symbols.

  • @austinhelton2295
    @austinhelton2295 ปีที่แล้ว

    I feel like I’m misunderstanding a few things, so some clarification would be appreciated!
    1) Does dualism not require functionalism to be true? If the body and mind are separate, the body, being nothing more than physical matter, would operate on functionalism. It takes inputs through the five senses and the mental-self then adheres to a set of rules on how to respond. If not, how would dualism apply to not being able to command your body through thought to run a fever (demonstrating the mental-self isn’t in total control) and uncontrollable reactions (demonstrating that the body can both read signals and act based off a set of rules without input from the mind I.e. pain receptors say this is hot, pull away)?
    2) in the Chinese Room, would thought not be taking place in the analysis of the rules? You would need to understand the syntax and semantics of the English language to perform the task correctly.

  • @Martial-Mat
    @Martial-Mat ปีที่แล้ว +1

    So if you take a machine designed never to have understanding, it can't have understanding? What a superb insight. All you have to do is define a computer so narrowly that by definition it meets Searle's criteria, then it meets Searle's criteria? What a thought leader.

    • @bradleyboyer9979
      @bradleyboyer9979 ปีที่แล้ว +2

      You seem to be ignorant of how all computers work. All computers operate in the way Searle described. Inputs and outputs. Doesn't matter if it was the first cpu ever created or the supercomputers of today (though Quantum Computing is arguably different due to our modern understanding of physics).

  • @shinypup
    @shinypup ปีที่แล้ว +12

    With the results we're seeing with large language models (e.g. ChatGPT) and how computers have been able to extract semantics in the form of embeddings, could you give a talk on if there are philosophical implications?

    • @yuck9510
      @yuck9510 ปีที่แล้ว +1

      interesting question. with gpt, though, you can kind of just use the same argument, that it's simply really efficient and accurate at providing appropriate responses to prompts. that is to say, we should think of it less as an AI, and more as a really, really good predictive text machine

    • @ever-openingflower8737
      @ever-openingflower8737 ปีที่แล้ว +1

      @@yuck9510 I wonder what the difference to elementary schoolchildren is in that regard. When I was first instructed how to write essays, I also learned about "useful phrases" etc. Isn't learning how to write good texts at school pretty much the same thing that this predictive text machine does? Like, it goes without saying children need to learn how to write with their hands, dexterity etc. But I mean the underlying thing of creating a text. I think philosophically, it is the same quality. Children have just started thinking about the world and someone teaches them how to use phrases to generate text. What is the essential difference?

    • @hassaan3861
      @hassaan3861 ปีที่แล้ว +2

      As someone whose work is closely tied to chatgpt and open AI. My belief has gotten stronger that these systems don't understand anything but are extremely good at giving close approximations of understanding something. Also, most videos etc online are faked for views because to even get a semi decent output from chatgpt/dall e-2 you have to do the thing like 50 times and tweak the inputs in weird ways until you get a response that isn't completely BS.

    • @NullHand
      @NullHand ปีที่แล้ว +1

      ​@@ever-openingflower8737 Children learn to use verbal language first (and probably have an instinct to do so).
      As they first learn to write, they quite literally speak the "sentence" they want to write, and put it on paper.
      It's all dialog to them.
      This comes complete with using pronouns with no prior reference (I was writing about the doggy I was looking at....)
      Filling the sentence with verbal speech thinking pauses (umm), and verbal structures designed to get a body language acknowledgement (you know?)
      All of these are superfluous or counterproductive in most written sentences, and have to be trained out.
      The semantics in human text is piggybacking on the heavy lifting of associating physical world experiences to spoken (or signed) words.
      The "LLMs" might be trapped in a Chinese room, but neuromorphic image recognition "AIs" are not (they get to "see" images, to associate to with that "DOG" symbol).
      I strongly suspect that some AI lab somewhere has already connected the two.

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว +1

      any output of an A.I. model has been given its «picture» of meaning by humans. the real question is, what would the picture of «meaning» look like to a system that is learning only by observing inputs (i.e. not given or trained to give «correct» responses). we always seem to insert human understanding and consciousness into the equation.

  • @impyre2513
    @impyre2513 ปีที่แล้ว +18

    Personally, I feel like the systems response idea makes a lot more sense... But it only works if the system is able to self-modify. If this system as a whole is meant to represent someone that understands Chinese, then it must first demonstrate the ability to form responsive queries that actually make sense, and then potentially make adjustments to its programming depending on the responses received. But that hits the crux of the problem, because it would have to be a pretty fancy rulebook to have that functionality built-in.

    • @JohnDlugosz
      @JohnDlugosz ปีที่แล้ว +4

      ChatGPT, especially in GPT-4, fluently translates to Chinese and other languages. You can offer corrections or style guidance and it corrects itself and remembers that moving forward...but this does not alter the model! The nature of the Transformer holds the recent memory of conversation as the input to the next pass. GPT-4 is a literal Chinese Room, running on the Azure data center. Translating to Chinese is not something it practiced with feedback during the learning phase. All it did was read text in different languages and learn the patterns within those languages. Meanwhile, it gained the skills to converse convincingly, translate languages fluently, do algebra, solve logical problems, write code, create web pages, and much more, all from this "fancy rulebook".
      The code implementing the neural network is for processing tokens, input and output. When the system undergoes "deep learning", that code does not change. The learning is in the weights between the neurons (or "parameters"). So, once learning is complete, the knowledge is in this structure, not the (same) low-level code being executed.
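      A toy illustration of where the "knowledge" lives (nothing like GPT-4's real code; the sizes, learning rate and data are made-up assumptions): the forward function below never changes, and training only nudges the numbers handed to it.

          import numpy as np

          def forward(tokens, W):
              # fixed "low-level code": identical for an untrained and a trained model
              return tokens @ W

          rng = np.random.default_rng(1)
          x = rng.normal(size=(3, 4))            # some input vectors
          target = rng.normal(size=(3, 2))       # what we'd like the outputs to be
          W = rng.normal(size=(4, 2))            # the weights: the only thing "learning" edits

          print(float(np.mean((forward(x, W) - target) ** 2)))   # loss before training
          for _ in range(1000):                  # crude gradient-descent loop
              error = forward(x, W) - target
              W -= 0.02 * (x.T @ error)          # only W changes; forward() stays identical
          print(float(np.mean((forward(x, W) - target) ** 2)))   # loss after: smaller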

    • @glenblake3021
      @glenblake3021 ปีที่แล้ว +3

      Sure, but that's a problem with the analogy. It's meant to be an analogy for strong AI, and if you designed a system attempting to be a strong AI but it lacked the ability to modify its own rules, well, you've fundamentally misunderstood the problem you're trying to solve. Lack of imagination on Searle's part, imo. One of the more irritating things about the Chinese Room.

    • @rrshier
      @rrshier ปีที่แล้ว +1

      @@JohnDlugosz I disagree that ChatGPT is a literal Chinese Room, given your statement that "You can offer corrections or style guidance and it corrects". That statement alone means you are solving the problem for the processor in the room. The Chinese room thought experiment is the idea that there is no connection between the language the processor in the room knows and the language (with differing symbols) being passed through the room. Your statement provides the connection, thus NOT a literal Chinese room.

    • @harrygenderson6847
      @harrygenderson6847 ปีที่แล้ว +4

      @@rrshier No, ChatGPT does not literally understand the meaning of the statement you are passing it. It 'reads' it as a series of tokens and applies some weightings to calculate the most likely follower. The model itself is part of the rulebook, and the prompt you give it is the string of arbitrary characters being fed in. It could tell you the meaning of English or Chinese, but it doesn't internally separate English and Chinese or derive truth from the statements or something. But it's an abstraction that we apply to the system, the same way we do when fellow humans create waves of pressure in the air.
      Also, just so you know, the 'Chinese room' is turing complete, and can thus be simulated by any turing machine (such as a digital computer) and can simulate any turing machine (such as a digital computer). So ChatGPT could be run inside a literal Chinese Room.

    • @RyanShier
      @RyanShier ปีที่แล้ว +1

      @@harrygenderson6847 - Actually the Chinese room is not Turing complete, as there is no way of storing state or having feedback from an output offered back in. There is a strictly defined set of rules which cannot change (that would be where a feedback path and state storage could come into play). In fact, as defined, it is the literal opposite of Turing complete. Using the example given on the wiki, the person inside the room with the strict rule set is akin to HTML (not Turing complete). ChatGPT, on the other hand, given that you CAN indeed feed it other state that it can store and use to give differing answers, is indeed Turing complete. If the secondary inputs are used for further training of the GPT model, then it is most certainly Turing complete. In terms of understanding meaning, neither do we without context of other surrounding words, placement within a sentence, etc...

  • @lokouba
    @lokouba ปีที่แล้ว +8

    I argue "Strong AI" won't have a necessity to truly "think" if their instructions are elaborate enough to give the ILUSION that it thinks.
    The actual subject of the experiment is not the person in the room, it is the person OUTSIDE the room. And the idea is that if the person inside the room is trained to find these characters quickly enough so they can respond as quickly as if they understood the message written on the paper. They could be convincing the person outside the room that they actually understand chinese.
    The idea, is that you can put a person inside of the room or an AI bot inside of the room and it would make no difference from the point of view of the person outside of the room, if you tell them there is another chinese person in there and tell them to write messages to them, they will likely believe its a human chinese speaker in both cases.
    The conclusion i draw from this is that you give "Strong AI" enough tools, enough instructrions and most importantly a "chinese room" to cover it's true nature it can pretend to be an actual being that "understands Semantics" because human beings are only able to communicate through Syntax.

    • @udarntors
      @udarntors ปีที่แล้ว +1

      This is simple to refute.
      We are sharing meaning, not syntax; syntax may be lacking in a conversation or be minimal, but without shared semantics/meaning there is no communication.
      Example: you can understand a small child or a foreigner who does not use proper grammar.
      Here is some syntax:
      find . -type f -empty -prune -o -type f -printf "%s\t" -exec file --brief --mime-type '{}' \; | awk 'BEGIN {printf("%12s\t%12s\n","bytes","type")} {type=$2; a[type]+=$1} END {for (i in a) printf("%12u\t%12s\n", a[i], i)|"sort -nr"}'
      Here is some meaning:
      Flower in the crannied wall,
      I pluck you out of the crannies,
      I hold you here, root and all, in my hand,
      Little flower-but if I could understand
      What you are, root and all, and all in all,
      I should know what God and man is.
      Alfred Tennyson in 1863

    • @lokouba
      @lokouba ปีที่แล้ว +1

      @@udarntors It seems you misunderstand the difference between syntax and semantics.
      You say you are presenting one example of syntax and one example of meaning. But you are in fact presenting syntax in both cases, because semantics isn't a "message", it's a "concept". The English language is a syntax, C++ is a syntax. And of course shared meaning is part of any conversation, but my point is exactly that these AIs are programmed by people who understand the semantics of the words they are inserting into their repertoire of syntax.
      Because the relationship between syntax and semantics can sometimes be fuzzy, syntax itself can be utilized for deception too; that is the basis for the concept of "doublespeak". Language is only a form of expression, but humans truly lack a reliable way to filter which syntax is backed up with "truth" or (in the case of AIs) "thought".

    • @udarntors
      @udarntors ปีที่แล้ว

      @@lokouba I wasn't really clear in my little exposition there. So, I think that "syntax" and "semantics" are, in fact, as you say, concepts that pertain to language and linguistics. One covers the structure of language, the rules that govern the placement of words in a phrase, and we call this one syntax.
      The other is all about meaning, and the relation between words in linguistics, and we call it semantics.
      I see it as structure and content. Form and substance.
      So :
      "The crocodile flew between the slices." Correct syntax here. Absolutely meaningless.
      I am in total agreement with all you have said about the fact that you can fool humans with sufficient processing power and fiddling with the configuration to accommodate the social norm.
      My reply was about this statement only:
      "human beings are only able to communicate through Syntax."
      Syntax helps to communicate *correctly* as to the social conventions of the time you are in...
      So, my examples were, in fact, of a meaningless but beautifully structured line of bash and a meaningful poem. One is a command that will be interpreted and transformed into lights on a screen as programmed and the other is a tale of epistemology, causality, and determinism.

    • @lokouba
      @lokouba ปีที่แล้ว +1

      @@udarntors Aha, I see. Maybe I should have worded that better. I meant to say that communication is only possible through the use of some sort of syntax, at least from my conventional understanding of what constitutes "communication".

    • @irrelevant_noob
      @irrelevant_noob ปีที่แล้ว +1

      @@udarntors and yet, Alejandro is right that in any communication only the syntax is "given" (or "shared"). The fact that one party attributes some specific meaning to the terms in the message has no effect on how the digital actor (a Turing machine, the person in the room, an AI bot, etc) will process the message. Whether or not that actor *_will_* in fact extract some (subjective) meaning(s) from the message is unknowable. But in any case, the meaning itself is not intrinsic in the message, it is only "in the eye of the beholder"/interpreted by whoever analyzes the message.
      @AlejandroRodriguezP that last part of the OP seems to me to be a kind of "Turing test" for semantics: is the digital actor in the room "good enough" to convince the outside person(s) that they understand Chinese? :-)

  • @DrVaderific
    @DrVaderific ปีที่แล้ว +1

    Loved the lecture. Quick question: now, with the advent of 'machine learning' and apps like ChatGPT, we know that we can teach these machines semantics in some sense. Perhaps the mind can be thought of as a 'machine' that has the algorithms of 'machine learning'. Any thoughts?

    • @conspiracycrusader6687
      @conspiracycrusader6687 ปีที่แล้ว

      Not the poster, but I don’t think we can consider a computer to be conscious until it says no without it being told it can do so

    • @numericalcode
      @numericalcode 11 หลายเดือนก่อน

      Locke, Hume and other empiricists long ago thought humans learn by generating associations just as programs like ChatGPT do.

  • @joemck85
    @joemck85 ปีที่แล้ว +1

    It seems to me what's preventing an AI from learning meaning isn't that it's a digital computer, but a lack of other input and output. If all it sees is symbols, it could deduce what symbols are most likely to follow other symbols and that's all. But give it pictures along with descriptions and it might start to associate certain symbols with traits a picture has, as Dall-E and similar seem to have done. Give it a far richer set of inputs such as an environment filled with objects and other agents it can interact with, either virtual like in a massively multiplayer game or physical via a robotic body, and a sufficiently complex program running on it might start to figure out meaning.

    • @joemck85
      @joemck85 ปีที่แล้ว +1

      ... For a thought experiment in this direction, consider if you're a being -- either a digital computer running a complex program that can form and process associations, or a human mind -- given a multiplayer game. You have no knowledge of Chinese, any of the content of the game, what the objects in it do, etc. You do get enough instructions to know when you've died in the game and that it's undesirable. But other players of the game are aware that you're clueless and want to help you -- similar to the situation of a young child in a nurturing environment. Another player might drop food objects in front of you and speak the Chinese word for food as they drop each one. You pick them up and see that your hunger meter refills, causing you to die less frequently. You might start to associate that word with that item and the good outcome of the hunger meter being full. If players scream other words in Chinese and then a bunch of monsters raid the town and you die a lot, after some iterations of this you could associate those words with the monsters in the game and a bad outcome. And so forth. Eventually after you know a bunch of symbols, the spoken words they go with, the objects they describe and their use in the game, other players might be able to teach you fragments of sentences and so forth.

  • @Conserpov
    @Conserpov ปีที่แล้ว +12

    If the people outside are determined to teach the person inside to understand Chinese, I think they can do it, to an extent.
    It may require at least two distinct inputs though.
    This problem comes up IRL with children who are blind and deaf from birth.

    • @teddydunn3513
      @teddydunn3513 ปีที่แล้ว +5

      Exactly. The Chinese room is set up to treat visual sensory inputs as somehow special and more "real" than other inputs.

    • @pumkin610
      @pumkin610 ปีที่แล้ว +1

      Blind from birth, that reminds me how we can't really describe colors in a way that would let a person who has always been blind know what they really look like - aside from black, if you consider that to be a color, but that's only because I assumed they'd be seeing black all the time, since we see black when we're in the dark; but they aren't seeing anything at all, just as hands and feet don't detect light itself. Red is hot, it's intense, some roses are red; blue is calm, the sky is blue; green is grass; yellow is the brightest color. Colors are their names and the specific visual sense that they are, I suppose. Maybe we aren't seeing colors for what they are either; to me certain colors are certain emotions, certain vibes and certain things.

    • @pumkin610
      @pumkin610 ปีที่แล้ว

      There's gotta be a touch based language, right

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว

      you can give them a rulebook to input->output and learn, but it is their conscious experience as a human that will allow them to find meaning, not the rulebook.

    • @ronald3836
      @ronald3836 ปีที่แล้ว

      @@theconiferoust9598 the rules in the rule book/weights of the neural network/connections between neurons get modified as you learn.

  • @devtea
    @devtea ปีที่แล้ว +9

    Thank you so much for this video! This is by far the best explanation of the Chinese Room. Within Searle's imagined ideal conditions of the experiment, Searle is correct, and the conclusion would be true.
    At least it would have been true for, say, a digital computer such as a calculator or a watch.
    However since Searle's article wasn't concerned with the ideal conditions, and because Searle's article showed interest in extrapolating this into practice - there is indeed a version of a 'Systems' response to Searle's original article.
    One can argue that Searle's conclusion (that it is impossible for the system, or the individual person/central processing unit inside of it, to learn the semantic meaning of the language) is false - because it is impossible to guarantee Searle's imagined ideal conditions (the ideal circumstances of the room - perfect isolation and perfect access control that would prevent any unauthorized outside contact/exchange of information) for an extended length of time.
    Again, the reason Searle's conclusion (that it's impossible for the person/central processing unit to learn the semantic meaning) is false is because the ideal conditions imagined are impossible to guarantee.
    One can argue that there exists a positive, non-zero probability of an outside entity or force stepping into this setup (uninvited) and teaching the person/central processing unit inside the room the full semantic meaning of the entire language, without ever entering the room. For example, by passing information into the room on purpose, with the specific intent (i.e. the intent to train it, as one would train an Artificial General Intelligence). This experiment, given a sufficient length of time in real life, i.e. years, would encounter a non-zero likelihood that these ideal circumstances of the room would be altered by an outside party (i.e. an enthusiastic ML Ops Engineer).
    Since the person/central processing unit inside the room does have command/skill in some language (other than Chinese) that it uses to understand the instructions - it is not impossible for it to receive (from some enthusiastic person/entity outside the room) new information with instructions / process / method that build up into a full semantic understanding of Chinese, or any other language. This is for the classic digital computer. Especially if there's no time limitation, it's not impossible to perform this within much less than the mentioned 1000 years. Difficult and not likely, yes; labor intensive, yes; but not impossible at all; and with humans being humans - significantly probable.
    Of course, the above would not impart a 'human' experience of speaking and interacting in Mandarin or Cantonese. But a full understanding of the language? Yes. I.e., in the case of a digital computer, if it can understand the binary numbering system, it can understand Chinese. It will be able to hold a conversation with a human. It won't sound like a classmate or a neighbor or someone you can relate to as a natural-born human, so the conversation will likely be seen as less meaningful, but the language comprehension would be complete.
    Again, Searle isn't wrong. Within the thought experiment, Searle's conclusion is technically correct. It just has limited utility.
    It's like performing a thought experiment where one requests us to grant them the assumption that telomeres (stretches of DNA) at the ends of our chromosomes do not clip off and get shorter each time one of our cells divide. If we grant that assumption, we'll be able to construct a setup where a human can live 'forever', or at least not die from aging.
    The thing is, since it's not possible to guarantee that "ideal" assumption, you and I are still losing about 26 base pairs per year, so while the thought experiment is really interesting, the conclusions from it alone have rather limited utility.

    • @tedphelps
      @tedphelps ปีที่แล้ว

      Beautiful thinking. Thoughts of all sorts do stand on the stage to be judged for their actual value to us in living. Part of the problem is too strong a belief in proving some idea 'false.' Instead, I feel that ideas influence me, have a certain weight and value, move me this way or that in a wide world that I live in. I am happy for all of this.

  • @davidn4125
    @davidn4125 ปีที่แล้ว +3

    I suppose the same could be said of the human mind as a computer program. If one were to map all the neural connections, then you would be able to know the output of a person's brain given the input signal. It's just that we don't have a way of mapping the connections fully, but that doesn't mean they aren't there. Also, the mind does change over time, but so can a computer program, since AIs are now able to rewrite their own code.

    • @ronald3836
      @ronald3836 ปีที่แล้ว

      Agreed. And even if the human mind is somehow "more" than a computer program, Searle's argument does not show this in any convincing way.
      Ultimately he has nothing better than "syntax can never be semantics" and "humans do semantics". But it is his lack of imagination that tells him that syntax cannot mimic semantics sufficiently closely that we cannot tell the difference. (And interestingly the premise of his experiment is that a syntactic rule book CAN mimic semantics.)

  • @ChadEnglishPhD
    @ChadEnglishPhD ปีที่แล้ว

    Great explanation. Three criticisms come to my mind.
    The first is essentially a false dichotomy; it asserts that "semantics" and "syntax" are mutually exclusive. Indeed, in the Chinese Room scenario, semantics is not produced within the scenario. But that does not mean semantics can never be produced from syntax. It presumes that what we call "semantics", "meaning", or "understanding" are not just built complex forms of syntax.
    Consider how we "learn". You input an apple to a system. By "input", I meant via senses: you see with your eyes many "images" of apples, meaning photons enter your eyes and cause a pattern of signals to your brain.
    You also "feel" an apple, meaning nerves in your fingers send signals to your brain. Taste, smell, and even sound of biting one ... all patterns of electrical signals.
    Your brain correlates each of these in their own domain: what is visually similar about all of them, smell similar, etc., and creates a "ledger" of templates of the apple based purely on domain (sight, smell, sound, taste, touch), and record in the ledger that these are all related to the same thing.
    Also on that list of inputs is language. If each time we recorded data in these domains on this item, we also heard the sounds (input signals coming from the ears) corresponding to the English word "apple", or saw images (signals from the eyes) of the shapes a-p-p-l-e, then the domains (input doors) of audible or written speech also have correlated entries in the ledger. These templates are correlations and simplified representations of apples, and correlate with other things in the ledger such as other round things, other red things, other food, fruits, etc.
    Now suppose somebody "new" takes over, e.g., we forget that we've ever seen the word "apple". The symbol comes to the door: "What does an apple look like?", but we don't understand English or letters.
    We open the ledger and look for those symbols. The response in the ledger is on the page with all of the other domains about apples. We get symbols at the door that look similar but slightly different, "What does an apple taste like?", and then "smell like", etc.
    But we aren't just rule following. We are also continually running the same correlation machine as above. We correlate the symbols at the door shaped "a-p-p-l-e" with the same page in the ledger, but different sections.
    We also correlate questions (symbols at the door) containing the symbols "s-o-u-n-d" with the top of any given page in the ledger, and "t-a-s-t-e" always has a response at the bottom of the page.
    Over time, we associate (correlate) the symbol "apple" with that page in the ledger, "sound" with the top line on the page, "taste" at bottom, "shape" in the middle.
    Now we see new symbols at the door appearing, with recurring "p-o-m-m-e". The ledger instructions say to look up the same page as "apple", and specific areas of the page, but send back "rouge" instead of "red".
    So now what is the difference between this situation and "understanding", "meaning", or "semantic"?
    We apply those words to the ability to draw on correlated patterns. We've "learned" through correlation and organized structure of the information what the symbols "apple" means (page in ledger), what "sound" means (top of page response), "taste" (bottom of page", etc. We learned that "pomme" is another symbol for "apple", and "rouge" is another symbol for "red".
    We learned these things only through the same activities as the Chinese room. What we added was (a) memory storage, (b) correlational computation, and (c) the ability to add to the ledger.
    All of these things are also done by digital computers. The Chinese Room scenario simply artificially limited the capabilities of digital computers, and the humans in the room.
    More complex behaviours can come from simple ones. A real person in that room could also remember symbols, recognize patterns in the symbols, and recognize organizational structures in the ledger, inputs, and outputs, and could "learn" Chinese in these patterns.
    Now, you might say they haven't learned the meaning because they can't associate the symbols for an apple with the real-world apple, but that is because we've artificially limited the input signals to messages at the door. They can understand an apple in terms of their input environment. The thought experiment assumes the pre-existing environment of all the other senses we humans have, but are denied in the scenario.
    But in that context, humans also can't "understand" anything beyond our input environment. We don't have context for relativistic effects, X-rays, infrared, frequencies we can't hear, etc. Other beings with different input capabilities might "understand" different from us.
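    A toy version of that "ledger" (hypothetical sense data, obviously nothing like a brain): it only records which signals on different channels keep arriving together, yet a cross-channel lookup already behaves a lot like "knowing what the word refers to".

        from collections import defaultdict, Counter

        # hypothetical experiences: bundles of signals arriving together on different input channels
        experiences = [
            {"sight": "round_red", "taste": "sweet_crisp", "word": "apple"},
            {"sight": "round_red", "taste": "sweet_crisp", "word": "apple"},
            {"sight": "long_yellow", "taste": "sweet_soft", "word": "banana"},
            {"sight": "round_red", "taste": "sweet_crisp", "word": "pomme"},   # new symbol, same page
        ]

        ledger = defaultdict(lambda: defaultdict(Counter))   # ledger[channel][signal] -> co-occurring signals
        for bundle in experiences:
            for ch_a, sig_a in bundle.items():
                for ch_b, sig_b in bundle.items():
                    if ch_a != ch_b:
                        ledger[ch_a][sig_a][(ch_b, sig_b)] += 1

        # "what does 'apple' taste like?" is just a cross-channel lookup in the ledger
        print(ledger["word"]["apple"].most_common(2))
        # "pomme" points at the same sight/taste entries - another symbol for the same page
        print(ledger["word"]["pomme"].most_common(2))

    Whether that kind of correlation machinery is all "understanding" amounts to is exactly what the Chinese Room is supposed to settle, or fail to.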

  • @mooncrime4998
    @mooncrime4998 ปีที่แล้ว +1

    In my opinion, the problem with Searle's argument here is that he does not demonstrate that humans, by this metric, *do* have semantic understanding. There is no way to disprove his argument because the goal posts of semantic understanding are poorly defined. What is it that a computer needs to do to demonstrate semantic understanding? What does a human do that demonstrates semantic understanding that a computer doesn't?
    If we understand semantic understanding as a mental association between a symbol and an idea, then there is grounds to argue that computers are capable of this (you can argue that their association of a set of symbols with a task like shutting down or running an application doesn't count as an association with an *idea*, and I am very interested in what that definition of idea might look like if y'all have suggestions). If semantic understanding has a stricter definition than the association of a symbol with a certain idea, then it stands to exclude humans, since you could argue that humans only understand thoughts through their association to stimuli and the symbols associated with those stimuli. Maybe I'm too cognitive linguistics-pilled though

    • @mooncrime4998
      @mooncrime4998 ปีที่แล้ว

      To be clear I agree with Searle's conclusion, just not his argument. I'm just playing devil's advocate

  • @jorgemachado5317
    @jorgemachado5317 3 ปีที่แล้ว +7

    What Searle has apparently discovered is that a computer alone would be a zombie. But a human alone would be a zombie too. Semantics is not an intrinsic part of the world. Semantics is what emerges from sociability. If a strong AI spent enough time with humans, I think it would stop being a zombie at some point

    • @annabizaro-doo-dah
      @annabizaro-doo-dah 3 ปีที่แล้ว +1

      What about when humans perform myriad behaviours they have no understanding of? I was thinking in particular of syntax. I learnt no formal grammar at school; literally no syntax. Yet I understand how to write formal English pretty well, I think. I perform the function of writing formal English without understanding almost any of the rules on a conscious level, except perhaps how to use basic punctuation.

    • @jorgemachado5317
      @jorgemachado5317 3 ปีที่แล้ว

      @@annabizaro-doo-dah Unless you believe there is something like a material ground for syntax (which I don't think is true), I believe that this learning is just a mimetic process. That explains why things change historically. People are learning new ways to perform and changing those processes by the output of those same processes
      EDIT: By material I mean a physical object. Of course syntax is material in the sense that it exists as an abstract concept

    • @recompile
      @recompile ปีที่แล้ว +1

      You've completely misunderstood Searle. Go do the suggested reading and try again.

    • @jorgemachado5317
      @jorgemachado5317 ปีที่แล้ว

      @@recompile You wrong! Hur dur Go read!

  • @emanuelbalzan7667
    @emanuelbalzan7667 ปีที่แล้ว +10

    Absolutely love this presentation. I have only one criticism - the description of Chinese symbols as "squiggle squiggle" or "squoggle squoggle" is not racist. English (or Latin) characters would appear as such to a Chinese person who didn't know what they were.
    I am old fashioned enough to believe that the word racism should be reserved to describe beliefs of racial superiority and inferiority, and behaviors of injustice and exploitation that flow from such beliefs.
    I wouldn't bother anyone with this, except I really do feel we need to be a little less sensitive about these issues. They are increasingly being used to fracture a very polarized society.
    I would not take offense at anyone referring to my writing as "squiggle squiggle" or "squoggle squoggle" even if they could read English - but perhaps that's because my handwriting verges on the indecipherable anyway.

    • @t.b.a.r.r.o.
      @t.b.a.r.r.o. ปีที่แล้ว +1

      Agreed.
      Though I would call some written English "hodgepodge" or "scribble-scrabble".

    • @GynxShinx
      @GynxShinx ปีที่แล้ว +1

      The only problem comes when actual racists are ostracized from society so they hide their actual views and dogwhistle them by saying stuff like "Chinese is just squiggle squiggle." When said by a racist, it implies that chinese people aren't smart enough to create real language which IS a supremacist idea. Now, should we react by calling someone a racist when they say "Chinese is a bunch of squiggles"? I doubt it. But should we be suspect of them? Sure. If you know this individual and know they don't do legit racist stuff, then they're fine.

    • @magicpigfpv6989
      @magicpigfpv6989 ปีที่แล้ว

      Ask to see your doctor's handwriting… that shit is nothing but squiggles!

  • @jaylives5791
    @jaylives5791 ปีที่แล้ว +23

    if the incoming questions were varied enough, eventually a person would create a framework to understand each symbol, given enough time; even if they couldn't imagine the real world equivalent, they'd be able to relate the symbols to each other (maybe it would take hundreds or thousands of years, but a computer can make thousands of years worth of calculations in seconds)

    • @brotherpaul963
      @brotherpaul963 ปีที่แล้ว

      Sort of like monkeys with computers.

    • @mark_a_schaefer
      @mark_a_schaefer ปีที่แล้ว +6

      Yes, you could relate the symbols to each other-I always see X when Y comes along-but you still wouldn't know what they meant without some context. And here's the thing: even knowing that the symbols are symbols of a language is more information than an actual processor has. That's more context than a digital computer has. Imagine receiving these Chinese characters and you had never seen written language before. Your instructions were given to you orally (you have a really good memory). It might never occur to you that these symbols even were encoding language. For all you know, they could be wallpaper patterns that you're arranging according to some guidelines for aesthetics.
      The part that everyone is having such trouble with in this thread is imagining a world without any contextual cues of any kind-because our world is full of such cues. But a Turing machine doesn't need them, because it doesn't need to understand. And because it doesn't need to understand, it is not given any context for what the symbols are.

    • @jaylives5791
      @jaylives5791 ปีที่แล้ว +1

      @@mark_a_schaefer For sure, pardon my bad explanatory skills; but first, it wouldn't be as simple as "if X then Y". It'd be more like, for example: a long series of characters that results in a certain answer series; then another long series that is slightly different, with a resulting answer series that is also slightly different. Eventually the similarities between question series would point to a set of answer series, where the variability would narrow down the potential correct answers, and new question entries similar enough would bring up that set of potential answers, even before being confirmed by the incoming answer. If that makes sense.
      Do this enough times and you create a framework that mimics having context, without any actual context

    • @rickgilbert7460
      @rickgilbert7460 ปีที่แล้ว +3

      @@jaylives5791 Yes! But that reinforces Searle's argument, it does not contradict it. Sure, the system might get better and better at presenting the illusion of understanding (to follow the metaphor, it might have to refer to the rulebook less often to get convincing "answers"). But no matter how good it got at recognizing that "leaves" and "tree" seem to come together a lot, Searle would say that does not mean that it has any better idea after a billion operations what an actual tree *is* than it did at the start.

    • @jaylives5791
      @jaylives5791 ปีที่แล้ว +2

      @@rickgilbert7460 Interesting, what I'm trying to get at, though, is that this is the same way that we come to get context. I think we understand things by how we cluster them with various categories, and when introduced to something new, we measure it against those based on patterns, similarities, etc., and that's what context is, more or less.
      Also, the questions/requests couldn't be restricted to something as simple as relating leaves to trees; I don't think I'm conveying very well (not a great communicator here) just how much variability and frequency the requests would have to have. Context wouldn't begin when relating leaves to trees; it would begin when the description of a new or nonexistent leaf is entered, and is distinguished and positioned in a cluster that contains or crosses with every other known leaf (as well as flora, green things, things that are raked, etc.; imagine a multi-dimensional grid). (A big difference is that most of the data we get come from our senses, whereas computers would get them directly as a binary series, but other than that, I suspect our subconscious follows a very similar method, perhaps less neatly.)
      But maybe I'm missing something. I feel like this is what we do at the micro level, and what we call understanding is just the macro result of these interwoven processes; but it sounds like you see a different way of coming to understanding, so, if this is a fair question to ask: from our perspective, what is a tree, or rather, how can we understand what a tree is, other than as a grouping of its properties?
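      As a rough sketch of the clustering idea described above, here is a minimal Python example that relates symbols purely by how often they appear together; every sequence and number in it is invented for illustration, and English words are used only so the example is readable (to the machine they are opaque tokens).

        from collections import Counter
        from itertools import combinations
        import math

        # Toy "corpus" of symbol sequences: the operator never sees meanings,
        # only which symbols tend to show up together.
        sequences = [
            "tree leaf branch", "tree leaf green", "river water fish",
            "river water boat", "leaf green rake", "fish water boat",
        ]

        cooc = Counter()   # symbol pair -> co-occurrence count
        freq = Counter()   # symbol -> total count
        for seq in sequences:
            symbols = seq.split()
            freq.update(symbols)
            for a, b in combinations(sorted(set(symbols)), 2):
                cooc[(a, b)] += 1

        def similarity(x, y):
            """Crude association score: co-occurrences normalised by frequency."""
            pair = tuple(sorted((x, y)))
            return cooc[pair] / math.sqrt(freq[x] * freq[y])

        # Symbols cluster by usage alone: 'leaf' lands nearer 'tree' than 'boat',
        # with no meaning ever supplied from outside the system.
        for other in ["tree", "green", "boat"]:
            print("leaf ~", other, round(similarity("leaf", other), 2))

      Whether such usage-based clusters amount to context in Searle's sense is exactly what this thread is arguing about; the sketch only shows the mechanics.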

  • @Leao_da_Montanha
    @Leao_da_Montanha ปีที่แล้ว +2

    If humans understood the semantics of words in the way Searle requires in order to disqualify strong AI, there would be no communication problems at all, as if different orderings and explanations in the learning process resulted in the same meaning for every word in every mind. In general, semantics differ for each individual and depend on the learning process they had; at bottom, the memory we acquire for each symbol is updated as we learn newer symbols, until there's enough context for semantics. In other words, we all fundamentally work as Turing machines, just arranged into a complex system.
    I would love to read comments on this, feel free to respond

    • @ronald3836
      @ronald3836 ปีที่แล้ว

      It would be easier to comment if I disagreed with you, but I don't 🙂

  • @brunobeloff9611
    @brunobeloff9611 ปีที่แล้ว +1

    Thank you Jeffrey for the lecture, but the Chinese Room is not Searle's best idea. Perhaps connected to his inability to explain the origin of his room's symbol-translation rules, he is left with an evidential question: not whether, but how is it that the brain (essentially, a digital computer) is able to host what we experience as understanding?

  • @Flynbourne
    @Flynbourne ปีที่แล้ว +5

    I think the issue here is a lack of definition of what "understanding" is. It seems to me that understanding is (a) identifying the underlying structure or rules, then (b) doing your own internal tests to see how accurate your internal model is, and then (c) relating that model to other models you have knowledge of. So understanding is just relating models. In the Chinese Room, if you were presented with all of life's experiences and then related these to characters, then you would understand Chinese.

  • @ameliagerson926
    @ameliagerson926 ปีที่แล้ว +12

    I actually can read Hebrew and was so excited I knew what that meant bc it was the only part of the video I confidently knew lol

  • @perfectionbox
    @perfectionbox ปีที่แล้ว +4

    An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

    • @danwylie-sears1134
      @danwylie-sears1134 ปีที่แล้ว +6

      Any halfway-adequate rule book would include, at least implicitly, instructions for rearranging the baskets and the labeled papers in a way that perfectly mimics learning of new idioms.

    • @recompile
      @recompile ปีที่แล้ว

      How many times are you going to post this? I replied to this exact post from you earlier.

    • @sschroeder8210
      @sschroeder8210 ปีที่แล้ว +1

      I don't think your concept is valid: understanding something vs. understanding how to learn something are two different concepts.
      You might know English, right? But can you understand the semantics behind 'ZZuetixlo'? I presume not because it's a new word that I just created.
      So, do you not understand English anymore? Of course not. You simply haven't been given the chance to learn the new word that I've chosen to create. You still understand English and understand how you can learn new words. You simply haven't been given the opportunity (from me) to ask: "What is the semantics behind your new word?"
      If a new word acts as a fundamental axiom and isn't derived from the concatenation of other words, then you shouldn't be capable of understanding the semantics of that new word...
      Thus, the 'System' shouldn't have to demonstrate something that we don't inherently have the capacity for doing when we express a sense of understanding...

  • @guidomatusevich7441
    @guidomatusevich7441 2 หลายเดือนก่อน

    Just found your channel and it's been a great source of reference, thank you! I was wondering about the conclusion... would the idea of "understanding" Chinese, or any other language, be closer to the second Wittgenstein? Mainly regarding "meaning is use"... So there'd be no actual possibility of passage from syntax to semantics, but only a continuous misunderstanding?

  • @ChrisJones-xd1re
    @ChrisJones-xd1re ปีที่แล้ว +1

    Semantic understanding can emerge with sufficient instruction, and cannot without it.

  • @captaingerbil1234
    @captaingerbil1234 3 ปีที่แล้ว +5

    I take the systems response as an approach to refute Searle. His argument almost seems to imply that we create the semantics of the word, when really all we do is assign it to objects and states already existing in the world, and then we assign symbols to that semantic meaning. I believe it is possible to create a machine that operates through computational methods and is capable of understanding. Great lecture, by the way.

    • @cf6755
      @cf6755 3 ปีที่แล้ว

      The person in the room is not the one who knows Chinese; the rule book is. If you killed the person and replaced him with somebody else, it would be the same thing, because of the rule book. It is the rule book, not the person, that is really writing the Chinese.

    • @recompile
      @recompile ปีที่แล้ว +2

      If you think it's possible, prove it. Show how meaning can be derived from pure syntax. Even a very simple example would refute Searle. So far, no one has been able to do it, despite the outrage his argument seems to generate in people.

    • @Matt-zp1jn
      @Matt-zp1jn ปีที่แล้ว

      The “Systems approach” cannot CREATE or assign the semantics of the word or symbol. Searle is correct in that the syntax of the Turing computer, etc., is basically hardware that can organize symbols only according to the programming of the software, the rules the computer must follow. The semantics, the meaning and understanding of the Chinese symbols or binary coding, must be supplied by the computer programmer or some conscious intelligence OUTSIDE of the hardware, or through digital wifi etc. Searle has successfully argued that Strong A.I. and Functionalism are wrong theories.
      Of course big Tech, social media giants, and digital A.I. scientists want to refute Searle's theory and will use complex algorithms, a human-like robot interface, and digital wifi/Bluetooth information transfers from an intelligent, self-conscious source (a human programmer lol), to portray the A.I. robot as capable of understanding semantic meanings instead of just grammar or syntax or whatever has been programmed into the software by an outside intelligent, creative human being, i.e. the programmer.
      This is why I think they are going to strive towards a more android approach where humans are "upgraded" with A.I. digital software through a neural analogue-digital interface that allows the human being to take the syntax info and assign appropriate understanding and meaning to the software downloaded into his neural brain link etc.
      It is a very questionable path and risk for humanity, imo.

  • @foogod4237
    @foogod4237 ปีที่แล้ว +7

    The idea that Alan Turing committed suicide has come to be rather suspect in recent years. That conclusion was very hastily made at the time and never really investigated, and it turns out on later analysis that it is actually quite possible that he simply died from accidental poisoning from chemicals he was using in various experiments at the time, and had no particular desire to kill himself at all. Even if his death actually _was_ suicide, we could still only really guess at _why_ he would have done it, and would almost certainly never know for sure, because he did not leave any sort of note, or really confide in anyone prior to that point anything that might be construed as a definitive explanation (which is also another part of the reason why the suicide idea is a bit suspect).
    The "ignominiously died due to a science experiment" version doesn't make for as good a story, though, so a lot of people like to overlook it, and instead just go along with the fairy-tale-like depiction of a great man so tortured by other people's intolerance that he was inevitably driven to take his own life.
    That having been said, Alan Turing was a truly great man, and the way he was treated by the government and society as a whole was truly horrible, particularly for a man who contributed so much to the society which ultimately shunned and persecuted him merely for being who he was. How or why he actually died aside, that is still a very sad tale worth telling and reflecting on, IMHO.

  • @AliceBelongs
    @AliceBelongs 4 ปีที่แล้ว +8

    This really helped me with my essay, thanks for uploading! :)

  • @juantubec
    @juantubec ปีที่แล้ว

    In the end it doesn't really matter when syntax is just one piece of the puzzle that is semantics. Here are some more examples of semantics:
    Words: The meaning of individual words is a prime example of semantics. For instance, the word "happy" conveys a positive emotion, while the word "sad" conveys a negative emotion.
    Sentences: Semantics can also be seen in the meaning of sentences. For example, the sentence "The cat is on the mat" has a different meaning than "The mat is on the cat."
    Context: The meaning of words and sentences can change depending on the context in which they are used. For example, the word "bat" can mean a flying mammal or a piece of sports equipment, depending on the context.
    Connotations: Semantics can also involve the connotations or emotional associations that people have with certain words. For instance, the word "home" may evoke feelings of comfort and security, while the word "prison" may evoke feelings of confinement and restriction.
    Tone: The tone in which something is said or written can also convey different semantic meanings. For example, a sarcastic tone can indicate that the words being spoken or written are meant to be interpreted in a different way than their literal meaning.
    It can be argued that ChatGPT can do this type of semantic puzzle solving. And it is very capable of creating semantically coherent and contextually appropriate responses, that are beyond simple syntax structure.
    The Chinese Room experiment fails to acknowledge that semantic comprehension can also be taught in 1s and 0s.
    Also, a lot has changed since the 80s. We literally have quantum computing and are no longer tied to simple games of Pong. Our understanding of programming has gone beyond syntax-structure pattern recognition. This is why we are afraid of ChatGPT in the first place: it is taking 1s and 0s to an unimaginable scale, semantics included. We didn't simply count to one trillion, we created unimaginable numbers. We can define infinity despite not being able to experience it directly. And likewise, AI can make calculations based on concepts that do not exist through physical experience.

    • @rockprime1136
      @rockprime1136 ปีที่แล้ว

      But doesn't ChatGPT just calculate the probability that its output text is the most likely response to a given input text? Some have even described it as juiced-up autocorrect. It still does not understand semantics. It is just shuffling symbols meaninglessly and doing probability calculations based on its algorithm. I see ChatGPT as the Chinese Room Argument materialized.
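      As a toy illustration of that "probability of the next token" picture, here is a minimal bigram model in Python; the training text is invented and real language models are vastly larger and more sophisticated, so this only shows the bare mechanism being described.

        from collections import Counter, defaultdict

        # Tiny invented training text; a real model sees trillions of tokens.
        text = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."
        tokens = text.split()

        # Count which token follows which: pure symbol statistics, no meaning attached.
        follows = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            follows[prev][nxt] += 1

        def next_token_distribution(prev):
            counts = follows[prev]
            total = sum(counts.values())
            return {tok: n / total for tok, n in counts.items()}

        # The "answer" is just whichever continuation was most frequent in the data.
        print(next_token_distribution("the"))
        print(max(next_token_distribution("cat").items(), key=lambda kv: kv[1]))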

  • @user-ju7dx8mu6d
    @user-ju7dx8mu6d ปีที่แล้ว

    Fascinating. Perhaps the system doesn't have semantics but as soon as the system does something with its output, the result is indistinguishable from understanding meaning. The box instructs a mechanical box to pick up the red block. The machine appears to understand "pick up", "red", and "block". Once an action is applied to the output, how is the machine's concept of meaning any different from whatever our concept of meaning is?

  • @tomsweder7459
    @tomsweder7459 ปีที่แล้ว +4

    Now define "understanding" and "meaning" and how you can test for it. I'm very interested in the assumed metaphysics behind this point of view.

  • @dylanberger3924
    @dylanberger3924 ปีที่แล้ว +8

    I love this thought experiment because of the two assumptions it grants computer science that we’ll likely never even see emerge from the field.
    1) a perfect set of instructions. CS needs to produce a perfect state table for the Turing machine, as the book in the room is assumed to be.
    2) you are a human with a brain trying to pick up on meaning, with memory and cognitive ability in particular. You are aware of the fact that these symbols could even represent an idea, and can pick up on traits you naturally know belong to language, e.g. pattern recognition. MAYBE, just MAYBE, you could learn Chinese. A Turing machine is a set of transistors firing; it isn't looking for any of that. After all, how would it "think" to?
    I’ll probably elaborate and finish this when my phone isn’t about to die and I’m not this tired. But something to think about

    • @xirenzhang9126
      @xirenzhang9126 ปีที่แล้ว +1

      spoiler alert: he never elaborated and finished his comment

  • @jaytomioka3137
    @jaytomioka3137 ปีที่แล้ว +4

    Some of the basic, elemental Chinese symbols like “mountain” 山 or “fire” 火or “ river” 川 or “moon” 月 or “person” 人 … are abstracted pictographs and can bridge the gap between syntax and semantics. The very structure of more complex Chinese characters are based upon a formal combination of and further abstraction of simpler characters. Over time, although unlikely; it could be possible to begin to distinguish patterns and decipher some semantics from the visual syntax of Chinese. Searle’s analogy of the Chinese Room is a reconstruction and recontextualization of Plato’s allegory of the cave. He equates Chinese symbols to the Shadows cast on the cave wall. The problem with this argument is that given enough time and repetition and associative logic; a semantic correlation can be gleaned from the visual syntax. Because the silhouette of an object is an abstraction of that object based on formal and visual syntax ie. Shape and Light. There always will be those who can extrapolate from insufficient data… can AI?

    • @NicoAssaf
      @NicoAssaf ปีที่แล้ว +2

      The interesting thing is that this doesn't contradict Searle's argument. Certainly, if you give the person in the room greetings and goodbyes at the same times every day, they may start deducing that some symbols represent "Good morning" and others "Good night." You can make it easier for them and require them to send out exactly the same symbols you introduced, making it clear that you're exchanging greetings in the morning or saying good night to each other. The moment a certain symbol or combination of symbols becomes associated with a particular event or behavior, meaning is instantiated. The point is, you could still conceivably design a system that doesn't allow for any kind of semantic deduction, rendering the understanding of Chinese impossible for the person. For example, you can have an exchange of symbols about quantum dynamics one day and about trees and forests the next. Do it only a few times so there's no possibility of repeating symbols that can lead to semantic associations. In any case, the localized understanding or non-understanding of Chinese by that specific part of the system (the human) doesn't affect the whole's capacity to understand Chinese.
      You could even add an actual robot (a type of digital computer) to the room and mechanize the exchange of symbols. When it uses its camera and identifies a particular Chinese character, it is required to move to a certain place, scan for a particular symbol stored in its memory, retrieve it and push it out of the room. There certainly is computation going on, but not cognition, either by the robot or the whole system. In the case of human beings inside the room, cognition is nothing but a localized event in the person's body that may speed up the system's computation, assuming that their robot counterpart functions at a sub-human level.
      Whatever the means of symbol exchange, the system is not cognitively understanding the semantics of Chinese. Without observers already capable of meaning, the system's behavior is nothing but the redistribution of pieces of paper from one room to the other. It doesn't matter whether the processing unit is a Chinese-impaired human being, a robot with a set of scanning and movement instructions, or even a Chinese native speaker.
      Edit: Oof, btw, sorry about the wall of text. I love the topic and got carried away lol

    • @andrewwrench1959
      @andrewwrench1959 ปีที่แล้ว +1

      @@NicoAssaf Actually, we don't have to speculate about the validity of Searle's argument as there is already a counter example thus falsifying his conjecture. This is the deciphering of Linear B. Even were that not the case, given enough experience learning the patterns, together with elements of our own experience, sufficient evidence would be available. For example one could work out whether the language was symbolic or alphabetic, or a combination like Egyptian hieroglyphics. We could make our own conjectures, for example if a symbol or combination of symbols represented water. Water should appear in a lot of questions and answers, and a lot of contexts. Those contexts would be coherent, patterns would emerge. It would take a long time, and lots of experimentation, but that is certainly not the same as never.

    • @NicoAssaf
      @NicoAssaf ปีที่แล้ว

      @@andrewwrench1959 That's kind of my point, that regardless of the processing unit's capacity for semantics, the whole system is effectively purely syntactical. It doesn't matter whether the person or robot inside the room understands Chinese. The room doesn't.
      I'm interested in counterarguments, though, so if you can elaborate on the Linear B bit, that'd be great.

    • @andrewwrench1959
      @andrewwrench1959 ปีที่แล้ว +2

      @@NicoAssaf Linear B was a written language used in Bronze Age Crete and mainland Greece. It is a language known only from what is written on clay tablets, there is no Rosetta Stone equivalent. Oxford Research Encyclopedias has a comprehensive article. There are circa 5500 inscribed tablets with more found occasionally. So very far from a "forever" question and answer stream of information. So not only were the semantics recovered but also the syntax, a far more difficult problem than the one Searle presents.
      There is some recursion here, as there were something, people, that understand semantics that deciphered Linear B. But Searle explicitly includes a person in his loop, so this is fine. To be clear, the refutation is not one of a digital computer understanding semantics, but its use against functionalism.

    • @brandyraccoon1473
      @brandyraccoon1473 ปีที่แล้ว

      Great point! The Chinese characters for numbers could be deduced, since "1", "2", and "3" are just those number of horizontal lines. The rest can be deduced if the translation involves any math using Chinese numbers. The characters for "up" and "down" are mirrored and point up and down. So their meanings could be deduced from the syntax.

  • @piotrne
    @piotrne ปีที่แล้ว +1

    Searle's proof can be summarized as follows: a machine will not understand anything, because only a human can understand. This is more or less the meaning of the summary (27:41): "There is no way that the system can get from the syntax to the semantics". In my opinion, it's a bit of a weak argument: no, because no.

  • @mohnaim5824
    @mohnaim5824 ปีที่แล้ว

    Impressive talk yet again, well done you are a natural inheritor of Sagan.

  • @p.bckman2997
    @p.bckman2997 ปีที่แล้ว +4

    There's clearly intelligence (semantics) in the Chinese Room, it's just not the person in there who provides it. The actual intelligence comes from the rulebook, which requires an actual intelligence to write (and possibly a superhuman one at that).

    • @DocBree13
      @DocBree13 ปีที่แล้ว

      I’d argue that a book explaining semantics is not intelligent and does not understand semantics. Something which requires intelligence to write is not then made intelligent.

    • @p.bckman2997
      @p.bckman2997 ปีที่แล้ว

      @@DocBree13 , well, it's a matter of how you frame it, I guess, which is often the case in philosophy. The book is just paper and ink and clearly not sentient, like you say. The instructions are meaningful, though; I would say that the intelligence of the writer is stored on the pages. The intelligence that answers the input questions is the book's writer; he's just using other means to do so than sitting in the box and answering them himself.

  • @anxez
    @anxez ปีที่แล้ว +4

    I can think of some direct counterexamples.
    1: A Chinese Translating room. Set up the Chinese room but make the output of the room in a language known to the operator. Suddenly the situation changes, and the operator could pull semantics out of the text, we'd all agree on that. Maybe it would take years, maybe the rule set itself would be considered a way to teach Chinese. But what is the effective difference between the Chinese Room and the Chinese Translating Room?
    Short answer is context: the operator is able to connect the symbols to previous meaning.
    This is a mostly trivial difference because it is piggybacking on previous semantics to generate new semantics. But it does bring up a possible refinement
    2: The Chinese Picture Room:
    Set up the Chinese room just the same: have a set of rules that govern syntax-in => syntax-out, bins which receive characters, bins which accept characters, only now have every set of characters come with a snapshot of the situation they appeared in.
    Once again, semantics have appeared, this time a little more robustly: the operator doesn't need any native language in order to learn Chinese from this set up. It may take years, they may be unable to speak the language, but the operator will be able to develop a semantic knowledge of the language.
    Heck, go one step further: by feeding the person in the room pictures and characters in the right way, that person can make the rule set themselves without being programmed, because that's what babies do. And spoiler alert, that's what Turing-complete machines do too, by the way.
    Honestly though, this thought experiment does a lot of heavy lifting by not defining semantic understanding or proposing how it actually arises. He just takes it for granted and then gestures at how it doesn't arise from his specific scenario and when he's given a silver bullet argument against it the response is to just shrug it off.
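    A minimal sketch of the Picture Room idea might look like the snippet below; the characters, "snapshots" and feature numbers are all invented stand-ins for real sensory input, so this is only meant to make the grounding step concrete, not to settle whether it counts as understanding.

      # Each incoming character arrives paired with a crude "snapshot" of the scene,
      # faked here as a small feature vector (brightness, greenness, wetness).
      observations = [
          ("树", (0.6, 0.9, 0.2)),   # scenes containing a tree
          ("树", (0.5, 0.8, 0.3)),
          ("水", (0.4, 0.1, 0.9)),   # scenes containing water
          ("水", (0.5, 0.2, 0.8)),
      ]

      # "Grounding" a symbol = averaging the scenes it appeared in.
      grounded = {}
      for symbol, features in observations:
          grounded.setdefault(symbol, []).append(features)
      grounded = {s: tuple(sum(v) / len(v) for v in zip(*fs)) for s, fs in grounded.items()}

      def closest_symbol(scene):
          """Name a new snapshot with the symbol whose grounding is nearest."""
          dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
          return min(grounded, key=lambda s: dist(grounded[s], scene))

      print(closest_symbol((0.55, 0.85, 0.25)))   # prints 树 for a tree-like scene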

    • @nitramreniar
      @nitramreniar ปีที่แล้ว +2

      To point number one:
      Changing this part of the setup might work logically as a thought experiment, but disjoints the connection between the Chinese Room and the digital systems it is meant to be analogous to. The fact that you could learn Chinese by being given random Chinese symbols and phrases and having instructions on how to translate them into a language you know, is based in the fact that you *already* know a language; that you already have the semantic understanding in one version and now only need a way to transfer those semantics into Chinese. The reason why the thought experiment has both input and output in a language you don't understand, is because we - for this context reasonably - assume, that the digital computer has no deeper language that it understands semantically and that it would just need a translation for it to truly learn and understand Chinese.
      On the second point I agree with you.
      In fact, I feel that the thought experiment already betrays a problem with the result in its own setup. The thought experiment ask us to imagine *ourselves* - a human mind - in the situation of a digital computer in this room and uses our human intuitive understanding of how we could never truly understand Chinese through this setup to reason that a digital machine could also never do this and thus digital machines are distinctly different from human minds.
      But: It started by reasoning that the human mind couldn't do this, so how can that show a difference between those two systems?
      The ultimate difference is in how humans learn their first language not just syntactically but with an understanding of the semantics: by more than just looking at random symbols as an input and figuring out the appropriate output. In fact, written language is already a bad point of comparison, as written language is already something humans only understand through translation! It's a translation from the spoken and internal understanding of language, and the connected semantics, *into* written language.
      Humans seem to learn their first language by engaging with it with a multitude of senses; we *experience* the language and the contexts in which different sounds and gestures are used, until we have enough of an understanding of the language itself to further explore nuances through the already established understanding we have.

  • @timothyblazer1749
    @timothyblazer1749 ปีที่แล้ว +3

    Penrose went on to show that consciousness is non-computable, which is an additional blow to strong AI.
    Of course, AI proponents are basically ignoring both Searle and Penrose.

    • @SmileyEmoji42
      @SmileyEmoji42 ปีที่แล้ว

      Penrose just proved that his model of consciousness is non-computable.
      Strong AI people would just say that all he had proved is that his model of consciousness is wrong.
      The latter seems more compelling as there is clearly no generally accepted, rigorous definition of what consciousness even is.
      When dealing with very smart people the loopholes are always in the premises not the reasoning.

    • @timothyblazer1749
      @timothyblazer1749 ปีที่แล้ว

      @@SmileyEmoji42 Seriously? He used the most general form of definition that exists, aka "the act of understanding", which makes it totally clear that no matter what your definition is, Gödel will apply, because it's a subject-object relationship. This is baseline reality, and unless you or anyone else can show that the scientific method can be applied without that baseline quality of reality, it's not a proof of "his" theory. It's a proof of "the" theory.
      Put another way, there is no assertion you can make about "rigor" without a subject-object relationship. We're in turtles-all-the-way-down territory. If Strong AI people think differently, they need to study set theory AND epistemology.

  • @innotock
    @innotock ปีที่แล้ว

    I think what is missing from this argument is the ability of digital systems to also ingest the result of the syntactic interpretation through “reinforcement learning”. Once the system “learns” the consequences of the “decisions/calculations/interpretations” the semantic layer starts building up.
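    A toy sketch of that feedback loop might look like the following; the candidate symbols, reward rule and learning rate are all invented, and real reinforcement-learning systems are far richer, so this only illustrates the bare "consequences shape future output" mechanism.

      import random

      candidates = ["A", "B", "C"]
      value = {c: 0.0 for c in candidates}   # learned "worth" of each response
      alpha = 0.3                            # learning rate (assumed)

      def environment_reward(choice):
          # Stand-in for the world's reaction; "B" is secretly the response that
          # gets rewarded, but the learner is never told why.
          return 1.0 if choice == "B" else 0.0

      for step in range(200):
          # Explore occasionally, otherwise exploit the current best guess.
          if random.random() < 0.1:
              choice = random.choice(candidates)
          else:
              choice = max(value, key=value.get)
          reward = environment_reward(choice)
          value[choice] += alpha * (reward - value[choice])   # simple update rule

      print(value)   # "B" almost always ends up with the highest learned value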

  • @rmsgrey
    @rmsgrey ปีที่แล้ว

    A key question here is what is it about human brains that exempts them from Searle's argument?
    It's also worth observing that the intuition the Chinese Room appeals to massively underestimates the scale of any such system. Suggesting the book in question might have "as many" as a million pages, as the video does, is akin to saying that the Moon might be as distant as one foot away from the ground: the mental picture such an estimate gives is so wildly out of scale as to be entirely misleading. Incidentally, the rule-book in the Chinese Room, if printed as a standard paperback (aside from the page count) using a standard font, etc., if placed in the tropics, would be in danger of singeing its cover against the Sun, though that cover would also be traveling at about 4% of the speed of light. (Calculations based on the assumption that the book would need to encode approximately the information content of a human brain in order to produce output indistinguishable from the workings of one.)
    The dismissive concept of a computer made of tin cans being conscious looks less ridiculous when you contemplate an assemblage of tin cans at least the size of Manhattan.
    And, of course, anyone with an interest in cryptography knows that observing purely formal patterns can allow you to start to extract the meaning from an otherwise incomprehensible string of symbols.
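    For what it is worth, the rotational part of that scale picture can be sanity-checked with a short back-of-envelope script; the page thickness, characters per page and "book reaching roughly to the Sun" height below are assumptions chosen to match the image in the comment, not derived figures, and published estimates of a brain's information content vary by many orders of magnitude.

      import math

      # Assumed figures, purely for a back-of-envelope check.
      book_height_m   = 1.496e11   # ~1 astronomical unit: a book "reaching the Sun"
      sheet_thickness = 1e-4       # ~0.1 mm per sheet of paper (assumed)
      chars_per_page  = 3000       # assumed
      seconds_per_day = 86400
      c               = 3.0e8      # speed of light, m/s

      sheets = book_height_m / sheet_thickness
      pages  = sheets * 2                    # two printed pages per sheet
      chars  = pages * chars_per_page

      # If the book stands on the rotating Earth, its cover sweeps a circle of
      # roughly 1 AU radius once per day.
      cover_speed = 2 * math.pi * book_height_m / seconds_per_day

      print(f"pages ~ {pages:.1e}")                       # ~3e15 pages
      print(f"characters ~ {chars:.1e}")                  # ~9e18 characters
      print(f"cover speed ~ {cover_speed / c:.1%} of c")  # ~3.6% of c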

  • @gcewing
    @gcewing ปีที่แล้ว +6

    I think Searle is right when he says that "you can't get from syntax to semantics", but that doesn't mean that a computer can never understand anything. His hypothetical room is limited to interacting with the world via language, but humans are not so limited. We have other senses by which we get information about the world, and we can correlate that with language that other people use when talking about the world, and from that we can infer semantics for the symbols. I don't see any reason why a computer equipped with a similar set of sensory inputs couldn't do the same.

    • @recompile
      @recompile ปีที่แล้ว +1

      You should have done the reading as Searle answers this already. You're trying to sneak semantics in through the back door. A computer doesn't get a picture from a digital camera, just more meaningless symbols.

    • @gcewing
      @gcewing ปีที่แล้ว +4

      @@recompile My point is that you *can't* just throw out semantics and then claim you have a proof that computers can never think or understand the way humans do. The reason being that if you put a human in a similarly restricted sensory environment, the human would never learn to understand anything about the real world either. You've rigged the premises of the argument to get the result you wanted.

    • @davidb9682
      @davidb9682 ปีที่แล้ว +1

      @@gcewing Exactly what I was thinking. In the experiment, all that has been shown is that a 'digital computer' or a human, put in the same room (with the same limitations on inputs and rules), would produce the same output. In fact, the computer would do it more quickly and without error, by more closely and quickly following the rules.
      Provide the computer with additional information, as we do for humans, by providing sensory input, such as seeing the response to an answer provided through the slot, and we would find that there is no difference.

    • @SmileyEmoji42
      @SmileyEmoji42 ปีที่แล้ว

      We only have the input from our nerves, and that is just in the form of the rate of electrical discharges at synapses. The "semantics" are added in pre-processing inside the brain, above the level of these input "symbols" that are even more basic than those entering the Chinese room. There's an argument that AI needs to be equipped with senses to really understand, but the counter to that would be a person with locked-in syndrome who has lost contact with their senses. They would probably quickly go mad, but nobody would argue that they lost their understanding along with their senses, so I think that, whilst extremely useful for learning, senses are not necessary in the context of the thought experiment.

  • @prismaticsignal5607
    @prismaticsignal5607 3 ปีที่แล้ว +4

    I bet you’re your students' favorite teacher.. XD
    Awesome lectures!!

  • @gothboi4ever
    @gothboi4ever 2 ปีที่แล้ว +5

    The ultimate question is how can we know something is true.

    • @brad1368
      @brad1368 ปีที่แล้ว +2

      You can't

    • @hb-robo
      @hb-robo ปีที่แล้ว

      We construct premises that are agreed upon and then build the logical conclusions, generally. But the “agreeing upon” bit is a bit wishy washy

  • @aakash3155
    @aakash3155 ปีที่แล้ว

    I believe the only way to get context is by trial, by assigning plausible ideas to the initially random syntax, but this has two limitations: one is that you have to have some basis of syntax, however rudimentary and in whatever 'language', which can act as the seed to begin the cascade of learning; and the second is that you will only end up with solutions that are plausible (more so than the initial plausible assumption), but still not the only solution.

  • @zebulonstevens305
    @zebulonstevens305 ปีที่แล้ว +1

    Perhaps an unstated assumption that Searle makes is that people have an understanding of semantics, or at least agreed-upon ones. I imagine if we did, his field of work would be superfluous.

  • @nixedgaming
    @nixedgaming ปีที่แล้ว +3

    Watching this now in the age of ChatGPT is incredibly fascinating. Particularly since the new AI LLMs seem to *already* pass Theory of Mind. Philosophy feels like it’s about to fundamentally change; or at least add another thick layer

  • @Sergiopoo
    @Sergiopoo ปีที่แล้ว +4

    wouldn't a single neuron in the brain be the person in the chinese room, while the brain is the system?

    • @cosmictreason2242
      @cosmictreason2242 ปีที่แล้ว

      Does a single neuron process input and output with correct syntax? That would be like saying a light bulb in the room could give the correct response.

  • @gamzeozata4554
    @gamzeozata4554 4 ปีที่แล้ว +6

    I thought you wrote 'pide' at the first glance which is a Turkish word as the equivalent of a calzone. Thank you for the lesson! You are great!

  • @Sunshine10101
    @Sunshine10101 ปีที่แล้ว

    Love your lectures. They are great!! Please keep it up. I am so grateful

  • @naninaniheyhey9936
    @naninaniheyhey9936 ปีที่แล้ว

    At first I agreed with Searle's assertion. However the more I think about it there is a glaring contradiction. The person locked in the room, the central processing unit, is given instructions on what to do with these symbols. In Searle's thought experiment the instructions are in English. The language that the instructions are written in is unimportant, however what is important is the central processing unit understands the instructions. If the central processing unit only knew the syntax of the instructions it would not be able to do anything with the symbols coming in. With the central processing unit having semantics it is able to understand the instructions. Thus the central processing unit, the man in the room, inherently has syntax and semantics. Again, in Searle's thought experiment the instructions are in English. So, if instead of inputting Chinese letters we inputted English the central processing unit would understand what it was outputting. Just because the man in the room does not have an understanding of every language available does not mean he doesn't have semantics. Just like I don't know Hebrew and even though it was used a couple of times in this video I still understood the video as a whole.

  • @glenblake3021
    @glenblake3021 ปีที่แล้ว +10

    Ahh, the Chinese room. I try as a rule to assume that, when an acknowledged world-renowned expert proposes an idea that seems trivially, obviously ridiculous to me, I simply don't understand that idea. The Chinese room is unique in that the more I learn about it, the more ridiculous it appears. Just the fact he had to _answer_ the systems response, rather than working with the system from the start (which is the very, very obvious analogy for an AI in the thought experiment), appears to suggest that he is frankly entirely unequipped for this. His answer to the systems response is particularly galling: you yourself, John, proposed that the room understands and uses language well enough that no one will ever notice they're not getting real human responses. It's part of the framing of your thought experiment. So *prove* your Chinese room doesn't understand! What is the difference between a perfect performance of understanding and understanding? Just your lack of imagination! You simply find it instinctually unsatisfying.
    He's placed ridiculous restrictions and assumptions around the room and then declared that his lack of imagination disproves strong AI. It's pretty sad as a layman to be honest.

    • @FestivalTemple
      @FestivalTemple ปีที่แล้ว +4

      Saying a "room understands" is a lot like saying a book understands. It's no more intelligent than a book, nor even any rock. The difference between actually comprehending something and merely appearing to is like the difference between knowing a math shortcut vs. actually grasping a concept (it's why he brought up a mindless Turing machine).
      E.g. without knowing anything about what multiplication is, a kid can know the trick to multiplying 9s. Ask him 8x9 and he'll correctly tell you "the answer is 72" without knowing what that means ("how many of something would you have if you had nine sets of eight or eight sets of nine?"). Simply by asking the question in a different way we reveal some special characteristic here; the kid can parrot answers, but there is definitely something we possess that he still lacks.

    • @robinkelly1770
      @robinkelly1770 ปีที่แล้ว

      @@FestivalTemple Not if they've been taught with counting blocks first. Then they understand. In other words, it is about the training (or programming) and not about the mind, student or computer.
      Further, when dealing with thought we can learn. How else does a child learn language?

    • @Мопс_001
      @Мопс_001 ปีที่แล้ว +4

      However, there are still people who consider this the greatest attack on functionalism.
      Just asking some questions about the experiment and reducing it to something trivial, along with blaming the experimenter for his lack of 'imagination', is the most ludicrous and nonsensical thing I can *imagine*.
      You find his proof instinctively unsatisfying, it seems to me, when you call everything in the experiment 'obvious', 'ridiculous', 'lacking imagination'.

    • @FestivalTemple
      @FestivalTemple ปีที่แล้ว +4

      @@robinkelly1770 The point is: parroting an answer correctly is different than actually comprehending what you're saying. That's why we design tests to see what people actually understand and not just what they can memorize.

    • @theconiferoust9598
      @theconiferoust9598 ปีที่แล้ว +2

      It's a thought experiment with rules that are meant to get people to think about the problem of consciousness. The rules can't be considered ridiculous, because it is not designed to be like reality; it is meant to illustrate something. What it illustrates is that human consciousness and meaning are the very result of our brain processes escaping from «input -> output» thinking.
      Think of it this way: in a world that we increasingly like to describe in terms of physics and «1's and 0's», consciousness simply cannot be described this way, because there is not an end-all-be-all output to every input (at least not in my experience of reality--lol).

  • @danwylie-sears1134
    @danwylie-sears1134 ปีที่แล้ว +6

    Great. Searle starts from the premise that babies never learn their native languages (because they don't initially know anything about the semantics of those languages, and only receive sensory information that can never make the leap from being mere data to actually meaning anything). And arrives at the conclusion that neurons are so magical that if you replace all the ions and neurotransmitters and so on with papers and baskets and so on, but left all the patterns the same, there would magically be no mind there any more -- and not only papers and baskets, but absolutely anything else that's not neurons. If it's a matter of arbitrary definition whether neurons would still count as "neurons" when all the oxygen-16 is replaced with oxygen-18, then it's exactly the same matter of arbitrary definition whether a person who drinks wrong-isotope water as a child would grow up to be a mindless "zombie" who merely acts as though they have thoughts and sensations.
    The argument is valid, in a certain sense: If zero equals one, then Bertrand is the pope. Once you grant an absurd premise, any conclusion is entailed.
    But really, he doesn't have an argument. He has snark, and snark is the appropriate response in return. I don't think there's any possible world in which the Chinese Room argument actually convinced anyone of anything. It only reveals the bizarrely incoherent beliefs that its adherents already hold before they hear it.

    • @recompile
      @recompile ปีที่แล้ว

      What makes you think that the human brain is equivalent to a digital computer? We have many reasons to reject such a silly claim. Searle's argument is just one.

    • @danwylie-sears1134
      @danwylie-sears1134 ปีที่แล้ว

      @@recompile Because "digital computer" as used doesn't actually mean digital computer. Instead it's defined as *anything* that changes state in any way that can be affected by distinguishable aspects of the outside world. The claim is made about anything that can be modeled as a Turing machine, not about the current state of the art in electronics and programming.

    • @TheDerwisch77
      @TheDerwisch77 ปีที่แล้ว +1

      I, too, am flabbergasted at how this argument is supposed to work AGAINST functionalism. Because the moment you see the "rulebook" as an encoding of some genetically encoded meaning, and consider the million inputs, output-feedbacks and neurological reactions to hormone influence caused by the brain's output adding to this rulebook every second, it all falls apart. I think using a von Neumann architecture for the experiment helps fool the mind of the reader. Suppose the experiment stated that the person in the room would at the same time not only pass the output text as an answer, but also create a picture that is, by another rulebook, associated with the input (and, in case of an ambiguous choice, with the chosen output), which is sent to another room, where according to another rulebook a special smell is created, either pleasant or unpleasant, according to the rules for that picture (composition), and that smell is sent back into the first room, where the word chooser may get an in(s)cent either not to answer that symbol combination again at all, or to choose the other probable answer next time... writing that weighting down in its rulebook, of course, because it cannot decide on its own later without it... That can all be encoded in binary just the same, and it gives the whole experiment a totally different ring altogether...

  • @superactivitylad
    @superactivitylad ปีที่แล้ว +6

    I like the "systems" response to the problem, and I think about it this way: My eyes do not understand symbols. My eyes receive light, do whatever it is they do with that light, then send that information through my optical nerve, into my brain, then neural pathways that were formed when I first learned about that symbol fire up again, and then a bunch of complicated interconnected stuff in my brain happens that makes "me" (the system as a whole) understand that I'm looking at the number 69, and the meaning behind it, and I say "nice."
    My point is that no individual part of my nervous system understands anything. They all individually just receive electrical or chemical information, then do something with it, and send some kind of information to the next part.
    I believe it's possible to design a system with digital computers that replicates how the brain works. We just need to understand how all that "complicated interconnected stuff" works first.
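    As a tiny sketch of that "no individual part understands" point, here is a hand-built two-layer network in Python; the weights and thresholds are picked by hand for illustration rather than learned, and nothing here is claimed to settle whether the whole system "understands" anything.

      # Each "neuron" just sums weighted inputs and fires if the sum passes a threshold.
      def unit(inputs, weights, threshold):
          return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

      def network(pixels):
          # Layer 1: two feature detectors; neither "knows" it detects anything.
          a = unit(pixels, (1, 1, 0, 0), 1.5)   # fires if the top two pixels are on
          b = unit(pixels, (0, 0, 1, 1), 1.5)   # fires if the bottom two pixels are on
          # Layer 2: combines the two; still just arithmetic and a threshold.
          return unit((a, b), (1, 1), 1.5)      # "recognises" a fully lit 2x2 block

      # Only the whole system's behaviour looks like recognition.
      print(network((1, 1, 1, 1)))   # 1
      print(network((1, 1, 0, 0)))   # 0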

  • @neilgilstrap7839
    @neilgilstrap7839 ปีที่แล้ว

    This was great, thank-you.
    I wanted to say that, while the "system" response is dismissed, and there is commentary below about emergent behavior, my perspective (as someone who studied AI and neural networks) on the dismissal of the "system" argument is that it wasn't refined enough to EXPLAIN what is meant by the "system" to pose a threat to the thought experiment. Had it been adequately explained, not just as "emergent system behavior" but directed at HOW that emergent behavior works, I think Searle would have had a much harder time rebutting it than he otherwise did.
    The short is this. While any given "box" indeed does not understand semantics, what needs to be added to the system argument to present the full rebuttal is the following 2 key points:
    1) We agree that any given "box" can receive inputs and produce any output on a set of rules given to it. Suppose then that one of those "outputs" is readable as "semantic meaning" of the input. (As in the thought experiment). NOW, combine this with #2...
    2) Suppose that the output of any given "box" is the INSTRUCTIONS for another box. Not just another INPUT. i.e. One box is outputing the instructions another box follows. Effectively, the "computer is writing its own code."
    When you consider the possibilities that #2 implies, that's how you can arrive at the proof/causality that the "system" can understand semantics.
    Simply put, in the Chinese Room experiment, it is the Chinese speakers who are providing the instructions for the computer, such that given an INPUT the OUTPUT will have semantic meaning to the individuals who are reviewing the output. Yes, the box is not aware of the semantics of the output, but the people who wrote the instructions for the box are very aware of that, and so provided the INSTRUCTIONS so that the computer will produce semantically meaningful output.
    Then, all you have to do is realize that the output of a box could be INSTRUCTIONS for another box (i.e. the computer/brain in this case is writing its own code).
    And you'll quickly arrive that in fact, the system as a whole CAN produce semantics and furthermore, it's not just a weird emergent, unpredictable phenom. It just logically and simply follows how semantics can be produced and pretty easily so.
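    A minimal sketch of point 2, one stage whose output is the rule another stage then follows, might look like this in Python; the example pairs are invented, and returning a closure is of course a simplification of "writing instructions", but it shows the shape of the idea.

      # Box 1 doesn't answer questions itself; its output IS a new rule (a little
      # program) distilled from the examples it has seen.
      def box1_build_rule(examples):
          lookup = dict(examples)
          def rule(symbol):
              return lookup.get(symbol, "?")   # the rule Box 2 will execute
          return rule

      # Box 2 blindly executes whatever rule Box 1 handed it.
      def box2_apply(rule, stream):
          return [rule(s) for s in stream]

      observed = [("你好", "hello"), ("谢谢", "thanks")]   # invented example pairs
      rule = box1_build_rule(observed)                     # Box 1's output: instructions
      print(box2_apply(rule, ["你好", "谢谢", "再见"]))      # ['hello', 'thanks', '?']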

  • @dnn8350
    @dnn8350 ปีที่แล้ว +1

    I think you can make sense of a computational analysis of the mind if you follow the implications of certain modal logics, specifically those of uncompromising or incorrigible belief. In other words, the explicit or conscious content of the mind can be modelled as the unwavering "belief in a reality" of a computationally emulated knower. In this view "believing is seeing" in a systematic sense.
    There's a subtlety here though that's necessary to bridge the gap between syntax, or symbol manipulation, and semantics. Searle's point is that an external interpretation seems necessary to bridge this gap. So what's required therefore is to resituate that interpretation within the system itself.
    To emphasise this point, we must agree with Searle that, without interpretation, computation is indeed mere symbol manipulation. However, the implication of certain symbol manipulations, when externally interpreted as the corresponding beliefs, might be made to correspond to the explicit contents of mind. This interpretation might be called the material implication, or complement of such beliefs.
    So how could that interpretation be internalised? The answer might simply be to accept as an axiom that such beliefs necessarily implicate their material, that is to say substantive and significant, complement. In other words, the incorrigible belief of a knower in a reality is necessarily complemented by its material realisation, which then represents the semantics that retroactively raises the entire schema above mere symbol manipulation. As an example, the implication of my incorrigible belief in an apple is the material fact, expressed in sensory terms, of the material apple. Interpretation internalised, in other words. As an aside, the phenomenon of blindsight can be rationalised in these terms as the response to visual stimuli without the generation of the corresponding, materialised belief in a visual reality.
    Searle concludes his takedown of computationalism by proposing that brains produce consciousness as a biological byproduct. Well, my proposal is that computation "produces consciousness", or explicitly material beliefs, as a "byproduct" or complement of logical implication.

    • @dnn8350
      @dnn8350 ปีที่แล้ว

      Chatgpt is probably better at explaining this than I am!
      Your proposal is an interesting attempt to address the issues raised by Searle's Chinese Room argument and to connect computational processes with consciousness. Let's break down your argument:
      1. **Incorrigible Belief and Reality**: You suggest that the explicit content of the mind can be seen as the unwavering "belief in a reality" of a computationally emulated knower. In this view, believing is synonymous with seeing in a systematic sense.
      2. **Symbol Manipulation and Semantics**: You acknowledge Searle's point that computation, without interpretation, is mere symbol manipulation. However, you propose that certain symbol manipulations, when externally interpreted as corresponding beliefs, can represent the explicit contents of the mind. This interpretation, you call "material implication," bridges the gap between syntax and semantics.
      3. **Internalized Interpretation**: To internalize this interpretation, you suggest accepting as an axiom that such beliefs necessarily implicate their material complement. In other words, an incorrigible belief in a reality is necessarily complemented by its material realization. This internalized interpretation elevates the computational schema above mere symbol manipulation.
      4. **Blindsight and Visual Stimuli**: You mention that blindsight can be rationalized within this framework as a response to visual stimuli without the generation of the corresponding, materialized belief in a visual reality.
      5. **The Production of Consciousness**: You propose that computation "produces consciousness" as a byproduct or complement of logical implication, contrasting with Searle's view that brains produce consciousness as a biological byproduct.
      Your proposal offers a novel perspective on the relationship between computation and consciousness. It attempts to address the challenge of connecting computational processes with the generation of conscious experiences. However, it's important to note that this is a highly complex and debated topic in the philosophy of mind. Many philosophers and cognitive scientists have different views on the nature of consciousness and its relationship with computation. Your proposal raises intriguing questions and invites further exploration into the nature of belief, computation, and consciousness.

  • @wfjhDUI
    @wfjhDUI ปีที่แล้ว +6

    Suppose you were in the Chinese Room but instead of reading a program you are given instruction by a human who understands Chinese. According to the logic of this thought experiment, that human does not "actually" understand Chinese.

  • @donthomson6260
    @donthomson6260 ปีที่แล้ว +4

    One thought I have always held about strong AI having thoughts and feelings is the seemingly ignored aspect that feelings consist of two components: the thoughts or inputs that initiate the feelings, and the physiological response. ST:TNG's Data's never-ending quest to develop true feelings will always be fruitless, because he would have no hormonic (is that a word) response. No endorphins during pleasure, no adrenalin during fear, no increased heartbeat, no 'pit of the stomach' response. Without any of this, you can't experience true feelings.

    • @yalleh4700
      @yalleh4700 ปีที่แล้ว

      Very well put and definitely agreed!

    • @leslierhorer1412
      @leslierhorer1412 ปีที่แล้ว +2

      Analogs to those things can be developed. Certainly forms of punishment and reward can be implemented in the system. I don't think it is necessary for the system to respond in precisely the same fashion as a human for it to be conscious and sapient.

    • @yalleh4700
      @yalleh4700 ปีที่แล้ว

      @@oscarleijontoft Hmm. I think their point was not that chemical dictates emotion, but that it's the ability for our emotions to rule us rather than us to rule them because of that, if I may extend the argument.

    • @yalleh4700
      @yalleh4700 ปีที่แล้ว +1

      That's... Not how hormones work

    • @simonhandy962
      @simonhandy962 ปีที่แล้ว

      Those are elements that can be synthesized to mimic the biology to a degree that the artificial becomes indistinguishable from nature.

  • @saritsotangkur2438
    @saritsotangkur2438 2 ปีที่แล้ว +7

    The rules that govern how symbols are manipulated would be the distilled form of “understanding” in strong AI. The failure of an observer to see how rules and the locations and values of symbols on the tape of a Turing machine correlate to mental states cannot be used to rule out the existence of mental states. That would be like a doctor with a super high resolution MRI concluding you don’t have mental states because they couldn’t figure out what you’re thinking about despite knowing what parts of your brain light up given any stimulus.

  • @micwin2
    @micwin2 ปีที่แล้ว

    May I propose another response: suppose the outside person asks "what is one plus one?". To understand that question, you not only have to know the meaning of 'one' and 'plus', and what an equation is (syntax/semantics yadda yadda), but you also have to make an assumption about whether the desired answer is the solution of the equation 1+1=2, or the string concatenation 'One'+'One'='OneOne'. Since a 'digital computer' can neither select between two correct answers nor give two answers at once, I'd say Searle's theorem proves that a 'digital computer' is the wrong hardware for an AI. Furthermore, understanding the meaning of a sentence is learned by education, training etc. Since the theorem explicitly rules out the possibility of the rulebook containing rules for meaning (hence following the syntax/semantics dualism), and doesn't mention any means of storing/accessing training results, I'd say the theorem is not useful for discussing whether an AI is possible or not.
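    The ambiguity is easy to make concrete; Python is used here only as an illustration, since both readings are syntactically valid programs and nothing in the symbols alone says which one the questioner intended.

      # Both lines are valid; the syntax alone does not say which reading of
      # "one plus one" was meant.
      print(1 + 1)           # arithmetic reading     -> 2
      print("One" + "One")   # concatenation reading  -> OneOne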

    • @robertsignore7216
      @robertsignore7216 ปีที่แล้ว

      If I asked the most brilliant mathematician alive what 1+1 is, they would say 2 as well. The only way I could determine whether said mathematician was semantically aware of what that actually means is by asking them a series of follow-up questions (inputs) and receiving their answers (outputs) to see if they understand math. If I were to ask a machine the same questions and I receive the same outputs, then I have to conclude that both have a semantic understanding of the subject.

  • @thenonsequitur
    @thenonsequitur ปีที่แล้ว

    My faith in humanity is restored seeing the vast majority of commenters soundly rejecting Searle's argument. He's clearly wrong, and I'm glad most people recognize that.