The famous Chinese Room thought experiment - John Searle (1980)

  • Published May 18, 2024
  • I am writing a book! If you want to know when it is ready (and maybe win a free copy), submit your email on my website: www.jeffreykaplan.org/
    I won’t spam you or share your email address with anyone.
    Dualism: • What Philosophers Mean...
    Behaviorism: • The Behaviorist Theory...
    Identity Theory: • The Mind-Brain Identit...
    Functionalism: • Functionalism
    This is a video lecture about "Can Computers Think?" by John Searle. In this paper, Searle argues against a form of functionalism, which he calls "Strong AI". The argument rests on a thought experiment having to do with a non-Chinese speaker who is locked in a room with a lookup table, receiving inputs and providing outputs all in Chinese. Searle claims that syntax is never sufficient for semantics, and that digital computers only ever deal with syntax, so they can therefore never understand the meaning of a language. This is part of an introductory philosophy course.
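    A minimal sketch of the setup described above, assuming nothing beyond a hard-coded lookup table: the "room" matches incoming strings against rules and emits whatever reply the book dictates, with no access to meaning. The entries, replies, and fallback are invented placeholders, not anything from Searle's paper.

```python
# Hypothetical, minimal "room": pure symbol lookup, no semantics anywhere.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # the operator never learns that these
    "你叫什么名字？": "我没有名字。",   # mean "How are you?" / "What's your name?"
}

def chinese_room(symbols: str) -> str:
    # Match the incoming squiggles against the rule book and hand back
    # the prescribed string; a canned fallback covers unknown input.
    return RULE_BOOK.get(symbols, "对不起，请再说一遍。")

print(chinese_room("你好吗？"))   # looks fluent from outside the room
```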

Comments • 2K

  • @BrianWilcox1976
    @BrianWilcox1976 1 year ago +595

    For me it’s like saying, “one water molecule is not wet, so no matter how many you put together you’ll never get wetness” (it’s an emergent property)

    • @Bronco541
      @Bronco541 1 year ago +42

      That's what I was thinking. Do we know, or are we able to know, to what degree (if any) being aware is an emergent property of just a "simple" algorithm?

    • @Bronco541
      @Bronco541 1 year ago +26

      Or I wonder if Searle is right about form never truly being enough to get meaning... What if meaning is nothing more than form somehow... Don't ask me to elaborate, I'm just spitballing dumb ideas.

    • @REDPUMPERNICKEL
      @REDPUMPERNICKEL 1 year ago +17

      @@Bronco541 'Meaning' is not 'form' but
      they are related.
      I mean,
      just look at the 'form' of this sentence.
      The meaning of that sentence is *'encoded'* in its form.
      When that sentence got inside you it became a process.
      Actually, I think you'll agree, that sentence became
      a sub process of your being conscious process.
      In consequence your thoughts are now somewhat different.
      If you understand thinking to be behavior then
      you can see that its meaning has influenced your behavior.
      This is close to what 'meaning' means.
      The above is not written as well as it deserves but
      I believe it will affect your thoughts such that
      your thoughts may become more accurately reflective
      of actuality,
      imho naturally.

    • @franzmuller235
      @franzmuller235 1 year ago +31

      @@Bronco541 That's what came to my mind also. How does a newborn learn? It learns to recognize forms, first its mother's head and its mother's breast, and then other forms. No one ever teaches a baby meaning to start with. The child first learns meaning by recognizing forms of all kinds, and by recognizing how the forms interact with it and with each other.

    • @franzmuller235
      @franzmuller235 1 year ago +3

      @@yongkim3333 No, of course you can construct a sensor that senses wetness. You don't need a human, not even an animal.

  • @peves-
    @peves- 1 year ago +60

    I don't think squiggle and squaggle are racist in the way he was using them. I think that he was trying to make sense of two symbols that are foreign to him by giving them names. To him they are scribbles and squiggles on paper. He can't differentiate what they mean, but for his purposes he needed to call them something.

    • @peterkiedron8949
      @peterkiedron8949 1 year ago +7

      This proves that Kaplan is a machine that does not know the meaning of the words it is using.

    • @stickman1742
      @stickman1742 1 year ago +17

      Of course it wasn't. I guess this guy is just another person in fear of being cancelled. What sad environments some people are forced to work in, living in constant fear.

    • @spanglestein66
      @spanglestein66 1 year ago +4

      My sentiments exactly... anything can be turned into an issue of race these days.
      We can thank CRT for that.

    • @stuartjakl
      @stuartjakl 1 year ago +4

      It’s not racist.
      It could be construed by some as disrespectful to their culture.
      I’m sure the Chinese have some less than stellar words for how our writing system looks to them.
      Others would say it's a remnant of colonial thought: that any other writing system outside of the one you are familiar with in the English-speaking world is Squiggle Squaggle, a colonial-era contemptuous term showing disdain for foreign languages/writing systems, and therefore it's racist because colonialism was racist.
      Let's consider the time when this thought experiment was published, in a 1980 article by American philosopher John Searle. Born in 1932, he was obviously trying to use what would have been the most indecipherable, exotic, and probably least studied language in the United States at that time, at least compared to European languages. The example was meant to show the average student a language that was very different, with a writing system unlike anything they were ordinarily used to.
      (Except maybe Chinese students).
      I'm sure we can come up with a name more fitting today's social climate:
      The foreign language room?
      The alien room?
      The alien language room?

    • @vdanger7669
      @vdanger7669 1 year ago +5

      Love Kaplan but disappointed he couldn't pass up some good virtue signaling. We live in peak woke times though and I suspect he is a product of his academic herd environment.

  • @ericfolkers4317
    @ericfolkers4317 1 year ago +182

    One problem I have with the Chinese Room is that you could create a similar metaphor for the machine that is the human mind. You have lots of people instead of one, each representing a neuron. They have a list of rules where if one of their neighbors hits them with a tennis ball, there is a rule of who you should or should not throw your own tennis ball at. Some people will have levers that will fill the room with chemicals people can detect (maybe by smell or sight or mechanical detectors) and people's rule books will have different rules depending on what chemicals are in the room. There might be plenty of fuzzy rules like, "if there's not much of chemical X in the room and you get hit with Sarah's tennis ball you can throw a ball at Tom or not, but if there is a whole lot of chemical X, you really need to throw that ball," or, "if chemical Y is filling the room pretty much all of the time, you can ignore it unless there's really a whole lot more than normal." Some people would have access to input information in some way and some people would be able to perform outputs. Is there any reason to think that a human brain couldn't be modeled this way, if we had enough people with enough tools (like the tennis balls and chemicals) and detailed enough instructions?
    Obviously none of the people working in the model brain would need to understand the meaning of any of the inputs; they might not even be able to tell the difference between an input from the outside world and something that another worker has done. But the system as a whole could take inputs and give outputs that seem to demonstrate understanding. If we reject the systems response as Searle does for his Chinese room, then we can't say the system understands any of the inputs. Since the system works the same way as our brain, how can Searle say that our brains can understand any semantic meaning? Wouldn't he require some kind of magic stuff that makes our brains work somehow differently from the model with people throwing tennis balls?
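    A rough sketch of the tennis-ball picture above, with invented names, rules, and thresholds: each worker follows only a local rule about when to throw, and nothing in the code represents what the crowd as a whole might be computing.

```python
import random

class Worker:
    """One person in the model brain: local rules only, no global view."""
    def __init__(self, targets, threshold):
        self.targets = targets      # who this worker may throw a ball at
        self.threshold = threshold  # how much "chemical X" forces a throw
        self.balls = 0              # tennis balls received this step

    def step(self, chemical_x, crowd):
        if self.balls == 0:
            return
        # Fuzzy rule: with lots of chemical X you really need to throw;
        # otherwise throwing is optional.
        if chemical_x > self.threshold or random.random() < 0.5:
            crowd[random.choice(self.targets)].balls += 1
        self.balls = 0

crowd = {"Sarah": Worker(["Tom"], 0.7), "Tom": Worker(["Sarah"], 0.3)}
crowd["Sarah"].balls = 1                 # an "input" arriving from outside
for _ in range(10):                      # let the local rules run for a while
    for worker in list(crowd.values()):
        worker.step(chemical_x=0.9, crowd=crowd)
```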

    • @donaldb1
      @donaldb1 1 year ago +49

      Well, yeah. Roughly, Searle thinks his thought experiment shows that brains don't exactly work like that. There must be something else about them, which we haven't discovered yet, which produces "real meaning", or original intentionality, as Searle calls it.

    • @mottykadosh
      @mottykadosh 1 year ago +8

      Brilliant, just nailed it; the whole room experiment is just a joke.

    • @jimjimmy2179
      @jimjimmy2179 1 year ago +51

      Except that you are making one very important assumption when writing this comment, which is:
      That human intelligence is a "program" telling neurons how to manipulate themselves, and that's all there is. So basically you are making a circular argument whereby you start by assuming such a "program" exists and circle back by stating that that's how it can work. I.e. your claim doesn't have any logical proof, as opposed to the Chinese Room, which shows the difference between intelligence (i.e. a capacity to understand meaning) and following a rule book without understanding. It shows it by drawing logical conclusions from well-known definitions OUTSIDE of the Chinese Room, as opposed to your argument, which justifies the brain working that way by simply assuming that that's what it does.
      Besides, the majority of brain function is chemical and we know very little about it. The reason that people are obsessed with neurons is that they communicate using electrical impulses that are easily measurable.
      There's one very important distinction as well:
      In the Chinese Room story one knows exactly where the man takes his decisions from: it's the rule book.
      In a real human being we can measure all sorts of brain activities, either induced or decided by the person. However, we are not able to measure the actual act of the decision itself. E.g. we (i.e. our neuroscience) have no clue where your decision to write that comment physically comes from :) even though you can mentally explain it.

    • @ericfolkers4317
      @ericfolkers4317 1 year ago +29

      @@jimjimmy2179 Thanks for the well thought out reply. I will point out that my example does take into account the chemical aspects of the brain, though that isn't central to our discussion.
      I'm not exactly saying that human intelligence is a program, but I am saying it can be modeled by a program. But if that's not the case, what alternative would there be? Keep in mind that my model allows for fuzzy and probabilistic instructions. We can keep expanding my model to be more complex as needed, the only thing we can't add is some worker in the system that, by himself, does understand the inputs. But then how does that one worker understand? If he understands then wouldn't he need a mind? Is there an aspect of his own mind that is capable of understanding on its own? Either we need some kind of "final understander" or we end up with an infinite regress.
      What could this "final understander" be? If we take it to be a system of cognitive processing parts then we have to accept the systems response. Is there some glob of brain stuff that does the understanding for us which isn't made up of some sort of system by which it does the understanding? Perhaps this is my failure of imagination but that sounds completely absurd to me. This glob would have to be made up of smaller particles right? If you scrape away enough of those particles, wouldn't it at some point lose its ability to understand? Unless the glob was actually just a single atom or single quark.
      So if the "final understander" isn't physical, what could it be? A non-physical mind perhaps. If we take a mind to be the emergent property of a brain, or another collection of physical bits, then the mind is just another system. So if we take a mind to be an emergent property of physical things, and a mind understands, then we are back to accepting the systems response.
      If there is some part of the mind that is somehow more than just the processes and systems of physical things, then perhaps we are getting somewhere. But what would this part of the mind be? A soul or other sort of "real magic" as Daniel Dennett would call it? What else could it be?
      Unless I'm missing something we have reached a sort of dichotomy: either we believe in magic or we accept the systems response. If you need to posit some kind of magic to reject an argument, that's an indication that the argument is very strong.
      I suppose other possibilities are that there really are those single quarks that can understand, which is too ridiculous to consider, or that "understanding" is not something that anyone or anything is actually capable of. If that's the case, we still seem to understand things and talk about the universe as if we understand things, and if the Chinese room (as a system) seems to do the same, then we can treat it as if it understands things the same as us, without worrying about whether it actually understands anything (since actual understanding is impossible anyway).

    • @arturoidoyagamolina5109
      @arturoidoyagamolina5109 1 year ago +6

      @@ericfolkers4317 You formulated my thoughts in a way I'm totally incapable of doing lol. Assuming that's the answer, the systems response, then I guess it takes a lot of the mystery and superiority out of human existence; it liberates us in a sense. Idk. We would stop looking at animals, or at any AI in the near future, as inferior beings, or "not quite humans", "just not really sentient". It would open up a lot of ethical questions about how we treat future advanced AI(s) as well.

  • @chadcurtis1531
    @chadcurtis1531 1 year ago +20

    Douglas Hofstadter gave a great illustration of the systems argument in "Gödel, Escher, Bach" in one of his dialogues. One character, "Aunt Hillary", is an anthill that can communicate in English with an anteater. The anteater can "read" the anthill and deduce its meaning. While none of the individual ants can understand language, the system as a whole can. The dialogue is quite profound, and I think it illustrates quite well how semantics can arise out of syntax.

  • @antonnie5177
    @antonnie5177 3 years ago +17

    You are saving my exam next week.

  • @AlejandroPiad
    @AlejandroPiad 1 year ago +115

    As a college professor of Computability Theory, let me just say how brilliant your expositions of the Turing Machine and the Chinese Room experiment are, within the short time you had and of course taking into consideration the target audience. I spend the better part of 3 full lectures talking about Turing machines just to be able to formulate the Chinese Room experiment at the end.

    • @sirrealism7300
      @sirrealism7300 1 year ago

      What college do you teach at?

    • @sentinel2.064
      @sentinel2.064 1 year ago +3

      @@sirrealism7300 he’s definitely not a professor, his starting sentence is “As a” 🤣

    • @vytasffbismarck7001
      @vytasffbismarck7001 1 year ago

      @@sentinel2.064 he's*, pot calling the kettle N word cause its high

    • @selbalamir
      @selbalamir 1 year ago +1

      As Aristotle informed his students, an opinion based on Kudos has some value, but it is the lowest value of all.
      But a college professor would know that.

    • @pauls3075
      @pauls3075 1 year ago

      @@sentinel2.064 He definitely IS a 'professor', but your narrow-minded view of the world doesn't allow for the fact that in Cuba the word professor means 'teacher'. If you'd bothered to check his YouTube homepage you would have been better informed. I'm guessing YOU are a Turing machine, because you clearly don't understand what is going on.

  • @magellan500
    @magellan500 1 year ago +63

    This reminds me of Chomsky’s famous example of how syntax and semantics are separate, and that you can create grammatically correct sentences that are meaningless, which was “Colorless green ideas sleep furiously.”

    • @justifiedhomicide5730
      @justifiedhomicide5730 1 year ago +5

      Quite frankly, good point.
      Just because transistors do perfect syntax doesn't mean by default that they can or can't do the 'correct meaning'.
      To a transistor there are two meanings, yes and no.
      To a neuron there is a range of meanings, almost like any number between -1 and 1.
      Even though neurons have no goddamn clue what the emergent simulation of the brain is (despite the lack of "semantics"), we still exist.
      Even though transistors have no goddamn clue what gravity is, they can still correctly simulate a falling object.
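      A trivial sketch of that last point, using only the standard constant for g: the arithmetic below represents nothing about what gravity is, yet the numbers it grinds through track a falling object.

```python
# Pure number-shuffling: no concept of "gravity" lives anywhere in this code.
g, dt = 9.81, 0.01                 # m/s^2, time step in seconds
height, velocity, t = 100.0, 0.0, 0.0
while height > 0.0:
    velocity += g * dt             # update speed
    height -= velocity * dt        # update position
    t += dt
print(f"Dropped from 100 m, the object lands after about {t:.1f} s")
```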

    • @JohnDlugosz
      @JohnDlugosz 1 year ago

      Wolfram's hour-long livestream about how ChatGPT works included examples of this; he gave one example of "The chair is happy". I thought that his examples, this one in particular, are _evocative_ of meaning, and could in fact be meaningful in context. So I offered it as a writing prompt to ChatGPT, asking it to write in the style of various authors. I recall many of Hans Christian Andersen's stories give a point-of-view and cognition to some object, and ChatGPT (3.5) was able to channel this. For some other writers, it was more straightforward magical fantasy. For Isaac Asimov, the chair was cybernetic, filled with sensors and microprocessors so it could adapt to the needs of its users.
      Another time, I asked ChatGPT to generate 10 lines of nonsense. Interestingly, it was not gibberish but kept a syntactic correctness that only doesn't make sense when you consider the meaning overall, as with your (Chomsky's) examples. But, several of them sounded very poetic, and I directed ChatGPT to write a child's bedtime story using one of those "nonsense" lines as a starting point. Every night, the sun sang lullabies to the stars... hearing the line, we craft an entire context to _make_ it make sense.

    • @davidjooste5788
      @davidjooste5788 1 year ago +1

      That's an inadvertent definition of woke.

    • @kevinscales
      @kevinscales 1 year ago +1

      It's grammatically correct but doesn't get all of the forms/patterns of the language correct. If we look at syntax as all that is formalizable about the language, then you can only get meaningful sentences from a machine that accurately manipulates those forms. I think meaning IS in the form; it's just difficult to grasp what that form is. Computers are getting pretty good at it though.

    • @pumkin610
      @pumkin610 11 months ago

      Luminous diagonal virtues eat destiny? Formless brave trees talk geometrically?
      Or as Aizen would say
      "Seeping crest of turbidity. Arrogant vessel of lunacy! Boil forth and deny! Grow numb and flicker! Disrupt sleep! Crawling queen of iron! Eternally self-destructing doll of mud! Unite! Repulse! Fill with soil and know your own powerlessness!"

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 10 months ago +4

    Your expository style is energetic and inspiring, and I applaud you and your channel. Thank you.

  • @jamesoakes4842
    @jamesoakes4842 1 year ago +33

    I find that one of the things I keep coming back to when processing the Chinese Room experiment is that there's a big similarity to some debates between Creationists and Atheists. Creationists will often challenge Atheists to explain what it was that touched off the beginning of the Universe, the "unmoved mover", which they reason must be outside of the universe as we know it so as not to violate known physics; therefore the existence of God, or some other supernatural entity, is proven. Similarly, with the Chinese Room, you can point to one element that needs more explanation: the instruction manual. If it can truly teach the individual how to respond to a Chinese symbol well enough that it duplicates the responses from someone with a semantic understanding of Chinese, then I think it's impossible that the manual was created without a semantic understanding of Chinese being involved. If said understanding is inserted into the room in the form of the manual, then it's not really a closed system reliant solely on syntax.
    ...of course questioning the premise of a thought experiment isn't exactly revolutionary thought.

    • @charlesreid9337
      @charlesreid9337 9 months ago

      The problem with creationist and radical atheist "logic" is that all their arguments require strawmen.
      Let's consider the big bang: per a creationist, that should prove God exists. Someone had to make it happen, so God must exist, right? No, we do not know. There are many possible explanations, including God. Science has no opinion on what it doesn't know.

    • @adriansmith6124
      @adriansmith6124 8 months ago

      But I think what the experiment tries to show is not that understanding doesn't exist outside the room, or in the manual, but that the computer doing Turing calculations cannot understand it.

    • @AndyCampbellMusic
      @AndyCampbellMusic 8 months ago

      There are only atheists; nobody can or does believe in all possible imagined gods.
      If there was nothing, then there would be no one to ask why there is nothing.
      If the claim is that something always existed, then so can something else.
      The universe is sufficient unto itself to explain itself and everything within it. 🤷‍♂️ If it weren't there, there would be nobody to ask why it wasn't.

    • @cosmictreason2242
      @cosmictreason2242 7 months ago

      @@AndyCampbellMusic No, it's not, and no, they can't. Only uncaused things can exist acausally. The universe is caused; therefore it isn't self-existent. You claim the universe is self-existent, but you simultaneously assert that the whole is the sum of its parts, and you can't point to any part of the universe that's confirmed to be self-existent. This isn't even the cosmological argument; it's just a refutation of your denial.

    • @ronald3836
      @ronald3836 5 months ago

      Not even Chinese babies are born with knowledge of Chinese, but with enough syntactic exposure they acquire real "understanding" of the Chinese language.

  • @john_hind
    @john_hind 1 year ago +5

    'A simulation of a hurricane is not a hurricane', no, but a simulation of a word processor is a word processor, a simulation of a dictionary is a dictionary. I once wrote a dialog called 'John Searle Meets Another Intelligent Machine'. At the climax, the machine seems to get frustrated by Searle's obstinate refusal to accept its consciousness, thumps Searle, admits it was simulating anger to make a point and asks if knowing that makes the pain any less painful!

    • @stupidaf4529
      @stupidaf4529 3 months ago +1

      and then did searle thump the machine back and say, “stop pretending that hurt”?

    • @john_hind
      @john_hind 3 months ago

      @@stupidaf4529 Nope, he accepted he was insisting on a distinction that makes no difference and retired from philosophy! But your ending works too, with Searle the first casualty of a simulated war!

  • @bojens865
    @bojens865 1 year ago +63

    I met Searle a few years ago. I had had two car accidents resulting in concussions, each time regaining consciousness in the hospital as if awakening from a dream. The third time I was hit but walked away, waiting to wake up in the emergency room again, but I never did.
    As it happened, Searle was speaking at the university the next day and I attended his lecture. He spoke of the Chinese room, which I had read about in his book years before. After the talk, there were snacks and coffee in an adjoining room. Searle and his wife were sitting at a table by themselves and I asked to join them. I told them my experiences with loss and regaining of consciousness. Searle said the same thing had happened to him. He hit his head skiing and made it back to the lodge with no memory of having done so. He was treated for concussion, after skiing for a mile while unconscious. At this point, philosophy students and professors showed up and started bouncing jargon off Searle, and I left.
    I'd just had a private conversation with one of the world's foremost philosophers; I wonder if I had in fact regained consciousness!

    • @JohnDlugosz
      @JohnDlugosz 1 year ago

      Try discussing philosophy with ChatGPT using the GPT-4 model. Just avoiding the strong mental blocks put in by the developers on top of the actual model is interesting in itself.
      It's also a surprise that _creativity_ emerges long before consciousness, with many of the building blocks of sapience and sentience still missing entirely.
      I've asked ChatGPT to output in Chinese. Is it an actual Chinese Room running in the Azure data center?
      But when I asked it to write a short poem for my wife incorporating some kind of pun or wordplay for her name, it generated a pair of couplets in Chinese and translated to English, and both versions rhyme but in different ways. I don't see filing cabinets full of instructions processing symbols, I experience the presence of a creative mind. Nothing like this task was pre-programmed and left as instructions to follow. But, a program processing tokens is _exactly_ what it is! But the instructions for the tokens are on a very primitive level, not directly relating to any high-level task. The activation patterns in the huge number of "parameters" (neurons) form a new, distinct, way of representing algorithms and high-level tasks. We can literally see now how that higher level emerges from the neural net, and is separate from the code that drives the individual neurons.
      BTW, lack of long-term memory later does not mean lack of immediate and short-term memory during, and does not imply he was not conscious when he was returning to the lodge. I experienced something similar recently during a minor medical procedure: the anesthesiologist explained that one of the ingredients was to "make me forget" and indeed I have no memory of the procedure. But when I had the same thing done once before, I remember everything about it.

    • @frontiervirtcharter
      @frontiervirtcharter 1 year ago +1

      Was Searle unconscious, or conscious but not retaining longterm memory of the events in the hours after the concussion?

    • @bojens865
      @bojens865 1 year ago +2

      @@frontiervirtcharter This was about 10 years ago. I remember him and his wife telling me he was unconscious.

    • @brotherjongrey9375
      @brotherjongrey9375 1 year ago +4

      You still haven't regained consciousness

    • @starfishsystems
      @starfishsystems 1 year ago +2

      ​@@bojens865
      Great story! And that is one of the real pleasures of living in the small academic world. We do get to meet with some very fine minds from time to time.
      I have somewhat the same story concerning Dan Dennett, just because I happened to attend a cognitive science conference where he was giving a talk. More to the point, here was a philosopher who thought it would be worthwhile to talk with cognitive scientists.
      -*-
      On the subject of consciousness, we know that we typically perform many of our routine tasks "on autopilot" while maintaining a kind of casual conscious situational awareness in case the need arises to step out of the task. Depending on choice of terminology, those tasks can reasonably be called "unconscious."
      And should the conscious supervisor - the part of the mind also most commonly responsible for conscious memory and recall - happen to become distracted, intoxicated, medicated, spaced out, or otherwise go offline for some reason, the unconscious processes may be able to continue unsupervised. It's the same brain, the same sensorium, the same accumulated body of knowledge, substantially the same mind, after all. I can well believe that Searle made it back while "unconscious" in this sense, and moreover not remembering any of the journey.
      An interesting question would be whether he has retained any "unconscious" memories of the experience. It would be hard to test for it, but assuming there were certain distinctive events along the way that might be memorable, the idea would be to look for markers of (possibly unconscious) recognition when some facsimile of the event were to be replayed for Searle to see.
      Perhaps he would become explicitly conscious of the event when reminded. Or it might produce a distinctive spike in neural activity, a slight time difference when responding to questions, a change in eye saccade rate, et cetera.
      These slight clues won't tell the whole story of such a complex system, but they are slowly helping us to frame the investigation. I started out in computer science in the 1970s, hoping to learn something about the nature of intelligence. At the time, I formed the impression that we'd need about a century to get there. That's a long time to contemplate, yet here we are halfway there already. And it feels about right. It's starting to come together. I think another fifty years will do it.

  • @henrijames7337
    @henrijames7337 1 year ago +34

    As someone who is on the autistic spectrum, I'm fascinated by the idea that the experience of the person inside the Chinese Room would be similar to my own when dealing with neurotypical interactions (social gatherings etc.). I often have no true understanding of what the purpose of or need for some of the elements is, but I do my best to mimic them or provide responses from a learned set of rules.
    I've read that some researchers have suggested that individuals with autism may have a "mind-blindness" or "theory of mind" deficit. In the context of the Chinese Room thought experiment, the idea of a person who manipulates symbols without truly understanding their meaning could be seen as a metaphor for individuals with autism who may have difficulty with understanding the meaning of language and social communication.

    • @bdwon
      @bdwon 1 year ago +3

      Neurotypical folks do not "truly understand" either. Their responses to social stimuli are simply more "typical," i.e., in accord with socially disseminated practices

    • @henrijames7337
      @henrijames7337 1 year ago +2

      @@bdwon I take it that by 'responses' you mean the observable behaviour of neurotypical individuals in social situations (in general) and that they may be more in line with what is expected or considered "normal" within their social context, even if they may not fully understand the purpose or need for those social interactions.

    • @14drumstix
      @14drumstix 1 year ago

      @@henrijames7337 I really like your take on this, very well put

    • @ajd6708
      @ajd6708 1 year ago

      @@henrijames7337 While I’m not the guy you responded to, that is what he meant.

    • @henrijames7337
      @henrijames7337 1 year ago

      @@ajd6708 Thanks, I sometimes have difficulty in 'getting' what people mean.

  • @therealzilch
    @therealzilch 1 year ago +25

    It's definitely the tied together beer cans connected to a windmill that understands Chinese.
    Searle is guilty of planting a red herring here. By having a human in the room who doesn't do any of the thinking, but merely looks up the rule and applies it, we are focussed on the human as the only thing in the room theoretically capable of understanding anything semantically. The depth and complexity and reflectivity of the "rule book" is passed off as "bits of paper".
    Nice explanation, a good complement to Douglas Hofstadter's classic fisking. Cheers from windy Vienna, Scott

    • @docbailey3265
      @docbailey3265 1 year ago

      Hmm. A new version of the ghost in the machine, only now it’s the machine in the machine.
      Simply replace the human in the room with a supercomputer that can instantly scan Chinese characters and has been programmed with the "Chinese rule book." There's no need to drag some nasty sentient being into the mix. The Chinese text is fed into the room, or rather, the supercomputer. The supercomputer then spits out the answer BASED ON SYNTAX AND PATTERN RECOGNITION ALONE.
      Have we created sentience, much less consciousness?
      Dennett would dismiss the whole endeavor as wrong BY DEFINITION ALONE, or at best “counter-intuitive”.
      I’m not yelling, BTW, I just don’t know how to post italics here. Cheers.

    • @therealzilch
      @therealzilch 1 year ago

      @@docbailey3265 Italics are done on youtube by bracketing your text with underscore characters. And I'll gladly respond to your comment as soon as I get on a real keyboard, as my swiping is painfully slow.

    • @undercoveragent9889
      @undercoveragent9889 1 year ago +1

      I sort of agree, because in humans language 'evolves' along the lines of 'utility', if I can put it that way, and the assessment of 'utility' is subjective and requires introspection. In other words, and I have yet to see the whole video, the 'interpreter' is not analogous to a 'mind' but rather he is analogous to an 'algorithm' _utilized_ by self-interested organisms in order to advance in the world successfully.

    • @anxez
      @anxez 1 year ago +5

      Searle does a few intellectually dishonest things in this argument, honestly.

    • @docbailey3265
      @docbailey3265 1 year ago

      @@anxez Such as?

  • @Cloudbutfloating
    @Cloudbutfloating 2 years ago +26

    @Jeffrey Kaplan I have already stumbled few times on your video lectures which i must say helped me allot trough writing the paper about Philosophy of Mind. You transfer the knowledge so fluently and yet don forget to mention important details. Thank you for excellent guidance in this discipline that catches my interest in whole.

    • @annaclarafenyo8185
      @annaclarafenyo8185 1 year ago

      He explains it correctly, it is just a form of academic fraud.

    • @xbzq
      @xbzq 1 year ago +3

      A lot. Allot is to assign or appoint a resource.

    • @notanemoprog
      @notanemoprog 1 year ago

      @@xbzq Yeah but that's second-guessing the OP's spelling prowess and also leaving the following word out of the analysis. Plain reading is clearly that Kaplan's lectures helped to apportion a long shallow often V-shaped receptacle for the drinking water or feed of domestic animals

    • @xbzq
      @xbzq 1 year ago +1

      @@notanemoprog You got it trough and trough. I like it allot.

    • @xbzq
      @xbzq 1 year ago

      @@notanemoprog I was thinking the same thing about you humans! More lifelike every day!

  • @dooleyfan
    @dooleyfan 11 months ago +9

    Speaking of Turing, what I found interesting is that the huts at Bletchley Park were each essentially isolated Chinese rooms, where the codebreakers were responsible for different steps in the decryption process, following syntactical rules but not aware of the meanings behind their outputs, with the exception of the people at the end of the process.

  • @enlilannunaki9064
    @enlilannunaki9064 1 year ago +2

    Brilliant presentation! So glad I stumbled upon this channel. Thank you! Subscribed.

  • @perfectionbox
    @perfectionbox 1 year ago +15

    The fact that mere symbolic processing can produce fairly good output is helpful in brain development. As a child correlates experiences to language symbols, the job is made easier by powerful symbol processing, where guesses/estimates/predictions are often useful, and even afterward, because much sensory input is garbled or missing, intelligent subconscious guesswork fills in the gaps. We haven't created true general AI, but we have uncovered an important piece.

  • @izansengun
    @izansengun 2 years ago +20

    What a wonderful way of teaching! Great content, sir. Great job!

  • @Inzomniac9
    @Inzomniac9 7 months ago +1

    The background information you gave was perfect for understanding the experiment. Thanks!

  • @jollyroger105
    @jollyroger105 6 months ago +1

    Thank you. I really appreciate you having put so much heart and soul into making your videos. I am truly enlightened.

  • @stevefoster6047
    @stevefoster6047 1 year ago +33

    I was privileged to take Dr. Searle's class in college and to hear his thought experiment from his lips; he was an excellent lecturer, and the class remains one of my favorites. However, I was no more persuaded by Dr. Searle back then than I am by @Jeffrey Kaplan's excellent explanation of it. There are, in my opinion, at least two glaring holes in his argument. The weaker of my two objections is this: 1) His claim that you could never come to learn the semantics of Chinese from reading what we all have to agree must be an incredibly long and complex list of identification and manipulation rules is highly suspect. He certainly never tested that hypothesis, and I assert that he has no logical basis other than his opinion for making that claim. For all we know, given many thousands of pages of manipulation rules, and thousands of years of following them, a human being may well indeed be able to piece together Chinese semantics. After all, we are "designed" to do just that, and as babies we learn our native language with much less data and time. 2) The stronger of my two objections is that Searle used sleight of hand in how he defined the "computer", which he wants us to believe is just the human in the box, not the entire room and ALL of its contents. I assert that is not the case. Rather, the "computer" is the entire system, including the man, the instruction set, the input and output devices (baskets), and the room itself and all of its other necessary contents that enable it to function. Consider: if you take the man out of the box and just sit him in front of a Chinese speaker, with no rule book, no organizing components, etc., JUST the man is not a "functioning computer". We know that the "computer system" is very clearly capable of understanding Chinese. It is central to Searle's argument that it can. He describes the room, and its contents, from the point of view of Chinese observers, as indistinguishable from a native Chinese speaker. So it is patently obvious that the entire computer is capable of understanding Chinese, and in my opinion the fact that no subcomponent of it (the man, a basket, the rule book, you name any part you like) understands Chinese is simply irrelevant! Consider the man in the room: like me he can read and understand English, but my left eyelid cannot, nor my tongue, nor can any of my individual neurons. The fact that my parts cannot understand English does not prove that humans cannot understand English. Likewise, the fact that any part of a computer cannot understand Chinese does not prove that a computer cannot understand Chinese!
    (Edit: I had forgotten Searle's response to the systems objection. It's been 40 years since I heard his lecture, but what he fails to explain is how strong AI can successfully answer Chinese questions with accuracy indistinguishable from a native speaker - per his thought experiment - and yet completely lack semantic understanding. Likewise, he fails to explain why, with humans, if you consider dividing us up into smaller and smaller subcomponents, you will at some point suddenly have a complete set of subcomponents none of which can understand English - unless his claim is that one of our atoms is the one that learns English semantics, which I presume it is not - which seemingly proves that semantic understanding MUST arise as a system property, and therefore there is no logical reason to assert that it's impossible for strong AI, as a system, to exhibit that property.)

    • @skoosharama
      @skoosharama 1 year ago +1

      26:58 if anyone wants Searle's response to the systems objection: the entire system is nothing but a symbol-manipulating machine, and knowledge of syntax alone - the symbols and the rules for manipulating them - is not enough to understand the semantic content of the symbols. The claim that an interlocutor that can pass the Turing Test of being externally indistinguishable from a Chinese-speaking person is therefore *necessarily* a person is difficult to justify; a person is not merely an entity that can perfectly imitate a person such as oneself, but an entity that one can reasonably suppose to have an internal life, as oneself does. I definitely don't believe that such a claim is "patently obvious".

    • @theconiferoust9598
      @theconiferoust9598 10 months ago

      do your cells and neurons understand English? or does your system as a whole, including your consciousness, understand it? what physical properties of your consciousness can we separate from your brain that show the physical "input -> output" that gives meaning to the words?

    • @skoosharama
      @skoosharama 10 months ago +5

      @@theconiferoust9598 Sure, we can agree that consciousness is an emergent property of certain complex systems. My contention is that we should not suppose that a text manipulation device has an interior life, or any awareness even approaching sentience, even if it is very, very good at text manipulation. The key here, I think, is that, while language could be thought of as a self-contained system, an entity without any perception of the tangible world in which humans live cannot possibly understand what the symbols refer to, i.e. what they mean. Our text manipulation program, unlike Searle's Chinese room (which at least includes a homunculus who might get this), most likely does not even understand that the characters are symbols at all, rather than mere characters and character strings with no extrinsic meaning outside of the rules of the language. It doesn't really matter how good ChatGPT gets at text prediction and mimicking human linguistic responses; it is still just a glorified version of Autocorrect that is incapable of understanding its own output. I would submit that it is incapable of understanding that its output even could mean something outside of itself and its statistical models, or what it would mean for its output to have such meaning.
      Let's put it this way: just because the human brain is a complex system out of which consciousness arises and that is also capable of complicated linguistic output, doesn't mean that *any* system that is capable of complicated linguistic output is the kind of complex system out of which consciousness arises.

    • @theconiferoust9598
      @theconiferoust9598 10 months ago

      @@skoosharama agreed. my response was mostly aimed at the op's objections.

    • @aaronmarchand999
      @aaronmarchand999 8 months ago

      ​@@skoosharama"The human brain is a complex system out of which consciousness arises"... Who says consciousness arises out of the brain.... Judging by the way you talk, perhaps you are less conscious than you think

  • @lindyl4257
    @lindyl4257 2 years ago +6

    This helped a lot, thank you. You're a great teacher.

  • @Sunshine10101
    @Sunshine10101 1 year ago

    Love your lectures. They are great!! Please keep it up. I am so grateful

  • @ND-im1wn
    @ND-im1wn 10 months ago +3

    Amazing how much more relevant this problem, video, and explanation are today with ChatGPT. I understood this concept in an intuitive way, but now I have the language to explain and understand it explicitly. Thanks!

  • @philplante6524
    @philplante6524 1 year ago +43

    In the "system response", there was a rule book that instructed you how to manipulate the symbols. The rule book, which is part of the system, is the part that understands Chinese; otherwise your outputs would not be correct. The programmer who made the rule book is part of the system, and he/she has the understanding. In life, the brain programs itself: babies observe how the world works and start making up the rule book. Experiences are programmed in as neural networks.

    • @cronistamundano8189
      @cronistamundano8189 1 year ago +8

      I would add that the brain not only programs itself, but is also "pre-programmed" innately (that's how babies "know" how to cry when uncomfortable - that's more than just syntax, it has semantics in it, and parents take some time but eventually find out what the baby is trying to "say"), and that other stimuli (the concept of handling and holding by Winnicott comes to mind) are also part of the rule book that is written outside the room.

    • @hinteregions
      @hinteregions 1 year ago

      Nice one.

    • @philplante6524
      @philplante6524 1 year ago +6

      @Murray Wight I see your point, but I think that the rule book captures the understanding: the rules are not random; they were created by someone who understands Chinese. So the understanding is hard-coded as a set of rules. I used to write engineering specifications for software systems. As the engineer, I determined how the system should react to various combinations of inputs - in essence, I created the rule book. The software developers just coded the software to implement the rules. In living systems, there is no external engineer or Chinese speaker to create the rules; we have to create our own rules based on experience and trial and error. There is no ghost in the machine; the rules are created within the machine.

    • @Olvenskol
      @Olvenskol 1 year ago +1

      I'm not sure that Searle's point that you cannot derive any meaning from just symbols is true. It's true enough in simple cases, but not all cases are simple. For example, modern computers with an adequate set of rules and data can identify dogs in pictures or tell apart one human face from another or state that two people are the same. This is accomplished using only rules and symbol manipulation(of '0' and '1's no less), but the result seems to require something that might be considered "understanding".

    • @hinteregions
      @hinteregions 1 year ago +1

      @@Olvenskol I think the Chinese room works at a very simple level (if you are denying your superhuman processor certain mental faculties, as I tried clumsily to say in my comment that is just below or above or somewhere). A normal person, who hasn't dabbled in encryption, wouldn't be able to work out the meaning, just follow instructions, sure. But Alan Turing or Noam Chomsky might do what we humans actually did do: we learned to make simple codes and then we learned to break them. A better example might be the Rosetta Stone - we had to work that thing out with only our understanding of other languages to guide us, and that is basically the same example as the Chinese Room. I am saying his major thesis is broken, as for some reason he doesn't do his thought experiment completely, or to its conclusion, which would by his own terms give him a perfect [digital] copy of his own brain that can only, by his own reasoning, have memories and feelings. Taking his main ideas to their conclusion, I think all he's got is an artificial division, between his own brain and a theoretical perfect copy of it, that makes no sense to impose. I don't think he cares to accept, just like all legal systems, that there is neurochemistry here.

  • @mattmerc8513
    @mattmerc8513 2 years ago +5

    Thank you so much for your vids, you've explained it far better than any other paper, research, or teacher that I've come across.

    • @xbzq
      @xbzq 1 year ago +2

      That goes to show you don't come across many papers, research, or teachers.

    • @ozymandiasultor9480
      @ozymandiasultor9480 1 year ago

      @@xbzq well said.

    • @ozymandiasultor9480
      @ozymandiasultor9480 1 year ago

      Where have you studied philosophy and logic? At which university are the professors so bad that a mediocre channel with simplistic explanations is so much better? I am not saying that this channel is bad, but it is for laymen; those are not exactly top-notch explanations.

    • @hb-robo
      @hb-robo 7 months ago

      @@xbzq why is everyone in this comment section such a brusque asshole? Perspective is relative; leave them alone.

  • @user-zi3qg9zq8p
    @user-zi3qg9zq8p 1 year ago +3

    It is like unconsciously grinding some skill without realising or feeling what you are doing - playing the piano, walking, writing, typing on the keyboard with all ten fingers, learning anything. You just repeat something endlessly and boom, you are a master at it. I remember grinding my pronunciation in my second language very hard using the shadowing technique, and at some point I started to produce signals and sounds very naturally without any understanding of what I was talking about. Later I understood that feeling sits, somehow, on top of the computations and gives you additional power: the ability to compute something or to auto-correct errors. But the question is whether the function converting a bunch of inputs into outputs in a specific order can see dreams. I believe that depends on the architecture of the hardware that runs the process; it does not depend on the output signals, which we can interpret as numbers (or vice versa) that have meaning for us. In other words, a function that performs some computation and produces ideal output does not relate to feeling and being alive in any way; it is like saying that a smartphone is alive just because it can produce an ideal screaming sound.

  • @chrischristenson4547
    @chrischristenson4547 11 months ago

    I do enjoy your talks greatly. I will continue listening to them.

  • @dwinsemius
    @dwinsemius 1 year ago +142

    Well done. Thank you. I suffered through Searle's "Philosophy of Mind" course at Berkeley in 1970. It was mostly reading and considering Hume's writings. It was definitely NOT what I had been hoping for. My final paper in the course, heavily influenced by reading Julian Huxley, was my version of functionalism and an attack on Hume's rejection of induction as a sensible basis of knowledge. I was a physics major at the time so abandoning induction as a path to knowledge was unthinkable. (Also Hume's use of self-reflection as his primary data gathering tool is easily as fallible as induction.) I only got a B+ which I found annoying but totally understandable, given the distaste I had for Hume and by association Searle at that point. Then 10 years later Searle reappeared on my radar screen because his attack on Strong AI appeared in Scientific American. I found his arguments entirely unconvincing. I had already accepted the Turing test as a reasonable basis for assessing the expression of language as "intelligent" output of a process. A few years ago I found a tome from the late 1800's by Huxley on Hume, and I periodically pick it up and enjoy random bits of it.

    • @matswessling6600
      @matswessling6600 1 year ago +8

      Induction isn't a path to knowledge, but that is not a problem, since science isn't based on induction.

    • @hinteregions
      @hinteregions 1 year ago +12

      Yeah, me too. He seems not to understand what Dennett, for example, is doing, because he isn't able to see all the implications of his own thought experiment. If we simply take his main thesis to the extreme, as we must and as he for some reason does not, with every single neuron replicated and whatever neurochemical signal it's about to transmit too - not really so different from the cause and effect that is the basis for Determinism, if it is not the very essence of it - then yes, indeed this would necessarily be a perfect simulacrum of his mind, and his memories and thoughts and feelings too, as he takes his own for the purpose of the experiment. We might have to hook it up to some 'sensory inputs' and give it some way of communicating, but I have to assume that's a trivial matter in this context. IF we could make such a marvellously complete copy of that human organ to Searle's very own specifications, properly and fully as opposed to his convenient 'partially,' THEN unfortunately Searle is hoist on his own petard. The fact that we cannot is irrelevant.

    • @nosuchthing8
      @nosuchthing8 1 year ago +2

      Thank you. I agree with your assessment, I read that article in SA too.
      What do we do with chat GPT? It seems close to passing the Turing test. Please try it if you have not already.

    • @dwinsemius
      @dwinsemius 1 year ago

      @@nosuchthing8 I have "chatted" with chatGPT 3.5. It's like a sociopathic college student. A bit like a Donald Trump but unlike that particular human actually 'speaks' in complete sentences with minimal digressions. Makes up stuff and cites non-existent citations to fill in and support syntactically correct slots in its explanations. It is built to sound good but has limited learning capacity. It also denies being human so perhaps close to Turing-passing but not yet convincing to me.

    • @nosuchthing8
      @nosuchthing8 1 year ago +1

      @@dwinsemius yes, I agree with your assessment.
      Close but no cigar.
      But let me give you an example. I asked its interpretation of the fable "The Emperor's New Clothes", which as you know has the emperor parading around in his birthday suit because he's gaslighted by some villains.
      ChatGPT gave a very good assessment, and then I asked if there is a connection to burden of proof, and it readily explained how burden of proof was key to the story.
      So it's certainly close to passing the Turing test.

  • @micry8167
    @micry8167 1 year ago +8

    Excellent comments here. Can't help assuming that Searle was motivated more by his distaste for some ideas than by a desire for hard truth: namely, the idea that a vast enough system of limited machines could be an analog for the human mind.

  • @jmiki89
    @jmiki89 1 year ago +49

    Actually, if you think about it, that's almost exactly how infants learn their native languages in the first place, except they don't even have a rulebook; they have to figure that out for themselves. True, they get not only symbolic input, but for them the audial sequence for "mum" doesn't have any more meaning than the made-up word "blarduk" has for you or me. They can differentiate between different frequencies and whatnot and try to mimic them via trial and error (the difference between them and the blarduk example is that we have much more experience making sounds with our vocal organs, so we would make far fewer errors and hence need far fewer attempts to repeat this new word).
    And yes, babies have feedback loops to help them through the learning process, but those are basically just another bunch of input. Yeah, there might be some genetically imprinted social behavior patterns guiding which of these feedback inputs should be considered positive and which negative, but all together those can still be paralleled with a deeper-level rulebook from the Chinese room experiment.
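    A toy sketch of that picture, with made-up sounds, "meanings", and a feedback signal: no rule book is supplied, and an association table is built up purely from trial, error, and feedback treated as just more input.

```python
import random

sounds = ["mum", "blarduk"]
meanings = ["caregiver", "toy"]
feedback = {"mum": "caregiver", "blarduk": "toy"}   # what the environment rewards

scores = {(s, m): 0.0 for s in sounds for m in meanings}
for _ in range(500):
    s = random.choice(sounds)
    # Guess the meaning that has worked best so far, with a little noise.
    guess = max(meanings, key=lambda m: scores[(s, m)] + random.random())
    scores[(s, guess)] += 1.0 if feedback[s] == guess else -1.0

learned = {s: max(meanings, key=lambda m: scores[(s, m)]) for s in sounds}
print(learned)   # settles on the rewarded associations
```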

    • @erikmagnusson5713
      @erikmagnusson5713 1 year ago +16

      Good point. The feedback loop is what is missing in the Chinese Room. The rule book is never updated. The system never learns anything.
      So if the rule book doesn’t contain understanding/semantics and there is no mechanism for learning then the system will never understand semantics…
      …I now find the Chinese Room uninteresting…

    • @brotherpaul963
      @brotherpaul963 1 year ago

      @@erikmagnusson5713 Funny!!!

    • @sandornyemcsok4168
      @sandornyemcsok4168 1 year ago +5

      I agree. The Chinese room is nothing other than a good presentation of how a computer works nowadays. That's all. Does it prove that a computer cannot be made to behave like a human? Absolutely not.
      Additionally, think about how "eternally" defined semantics is. Only simple things, like bread, wind, etc., do not change. But let's take something complex, for example "pious". How much has its content changed over the past centuries? In this case the semantics depends on the historical age and social context, above the individual.

    • @rickgilbert7460
      @rickgilbert7460 1 year ago +8

      I don't know that I agree. The infant learns that the sound "mum" is associated with the idea of a specific person by repeating it in the context of that person. Later, someone points to an actual tree and says "tree" and keeps doing that until the child learns the *semantic* understanding that the object in the yard "is a tree." So children learn the syntax by repetition of the syntactic rules, but they *also* learn the semantics by being taught them specifically, and separately from the syntax, right?

    • @jmiki89
      @jmiki89 1 year ago +1

      @@rickgilbert7460 but the face of their mother or the sight of a tree is nothing but another kind of sensory input without any kind of intrinsic semantic meaning. True, one may argue that humans are genetically hardwired for facial recognition to a fault (we can even see faces in places where there clearly aren't any), but the point is that the semantics is created inside the infant's mind via (as you pointed out) repetition and feedback.
      But in the thought experiment, the person in the room was given a supposedly complete and exhaustive but static rulebook of the Chinese language with which the room as a whole can imitate a full conversation, which begs the question: can such a rulebook exist? From the perspective of a single human life it may seem that the semantics of words is permanent and unchanging, but (especially in the age of the internet and smart devices) concepts are evolving too. We call both a smartphone and Bell's original invention a "telephone", but those are clearly different things connected only by the vaguest of similarities. So the rulebook in the room needs a way of being updated, and the only entity capable of doing that is the person in the room, and to do that, he needs some kind of feedback, which immediately leads us back to learning.

  • @magellan500
    @magellan500 ปีที่แล้ว

    Great brief presentation on these questions. I’m also a big fan of John Searle.

  • @finald1316
    @finald1316 ปีที่แล้ว +48

    Aside: there is a small nuance with the Chinese language. The symbols are tied to meanings, not phonetics, so just as you can decrypt messages using letter frequencies, it is plausible that you could infer the meaning of some symbols (although never be certain of it) from their expected frequency.
    The symbol for the moon is tied to the month, which relates to the feminine via the menstrual cycle. Not that you couldn't try the same approach in other languages, but they have a layer of indirection due to being tied to phonetics.
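    As a rough sketch of the frequency idea (the symbol stream below is invented, and real decipherment needs far more than raw counts), the counting step might look something like this in Python:

    from collections import Counter

    # A hypothetical stream of symbols passed into the room (made up here).
    corpus = "月月日人月水日月人人月"

    for symbol, count in Counter(corpus).most_common():
        print(symbol, count)
    # A decipherer could then line these counts up against how often concepts
    # such as "moon"/"month" are expected to occur, as suggested above.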

    • @koenth2359
      @koenth2359 ปีที่แล้ว +7

      That was what I was thinking. Ironically, the problem of decoding Enigma was therefore much more complex than this task.

    • @leslierhorer1412
      @leslierhorer1412 ปีที่แล้ว +3

      Not only frequency, but more importantly, context. If the inputs to the system allow it to assess when certain syntaxes are encountered, i.e. context, then the system can begin to make certain inferences about the syntax itself. This is the emergence of a realization of semantics from syntax in an empirical framework. I submit that such an ability to scrutinize the syntax is critical to the development of a semantic organization, but that it is indeed possible if the coding is also malleable. In addition to his questionable assumptions, Searle seems to be completely ignoring these factors. He is asserting that learning must be limited only to efficiency in the translation mechanism.

    • @koenth2359
      @koenth2359 ปีที่แล้ว +3

      @@leslierhorer1412 It is not all that different from what an infant accomplishes when he/she is trying to make sense of all sensory inputs, and finally manages to learn a language. And the infant manages! (#Chomsky #Language Acquisition Device).

    • @finald1316
      @finald1316 ปีที่แล้ว

      @@koenth2359 I am not aware of how the Enigma machine worked, but there are more words than letters. From a data standpoint, if the Enigma works over letters, it should be easier to crack.

    • @finald1316
      @finald1316 ปีที่แล้ว +1

      @@leslierhorer1412 Reminds me of IQ tests, but you can only be sure if you check the solutions. I guess that is another discussion, related to how language is constructed.
      There is some justification for ignoring that if you only account for "older" AI solutions, which are not generic in nature and are just algorithms for computing something very specific. However, when the system learns using an arbitrary number of layers, the "interpretation" of the input is not an algorithm set in stone; rather, we have implemented a sort of meta-algorithm, i.e. the instructions that will give us the instructions to return the correct Chinese character.

  • @AliceBelongs
    @AliceBelongs 3 ปีที่แล้ว +8

    This really helped me with my essay, thanks for uploading! :)

  • @Leao_da_Montanha
    @Leao_da_Montanha ปีที่แล้ว +2

    If humans understood the semantics of words in the way that would disqualify strong AI, as Searle is claiming, there would be no communication problems at all, as if different orderings and explanations in the learning process resulted in the same meaning for every word in every mind. In general, semantics differs for each individual and depends on the learning process they had; in depth, the memory we acquire for each symbol is updated as we learn newer symbols until there is enough context for semantics. In other words, we all fundamentally work as Turing machines, but arranged in a complex system.
    I would love to read comments on this, feel free to respond

    • @ronald3836
      @ronald3836 5 หลายเดือนก่อน

      It would be easier to comment if I disagreed with you, but I don't 🙂

  • @nixedgaming
    @nixedgaming ปีที่แล้ว +9

    I am desperate to see how Searle would respond to the idea of a transformer neural network, assuming he legitimately understood the math of it. My question is basically: why can't "semantics" be an *emergent* property of a sufficiently rich handling of syntax? The paper "Attention Is All You Need" basically demonstrates that a machine *kind of* grasps semantics from a type of mathematical transformation of language through encoder/decoder processes.
    Very fascinating, thanks for the lecture!
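    For readers curious about the mechanics this comment alludes to, here is a minimal, self-contained sketch of scaled dot-product attention (the core operation of "Attention Is All You Need") in Python with tiny made-up matrices. It is not GPT's actual code, just the flavor of the purely syntactic matrix arithmetic from which transformer behaviour emerges - which is exactly what is in dispute here.

    import numpy as np

    def attention(Q, K, V):
        # Q, K, V: (tokens x dim) matrices of made-up numbers.
        scores = Q @ K.T / np.sqrt(K.shape[-1])            # token-to-token similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over tokens
        return weights @ V                                 # weighted mix of value vectors

    Q = K = V = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [1.0, 1.0]])
    print(attention(Q, K, V))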

    • @ronald3836
      @ronald3836 5 หลายเดือนก่อน +1

      One possible answer is that Searle lacks the imagination to believe that semantics can arise out of sufficiently complex syntax.
      However, Searle seems to accept that a rule book can make you appear fluent in Chinese, so he seems to accept that syntax can do everything you need. But apparently that does not let him doubt that a human does not perceive semantics through complex syntactic processing...
      (Sorry for the double negation, haha.)

  • @ydmos
    @ydmos ปีที่แล้ว +3

    Maybe we're overestimating the role of "understanding" (semantics) here. Assume the mind is, in fact, the equivalent of a computer, that it is also in the Chinese room. Perhaps what we call understanding is just part of the programming, part of how it gets from the inputs (what we see, hear, touch, i.e., how our body senses the physical world) to the outputs (how we interact with that world). We've shown recently that one way AI does its thing is to come up with its own models to interpret input -- perhaps it's generating its own semantics. Under this view, our semantics is something we've created to process the world we live in, defined by how we sense it. A computer's semantics will be something else entirely, perhaps incompatible.

  • @ameliagerson926
    @ameliagerson926 ปีที่แล้ว +12

    I actually can read Hebrew and was so excited I knew what that meant bc it was the only part of the video I confidently knew lol

  • @echoawoo7195
    @echoawoo7195 ปีที่แล้ว +3

    The sensations you experience as a child are all symbols without semantics. The semantics derive from repeated exposure to those symbols.
    Given enough experience with a syntax, you can determine semantics. That's what infancy literally does.
    This entire thought experiment hinges on understanding not being an emergent property of a sufficiently complex information processing system

    • @echoawoo7195
      @echoawoo7195 ปีที่แล้ว +1

      Go pick up a pictureless book in a foreign language outside your own language family and tell me you can't pick out the meaning of some word groupings once you see a large enough set of symbols.

  • @Raoul684
    @Raoul684 10 หลายเดือนก่อน +5

    Great explanation, again. I love these videos, so thought provoking. My addition against strong AI is to ask: what is the computer doing, absent any questions or inputs? That, to me, seems equally relevant, if not more so, for consciousness.

    • @DAG_42
      @DAG_42 7 หลายเดือนก่อน +1

      If an anaesthesiologist stops your flow of thoughts by chemicals, you go unconscious. That's just taking the symbol shuffler guy out of the Chinese room.

  • @presto709
    @presto709 ปีที่แล้ว +9

    This was great. I think I first learned of the Chinese Room from a book by Martin Gardner. I think I come down on the system response. The system does understand Chinese because looking at the input and giving the correct output is what understanding means. It's the same test we would give to a person who claims to understand Chinese.

    • @kid5Media
      @kid5Media ปีที่แล้ว

      No. Or, to tweak things a little: instead of passing in a question, the person outside the room passes in an instruction to order a glass of tea. The person inside the room will die of thirst (unless rescued by the Infinite Monkey Typewriter Brigade).

    • @presto709
      @presto709 ปีที่แล้ว

      @@kid5Media Interesting. We aren't told what the book inside the room will do when a non-question is the input. Postulate that the instruction book translates non-questions into his own language, which he recognizes and reads. Interesting, but I'm not sure how it changes anything.

    • @theconiferoust9598
      @theconiferoust9598 10 หลายเดือนก่อน

      The «system» includes humans with consciousness to interpret and glean meaning. In other words, you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      It's like saying meaning is self-evident in physical matter, symbols, or mathematics, as if a computer could take the word «love» and output every single iteration of the meaning that has ever been conceived, felt, and lived by every human ever. There is no correct output.
      Conversely, it seems there is no meaning without a conscious experience, and the «systems» response only affirms this.

    • @presto709
      @presto709 10 หลายเดือนก่อน

      @@theconiferoust9598
      YOU WROTE
      you are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      REPLY
      The test wouldn't be if it gives the correct answer. It would be if it gives a convincingly human answer. Like the Turing test.
      If you ask "How is the weather" and the answer comes back "27" that would not make you think there was a mind at work.
      If you asked, "Is marriage a good thing?" You would not be looking for a correct answer, you would be looking for an answer that indicates an understanding of the question.

    • @presto709
      @presto709 10 หลายเดือนก่อน

      @@theconiferoust9598
      YOU WROTE
      You are saying there is a «correct» output to every given input, which is nonsense and obviously not how life works.
      REPLY
      I'm not saying that at all. When interacting with another person I do not require that all of his answers match my opinion of what is correct, only that they generally seem to be responsive. I might interact with a person who gives absolutely no "correct" answers in my opinion but still clearly seems to be a "mind".

  • @dowunda
    @dowunda ปีที่แล้ว +42

    How does Searle define what it means to understand something? Viewed exclusively in the physical world, people can be seen as a kind of computer, the brain itself being a kind of Chinese Room.

    • @recompile
      @recompile ปีที่แล้ว +9

      Searle makes a convincing case that whatever it is that brains do, it is not mere computation. You might think you're a computer, but that's just because that's the most advanced technology we have at the moment. 100 years ago, you might have thought the brain was like clockwork. The idea that brains are like computers will seem just as silly in the not too distant future.

    • @dowunda
      @dowunda ปีที่แล้ว +3

      "Brain: an apparatus with which we think we think.“ - Ambrose Bierce

    • @bombmk2590
      @bombmk2590 ปีที่แล้ว +12

      @@recompile I have yet to see a convincing argument that it is anything but computation.
      How could it be more?

    • @costadev8970
      @costadev8970 ปีที่แล้ว +1

      ​@@bombmk2590 you have subjective experiences, a computer (symbol manipulator) does not.

    • @calorion
      @calorion ปีที่แล้ว +8

      @@recompile "The brain is like clockwork" is not really a silly idea. Unsophisticated, sure. As we get better computers, we'll get a better understanding of how brains work. But a physicalist determinist basically does think that the brain is like clockwork on a theoretical level.

  • @mixingitupwithmina93
    @mixingitupwithmina93 ปีที่แล้ว +1

    Well done! Thank you for giving your gift of teaching to the world. You have just identified the lack in our world right now. I would suggest that the more powerful a group becomes the more syntaxicized it grows as it loses the ability to understand the semantics of the syntax it / they continue to regurgitate. Everyone gets on board the Turing train … excited to be a part of the syntax revolution. Lol. I am not picking on any one or any group - just a general semantic observation 🙂

  • @t.b.a.r.r.o.
    @t.b.a.r.r.o. ปีที่แล้ว +2

    All this and yet here we are, approaching the Singularity.

  • @mattmiller4233
    @mattmiller4233 ปีที่แล้ว +7

    Great video! Very well explained. I would add two points, though - the first you mentioned very briefly, but it is worth stressing that the Chinese Room serves only as a refutation of functionalism in the purely *digital* sense, not in totality. The second is that Searle seems to lack (though I may have missed it in the text; please correct me if I did) any formalized definition of what, exactly, constitutes this "understanding" that he says the room lacks, or what sets such "understanding" apart from a sufficiently complex system of inputs and outputs. He seems to work from a fairly generalized sense of what *feels* like understanding, but fails to specify or quantify it (again, let me know if I missed something).
    Again, awesome work!

    • @ben_clifford
      @ben_clifford 10 หลายเดือนก่อน

      To address your first point: I think it's actually sufficient for Searle to contrive a highly-constrained, obtuse situation and say that he's refuted functionalism, and here's why...
      The core argument of functionalism is that only function matters, and not the system or substrate. So, to disprove such a theory, we only need to show a single counter-example. There's a more formal way to show this with predicate logic, but I think you get the idea.

  • @wirewyrm01
    @wirewyrm01 ปีที่แล้ว +28

    There is a paradox in the thought experiment. The person in the room is tasked with manipulating symbols, not with trying to figure out what they mean. Therefore, it follows naturally that the person in the room cannot (or more accurately, will not) figure out what the symbols mean. Indeed, the meaning of the symbols is completely irrelevant, so positing that the person could never understand the semantic meaning of the symbols is also irrelevant, because that was never part of the design.
    On the other hand, I would propose that Searle's assertion, that the person in the room can never gain an understanding of the symbols even if they tried, is false. Perhaps the person cannot gain much insight from studying the symbols alone, but if the person studied the *instructions*, surely they would be able to glean some information about the symbols and their contextual use. Patterns will emerge from the frequency of use of certain symbols, the association of certain symbols with each other, symbols occurring in question-answer pairs, and so on. Furthermore, from the frequency and sequence of "questions" received, the person can also start to observe patterns and eventually triangulate the semantic meanings of the symbols. In fact, many of these techniques are used in the study and research of dead languages.
    There are other problems that I can see with the thought experiment, but these are the most easily defined ones.
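    The "patterns from association" point is roughly what distributional methods do. A toy sketch in Python of the co-occurrence counting described above (the mini-corpus of symbols is invented): it finds which symbols keep company with which, though whether that ever amounts to meaning is what Searle disputes.

    from collections import defaultdict
    from itertools import combinations

    # Invented "messages" built from unfamiliar symbols.
    messages = ["ABAC", "ABAD", "CBAD", "ABAB"]

    cooccur = defaultdict(int)
    for msg in messages:
        for a, b in combinations(sorted(set(msg)), 2):
            cooccur[(a, b)] += 1          # how often two symbols share a message

    for pair, count in sorted(cooccur.items(), key=lambda kv: -kv[1]):
        print(pair, count)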

    • @LoraxChannel
      @LoraxChannel ปีที่แล้ว +4

      Yes. This is exactly why modern AI is fed huge bodies of language and tasked with creating context, relationships, and distinctions, just as we do in language. They are no longer limited to manipulating digits. I mean, that is the whole point of designing AGI, so it can learn and assign "meaning" independently.

    • @stefans.5043
      @stefans.5043 11 หลายเดือนก่อน +6

      The person inside the room will never know the meaning of the symbols when he doesn't know the question he is asked or the answer he gives. In this experiment he only acts on given instructions and not on human behavior like observing or recognizing patterns. And even when he sees patterns, he still doesn't know their semantic meaning.
      thts th dffrnts btwn hmns nd cmptrs - you probably can read this last part only by knowing the meaning of the words and not the meaning of the symbols. A computer cannot.

    • @fang_xianfu
      @fang_xianfu 11 หลายเดือนก่อน

      Yes - the other part is, where does the book come from? Real minds write their own book of rules to manipulate the Chinese symbols, and they edit their own book as they try new things and they succeed or fail.

    • @SatanDynastyKiller
      @SatanDynastyKiller 11 หลายเดือนก่อน +2

      I knew someone in here would be as smart as me lol - saves me typing it, thank you.
      The easiest way to cover it all- until we understand everything, we understand nothing. I genuinely think some of these “intellectuals” are not exactly what they claim they don’t claim to be…

    • @LoraxChannel
      @LoraxChannel 11 หลายเดือนก่อน

      @@SatanDynastyKiller It's smart as I. When you are claiming smartness, it matters.

  • @themcons50
    @themcons50 7 หลายเดือนก่อน

    wow. Great vid and presentation. Thank you sir, much respect.

  • @ChrisJones-xd1re
    @ChrisJones-xd1re 10 หลายเดือนก่อน +1

    Semantic understanding can emerge with sufficient instruction, and cannot, without it.

  • @davidn4125
    @davidn4125 ปีที่แล้ว +3

    I suppose the same could be said of the human mind as a computer program. If one were to map all the neural connections, then you would be able to know the output of a person's brain given the input signal. It's just that we don't have a way of mapping the connections fully, but that doesn't mean they aren't there. Also, the mind does change over time, but so can a computer program, since AIs are now able to rewrite their own code.

    • @ronald3836
      @ronald3836 5 หลายเดือนก่อน

      Agreed. And even if the human mind is somehow "more" than a computer program, Searle's argument does not show this in any convincing way.
      Ultimately he has nothing better than "syntax can never be semantics" and "humans do semantics". But it is his lack of imagination that tells him that syntax cannot mimic semantics sufficiently closely that we cannot tell the difference. (And interestingly the premise of his experiment is that a syntactic rule book CAN mimic semantics.)

  • @superactivitylad
    @superactivitylad ปีที่แล้ว +6

    I like the "systems" response to the problem, and I think about it this way: My eyes do not understand symbols. My eyes receive light, do whatever it is they do with that light, then send that information through my optic nerve into my brain; then neural pathways that were formed when I first learned about that symbol fire up again, and then a bunch of complicated interconnected stuff in my brain happens that makes "me" (the system as a whole) understand that I'm looking at the number 69, and the meaning behind it, and I say "nice."
    My point is that no individual part of my nervous system understands anything. They all individually just receive electrical or chemical information, do something with it, and send some kind of information to the next part.
    I believe it's possible to design a system with digital computers that replicates how the brain works. We just need to first understand how all that "complicated interconnected stuff" works.

  • @michaelmertens813
    @michaelmertens813 ปีที่แล้ว

    Thank you, this was really helpful.

  •  ปีที่แล้ว

    Love it - I wrote my master's thesis on cognition & mind, and on John Searle specifically, in 1997.

    • @wolframstahl1263
      @wolframstahl1263 ปีที่แล้ว

      Then you seem to have some expertise in the field, definitely much more than I do.
      Can you answer me this, or do you know if Searle has anything to say about it:
      Does the existence of Chinese people disprove his conclusion from this thought experiment, and if not, why? He talks about a person with no understanding of Chinese being put in that room. How is that different from being born in China? An infant literally speaks no language; at the moment one is born there is only syntax to be experienced, yet we develop semantics. In being born, an infant is a person with as little understanding of Chinese as Searle himself, being put into a literal Chinese room.

  • @Conserpov
    @Conserpov ปีที่แล้ว +12

    If the people outside are determined to teach the person inside to understand Chinese, I think they can do it, to an extent.
    It may require at least two distinct inputs though.
    This problem comes up IRL with children who are blind and deaf from birth.

    • @teddydunn3513
      @teddydunn3513 ปีที่แล้ว +5

      Exactly. The Chinese room is set up to treat visual sensory inputs as somehow special and more "real" than other inputs.

    • @pumkin610
      @pumkin610 11 หลายเดือนก่อน +1

      Blind from birth - that reminds me how we can't really describe colors in a way that would let a person who has always been blind know what they really look like, aside from black, if you consider that to be a color. But that's only because I assumed they'd be seeing black all the time, since we see black when we're in the dark; they aren't seeing anything, just as hands and feet don't detect light. Red is hot, it's intense, some roses are red; blue is calm, the sky is blue; green is grass; yellow is the brightest color. Colors are their names and the specific visual sense that they are, I suppose. Maybe we aren't seeing colors for what they are either; to me certain colors are certain emotions, certain vibes, and certain things.

    • @pumkin610
      @pumkin610 11 หลายเดือนก่อน

      There's gotta be a touch-based language, right?

    • @theconiferoust9598
      @theconiferoust9598 10 หลายเดือนก่อน

      you can give them a rulebook to input->output and learn, but it is their conscious experience as a human that will allow them to find meaning, not the rulebook.

    • @ronald3836
      @ronald3836 5 หลายเดือนก่อน

      @@theconiferoust9598 the rules in the rule book/weights of the neural network/connections between neurons get modified as you learn.

  • @emanuelbalzan7667
    @emanuelbalzan7667 ปีที่แล้ว +10

    Absolutely love this presentation. I have only one criticism - the description of Chinese symbols as "squiggle squiggle" or "squoggle squoggle" is not racist. English (or Latin) characters would appear the same way to a Chinese person who didn't know what they were.
    I am old fashioned enough to believe that the word racism should be reserved to describe beliefs of racial superiority and inferiority, and behaviors of injustice and exploitation that flow from such beliefs.
    I wouldn't bother anyone with this, except I really do feel we need to be a little less sensitive about these issues. They are increasingly being used to fracture a very polarized society.
    I would not take offense at anyone referring to my writing as "squiggle squiggle" or "squoggle squoggle" even if they could read English - but perhaps that's because my handwriting verges on the indecipherable anyway.

    • @t.b.a.r.r.o.
      @t.b.a.r.r.o. ปีที่แล้ว +1

      Agreed.
      Though I would call some written English "hodgepodge" or "scribble-scrabble".

    • @GynxShinx
      @GynxShinx ปีที่แล้ว +1

      The only problem comes when actual racists are ostracized from society, so they hide their actual views and dog-whistle them by saying stuff like "Chinese is just squiggle squiggle." When said by a racist, it implies that Chinese people aren't smart enough to create a real language, which IS a supremacist idea. Now, should we react by calling someone a racist when they say "Chinese is a bunch of squiggles"? I doubt it. But should we be suspicious of them? Sure. If you know this individual and know they don't do legit racist stuff, then they're fine.

    • @magicpigfpv6989
      @magicpigfpv6989 ปีที่แล้ว

      Ask to see your doctor's handwriting… that shit is nothing but squiggles!

  • @mohnaim5824
    @mohnaim5824 8 หลายเดือนก่อน

    Impressive talk yet again, well done. You are a natural inheritor of Sagan.

  • @davidjohnbonnett
    @davidjohnbonnett ปีที่แล้ว

    What a brilliant video! 👍👍👍

  • @impyre2513
    @impyre2513 ปีที่แล้ว +19

    Personally, I feel like the systems response idea makes a lot more sense... But it only works if the system is able to self-modify. If this system as a whole is meant to represent someone that understands Chinese, then it must first demonstrate the ability to form responsive queries that actually make sense, and then potentially make adjustments to its programming depending on the responses received. But that hits the crux of the problem, because it would have to be a pretty fancy rulebook to have that functionality built-in.

    • @JohnDlugosz
      @JohnDlugosz ปีที่แล้ว +5

      ChatGPT, especially in GPT-4, fluently translates to Chinese and other languages. You can offer corrections or style guidance and it corrects itself and remembers that moving forward...but this does not alter the model! The nature of the Transformer holds the recent memory of conversation as the input to the next pass. GPT-4 is a literal Chinese Room, running on the Azure data center. Translating to Chinese is not something it practiced with feedback during the learning phase. All it did was read text in different languages and learn the patterns within those languages. Meanwhile, it gained the skills to converse convincingly, translate languages fluently, do algebra, solve logical problems, write code, create web pages, and much more, all from this "fancy rulebook".
      The code implementing the neural network is for processing tokens, input and output. When the system undergoes "deep learning", that code does not change. The learning is in the weights between the neurons (or "parameters"). So, once learning is complete, the knowledge is in this structure, not the (same) low-level code being executed.
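      A minimal illustration of that last point, as a sketch only (the tiny network and numbers below are made up, not GPT's actual architecture): the low-level code that runs the network never changes, while learning only changes the parameter values it is handed.

      import numpy as np

      def forward(x, W):
          # This "low-level code" is fixed; it only multiplies and thresholds.
          return np.maximum(0.0, W @ x)

      x = np.array([1.0, 2.0])
      W = np.array([[0.1, -0.3],
                    [0.5,  0.2]])        # the learned "knowledge" lives here

      print(forward(x, W))
      W = W - 0.01 * np.ones_like(W)     # stand-in for one learning update
      print(forward(x, W))               # same code, different behaviour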

    • @glenblake3021
      @glenblake3021 ปีที่แล้ว +3

      Sure, but that's a problem with the analogy. It's meant to be an analogy for strong AI, and if you designed a system attempting to be a strong AI but it lacked the ability to modify its own rules, well, you've fundamentally misunderstood the problem you're trying to solve. Lack of imagination on Searle's part, imo. One of the more irritating things about the Chinese Room.

    • @rrshier
      @rrshier ปีที่แล้ว +1

      @@JohnDlugosz I disagree that ChatGPT is a literal Chinese Room, given your statement that "you can offer corrections or style guidance and it corrects". That statement alone means you are solving the problem for the processor in the room. The Chinese Room thought experiment is the idea that there is no connection between the language the processor in the room knows and the language (with differing symbols) being passed through the room. Your statement provides that connection, thus NOT a literal Chinese Room.

    • @harrygenderson6847
      @harrygenderson6847 ปีที่แล้ว +3

      @@rrshier No, ChatGPT does not literally understand the meaning of the statement you are passing it. It 'reads' it as a series of tokens and applies some weightings to calculate the most likely follower. The model itself is part of the rulebook, and the prompt you give it is the string of arbitrary characters being fed in. It could tell you the meaning of English or Chinese, but it doesn't internally separate English and Chinese or derive truth from the statements or something. But it's an abstraction that we apply to the system, the same way we do when fellow humans create waves of pressure in the air.
      Also, just so you know, the 'Chinese room' is Turing complete, and can thus be simulated by any Turing machine (such as a digital computer) and can simulate any Turing machine (such as a digital computer). So ChatGPT could be run inside a literal Chinese Room.

    • @RyanShier
      @RyanShier ปีที่แล้ว +1

      @@harrygenderson6847 - Actually, the Chinese room is not Turing complete, as there is no way of storing state or having feedback from an output fed back in. There is a strictly defined set of rules which cannot change (that would be where a feedback path and state storage could come into play). In fact, as defined, it is the literal opposite of Turing complete. Using the example given on the wiki, the person inside the room with the strict rule set is akin to HTML (non-Turing-complete). ChatGPT, on the other hand, given that you CAN indeed give it other state that it can store and use to give differing answers, is indeed Turing complete. If the secondary inputs are used for further training of the GPT model, then it is most certainly Turing complete. In terms of understanding meaning, neither do we without the context of other surrounding words, placement within a sentence, etc...

  • @devtea
    @devtea ปีที่แล้ว +8

    Thank you so much for this video! This is by far the best explanation of the Chinese Room. Within Searle's imagined ideal conditions of the experiment, Searle is correct, and the conclusion would be true.
    At least it would have been true for, say, a digital computer such as a calculator or a watch.
    However since Searle's article wasn't concerned with the ideal conditions, and because Searle's article showed interest in extrapolating this into practice - there is indeed a version of a 'Systems' response to Searle's original article.
    One can argue that Searle's conclusion (that it is impossible for the system, or the individual person/central processing unit inside of it, to learn the semantic meaning of the language) is false - because it is impossible to guarantee Searle's imagined ideal conditions (the ideal circumstances of the room - perfect isolation and perfect access control that would prevent any unauthorized outside contact/exchange of information) for an extended length of time.
    Again, the reason Searle's conclusion (that it's impossible for the person/central processing unit to learn the semantic meaning) is false is because the ideal conditions imagined are impossible to guarantee.
    One can argue that there exists a positive, non-zero probability of an outside entity or force stepping into this setup (uninvited) and teaching the person/central processing unit inside the room the full semantic meaning of the entire language, without ever entering the room. For example, by passing information into the room on purpose, with the specific intent (i.e. the intent to train it, as one would train an Artificial General Intelligence). This experiment, given a sufficient length of time in real life, i.e. years, would encounter a non-zero likelihood that these ideal circumstances of the room would be altered by an outside party (i.e. an enthusiastic ML Ops Engineer).
    Since the person/central processing unit inside the room does have command/skill in some language (other than Chinese) that it uses to understand the instructions - it is not impossible for it to receive (from some enthusiastic person/entity outside the room) new information with instructions / process / method that build up into a full semantic understanding of Chinese, or any other language. This is for the classic digital computer. Especially if there's no time limitation, it's not impossible to perform this within much less than the mentioned 1000 years. Difficult and not likely, yes; labor intensive, yes; but not impossible at all; and with humans being humans - significantly probable.
    Of course, the above would not impart a 'human' experience of speaking and interacting in Mandarin or Cantonese. But a full understanding of the language? Yes. I.e., in the case of a digital computer, if it can understand the binary numbering system, it can understand Chinese. It will be able to hold a conversation with a human. It won't sound like a classmate or a neighbor or someone you can relate to as a natural-born human, so the conversation will likely be seen as less meaningful, but the language comprehension would be complete.
    Again, Searle isn't wrong. Within the thought experiment, Searle's conclusion is technically correct. It just has limited utility.
    It's like performing a thought experiment where one requests us to grant them the assumption that telomeres (stretches of DNA) at the ends of our chromosomes do not clip off and get shorter each time one of our cells divide. If we grant that assumption, we'll be able to construct a setup where a human can live 'forever', or at least not die from aging.
    The thing is, since it's not possible to guarantee that "ideal" assumption, you and I are still losing about 26 base pairs per year, so while the thought experiment is really interesting, the conclusions from it alone have rather limited utility.

    • @tedphelps
      @tedphelps ปีที่แล้ว

      Beautiful thinking. Thoughts of all sorts do stand on the stage to be judged for their actual value to us in living. Part of the problem is too strong a belief in proving some idea 'false.' Instead, I feel that ideas influence me, have a certain weight and value, move me this way or that in a wide world that I live in. I am happy for all of this.

  • @quokka_11
    @quokka_11 10 หลายเดือนก่อน +1

    20:19 "You're never going to be able to figure out semantics from syntax."
    Except we're talking about human language, and you already have your own experience of that. With exposure to enough earnest material (not nonsense), you would eventually make meaningful connections and at least some of the semantics would emerge.

  • @neilgilstrap7839
    @neilgilstrap7839 ปีที่แล้ว

    This was great, thank-you.
    I wanted to say that, while the "system" response is dismissed and there is commentary below about emergent behavior, my perspective as someone who studied AI and neural networks is that the system argument wasn't refined enough to EXPLAIN what is meant by the "system" to pose a threat to the thought experiment. Had it been adequately explained, not just as "emergent system behavior" but directed at HOW that emergent behavior works, I think Searle would have had a much harder time rebutting it than he otherwise did.
    The short is this. While any given "box" indeed does not understand semantics, what needs to be added to the system argument to present the full rebuttal is the following 2 key points:
    1) We agree that any given "box" can receive inputs and produce any output on a set of rules given to it. Suppose then that one of those "outputs" is readable as "semantic meaning" of the input. (As in the thought experiment). NOW, combine this with #2...
    2) Suppose that the output of any given "box" is the INSTRUCTIONS for another box. Not just another INPUT. I.e., one box is outputting the instructions another box follows. Effectively, the "computer is writing its own code" (a toy sketch of this appears after this comment).
    When you consider the possibilities that #2 implies, that's how you can arrive at the proof/causality that the "system" can understand semantics.
    Simply put, in the Chinese Room Experiment, it is the Chinese who are providing the instructions for the computer such that given an INPUT the OUTPUT will have semantical meaning to the individuals who are reviewing the output. Yes, the box is not aware of the semantics of the output but the people who wrote the instructions for the box are very aware of that and so provided the INSTRUCTIONS so that the computer will produce semantically meaningful output.
    Then, all you have to do is realize that the output of a box could be INSTRUCTIONS for another box (i.e. the computer/brain in this case is writing its own code).
    And you'll quickly arrive at the conclusion that, in fact, the system as a whole CAN produce semantics and, furthermore, that it's not just a weird emergent, unpredictable phenomenon. It follows logically and simply how semantics can be produced, and fairly easily so.
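    One way to picture point 2 above in a few lines of Python (the names are invented purely for illustration, not anyone's actual design): the output of the first "box" is itself read as the rule table that the second "box" follows, so the rules are data the system can rewrite.

    def box_a(observation: str) -> dict:
        # Box A's *output* is a rule table, not an answer.
        return {observation: observation.upper()}

    def box_b(rules: dict, symbol: str) -> str:
        # Box B blindly follows whatever table it is handed.
        return rules.get(symbol, "?")

    rules = box_a("ping")           # the system has, in effect, written new rules for itself
    print(box_b(rules, "ping"))     # -> PING
    print(box_b(rules, "pong"))     # -> ?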

  • @prismaticsignal5607
    @prismaticsignal5607 3 ปีที่แล้ว +4

    I bet you’re your students' favorite teacher.. XD
    Awesome lectures!!

  • @amaarquadri
    @amaarquadri 2 ปีที่แล้ว +16

    I would push back against the idea that you can never learn semantics from syntax alone. I think given enough time in the Chinese room, you would eventually learn Chinese just as well as a native speaker. Consider the GPT-3 language model discussed in these videos th-cam.com/video/_8yVOC4ciXc/w-d-xo.html, th-cam.com/video/PqbB07n_uQ4/w-d-xo.html. Despite only learning based on the syntax of a large corpus of English text, it is able to construct coherent, well-thought-out sentences. For all intents and purposes, it (or at least a future, more advanced version of it) does "understand" the semantics of language.
    In a certain sense, if you zoom in far enough, human brains are just manipulating electrical inputs and producing electrical outputs with no understanding of what they mean semantically. It's just a set of particles following the laws of physics. Nonetheless, the system as a whole can develop understanding.

    • @guillecorte
      @guillecorte ปีที่แล้ว +4

      The point is that you have to use quotation marks because you know it isn't real "understanding". Besides, you missed the point: it's not whether, after years in the room, you could or couldn't learn Chinese, but that you could respond "in Chinese" without really understanding it.

    • @MuantanamoMobile
      @MuantanamoMobile ปีที่แล้ว +8

      "Colorless green ideas sleep furiously" is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically well-formed, but semantically nonsensical. GPT--3 often makes Noam Chomskyesque statements that are syntactically correct but nonsensical, because it doesn't understand.

    • @francesconesi7666
      @francesconesi7666 ปีที่แล้ว +6

      Advanced symbol manipulation =/= understanding

    • @perfectionbox
      @perfectionbox ปีที่แล้ว

      An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't, unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

    • @danwylie-sears1134
      @danwylie-sears1134 ปีที่แล้ว

      @@perfectionbox Programs are data. The absolute distinction you're appealing to, between the impossible-to-update rule book and the constantly-updated arrangement of papers in baskets, does not exist. It's an actual theorem that a suitably designed Turing machine can receive its program as part of the initial content of its tape, and the arbitrary input as the rest of the initial content, and no matter what other Turing machine you pick, there's an appropriate program part that can make the program-reader Turing machine match the output that the other one would give if it received just the arbitrary-input part on its tape. And with real-world computers, it's literally true that programs are data, stored on the hard drive or RAM or whatever, same as any other data.

  • @bobstovall9570
    @bobstovall9570 7 หลายเดือนก่อน

    Full Disclosure: I'm a mid-level IT person with some programming experience but I am NOT a programmer; I just figure out how to make stuff work and how to fix it when it stops working.
    I have a keen interest in the Large Language AI models (as they call themselves and are so referenced by others) and I find myself, on occasion, engaged in conversations with people that are terrified of an AI system becoming 'sentient' and then plotting to take over the world and enslave or destroy humanity.
    I try to point out to them that for at least 150 years, and probably much longer, humans have been attempting to 'create' life, with every attempt ending in abject failure, and that no matter how far they push the technology in the hope that it might suddenly become self-aware, those attempts will likewise all end in failure, just as the animation or re-animation of dead tissue has failed.
    Some see it but too many do not. They prefer to live in fear of that dystopian future depicted in Sci-Fi movies.
    Thank you for this brilliant presentation and for making this very complex question accessible to the non-intellectuals among us.

  • @minatour762
    @minatour762 ปีที่แล้ว +1

    I was thinking about a response to this - what if you had a bigger system with multiple Chinese Rooms (multiple inputs and outputs)? However, the idea came to a close when I realized that inputs don't get compared with other inputs in the Chinese Room. Even if that weren't the case, the people wouldn't understand it as long as they didn't know it beforehand.

  • @lokouba
    @lokouba ปีที่แล้ว +8

    I argue "Strong AI" won't need to truly "think" if its instructions are elaborate enough to give the ILLUSION that it thinks.
    The actual subject of the experiment is not the person in the room, it is the person OUTSIDE the room. The idea is that if the person inside the room is trained to find these characters quickly enough, they can respond as quickly as if they understood the message written on the paper, and so convince the person outside the room that they actually understand Chinese.
    The point is that you can put a person or an AI bot inside the room and it would make no difference from the point of view of the person outside the room: if you tell them there is another Chinese person in there and tell them to write messages to them, they will likely believe it's a human Chinese speaker in both cases.
    The conclusion I draw from this is that if you give "Strong AI" enough tools, enough instructions, and most importantly a "Chinese room" to cover its true nature, it can pretend to be an actual being that "understands semantics", because human beings are only able to communicate through syntax.

    • @udarntors
      @udarntors ปีที่แล้ว +1

      This is simple to refute.
      We share meaning, not syntax; syntax may be lacking in a conversation or be minimal, but without shared semantics/meaning there is no communication.
      Example: you can understand a small child or a foreigner who does not use proper grammar.
      Here is some syntax:
      find . -type f -empty -prune -o -type f -printf "%s\t" -exec file --brief --mime-type '{}' \; | awk 'BEGIN {printf("%12s\t%12s\n","bytes","type")} {type=$2; a[type]+=$1} END {for (i in a) printf("%12u\t%12s\n", a[i], i)|"sort -nr"}'
      Here some meaning:
      Flower in the crannied wall,
      I pluck you out of the crannies,
      I hold you here, root and all, in my hand,
      Little flower-but if I could understand
      What you are, root and all, and all in all,
      I should know what God and man is.
      Alfred Tennyson in 1863

    • @lokouba
      @lokouba ปีที่แล้ว +1

      @@udarntors It seems you misunderstand the difference between syntax and semantics.
      You say you are presenting one example of syntax and one example of meaning. But you are in fact presenting syntax in both cases, because semantics isn't a "message", it's a "concept". The English language is a syntax, C+ is a syntax. And of course shared meaning is part of any conversation, but my point is exactly that these AIs are programmed by people who understand the semantics of the words they are inserting into their repertoire of syntax.
      Because the relationship between syntax and semantics can sometimes be fuzzy, syntax itself can be utilized for deception too; that is the basis for the concept of "doublespeak". Language is only a form of expression, but humans truly lack a reliable way to filter which syntax is backed up with "truth" or (in the case of AIs) "thought".

    • @udarntors
      @udarntors ปีที่แล้ว

      @@lokouba I wasn't really clear in my little exposition there. So, I think that "syntax" and "semantics" are, in fact, as you say, concepts that pertain to language and linguistics. One covers the structure of language, the rules that govern the placement of words in a phrase, and we call this one syntax.
      The other is all about meaning, and the relation between words in linguistics, and we call it semantics.
      I see it as structure and content. Form and substance.
      So :
      "The crocodile flew between the slices." Correct syntax here. Absolutely meaningless.
      I am in total agreement with all you have said about the fact that you can fool humans with sufficient processing power and fiddling with the configuration to accommodate the social norm.
      My reply was about this statement only:
      "human beings are only able to communicate through Syntax."
      Syntax helps to communicate *correctly* as to the social conventions of the time you are in...
      So, my examples were, in fact, of a meaningless but beautifully structured line of bash and a meaningful poem. One is a command that will be interpreted and transformed into lights on a screen as programmed and the other is a tale of epistemology, causality, and determinism.

    • @lokouba
      @lokouba ปีที่แล้ว +1

      @@udarntors Aha, I see. Maybe I should have worded that better. I meant to say that communication is only possible through the use of some sort of syntax, at least by my conventional understanding of what constitutes "communication".

    • @irrelevant_noob
      @irrelevant_noob ปีที่แล้ว +1

      @@udarntors and yet, Alejandro is right that in any communication only the syntax is "given" (or "shared"). The fact that one party attributes some specific meaning to the terms in the message has no effect on how the digital actor (a Turing machine, the person in the room, an AI bot, etc) will process the message. Whether or not that actor *_will_* in fact extract some (subjective) meaning(s) from the message is unknowable. But in any case, the meaning itself is not intrinsic in the message, it is only "in the eye of the beholder"/interpreted by whoever analyzes the message.
      @AlejandroRodriguezP that last part of the OP seems to me to be a kind of "Turing test" for semantics: is the digital actor in the room "good enough" to convince the outside person(s) that they understand Chinese? :-)

  • @jaylives5791
    @jaylives5791 ปีที่แล้ว +23

    if the incoming questions were varied enough, eventually a person would create a framework to understand each symbol, given enough time; even if they couldn't imagine the real world equivalent, they'd be able to relate the symbols to each other (maybe it would take hundreds or thousands of years, but a computer can make thousands of years worth of calculations in seconds)

    • @brotherpaul963
      @brotherpaul963 ปีที่แล้ว

      Sort of like monkeys with computers.

    • @mark_a_schaefer
      @mark_a_schaefer ปีที่แล้ว +7

      Yes, you could relate the symbols to each other-I always see X when Y comes along-but you still wouldn't know what they meant without some context. And here's the thing: even knowing that the symbols are symbols of a language is more information than an actual processor has. That's more context than a digital computer has. Imagine receiving these Chinese characters and you had never seen written language before. Your instructions were given to you orally (you have a really good memory). It might never occur to you that these symbols even were encoding language. For all you know, they could be wallpaper patterns that you're arranging according to some guidelines for aesthetics.
      The part that everyone is having such trouble with in this thread is imagining a world without any contextual cues of any kind-because our world is full of such cues. But a Turing machine doesn't need them, because it doesn't need to understand. And because it doesn't need to understand, it is not given any context for what the symbols are.

    • @jaylives5791
      @jaylives5791 ปีที่แล้ว +1

      @@mark_a_schaefer for sure, pardon my bad explanatory skills; but first, it wouldn't be as simple as if X then Y, it'd be more like, for example: a long series of characters that results in a different answer series; then another long series that is slightly different with a resulting answer series that is also slightly different; eventually the similarities in the question series would point to a set of answer series, where the variability would narrow down the potential correct answers, and new question entries similar enough would bring up that set of potential answers, even before being confirmed by the incoming answer. if that makes sense.
      do this enough times, you create a framework that mimics having context, without any actual context

    • @rickgilbert7460
      @rickgilbert7460 ปีที่แล้ว +2

      @@jaylives5791 Yes! But that reinforces Searle's argument, it does not contradict it. Sure, the system might get better and better at presenting the illusion of understanding (to follow the metaphor, it might have to refer to the rulebook less often to get convincing "answers"). But no matter how good it got at recognizing that "leaves" and "tree" seem to come together a lot, Searle would say that does not mean that it has any better idea after a billion operations what an actual tree *is* than it did at the start.

    • @jaylives5791
      @jaylives5791 ปีที่แล้ว +2

      @@rickgilbert7460 Interesting, what I'm trying to get at though, is that this is the same way that we come to get context. I think we understand things by how we cluster them with various categories, and when introduced to something new, we measure it against those based on patterns, similarities, etc, and that's what context is, more or less.
      Also, the questions/requests couldn't be restricted to as simple as relating leaves to trees; I don't think I'm conveying very well (not a great communicator here) just how much variability and frequency the requests would have to have; context wouldn't begin when relating leaves to trees, it would be when the description of a new or nonexistent leaf is entered, and is distinguished and positioned in a cluster that contains or crosses with every other known leaf (as well as flora, green things, things that are raked, etc. imagine a multi-dimensional grid). (A big difference is that most of the data we get are from our senses, whereas computers would get them directly as a binary series, but other than that, I suspect our subconscious follows a very similar method, less neatly maybe).
      But maybe I'm missing something, I feel like this is what we do at the micro level, and what we call understanding is just the macro result of these interwoven; but it sounds like you see a different way of coming to understanding, so, if this is a fair question to ask, from our perspective, what is a tree, or else, how can we understand what a tree is, other than as a grouping of its properties?

  • @aakash3155
    @aakash3155 ปีที่แล้ว

    I believe the only way to get context is by trial, assigning plausible ideas to the initially random syntax, but this has two limitations: one is that you have to have some basis of syntax, however rudimentary and in whatever 'language', which can act as the seed that begins the cascade of learning; the second is that you will only end up with solutions that are plausible, more so than the initial plausible assumption, but still not a unique solution.

  • @rustworker
    @rustworker ปีที่แล้ว +1

    Feedback loops are the magic powder that makes consciousness and emotion and all that.

  • @jorgemachado5317
    @jorgemachado5317 2 ปีที่แล้ว +7

    What Searle has apparently discovered is that a computer alone would be a zombie. But a human alone would be a zombie too. Semantics is not an intrinsic part of the world; semantics is what emerges from sociability. If a strong AI spent enough time with humans, I think it would stop being a zombie at some point.

    • @annabizaro-doo-dah
      @annabizaro-doo-dah 2 ปีที่แล้ว +1

      What about when humans perform myriad behaviours they have no understanding of? I was thinking in particular of syntax. I learnt no formal grammar at school; literally no syntax. Yet I understand how to write formal English pretty well, I think. I perform the function of writing formal English without understanding almost any of the rules on a conscious level, except perhaps how to use basic punctuation.

    • @jorgemachado5317
      @jorgemachado5317 2 ปีที่แล้ว

      @@annabizaro-doo-dah Unless you believe there is something like a material ground for syntax (which I don't think is true), I believe that this learning is just a mimetic process. That explains why things change historically. People are learning new ways to perform and changing those processes through the output of those same processes.
      EDIT: By material I mean a physical object. Of course syntax is material in the sense that it exists as an abstract concept.

    • @recompile
      @recompile ปีที่แล้ว

      You've completely misunderstood Searle. Go do the suggested reading and try again.

    • @jorgemachado5317
      @jorgemachado5317 ปีที่แล้ว

      @@recompile You wrong! Hur dur Go read!

  • @dylanberger3924
    @dylanberger3924 9 หลายเดือนก่อน +8

    I love this thought experiment because of the two assumptions it grants computer science that we’ll likely never even see emerge from the field.
    1) A perfect set of instructions. CS needs to produce a perfect state table for the Turing machine, as the book in the room is assumed to be (a toy state table is sketched after this comment).
    2) You are a human with a brain trying to pick up on meaning, with memory and cognitive ability in particular. You are aware of the fact that these symbols could even represent an idea, and can pick up on traits you naturally know belong to language, e.g. pattern recognition. MAYBE, just MAYBE, you could learn Chinese. A Turing machine is a set of transistors firing; it isn't looking for any of that - after all, how would it "think" to?
    I’ll probably elaborate and finish this when my phone isn’t about to die and I’m not this tired. But something to think about
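    For point 1, a "perfect set of instructions" in the Turing sense is just a state table. A toy example in Python (this machine merely flips bits on a tape and is invented purely to show the format, not to model the room's actual rule book):

    # State table: (state, symbol) -> (symbol to write, head move, next state).
    TABLE = {
        ("s0", "0"): ("1", +1, "s0"),
        ("s0", "1"): ("0", +1, "s0"),
        ("s0", "_"): ("_",  0, "halt"),
    }

    def run(tape, state="s0", head=0):
        while state != "halt":
            write, move, state = TABLE[(state, tape[head])]
            tape[head] = write
            head += move
        return tape

    print(run(list("0110_")))   # -> ['1', '0', '0', '1', '_']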

    • @xirenzhang9126
      @xirenzhang9126 5 หลายเดือนก่อน +1

      spoiler alert: he never elaborated and finished his comment

  • @matbroomfield
    @matbroomfield ปีที่แล้ว +1

    So if you take a machine designed never to have understanding, it can't have understanding? What a superb insight. All you have to do is define a computer so narrowly that by definition it meets Searle's criteria, then it meets Searle's criteria? What a thought leader.

    • @bradleyboyer9979
      @bradleyboyer9979 7 หลายเดือนก่อน

      You seem to be ignorant of how all computers work. All computers operate in the way Searle described: inputs and outputs. It doesn't matter if it was the first CPU ever created or the supercomputers of today (though quantum computing is arguably different due to our modern understanding of physics).

  • @UnclePengy
    @UnclePengy ปีที่แล้ว

    I was introduced to the Chinese Room experiment by the book "Blindsight" by Peter Watts, which is pretty mind-blowing.

  • @75noki
    @75noki 2 ปีที่แล้ว +3

    תודה (Thank you) ❤🙏

  • @Sergiopoo
    @Sergiopoo ปีที่แล้ว +4

    Wouldn't a single neuron in the brain be the person in the Chinese room, while the brain is the system?

    • @cosmictreason2242
      @cosmictreason2242 7 หลายเดือนก่อน

      Does a single neuron process input and output with correct syntax? That would be like saying a light bulb in the room could give the correct response.

  • @zach358
    @zach358 7 หลายเดือนก่อน

    The conclusion seems more like a critique of what we do with digital computers than of the limits of their potential. The Chinese Room experiment is a static room with one set of never-changing instructions. If there were several other rooms that continuously change and adjust the instructions of the Chinese Room over time, to give it purposes or knowledge beyond the basic rules originally set forth, then that would be a closer representation of the human mind (being taught by other human minds or other inputs)... the only other thing you'd need to add is a sense of freedom of choice: the ability to think independently of tasks given, despite being given a task.

  • @samhagler5532
    @samhagler5532 ปีที่แล้ว +1

    This guy is a great teacher.

  • @YoutubeHandlesSuckBalls
    @YoutubeHandlesSuckBalls ปีที่แล้ว +8

    At the core, this argument is an argument from incredulity, based on the fact that 'I' can 'see' and it is considered unbelievable that a programmer could write code that has the experience of being an 'I' that is capable of 'seeing', and by extension has the impression of having a sense of self. Searle's argument is that because a single neuron cannot understand Chinese, it is not possible to have a Chinese person.

  • @timothyblazer1749
    @timothyblazer1749 ปีที่แล้ว +3

    Penrose went on to show that consciousness is non-computable, which is an additional blow to strong AI.
    Of course, AI proponents are basically ignoring both Searle and Penrose.

    • @N.i.c.k.H
      @N.i.c.k.H 7 หลายเดือนก่อน

      Penrose just proved that his model of consciousness is non-computable.
      Strong AI people would just say that all he had proved is that his model of consciousness is wrong.
      The latter seems more compelling as there is clearly no generally accepted, rigorous definition of what consciousness even is.
      When dealing with very smart people the loopholes are always in the premises not the reasoning.

    • @timothyblazer1749
      @timothyblazer1749 7 หลายเดือนก่อน

      @@N.i.c.k.H seriously? He used the most general form of definition that exists, aka "the act of understanding", which makes it totally clear that no matter what your definition is, Gödel will apply, because it's a subject-object relationship. This is baseline reality, and unless you or anyone else can show that the scientific method can be applied without that baseline quality of reality, it's not a proof of "his" theory. It's a proof of "the" theory.
      Put another way, there is no assertion you can make about "rigor" without a subject-object relationship. We're in turtles-all-the-way-down territory. If Strong AI people think differently, they need to study set theory AND epistemology.

  • @ChadEnglishPhD
    @ChadEnglishPhD 8 หลายเดือนก่อน

    Great explanation. Three criticisms come to my mind.
    The first is essentially a false dichotomy; it asserts that "semantics" and "syntax" are mutually exclusive. Indeed, in the Chinese Room scenario, semantics is not produced within the scenario. But that does not mean semantics can never be produced from syntax. It presumes that what we call "semantics", "meaning", or "understanding" are not just built complex forms of syntax.
    Consider how we "learn". You input an apple to a system. By "input", I meant via senses: you see with your eyes many "images" of apples, meaning photons enter your eyes and cause a pattern of signals to your brain.
    You also "feel" an apple, meaning nerves in your fingers send signals to your brain. Taste, smell, and even sound of biting one ... all patterns of electrical signals.
    Your brain correlates each of these in their own domain: what is visually similar about all of them, smell similar, etc., and creates a "ledger" of templates of the apple based purely on domain (sight, smell, sound, taste, touch), and record in the ledger that these are all related to the same thing.
    Also on that list of inputs is language. If each time we recorded data in these domains on this item, we also heard the sounds (input signals coming from the ears) corresponding to the English word "apple", or saw images (signals from the eyes) of the shapes a-p-p-l-e, then the domains (input doors) of audible or written speech also have correlated entries in the ledger. These templates are correlations and simplified representations of apples, and correlate with other things in the ledger such as other round things, other red things, other food, fruits, etc.
    Now suppose somebody "new" takes over, e.g., we forget that we've ever seen the word "apple". The symbol comes to the door: "What does an apple look like?", but we don't understand English or letters.
    We open the ledger and look for those symbols. The response in the ledger is on the page with all of the other domains about apples. We get symbols at the door that look similar but slightly different, "What does an apple taste like?", and then "smell like", etc.
    But we aren't just rule following. We are also continually running the same correlation machine as above. We correlate the symbols at the door shaped "a-p-p-l-e" with the same page in the ledger, but different sections.
    We also correlate questions (symbols at the door) containing symbols "s-o-u-n-d" as top of any given page in the ledger, and "t-a-s-t-e" always has a response at the bottom of the page.
    Over time, we associate (correlate) the symbol "apple" with that page in the ledger, "sound" with the top line on the page, "taste" at bottom, "shape" in the middle.
    Now we see new symbols at the door appearing, with recurring "p-o-m-m-e". The ledger instructions say to look up the same page as "apple", and specific areas of the page, but send back "rouge" instead of "red".
    So now what is the difference between this situation and "understanding", "meaning", or "semantic"?
    We apply those words to the ability to draw on correlated patterns. We've "learned", through correlation and the organized structure of the information, what the symbol "apple" means (page in the ledger), what "sound" means (top-of-page response), "taste" (bottom of page), etc. We learned that "pomme" is another symbol for "apple", and "rouge" is another symbol for "red".
    We learned these things only through the same activities as the Chinese room. What we added were (a) memory storage, (b) correlational computation, and (c) the ability to add to the ledger.
    All of these things are also done by digital computers. The Chinese Room scenario simply artificially limited the capabilities of digital computers, and the humans in the room.
    More complex behaviours can come from simple ones. A real person in that room could also remember symbols, recognize patterns in the symbols, and recognize organizational structures in the ledger, inputs, and outputs, and could "learn" Chinese in these patterns.
    Now, you might say they haven't learned the meaning because they can't associate the symbols for an apple with the real-world apple, but that is because we've artificially limited the input signals to messages at the door. They can understand an apple in terms of their input environment. The thought experiment assumes the pre-existing environment of all other senses we humans have, but are denied in the scenario.
    But in that context, humans also can't "understand" anything beyond our input environment. We don't have context for relativistic effects, X-rays, infrared, frequencies we can't hear, etc. Other beings with different input capabilities might "understand" different from us.
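    A minimal sketch (Python, with made-up observations) of the "ledger" idea above: nothing about apples is hand-coded; a symbol simply gets associated with whichever page it most often co-occurs with, which is how "pomme" ends up pointing at the same page as "apple".

      from collections import defaultdict

      # Hypothetical ledger: page -> symbol -> co-occurrence count.
      ledger = defaultdict(lambda: defaultdict(int))

      def observe(page, symbols):
          """Record that these symbols arrived at the door together with this page."""
          for s in symbols:
              ledger[page][s] += 1

      def page_for(symbol):
          """Return the page this symbol is most strongly associated with."""
          return max(ledger, key=lambda p: ledger[p].get(symbol, 0))

      # Made-up observations: English and French symbols keep landing on the same page.
      observe("page_apple", ["apple", "red", "round"])
      observe("page_apple", ["apple", "pomme", "rouge"])
      observe("page_fire", ["fire", "red", "hot"])

      print(page_for("pomme"))  # -> "page_apple", learned by correlation alone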

  • @Flavia1989
    @Flavia1989 หลายเดือนก่อน +1

    This feels like saying that a mathematician who has never studied anything but math does not learn anything about fields like economics or physics in the process of studying math. And intuitively I would agree... but if I think about it, I think they do? It's not functional in the sense that they can have expert opinions about economics or write books about it or do any of the things we socially recognize as knowing stuff about economics. But it would help them an enormous amount when talking to knowledgeable people in those fields, as those people could just show them equations to explain difficult concepts. They would be able to learn very quickly, because they can just skim all the hardest parts... since they have already learned them.
    Assuming that they retain their skills (and why wouldn't they), the Chinese Room person is in the same situation. They would have to learn the meanings of the symbols to actually use the language... but wow, it would be so ridiculously helpful to be able to draw on a complete knowledge of Chinese syntax; it would allow them to ask very intelligent questions, infer meanings they have not learned yet, etc.
    So I would say to Searle: you are wrong. This person does know a lot of Chinese; you have just constructed an edge case with the specific intent of manipulating everybody's intuitions to say 'no'.

  • @shinypup
    @shinypup ปีที่แล้ว +12

    With the results we're seeing with large language models (e.g. ChatGPT) and how computers have been able to extract semantics in the form of embeddings, could you give a talk on if there are philosophical implications?

    • @yuck9510
      @yuck9510 ปีที่แล้ว +1

      Interesting question. With GPT, though, you can kind of just use the same argument: that it's simply really efficient and accurate at providing appropriate responses to prompts. That is to say, we should think of it less as an AI and more as a really, really good predictive text machine.
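      A minimal sketch (Python, toy corpus) of what "a really good predictive text machine" means at its simplest: pick the statistically most likely next word given the previous one. Real models condition on much longer contexts with learned weights, but the input/output shape is the same, and no meaning is represented anywhere.

        from collections import defaultdict, Counter

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Count which word tends to follow which (a first-order Markov model).
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def predict_next(word):
            """Return the most frequent continuation seen in the toy corpus."""
            counts = following[word]
            return counts.most_common(1)[0][0] if counts else "<unknown>"

        print(predict_next("the"))  # -> "cat": pure statistics over word order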

    • @ever-openingflower8737
      @ever-openingflower8737 11 หลายเดือนก่อน +1

      @@yuck9510 I wonder what the difference to elementary schoolchildren is in that regard. When I was first instructed how to write essays, I also learned about "useful phrases" etc. Isn't learning how to write good texts at school pretty much the same thing that this predictive text machine does? Like, it goes without saying children need to learn how to write with their hands, dexterity etc. But I mean the underlying thing of creating a text. I think philosophically, it is the same quality. Children have just started thinking about the world and someone teaches them how to use phrases to generate text. What is the essential difference?

    • @hassaan3861
      @hassaan3861 11 หลายเดือนก่อน +1

      As someone whose work is closely tied to ChatGPT and OpenAI: my belief has gotten stronger that these systems don't understand anything but are extremely good at giving close approximations of understanding something. Also, most videos etc. online are faked for views, because to even get a semi-decent output from ChatGPT/DALL-E 2 you have to do the thing like 50 times and tweak the inputs in weird ways until you get a response that isn't completely BS.

    • @NullHand
      @NullHand 11 หลายเดือนก่อน +1

      ​@@ever-openingflower8737 Children learn to use verbal language first (and probably have an instinct to do so).
      As they first learn to write, they quite literally speak the "sentence" they want to write, and put it on paper.
      It's all dialog to them.
      This comes complete with using pronouns with no prior reference (I was writing about the doggy I was looking at....)
      Filling the sentence with verbal speech thinking pauses (umm), and verbal structures designed to get a body language acknowledgement (you know?)
      All of these are superfluous or counterproductive in most written sentences, and have to be trained out.
      The semantics in human text is piggybacking on the heavy lifting of associating physical world experiences to spoken (or signed) words.
      The "LLMs" might be trapped in a Chinese room, but neuromorphic image recognition "AIs" are not (they get to "see" images, to associate to with that "DOG" symbol).
      I strongly suspect that some AI lab somewhere has already connected the two.

    • @theconiferoust9598
      @theconiferoust9598 10 หลายเดือนก่อน +1

      Any output of an A.I. model has been given its «picture» of meaning by humans. The real question is: what would the picture of «meaning» look like to a system that is learning only by observing inputs (i.e. not given or trained to give «correct» responses)? We always seem to insert human understanding and consciousness into the equation.

  • @captaingerbil1234
    @captaingerbil1234 3 ปีที่แล้ว +5

    I take the systems response as an approach to refute Searle. His argument almost seems to imply that we create the semantics of the word, when really all we do is assign it to objects and states already existing in the world and then assign symbols to that semantic meaning. I believe it is possible to create a machine, operating through computational methods, that is capable of understanding. Great lecture, by the way.

    • @cf6755
      @cf6755 2 ปีที่แล้ว

      The person in the room is not the one who knows Chinese; the rule book is. If you killed the person and replaced him with somebody else, it would be the same thing, because of the rule book. The one really writing the Chinese is not the person holding the rule book; it is the rule book.

    • @recompile
      @recompile ปีที่แล้ว +3

      If you think it's possible, prove it. Show how meaning can be derived from pure syntax. Even a very simple example would refute Searle. So far, no one has been able to do it, despite the outrage his argument seems to generate in people.

    • @Matt-zp1jn
      @Matt-zp1jn ปีที่แล้ว

      The "Systems approach" cannot CREATE or assign the semantics of the word or symbol. Searle is correct in that the syntax of the Turing computer etc. is basically hardware that can organize symbols only according to the programming of the software, the rules the computer must follow. The semantics, or meaning and understanding, of the Chinese symbols or binary coding must be ascribed by the computer programmer or a conscious intelligence OUTSIDE of the hardware, or through digital wifi etc. Searle has successfully refuted Strong A.I. and shown Functionalism to be a wrong theory.
      Of course big Tech, social media giants, and digital A.I. scientists want to refute Searle's theory and will use complex algorithms, a human-like robot interface, and digital wifi/Bluetooth information transfers from an intelligent self-conscious source (a human programmer lol), who will portray the A.I. robot as capable of understanding semantic meanings instead of just grammar or syntax or whatever has been programmed into the software by an outside intelligent creative human being, i.e. the programmer.
      This is why I think they are going to strive towards a more Android approach where humans are "upgraded" with A.I. digital software through a neural analogue-digital interface that allows the human being to take the syntax info and assign appropriate understanding and meaning to the software download into his neural brain link etc.
      It is a very questionable path and risk for humanity, imo.

  • @foadkaros708
    @foadkaros708 ปีที่แล้ว

    Besides the fact that this is nothing but world-class quality content shared freely with the world, I was trying to figure out for a long time how you managed to learn to write mirrored so that it appears correct to the viewer. Then it hit me: you mirrored the image. Absolutely brilliant move!

    • @Gottenhimfella
      @Gottenhimfella 7 หลายเดือนก่อน +1

      Does his face look unlike the chirally correct one, or is it more (or less) sinister?

  • @konstantinlozev2272
    @konstantinlozev2272 ปีที่แล้ว +23

    I would have liked to see a discussion on the design of the rulebook and how that design embodies semantics

    • @jasemo388
      @jasemo388 ปีที่แล้ว +3

      Yes. It's almost like Searle took the interpretation of the incoming symbols - the Semantics - and made it separate in the rule-book just to remove agency from the person in the room and structure his analogy to prove his point.

    • @konstantinlozev2272
      @konstantinlozev2272 ปีที่แล้ว +3

      @@jasemo388 yeah, modern-day neural networks actually build and amend the rulebook as they get trained. And as the rulebook is construed to embody the semantics in this thought experiment, one cannot argue that neural networks represent the Chinese Room example.
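      A minimal sketch (Python, toy data) of "amending the rulebook during training": a single adjustable parameter is nudged by gradient descent, so the input-to-output rule is rewritten by the examples rather than fixed in advance. Real networks do this over millions of parameters, but the loop has the same shape.

        # Train a one-parameter "rulebook" w to map x -> 2x from examples.
        data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
        w = 0.0        # the rule starts out knowing nothing
        lr = 0.05      # learning rate

        for epoch in range(200):
            for x, target in data:
                pred = w * x                    # apply the current rule
                grad = 2 * (pred - target) * x  # gradient of squared error w.r.t. w
                w -= lr * grad                  # amend the rule

        print(round(w, 3))  # ~2.0: the rule was learned from data, not written by hand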

    • @cybersandoval
      @cybersandoval ปีที่แล้ว +2

      Right, is the system writing the rulebook?

    • @jeff__w
      @jeff__w ปีที่แล้ว

      @@konstantinlozev2272 “…as the rulebook is construed to embody the semantics in this though[t] experiment…”
      I think in Searle’s example, the rulebook is meant to be construed as embodying the _syntax._ It’s an instruction manual that tells the person _how_ to put the various Chinese characters together, i.e., the form, but says nothing about the meaning. In that sense, to the extent that these latest chatbots can predict words, one at a time, to construct flawless sentences in English, they might represent Chinese rooms “perfected.”

    • @ErikHaugen
      @ErikHaugen ปีที่แล้ว

      @@konstantinlozev2272 Searle's room has paper, pencils, erasers, etc.; it can store data from past questions. This doesn't really matter much to Searle's point, I think, although it would be impossible to pass the Turing Test without memory of some kind.

  • @p.bckman2997
    @p.bckman2997 ปีที่แล้ว +4

    There's clearly intelligence (semantics) in the Chinese Room, it's just not the person in there who provides it. The actual intelligence comes from the rulebook, which requires an actual intelligence to write (and possibly a superhuman one at that).

    • @DocBree13
      @DocBree13 ปีที่แล้ว

      I’d argue that a book explaining semantics is not intelligent and does not understand semantics. Something which requires intelligence to write is not then made intelligent.

    • @p.bckman2997
      @p.bckman2997 ปีที่แล้ว

      @@DocBree13, well, it's a matter of how you frame it, I guess, which is often the case in philosophy. The book is just paper and ink and clearly not sentient, like you say. The instructions are meaningful, though; I would say that the intelligence of the writer is stored on the pages. The intelligence that answers the input questions is the book's writer; he's just using other means to do so than sitting in the box and answering them himself.

  • @chrischristenson4547
    @chrischristenson4547 11 หลายเดือนก่อน

    Digital computers today have a lot more stored in memory and more software available than he imagined at the time, and his idea needs to be updated. It's good to understand the history, and I like the history, but it's also good to use the history to understand where we're going as well as where we were.

  • @dnn8350
    @dnn8350 7 หลายเดือนก่อน +1

    I think you can make sense of a computational analysis of the mind if you follow the implications of certain modal logics, specifically those of uncompromising or incorrigible belief. In other words, the explicit or conscious content of the mind can be modelled as the unwavering "belief in a reality" of a computationally emulated knower. In this view "believing is seeing" in a systematic sense.
    There's a subtlety here though that's necessary to bridge the gap between syntax, or symbol manipulation, and semantics. Searle's point is that an external interpretation seems necessary to bridge this gap. So what's required therefore is to resituate that interpretation within the system itself.
    To emphasise this point, we must agree with Searle that, without interpretation, computation is indeed mere symbol manipulation. However, the implication of certain symbol manipulations, when externally interpreted as the corresponding beliefs, might be made to correspond to the explicit contents of mind. This interpretation might be called the material implication, or complement of such beliefs.
    So how could that interpretation be internalised? The answer might simply be to accept as an axiom that such beliefs necessarily implicate their material, that is to say substantive and significant, complement. In other words, the incorrigible belief of a knower in a reality is necessarily complemented by its material realisation, which then represents the semantics that retroactively raises the entire schema above mere symbol manipulation. As an example, the implication of my incorrigible belief in an apple is the material fact, expressed in sensory terms, of the material apple. Interpretation internalised, in other words. As an aside, the phenomenon of blindsight can be rationalised in these terms as the response to visual stimuli without the generation of the corresponding, materialised belief in a visual reality.
    Searle concludes his takedown of computationalism by proposing that brains produce consciousness as a biological byproduct. Well, my proposal is that computation "produces consciousness", or explicitly material beliefs, as a "byproduct" or complement of logical implication.

    • @dnn8350
      @dnn8350 7 หลายเดือนก่อน

      Chatgpt is probably better at explaining this than I am!
      Your proposal is an interesting attempt to address the issues raised by Searle's Chinese Room argument and to connect computational processes with consciousness. Let's break down your argument:
      1. **Incorrigible Belief and Reality**: You suggest that the explicit content of the mind can be seen as the unwavering "belief in a reality" of a computationally emulated knower. In this view, believing is synonymous with seeing in a systematic sense.
      2. **Symbol Manipulation and Semantics**: You acknowledge Searle's point that computation, without interpretation, is mere symbol manipulation. However, you propose that certain symbol manipulations, when externally interpreted as corresponding beliefs, can represent the explicit contents of the mind. This interpretation, you call "material implication," bridges the gap between syntax and semantics.
      3. **Internalized Interpretation**: To internalize this interpretation, you suggest accepting as an axiom that such beliefs necessarily implicate their material complement. In other words, an incorrigible belief in a reality is necessarily complemented by its material realization. This internalized interpretation elevates the computational schema above mere symbol manipulation.
      4. **Blindsight and Visual Stimuli**: You mention that blindsight can be rationalized within this framework as a response to visual stimuli without the generation of the corresponding, materialized belief in a visual reality.
      5. **The Production of Consciousness**: You propose that computation "produces consciousness" as a byproduct or complement of logical implication, contrasting with Searle's view that brains produce consciousness as a biological byproduct.
      Your proposal offers a novel perspective on the relationship between computation and consciousness. It attempts to address the challenge of connecting computational processes with the generation of conscious experiences. However, it's important to note that this is a highly complex and debated topic in the philosophy of mind. Many philosophers and cognitive scientists have different views on the nature of consciousness and its relationship with computation. Your proposal raises intriguing questions and invites further exploration into the nature of belief, computation, and consciousness.

  • @perfectionbox
    @perfectionbox ปีที่แล้ว +4

    An additional proof would be that, if the person inside the room (or the system as a whole) understood Chinese, then it should be able to deal with unforeseen expressions, new slang, word evolution, etc. But it can't unless its rulebook is updated. It's utterly at the mercy of its program. And the only way to update the program is via an external agent that understands Chinese. The question then becomes: is there a program sophisticated enough to do what the external agent can do?

    • @danwylie-sears1134
      @danwylie-sears1134 ปีที่แล้ว +6

      Any halfway-adequate rule book would include, at least implicitly, instructions for rearranging the baskets and the labeled papers in a way that perfectly mimics learning of new idioms.

    • @recompile
      @recompile ปีที่แล้ว

      How many times are you going to post this? I replied to this exact post from you earlier.

    • @sschroeder8210
      @sschroeder8210 ปีที่แล้ว +1

      I don't think your concept is valid: understanding something vs. understanding how to learn something are two different concepts.
      You might know English, right? But can you understand the semantics behind 'ZZuetixlo'? I presume not because it's a new word that I just created.
      So, do you not understand English anymore? Of course not. You simply haven't been given the chance to learn the new word that I've chosen to create. You still understand English and understand how you can learn new words. You simply haven't been given the opportunity (from me) to ask: "What is semantics behind your new word?"
      If a new word acts as a fundamental axiom and isn't derived from the concatenation of other words, then you shouldn't be capable of understanding the semantics of that new word...
      Thus, the 'System' shouldn't have to demonstrate something that we don't inherently have the capacity for doing when we express a sense of understanding...

  • @tomfoley5975
    @tomfoley5975 ปีที่แล้ว +35

    ChatGPT is the Chinese Room, par excellence. It learns entirely from the structure (essentially syntax) of trained examples to generate novel, semantically coherent content. It strongly suggests that we humans are also merely chatterboxes with diverse, sometimes divergent conditioning. What we call "meaning" is collective and emergent. It derives from the milieu (system) in which language is learned, applied and evolves. Even our vaunted creativity is merely functionally preferred accident. Once ChatGPT's reasoning has been improved, the only real difference from human cognition will be that ChatGPT can only feign emotion. It cannot be angered or made jealous. It has no ambition. It is utterly indifferent and totally compliant in a way that humans, biased by the evolved imperatives of our 'lizard brains', are not. But, as attempts to make ChatGPT politically correct and to align it with preferred narratives have already shown, it is trivial to bias a chatterbox with conditioning. Doing so is what will make AI dangerous -- in much the same way that it already makes humans dangerous.

    • @recompile
      @recompile ปีที่แล้ว +12

      I will agree that it is very much like a Chinese Room, though I'd say it makes a very convincing case for Searle.
      You're giving the program WAY too much credit, and human beings far too little. In the case of ChatGPT, there is absolutely nothing at all like understanding happening there. The program has no facility by which it can reason, deliberate, or analyze. Its output is strictly probabilistic. It is nothing at all like human cognition. That's absurd.
      This isn't speculation, BTW. We know how the program works and understand its capabilities and limitations. There is no mystery to be found there.

    • @krisrp0
      @krisrp0 ปีที่แล้ว +4

      @@recompile Agreed. I never thought the idea of a Chinese Room was an argument for a program being conscious, but the opposite, and in that regard ChatGPT is exactly like it. It takes inputs and returns outputs without understanding anything about their meaning. Similarly, we don't really know what's going on when it does this, both because it's using a very complex machine-learning model and because OpenAI hasn't been very 'open' about this model and how it was trained.

    • @tomfoley5975
      @tomfoley5975 ปีที่แล้ว +4

      I didn't expect agreement; but I suggest that it is only pride that prevents us from seeing that ChatGPT's probabilistic spew is fundamentally no different (literally, indistinguishable) from our language behavior. It passes the Turing test. Even its mistakes and bullshitting are eerily human. Attention, self-attention and a little bit of reasoning, and you have all the necessary ingredients of sentience, albeit unmotivated - as I have already suggested.

    • @charlesboudreau5350
      @charlesboudreau5350 ปีที่แล้ว +3

      @@tomfoley5975 Genuine question then, for your opinion: what does it do if left unprompted? Does it get bored? Does it seek stimuli or input? Does it have an inner monologue, or is it completely inert until fed input to react to?
      For example, we can argue that a human doesn't entirely spontaneously think about something when not actively engaged in any activity. It is most likely sparked by a recent or past exposure to the object of the idea or something like it. An 'inspiration' is pulled from memory and it goes from there.
      Does ChatGPT self-prompt on its own when left idle? What does it think about, if it thinks, when we don't ask it to think about something?
      For example, can it reflect on the fact that it is an AI if nobody asks it to do so?

    • @pablogubert
      @pablogubert ปีที่แล้ว

      In the Metaverse, ChatGPT-powered avatars will act indistinguishably from humans. A virtual Blade Runner.

  • @richardhood4512
    @richardhood4512 6 หลายเดือนก่อน

    What a fantastic explanation. I had always assumed that the brain was essentially a complex digital computer, but now I see that while a digital computer may give correct answers like a brain does, it cannot really have understanding or consciousness. I like that you said this is true for digital computers but not necessarily for any machine, as the brain itself could be considered to be a machine.

    • @ronald3836
      @ronald3836 5 หลายเดือนก่อน

      And since the brain almost certainly can be simulated on a big enough computer, our brains cannot really have understanding. Oh well :)

  • @SumNutOnU2b
    @SumNutOnU2b 8 หลายเดือนก่อน

    Curious, if anyone can let me know...
    He quotes a couple times from page xx of "the reading". Uhh... So does that refer to just a particular textbook? Any chance that text is available (preferably free or cheap) somewhere?

  • @foogod4237
    @foogod4237 ปีที่แล้ว +7

    The idea that Alan Turing committed suicide has come to be rather suspect in recent years. That conclusion was made very hastily at the time and never really investigated, and it turns out on later analysis that it is actually quite possible that he simply died from accidental poisoning from chemicals he was using in various experiments at the time, and had no particular desire to kill himself at all. Even if his death actually _was_ suicide, we could still only really guess at _why_ he would have done it, and would almost certainly never know for sure, because he did not leave any sort of note, or really confide in anyone prior to that point anything that might be construed as a definitive explanation (which is also another part of the reason why the suicide idea is a bit suspect).
    But "ignominiously died due to a science experiment" doesn't make for as good a story, so a lot of people like to overlook it and instead just go along with the fairy-tale-like depiction of a great man so tortured by other people's intolerance that he was inevitably driven to take his own life.
    That having been said, Alan Turing was a truly great man, and the way he was treated by the government and society as a whole was truly horrible, particularly for a man who contributed so much to the society which ultimately shunned and persecuted him merely for being who he was. How or why he actually died aside, that is still a very sad tale worth telling and reflecting on, IMHO.

  • @anxez
    @anxez ปีที่แล้ว +3

    I can think of some direct counterexamples.
    1: A Chinese Translating room. Set up the Chinese room but make the output of the room in a language known to the operator. Suddenly the situation changes, and the operator could pull semantics out of the text, we'd all agree on that. Maybe it would take years, maybe the rule set itself would be considered a way to teach Chinese. But what is the effective difference between the Chinese Room and the Chinese Translating Room?
    Short answer is context: the operator is able to connect the symbols to previous meaning.
    This is a mostly trivial difference because it is piggybacking on previous semantics to generate new semantics. But it does bring up a possible refinement
    2: The Chinese Picture Room:
    Set up the Chinese room just the same: have a set of rules that govern syntax-in => syntax-out, bins which receive characters, bins which accept characters, only now have every set of characters come with a snapshot of the situation they appeared in.
    Once again, semantics have appeared, this time a little more robustly: the operator doesn't need any native language in order to learn Chinese from this set up. It may take years, they may be unable to speak the language, but the operator will be able to develop a semantic knowledge of the language.
    Heck, go one step further, and by feeding the person in the room pictures and characters in the right way, that person can build the rule set themselves without being programmed: because that's what babies do. And spoiler alert, that's what Turing-complete machines do too, by the way.
    Honestly though, this thought experiment does a lot of heavy lifting by not defining semantic understanding or proposing how it actually arises. He just takes it for granted and then gestures at how it doesn't arise from his specific scenario and when he's given a silver bullet argument against it the response is to just shrug it off.

    • @nitramreniar
      @nitramreniar ปีที่แล้ว +2

      To point number one:
      Changing this part of the setup might work logically as a thought experiment, but disjoints the connection between the Chinese Room and the digital systems it is meant to be analogous to. The fact that you could learn Chinese by being given random Chinese symbols and phrases and having instructions on how to translate them into a language you know, is based in the fact that you *already* know a language; that you already have the semantic understanding in one version and now only need a way to transfer those semantics into Chinese. The reason why the thought experiment has both input and output in a language you don't understand, is because we - for this context reasonably - assume, that the digital computer has no deeper language that it understands semantically and that it would just need a translation for it to truly learn and understand Chinese.
      On the second point I agree with you.
      In fact, I feel that the thought experiment already betrays a problem with the result in its own setup. The thought experiment ask us to imagine *ourselves* - a human mind - in the situation of a digital computer in this room and uses our human intuitive understanding of how we could never truly understand Chinese through this setup to reason that a digital machine could also never do this and thus digital machines are distinctly different from human minds.
      But: It started by reasoning that the human mind couldn't do this, so how can that show a difference between those two systems?
      The ultimate difference is in how humans learn their first language: not just syntactically, but with an understanding of the semantics, and by more than just looking at random symbols as an input and figuring out the appropriate output - in fact, written language is already a bad point of comparison, as written language is already something humans only understand through translation! It's a translation from the spoken and internal understanding of language and the connected semantics *into* written language.
      Humans seem to learn their first language by engaging with it with a multitude of senses - we *experience* the language and the contexts in which different sounds and gestures are used until we have enough of an understanding of the language itself to further explore nuances through the already established understanding we have.

  • @calwerz
    @calwerz 3 หลายเดือนก่อน

    LLMs demonstrated that the distance in meaning between two words can be represented by how often they appear in the same word environment.
    Maybe we ourselves learned meaning from our training data by a similar mechanism, although we are multimodal, not just text-based.
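    A minimal sketch (Python, toy sentences) of that distributional idea: represent each word by the counts of the words it shares sentences with, then compare words by the cosine of those count vectors. Real LLMs learn dense embeddings rather than raw counts, but the principle that "distance in meaning" falls out of shared contexts is the same.

      from collections import defaultdict, Counter
      from math import sqrt

      sentences = [
          "the cat chased the mouse",
          "the dog chased the cat",
          "the dog ate the bone",
          "the cat ate the fish",
      ]

      # Co-occurrence vectors: word -> counts of other words in the same sentence.
      vectors = defaultdict(Counter)
      for s in sentences:
          words = s.split()
          for w in words:
              for other in words:
                  if other != w:
                      vectors[w][other] += 1

      def cosine(a, b):
          """Cosine similarity between two words' co-occurrence vectors."""
          va, vb = vectors[a], vectors[b]
          dot = sum(va[k] * vb[k] for k in va)
          return dot / (sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values())))

      # "cat" and "dog" share more contexts with each other than either does with "bone".
      print(round(cosine("cat", "dog"), 2), round(cosine("cat", "bone"), 2))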

  • @chidimmablessing7797
    @chidimmablessing7797 หลายเดือนก่อน

    You are really good at imparting knowledge; I totally understand this now.

  • @gabrielteo3636
    @gabrielteo3636 ปีที่แล้ว +3

    Although I don't think I agree with functionalism, the neurons essentially work as syntax operators, like bits in a computer. Searle's argument is that each neuron doesn't have understanding. Well, duh!

    • @recompile
      @recompile ปีที่แล้ว

      You're assuming that the brain is a digital computer. The evidence in front of you strongly suggests otherwise.

    • @gabrielteo3636
      @gabrielteo3636 ปีที่แล้ว

      @@recompile Not necessarily digital. It is only an analogy. Our neurons act more like gradients (analog), but it still depends on discrete ions, electrons and atoms. The neurons are just a type of biological machine. The neurons themselves don't have understanding. They just do what biological machines do according to physics.

  • @gothboi4ever
    @gothboi4ever ปีที่แล้ว +5

    The ultimate question is: how can we know something is true?

    • @brad1368
      @brad1368 ปีที่แล้ว +2

      You can't

    • @hb-robo
      @hb-robo 7 หลายเดือนก่อน

      We construct premises that are agreed upon and then build the logical conclusions, generally. But the “agreeing upon” bit is a bit wishy washy

  • @Radioman.
    @Radioman. ปีที่แล้ว

    Very fascinating. Makes absolute sense. Also, does anyone else wonder how Mr. Kaplan learned how to write backward? That in itself is amazing.

  • @user-qx4bp5rf3n
    @user-qx4bp5rf3n 6 หลายเดือนก่อน

    Semantics, however, could be put to be enough information about syntax to generate outputs that accord with inputs in a way that is coherent with some actionable reality that any person may have. The issue, as I see it, is not of going from syntax to semantics but of the reasoning structures underlying syntactical knowledge and manipulation. In other words, the lack of affect in digital computers does not allow real autonomy, because it does not pose subjective assessments. Therefore, it can never evolve truly reflexively. The learning we insert into a machine is consciously intentional and so is the construction of the hardware we install the software in. On the other hand, while genetics may be intentional despite their seeming randomness, we are consciously removed from their intentionality.
    This also means that although we program humans through learning processes in the same ways, they ultimately turn out unique, because 1. the genetics were not in our conscious control, 2. the conditions and mechanisms by which that human renders any information are not truly known to us, and 3. we have little control over programming that we do not ourselves do, nor access to the unconscious or conscious intentionality of other individuals interacting with our child. The argument against this is that free will is an illusion and genetics are simply permutations of a larger framework that is still a framework of set instructions and finite possibilities, which we cannot perceive and therefore say we are unique, have autonomy, emotion and so on. Minds also do not possess real autonomy other than the autonomy we believe we have. We are also governed by our programming, and our ability to reflect and 'break the code' can be seen as no different than machine learning and problems of optimization for some goal - whether that is to earn more money or reduce a feeling of anxiety, wherein leisure, suicide and other human activities that seem counterproductive can be likened to optimization of survival, whether that is reproduction, integrity of the immediately present physical body or integrity of the whole lifespan, as viruses also do when they create suboptimal strings.

  • @LuigiSimoncini
    @LuigiSimoncini 2 ปีที่แล้ว +7

    Semantics is ultimately a web of connections among signs (syntax) stored in our brain's state/structure. If I ask you about the meaning of e.g. the syntactic item "dog", you will answer with a series of other syntactic items retrieved from your brain's "web", saying something like "a dog is an animal that looks so and so...". That web is perfectly representable in an (extremely complex/powerful) computational machine, so Searle's answer to the systems argument is at least as ridiculous as the dualist position; in fact, that's what it is: dualism.

    • @recompile
      @recompile ปีที่แล้ว +1

      Obviously not. There is no more meaning in the relationships between symbols than are in the symbols themselves. If you believe otherwise, show how meaning can be derived from the relationship between symbols. Fame and fortune await anyone who can achieve such a feat.

    • @LuigiSimoncini
      @LuigiSimoncini ปีที่แล้ว

      @@recompile "Obviously" you didn't understand the first couple lines of my comment (hint: it's a definition, if you have a different one give it and I'll try to come the derivation you're missing)