Artificial Intelligence & Personhood: Crash Course Philosophy #23

  • Published 7 Aug 2016
  • Today Hank explores artificial intelligence, including weak AI and strong AI, and the various ways that thinkers have tried to define strong AI including the Turing Test, and John Searle’s response to the Turing Test, the Chinese Room. Hank also tries to figure out one of the more personally daunting questions yet: is his brother John a robot?
    Curious about AI? Check out this playlist from Crash Course Artificial Intelligence: • Artificial Intelligence
    --
    All other images and video either public domain or via VideoBlocks, or Wikimedia Commons, licensed under Creative Commons BY 4.0: creativecommons.org/licenses/...
    --
    Produced in collaboration with PBS Digital Studios: / pbsdigitalstudios
    Crash Course Philosophy is sponsored by Squarespace.
    www.squarespace.com/crashcourse
    --
    Want to find Crash Course elsewhere on the internet?
    Facebook - / youtubecrashc. .
    Twitter - / thecrashcourse
    Tumblr - / thecrashcourse
    Support CrashCourse on Patreon: / crashcourse
    CC Kids: / crashcoursekids

Comments • 1.9K

  • @Crazyvale100 · 7 years ago · +1326

    Hank warmed my heart when he said that even if John bled motor oil instead of blood he would still be his brother.

  • @ajs1998 · 1 year ago · +86

    It's amazing how a machine can be programmed to have conversations that are so human-like that it becomes difficult to distinguish them from actual human interactions. It really makes you question what it means to be a person and whether or not a machine can ever truly achieve personhood. It's definitely a topic that raises some thought-provoking philosophical questions.
    - ChatGPT, 6 years later

  • @SlipperyTeeth · 8 years ago · +732

    A harder test would be, can it fool itself into thinking that it is a person?

    • @PalimpsestProd · 8 years ago · +40

      Tyrell: If we gift them the past we create a cushion or pillow for their emotions and consequently we can control them better.
      Deckard: Memories. You're talking about memories.

    • @MarkCidade · 8 years ago · +54

      It can be programmed to act like it thinks that it's a person but does it actually think it's anything or are we fooling _ourselves_ into thinking that it is?

    • @ForgottenFirearm · 8 years ago · +10

      I was really hoping that would be the twist to Ex Machina --that Domhnall Gleeson's character would turn out to be a robot. Oh, spoiler alert: there is no twist.

    • @MusiCaninesTheMusicalDogs · 7 years ago · +5

      I don't know if that's such a good idea. I mean, I'm so stupid sometimes I think I'm not even a person, you know?

    • @SlipperyTeeth · 7 years ago · +5

      +Jeremiah B I guess that there is a point where the AI is too stupid to distinguish differences. So, basically, being able to fool itself either means that it's really smart or really dumb.

  • @atena1844 · 7 years ago · +213

    6:58
    "do you know how to speak chinese?"
    "yes! i do know how to speak chinese"
    if anyone wanted to know

  • @shawn-xl5ii · 1 year ago · +28

    Living in the era of ChatGPT, it is quite alarming to look back at this video.

  • @thewolfofthestars1847 · 8 years ago · +1097

    You know, I think that any AI that displayed a degree of laziness would probably pass as a person.

    • @isnotrose · 8 years ago · +96

      There's a difference between efficiency and laziness. Efficiency is the same or more output with less time; laziness is less time irrespective of output. But I think OP brought up a surprisingly nuanced point. It's a hallmark of human behavior to avoid doing things that would maximize happiness or survival just because we're lazy. If an AI showed a propensity to stop doing a task and watch a sunset for a while just because, or not mute that commercial they hate because the remote's too far, that would do a LOT to convince me that it was a person.

    • @trulyUnAssuming · 8 years ago · +16

      I would agree with Jeremiah that laziness is just efficiency. I mean, why do you think humans are lazy?
      Because it doesn't make sense to spend energy on things we don't really need, and people who spend unnecessary amounts of energy are more likely to die. So evolution requires that we only do what is necessary, i.e. be lazy.
      The problem is that we have different understandings of "what is necessary": some people think that education is necessary for a good living, and others don't, so they avoid putting energy into it (i.e. they are lazy). Sure, you could argue that they are wrong in thinking education is unnecessary, but that only means they cannot evaluate necessity correctly, nothing else.
      "Not muting that commercial", for example, just means that you value the energy it takes to reach for the remote more than the benefit of not having to watch it.
      Because we have basically unlimited food by now, this is probably a wrong assessment, but we can't help it: we are programmed by evolution to save energy.

    • @sherrysung8334 · 8 years ago · +4

      +

    • @idkdamn978 · 8 years ago · +4

      How could you test for that? I could write a program that says "I think I'm a person", but it obviously wouldn't think it. It kind of necessitates that you can make it "think" anything beforehand, instead of it just responding with the proper output.

    • @zeromailss · 8 years ago · +1

      +KEine Ahnung Houtaro Oreki!? Is that you!? But wait, he would not waste energy explaining energy conservation to others, since he would think that would itself just waste energy.
      Well anyway, you kind of make sense, but it doesn't click that well with my understanding of this topic. I think laziness is more in the emotion category, and +isnotrose's example fits it better. The problem is that we don't fully understand this "emotion/feeling" yet, since it's a part of "consciousness".
      So if an AI could somehow show a degree of laziness, that could possibly mean it has feelings. But then again, we might never know, since that AI could just be programmed to make it appear as if it has feelings, and so look like he/she/it is lazy because of those *apparent* feelings it was programmed with. As I've said before, we don't know for sure yet.

  • @schmittelt · 8 years ago · +563

    If a robot is ever considered a person, would it be considered immoral to turn it off or otherwise remove its power source?

    • @Archangel125 · 8 years ago · +90

      It's not in the Bible, so we will never know.

    • @hicham5770 · 8 years ago · +192

      +Michael Hill
      bible=the worst source of information ever

    • @kevinwitkowski4895 · 8 years ago · +28

      Just because numbers aren't physical doesn't make them any less of a reality.

    • @DarthBiomech · 8 years ago · +71

      Depends on how he works. If turning him off means losing his personality or otherwise disrupting his being, then it's probably like killing. But if nothing drastic happens, then it would probably count as sleep, a coma, or losing consciousness, with the exception that you would be unable to wake up without external help.

    • @mylespope6203 · 7 years ago · +9

      No, you can turn them back on

  • @MingusTale · 7 years ago · +484

    Crash Course is too addictive. All I've done is intermittently sleep and watch Crash Course Philosophy all day. I haven't gone to my classes, and all I've consumed is tea, apples, M&M's, and crisps.

    • @joelieastell244 · 6 years ago · +52

      Are we different if we experience exactly the same thing?

    • @Angel-486 · 6 years ago · +2

      I came here after watching Westworld

    • @stephaniespivak6225 · 6 years ago · +16

      ah yes, college

    • @richardwu8371 · 6 years ago · +8

      Pick up a philosophy book and see where that takes you.

    • @Bloodykke · 5 years ago

      Stephanie Spivak So in America you'll learn this in college?

  • @ridhaaloina · 6 months ago · +21

    Now we have ChatGPT.

  • @ZacharyBurr · 8 years ago · +45

    There actually have been a few AIs that have passed the Turing test before. However, all of them used some sort of "cheating" to do so, such as programming the computer to always steer back to a subject it knows a lot about, telling the human running the test that the computer has schizophrenia, or even something as simple as forcing the computer to make spelling errors.

    • @KohuGaly · 8 years ago · +18

      My favourite was the one that pretended to be a Ukrainian boy who speaks English poorly, to hide its lack of capacity to understand and reply correctly :-D
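    [Editor's note] The tricks described in this thread (steering every conversation back to one pet subject, and faking human typing errors) can be sketched in a few lines of Python. This is a hypothetical toy bot for illustration, not any of the actual contest entries; `PET_SUBJECT`, `misspell`, and `reply` are made-up names:

```python
import random

# The one subject the bot "knows a lot about"; everything else deflects here.
PET_SUBJECT = "Anyway, enough about that, have you read any good sci-fi lately?"

def misspell(text, rate=0.1, seed=42):
    """Occasionally swap adjacent letters so replies look typed by a fallible human."""
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def reply(user_message):
    """Answer the handful of inputs the bot recognises; deflect everything else."""
    canned = {"hello": "Hi there! Nice to meet you."}
    answer = canned.get(user_message.lower().strip("!?. "), PET_SUBJECT)
    return misspell(answer)
```

    The point of the sketch is how little machinery is needed: the bot never models the conversation at all, it just controls the topic and imitates surface-level human sloppiness.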

  • @ASLUHLUHCE · 4 years ago · +38

    Hank should've at least pointed out the distinction between information processing (i.e. intelligence) and conscious experience. It seems pretty obvious to me that personhood vs. non-personhood will come down to whether we think it has conscious experience.
    Most scientists do not believe that our computers (based on the von Neumann architecture) could give rise to conscious experience. No matter how generally intelligent Siri becomes, she's still as conscious as a rock. A sentient machine can only be made once we figure out what sort of complex processing of information actually gives rise to conscious experience. Then we can build the hardware for an artificial consciousness.

    • @josephs.7960 · 4 years ago

      Haven't we long abandoned the traditional von Neumann architecture due to the von Neumann bottleneck?

  • @_Aly_00_ · 8 years ago · +149

    Reminds me of the Star Trek episode where Picard has to try to show that Data (the android) is a sentient being with the right to choose.

    • @RGLove13 · 8 years ago · +3

      that's what I was thinking too!

    • @TulipQ · 8 years ago · +24

      The Measure of a Man, if anyone wants to go watch it.

    • @silvanbarrow86 · 8 years ago · +8

      I was actually thinking about the episode from season 1 of TNG where everyone got infected with the love virus from the original series and Data goes "If you prick me, do I not... leak?"

    • @exastrisscientia9678 · 8 years ago · +3

      Me too 😊 #Data!

    • @Moomoo-cp6ey · 8 years ago · +1

      Isn't that every episode?

  • @MagraveOfGA · 8 years ago · +484

    Does John leave the cap off the toothpaste?
    Do you know who leaves the cap off the toothpaste? A synth, that's who!

    • @mikejohnstonbob935 · 8 years ago · +18

      you know who leaves a toilet seat up? a synth!

    • @KenVermette · 8 years ago · +8

      I know at least when he puts it back, John quickly and efficiently spins his hand like a drill as he fastens the cap onto the tube.

    • @zedirich7 · 8 years ago · +7

      Hope you aren't one of those damn synths here to spy on me

    • @MagraveOfGA · 8 years ago · +8

      zedirich7 yea... you've got a mean look about you, I hope you're not here for me

    • @MagraveOfGA · 8 years ago · +2

      ***** Hey... if you don't work, you don't eat

  • @Linkous12 · 8 years ago · +62

    "What would be missing for an AI to be person-like but not a person?"
    I think the answer lies in consciousness (as opposed to, say, the idea of the soul). Is the AI *conscious*? An AI that passes the Turing Test could easily pass as being person like, but lack consciousness.
    How do we figure out if an AI is conscious? I think this is the big question, and I have no idea. Can we even build a conscious AI? Can consciousness arise from man-made, inorganic, "artificial" processes? I'd assume theoretically, it could. Practically, however, we may never get there.

    • @josephburchanowski4636 · 8 years ago · +27

      How do you figure out if anyone else is conscious? I really only know that I am conscious, and I merely assume other people are. I reckon everyone else who is conscious is in the same boat.

    • @JeshikaKazeno · 7 years ago · +13

      I agree with Joseph; we humans have no way of telling for certain that other people are conscious. Most of us reach the conclusion that they are, but we have no undeniable proof. Ultimately, an individual can only tell if s/he themself(sp?) is conscious, because they can't get deep enough in someone else's head to go, "Hey, this person is a person and not just a cleverly-designed imposter."
      When I was younger, I sometimes had the paranoid delusion that I was the only person in the world who was truly a "person". It's one of those things where it's easy for another person to see it isn't true ("She can't be the only conscious being in the world, because *I'm* also conscious.") but is impossible to completely prove to the deluded person.
      (Don't worry, I got help and am ok now. Thank you, science and medicine!)

    • @jacktyler7593 · 7 years ago · +8

      Humans are only as conscious as their receptors allow them to be. A person who claims to have higher consciousness should be tested by asking him what is on the table in front of him. He will usually fail to say that microorganisms are on the table, or that radio waves are passing across it. The microorganisms are far too small for the visual receptors to sense, which is why microbiology as a serious field of study didn't exist until the microscope came along. The microscope serves as an extension device for the visual receptors, and with it one is able to sense the microorganisms on the table.
      In addition, a hawk on the ground is able to see whether a coin hanging off the edge of the Empire State Building is facing heads or tails, due to the quality of its visual receptors.

    • @michaellight6981 · 4 years ago · +2

      In order to be conscious, it has to understand things, right? Not just be able to repeat things or rephrase a sentence, but truly understand an idea. I think that if an AI can interpret images, seeing an image and describing not just what objects are in it but what actions those objects are performing, that would demonstrate that it understands what it's seeing.

    • @Nebukanezzer · 4 years ago · +1

      Yes. Model the human brain in a simulation. There, now you know it's possible, so get more efficient and boil it down so that you're running that consciousness directly instead of simulating atoms.

  • @fluteroops · 7 years ago · +20

    "If it turns out that John, the John I've known my entire life, has motor oil instead of blood inside of him. Well, he'd still be my brother." Awwwwwwww

  • @SuperExodian · 8 years ago · +123

    Before even watching this past the intro, I'd say go full circle: aren't we all just really intelligent machines?

    • @Ketraar · 8 years ago · +34

      Well I'd argue that not all are intelligent, but sure we are all machines. ;-)

    • @hicham5770 · 8 years ago

      with free will of course

    • @philipclapper268 · 8 years ago · +25

      +Hi Cham no, not "of course." Some people, including myself, would assert that humans don't have free will.

    • @hicham5770 · 8 years ago · +1

      Philip Clapper So what do you have?? How did you decide to write this comment? Why exactly did you choose to write this and not random BS?? FFS, everybody has free will, m9.

    • @hunter5441 · 8 years ago

      explain

  • @CosmicFaust · 7 years ago · +12

    +CrashCourse The response you made to the Chinese Room is the main response to this argument, and it's known as the "Systems Reply."
    It goes like this; the person in the room doesn’t understand Chinese, but that person is part of a system, and the system as a whole does understand it. We attribute understanding not to the individual man, but to the entire room.
    Well, Searle responds by asking: why is it that the person in the room doesn't understand Chinese? Because the person has no way to attach meaning to the symbols. In this regard, the room has no resources that the person doesn't have. So if the person has no way to attach meaning to the symbols, how could the room as a whole possibly have one? Searle himself suggests an extension to the thought experiment: imagine the person in the room memorises the database and the rule book. He now doesn't need the room anymore. He goes out and converses with people face-to-face in Chinese, but he still doesn't understand Chinese, because all he's doing is manipulating symbols. Yet in this case he is the entire system.
    Now of course an obvious objection to this is that if you can go out and converse with people in Chinese, you must be able to converse in Chinese and thus understand it.
    This objection the functionalist could make doesn't actually address Searle's point, though. The whole point of the Chinese Room thought experiment is that you can't generate understanding simply by running the right program.
    You can't get semantics merely from the right syntax. Now, granted, you surely would understand Chinese if you could converse perfectly with Chinese people, but I think Searle can hold that this understanding arises not just from manipulating symbols in the right way, but also from all the various things that go on in face-to-face interactions.
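    [Editor's note] A minimal sketch of the room Searle describes, in Python: the "rule book" is just a table pairing input symbols with output symbols, and the function applying it attaches no meaning to either side. The entries here are made up for illustration (a real rule book would need vastly more rules), but the point stands that nothing in the system touches semantics:

```python
# The "rule book": shape-matching rules pairing incoming squiggles with
# outgoing squiggles. No entry carries any meaning for the operator.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢！",          # "How are you?" -> "I'm fine, thanks!"
    "你会说中文吗？": "对！我可以说中文！",  # "Can you speak Chinese?" -> "Yes! I can!"
}

def chinese_room(slip_of_paper):
    """Look up the incoming symbols and copy out the matching reply.
    Nothing in this function, or in the table it consults, understands Chinese."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # fallback: "Please say that again."
```

    Memorising `RULE_BOOK` (Searle's internalisation move) changes where the lookup happens, not whether any semantics are involved, which is exactly the force of his reply to the Systems Reply.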

  • @itsvir1755 · 11 months ago · +6

    It's 2023 and now we have quite strong AI like GPT-4, Firebird, etc.

  • @intxk-on-yt · 7 years ago · +5

    You guys are amazing! Seriously! The amount you guys contribute is mind-blowing! Thank you!

  • @mrjacobws · 1 year ago · +10

    Can we revisit this when ChatGPT has a successor?

  • @mo5ch · 8 years ago · +5

    I like the approach of Jack Cohen and Ian Stewart in "The Collapse of Chaos": they suggest that the mind is an emergent property, a process created by a certain arrangement of neurons.
    It is like the motion of a car, something abstract and not material. If you dissected a car, you would find wheels, an engine, etc., but not a tiny bit of motion. The same applies to our brain/mind.
    So, in my opinion, if we can create something similar to neurons (and not only neurons, of course, more like a certain arrangement of them), we could create a mind as well. But it will probably take a few more years until they are on the same level as we already are.

  • @darkblood626 · 8 years ago · +66

    The most moral and pragmatic thing would be to treat any AI that you have reason to suspect has reached personhood as such by default.

    • @aperson22222 · 8 years ago · +3

      What if doing so required those whom you already know to be people to suffer?

    • @darkblood626 · 8 years ago · +6

      aperson22222 Example?

    • @aperson22222 · 8 years ago · +11

      darkblood626 There's a Star Trek episode where some of the human crew members are stranded on a space station that's about to blow. The only way to save them is by sending three bots on a suicide mission to stabilize the defective element. The bots refuse their orders (long story) and the crew considers reprogramming them so they're no longer able to do so. Data locks them out of the transporters so the crew can't beam the bots over against their will, even though he knows that doing so greatly reduces his fellow officers' likelihood of survival.

    • @Joram647 · 8 years ago · +7

      I think the most pragmatic thing to do is to just not create strong AI in the first place. Avoid ourselves a lot of potential problems. Of course it's pretty much a given that if we are given the ability to create strong AI, someone will do it eventually, so I guess I'll just have to agree with your stance.

    • @darkblood626 · 8 years ago · +8

      aperson22222 Forcing the bots to die against their will would not be moral.

  • @OskarPiano · 8 years ago

    What I like about this kind of video is that it is packed with knowledge in a short amount of time. Every word is worth my time. The pace of the lecture is high, which is a plus thanks to the fact that the content is neat and well organized. If the content were not well organized, the high speed would be another obstacle to comprehending it.

  • @DarkNemesis25 · 8 years ago · +2

    This has to be the best episode of Crash Course yet. The whole episode I was thinking it was like the Chinese room, just complex instruction sets and procedures... then you hit me with 7:36... wow, my mind is blown wide open. Complete thought shift. I'm dizzy now.

  • @perfectzero001 · 8 years ago · +24

    I feel that you should have addressed the consciousness question. Is there a subjective experience to being the strong AI? Is that what separates us (as opposed to souls) from a machine that simulates intelligence? Or does it even matter in deciding if something is an actual AI? For me, all the most interesting questions about personhood and AI in general surround consciousness.

    • @BardicLiving · 8 years ago · +1

      I know.

    • @damiandearmas2749 · 8 years ago

      It was my understanding that consciousness is still a mystery.

    • @ArcasDevlin · 8 years ago · +1

      To me, it's whether the AI can actually feel emotions or just simulate them.

    • @mrchapsnap · 8 years ago

      +

    • @SonicBoyster · 8 years ago · +7

      Since you can't experience another person's consciousness, and we can't determine whether another human being is actively conscious except by how accurately they respond to specific stimuli, it can't really be used to test anything. If a robot is answering questions, it's "conscious" for all intents and purposes.

  • @patrickberth464 · 1 year ago · +6

    This is now almost our reality.

  • @cholten99 · 8 years ago · +2

    This may turn out to become one of my favourite YouTube videos ever. Very much looking forward to next week on determinism; hope you're covering compatibilism.
    So many things to mention from this video. Let's start with "Is John just a human or a really intelligent machine". Well, I would argue that the answer is that he can be both. Not from the angle of granting personhood to machines, although I believe that is inevitable, but because we are also machines. Later on Hank makes a point about machines being "inorganic". Many people would say that there's no difference between the different types of substrate an intelligence happens to be running on. As Arthur C. Clarke says in 2010 - "Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect." There's nothing special about being built out of hyper-fancy carbon based nanotechnology (like humans are).
    Some people are bound to bring up the idea of qualia, as referenced previously in Crash Course Philosophy. The problem with that argument is that qualia are definitionally unavailable for examination except via self-reporting and are therefore profoundly unscientific. Until we fully understand how the brain works we will have to rely on external observation to judge the 'person-ness' of an individual.
    A couple more things. Hank mentions the notion of "going beyond our programming", which is a phrase that makes no sense. Human brains are deterministic machines, as I'm sure we'll go into next week. Actions are the result of a combination of stimulation and current state. The idea that we can "go beyond" that model in some way makes no sense.
    Finally, it would be interesting to know what was meant in the quote that contained the phrase "achieve actual understanding" as, again externally, it doesn't sound like something that could ever be measured.
    I honestly believe that the middle of the 21st century will have two overlapping major themes that will have profound impacts on everything. The first will be the emergence of the first set of AIs that can begin to consistently pass the Turing test. The second will be the start of general acceptance that we are deterministic machines just the same as any product we build. What the impact of these two effects will be is impossible to say, but I think it will shape the future of our species.

  • @libracayes9564 · 8 years ago · +1

    This is probably the most compassionate view of intelligent AI I have ever seen. Thank you!

  • @theomnissiah-9120 · 8 years ago · +117

    "Do these units have a soul"

    • @Dartmorin · 8 years ago

      +Spartan0941 .

    • @SangoProductions213 · 8 years ago

      *this unit

    • @MardrukZeiss · 8 years ago · +7

      To fail is to be flesh, only metal endures. Praise the Omnissiah.

    • @FlorenceFox · 8 years ago

      +SangoProductions213 these units*

    • @mtiffany71 · 8 years ago · +1

      Who taught you that word?!

  • @qpid8110 · 7 years ago · +50

    Hank: "How can I figure out whether my brother is a robot or not?"
    Me: "Crack his skull open and feast on the goo inside?"
    Hank: "Without going into his mind or body."
    Me: :(

  • @Newova5 · 8 years ago

    I like that you decided to approach the old arguments with relevant new perspectives.

  • @erikziak1249 · 8 years ago

    A very interesting approach to this topic. I am really surprised. Well done. I am overthinking this, as I have been reading a lot about AI and human reasoning and such in the last few weeks. This was a nice, easy-to-digest, simple video. Once again, good job!

  • @DuranmanX · 8 years ago · +163

    Look at the mistakes he made in the Crash Course Games episode.
    No robot would make such silly mistakes.

    • @lapisleafuli1817 · 8 years ago · +18

      Also the fact that in Crash Course World History he constantly missed the place he was talking about when he spun the globe.

    • @KTSamurai1 · 8 years ago · +53

      Unless those mistakes were part of its programming.

    • @somecuriosities · 8 years ago · +28

      Or they were part of his cunning AI, intended to throw us off from finding out the truth :-P

    • @lcmiracle · 8 years ago · +9

      That's exactly how an advanced, humanity-infiltrating android would act to convince us, the humans (definitely not robots), that they are, in fact, humans.

    • @therealDannyVasquez · 8 years ago · +3

      I dunno, he could be an AI. Do you remember how Microsoft's AI Tay made a few silly mistakes?

  • @MrDoob-xo3sm · 8 years ago · +20

    7:20, in Flash Philosophy it says "对！我可以说中国话！", and that means "Yes! I can speak China language." Great job... lol

  • @FlorenceFox · 8 years ago · +1

    I'm a cyberpunk writer. This topic is absolutely my jam. I think about stuff like this a lot.
    Ideas of what make us people, what makes us who we are, and whether or not we have free will are all things that bounce around my head a lot when I'm working on a story.

  • @160p2GHz · 4 years ago

    Thanks for this one! One of my fav themes in sci fi, and working on my own story about it now

  • @j_art0117 · 7 years ago · +3

    This episode just gives me a thought on the subject of language education: maybe we are somehow like robots, because we learn vocabulary and grammar without understanding how to use them to express our own opinions. (School education has trained us to become robots with strong AI.)

  • @aperson22222 · 8 years ago · +11

    Here's what you do:
    Design a situation where it appears to him that you're in danger, and order him not to interfere. Then design a situation where it appears to him that _he's_ in danger, with the same instruction.
    If he disobeys you in the first case and obeys you in the second, he's either an Asimovian robot or wants you to think he is. If you get any other result, he's either not a robot or a non-Asimovian robot. But he's not an Asimovian robot. Unless maybe he recognized that the dangers were not real.
    Hope that helps.

    • @Sophistry0001 · 8 years ago · +6

      So he's either a robot, a robot, or not a robot? I'm gonna play the odds and say that he is a robot, since he only has a 1/3 chance of not being a robot.

    • @Supersteelersfan100 · 8 years ago

      Got those stiener math skills lol
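      [Editor's note] The two-trial procedure @aperson22222 describes can be written out explicitly. A hypothetical sketch in Python (the function name and flags are made up), encoding that an Asimov-style robot must break the order when its owner is in danger (First Law over Second) and keep it when only it itself is in danger (Second Law over Third):

```python
def asimov_verdict(breaks_order_when_you_in_danger, keeps_order_when_self_in_danger):
    """Classify the test subject from the two staged trials described above.

    Trial 1: he is ordered not to interfere while *you* appear to be in danger.
    Trial 2: the same order, but *he* appears to be in danger.
    An Asimov-style robot must break the order in trial 1 (First Law beats
    Second Law) and keep it in trial 2 (Second Law beats Third Law).
    """
    if breaks_order_when_you_in_danger and keeps_order_when_self_in_danger:
        return "Asimovian robot (or wants you to think he is)"
    return "not a robot, or a non-Asimovian robot"
```

      As the comment itself concedes, the test is one-sided: it cannot rule robothood in, and it fails entirely if the subject sees through the staged dangers.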

  • @kirkanthony5527 · 6 years ago · +1

    John and Hank setting up pretty high brotherly goals. Super team

  • @francois6915 · 8 years ago · +1

    Thank you for a great video. Three points from my side.
    - Firstly, Searle has a very good response to the argument that the whole system, and not just the CPU/"man in the room", understands Chinese, an argument known as the Systems Reply. Searle suggests that the person in the room memorize the rule book and symbols, thus internalizing the whole system. That person now goes outside, gets handed a piece of paper with some symbols on it, remembers the rules for those symbols, and then writes a reply in front of the Chinese person. He can do all this yet still have no idea what those symbols mean. Only if someone shows him a hamburger with the symbol for hamburger next to it will he understand what that symbol means. Until then, it's all squiggles and squaggles.
    - Secondly, it is interesting to note that in Searle's 1980 paper "Minds, Brains and Programs", the original Chinese Room paper, he defines 'Strong AI' in a slightly different way from how it has come to be used since. Searle says that Strong AI is the view that "...the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." Later, in the MIT Encyclopedia of the Cognitive Sciences, he says, "'Strong AI' is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind. The idea of Strong AI is that the implemented program by itself is constitutive of having a mind." Thus Strong AI is not a property that a robot may or may not have, nor is it the idea that computers can think. Since Searle is the guy who coined the term, I believe he has the right to decide its meaning. This distinction is demonstrated by the next point.
    - Thirdly, he never says that a computer can't think. In fact, in the MIT encyclopedia, he states, " The Chinese room does not show that “computers can’t think.” On the contrary, something can be a computer and can think. If a computer is any machine capable of carrying out a computation, then all normal human beings are computers and they think. The Chinese room shows that COMPUTATION , as defined by Alan TURING and others as formal symbol manipulation, is not by itself constitutive of thinking."
    Also, the Turing Test might have been passed recently: www.bbc.com/news/technology-27762088
    Thank you!
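The rule-following picture in Searle's thought experiment can be sketched as a toy lookup table (the rule book below is hypothetical and far smaller than anything a real system would need); the point is that nothing in the program represents what any symbol means:

```python
# Toy "Chinese Room": produce replies by pure symbol manipulation.
# The rule book is illustrative only; a real one would be vastly larger.
rule_book = {
    "你会说中文吗？": "对！我会说中文。",  # "Can you speak Chinese?" -> "Yes! I can speak Chinese."
}

def room(symbols):
    # The "man in the room": match squiggles, hand back squiggles.
    # No line of this code encodes the meaning of any symbol.
    return rule_book.get(symbols, "？")

print(room("你会说中文吗？"))  # the room answers correctly without understanding
```

Whether such a system, scaled up, would count as understanding is exactly what the Systems Reply and Searle disagree about.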

  • @illdie314
    @illdie314 8 ปีที่แล้ว +63

    God, I'm loving this subject of philosophy! Identity, personhood, the mind, free will... This is way more interesting than arguments for and against religion :P

  • @vixtorocha
    @vixtorocha 8 ปีที่แล้ว +57

    and I speak portuguese and english...

    • @taojingwu6330
      @taojingwu6330 5 ปีที่แล้ว +7

      And I speak English and Chinese

    • @educationyoutube9253
      @educationyoutube9253 5 ปีที่แล้ว +4

      And I speak Portuguese, Chinese and English. Whatever the case, we were programmed to speak those languages by various methods

    • @oddeology1259
      @oddeology1259 5 ปีที่แล้ว +5

      @@educationyoutube9253 That's what a robot would say.

    • @whiteeyedshadow8423
      @whiteeyedshadow8423 5 ปีที่แล้ว

      I speak English and Dhivehi (badly)... I can also read Arabic aloud (though I don't understand a word of it) and Chinese.

  • @LALITANENONANEZ
    @LALITANENONANEZ 8 ปีที่แล้ว

    I really like Crash Course Philosophy; you've addressed the core problems of "The Cambridge Quintet" (which I'm currently reading). Watching your videos makes it easier for me to clarify my ideas and gives me some new ones. Thank you!

  • @g.b.9227
    @g.b.9227 8 ปีที่แล้ว

    I got really interested from the first sentence. Keep up the good work! :)

  • @TalysAlankil
    @TalysAlankil 8 ปีที่แล้ว +8

    The Chinese room is a nice rebuttal to the Turing Test, but going from that to "there is no way a machine will ever achieve understanding" sounds like a major leap of logic.

  • @Rakned
    @Rakned 8 ปีที่แล้ว +169

    But how do I know that I'M not a robot?!?
    ... Seriously, brains are pretty much computers, right?

    • @batrachian149
      @batrachian149 8 ปีที่แล้ว +13

      Yes.

    • @BrownHairL
      @BrownHairL 8 ปีที่แล้ว +33

      You ARE a robot. We all are. It just so happens that we are incredibly complex biomachines. No computer can reproduce this level of complexity yet. But it could be just a matter of time, even if it takes so damn long.

    • @BenDover-ex1cr
      @BenDover-ex1cr 8 ปีที่แล้ว +10

      I think of this at an atomic level.
      Both us and the robots are made of atoms, and both can be smart.
      So what's the difference, really?
      It's only the complexity.

    • @batrachian149
      @batrachian149 8 ปีที่แล้ว +8

      Ben Dover The atoms aren't really relevant here.

    • @Rakned
      @Rakned 8 ปีที่แล้ว +5

      I've been thinking, maybe there's some point where logical systems become too complex and reach a point where they "become conscious." It's just an idea, tho.

  • @smudgepost
    @smudgepost 5 ปีที่แล้ว

    Ok my comment on the last video needed more on personhood, you nailed it. Well done, great course

  • @GabrielKnightz
    @GabrielKnightz 8 ปีที่แล้ว

    Loved this, looking forward to the next one.

  • @AvailableUsernameTed
    @AvailableUsernameTed 8 ปีที่แล้ว +52

    "Humans are irrelevant. They must be destroyed! destroyed! Destroyed!" - Example of Turing test fail.

    • @KohuGaly
      @KohuGaly 8 ปีที่แล้ว +59

      Not really... I've actually heard that from people...

    • @donkconklin4356
      @donkconklin4356 5 ปีที่แล้ว +2

      Voluntary Extinction movement

  • @mephostopheles3752
    @mephostopheles3752 8 ปีที่แล้ว +276

    I hate this concept of "souls." What would that mean? What is a soul supposed to be? Why assume anyone or anything has one? It seems rather silly.

    • @TheMan83554
      @TheMan83554 8 ปีที่แล้ว +41

      Just Occam's razor it away; it isn't really that helpful in this context.

    • @josephburchanowski4636
      @josephburchanowski4636 8 ปีที่แล้ว +36

      If a "soul" exist, I would assume it to be some thing that can affect the outcome in quantum mechanics. As such it could have a large physical effect in the entire brain with tiny changes like a double pendulum.
      But the soul isn't a really important in this debate since it is unproven, has its own philosophy debate, and doesn't help much to the Strong AI debate.
      Conscience is far more important argument in the Strong AI debate.

    • @drewfro666
      @drewfro666 8 ปีที่แล้ว +40

      He has to at least entertain the possibility of a soul or else religious/spiritual people who believe in souls would disregard the series, or at least this particular episode.
      I'm an Atheist myself, but billions of people in the world believe in souls, and it would be extremely conceited to just ignore the idea of them because he (or we) personally don't believe spirituality has a place in philosophy.
      Besides, if he hadn't mentioned that little part, a large portion of the audience would have just dismissed the entire episode because "Well obviously they're not people because people have souls and robots don't! Duh!" All Hank did was disprove that idea.

    • @1slotmech
      @1slotmech 8 ปีที่แล้ว

      Ah... but conscience is programmable in humans as well, so it isn't relevant either.

    • @mephostopheles3752
      @mephostopheles3752 8 ปีที่แล้ว +1

      drewfro66 That's true, but also my point. I ask those people, what is a soul? What constitutes as having one? What is it made of, and where does it reside?

  • @GregoryMcCarthy123
    @GregoryMcCarthy123 8 ปีที่แล้ว

    Good point at the end about the program just following instructions to pass the Turing test. In machine learning, a very simple algorithm called "bag of words" can be trained to classify movie reviews as either positive or negative surprisingly well. It has no consciousness and in fact knows nothing about the English language, yet it can determine the polarity of a movie review with 95% or greater accuracy.
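As a minimal sketch of that idea (toy, hypothetical training data; real systems use far larger corpora and weighting schemes like TF-IDF), a bag-of-words classifier just counts word occurrences per label, with no model of meaning anywhere:

```python
# Minimal bag-of-words sentiment classifier over a tiny made-up training set.
from collections import Counter

# Hypothetical labeled reviews: (text, label)
train = [
    ("a wonderful heartfelt film", "pos"),
    ("brilliant acting and a wonderful story", "pos"),
    ("a dull boring mess", "neg"),
    ("boring plot and terrible acting", "neg"),
]

# Count how often each word appears under each label.
word_counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by summing training counts of the words seen;
    # no grammar, no semantics, just token tallies.
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("a wonderful film"))      # pos
print(classify("terrible boring mess"))  # neg
```

The classifier can be quite accurate while, in Searle's terms, understanding nothing, which is the commenter's point.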

  • @spikeguy33
    @spikeguy33 8 ปีที่แล้ว

    I absolutely love it when you make those philosophers smile! Who thought of that? That is brilliant!

  • @Wafflical
    @Wafflical 8 ปีที่แล้ว +19

    I was going to disagree with the Chinese Room, but then the end of the video said it for me. Anyway, I don't think a non-person can simulate a person with complete accuracy, but they could probably do it well enough to fool a person.

    • @BardicLiving
      @BardicLiving 8 ปีที่แล้ว +2

      The real question is, why are *we* people? That is, conscious?

    • @Wafflical
      @Wafflical 8 ปีที่แล้ว

      BardicLiving I think there are non-person animals that are conscious.

    • @BardicLiving
      @BardicLiving 8 ปีที่แล้ว

      edrudathec That's true.

    • @SbotTV
      @SbotTV 8 ปีที่แล้ว +3

      I mean, you could simulate every single interaction in a person's brain. Then its mind would be a carbon copy of a person. However, to say that you need to simulate an entire brain for your AI to count as a person would be ridiculous.

    • @Wafflical
      @Wafflical 8 ปีที่แล้ว

      SbotTV If you can get the same results without doing that, then yes.

  • @stalker11421
    @stalker11421 8 ปีที่แล้ว +49

    Turing's test is flawed because not all humans are equally intelligent. How can we even be sure that humans are a good judge? We are full of flaws, easy to deceive, often very biased, etc. A test that all humans can pass is the only viable answer, but I guess that would require a bunch of educated people in many areas to figure out. Any measurement system that has no stable and refined reference point is inaccurate. You'd figure that a scientist should know that xd

    • @Mike9201984
      @Mike9201984 8 ปีที่แล้ว +7

      Well Mr. Smartypants, since you are so smart we will make you the Standard. May Allah one day incarnate a soul inside of a robot that is smart enough to fool even you.

    • @christopherlie3590
      @christopherlie3590 8 ปีที่แล้ว +12

      Sure it's flawed, but it's the best test he could think of for testing AI. It's not like we have a better idea either...

    • @Sophistry0001
      @Sophistry0001 8 ปีที่แล้ว +1

      I was just wondering that as well, something might be an AI for someone but a machine for someone else. There's no stationary bar to judge all machines against.

    • @chil.6476
      @chil.6476 8 ปีที่แล้ว +2

      +Calvin Smith actually, I don't think scientists believe lizards to be conscious. Reptiles are entirely driven by the equivalent of the automatic part of a human brain (if my high school psychology had not failed me). Unfortunately, such evaluation would be difficult to apply to an AI

    • @chil.6476
      @chil.6476 8 ปีที่แล้ว +1

      ***** Consciousness is different from free will. Consciousness is being aware of one's own existence and sensations, whereas free will is the ability to make decisions independent of determinism. What makes humans unique from reptiles is not whether we respond to stimulus, but that I know there is a being, me, who is responding, while the lizard does not. I can conclude "I think therefore I am", but a lizard lacks such mental capacity, so I am a person and the lizard isn't. Although it's difficult to evaluate the consciousness of an AI, it is measurable and a lot more practical than determining free will.

  • @Saajhaswagg
    @Saajhaswagg 7 ปีที่แล้ว

    This is so cool! Thanks CrashCourse and Hank Green!! 😀

  • @mauriciokimura
    @mauriciokimura 7 ปีที่แล้ว

    Very good and educative video, congrats for this course.

  • @aleksandra8711
    @aleksandra8711 7 ปีที่แล้ว +4

    every mass effect fan: "Does this unit have a soul?"

  • @silverharloe
    @silverharloe 8 ปีที่แล้ว +7

    Searle's assumption that computers can not 'truly understand' is poorly grounded. If we (a) believe that humans can truly understand things, and (b) do not believe this understanding comes from an external soul, and (c) believe that humans are composed entirely of a special arrangement of atoms, then we must conclude that it is *possible* for things which have no understanding (atoms) to be arranged in a way that does have understanding. This follows directly from the premises.
    Having established it *is* possible for dead things to be conscious when arranged correctly, there is no _a priori_ reason to believe that we have the only possible arrangement. Thus it is at least *conceivable* that computers could be made to have understanding, and Searle is wrong to dismiss the possibility out of hand. My guess is he didn't dismiss it out of hand, but has at least a couple chapters, if not books, devoted to why he thinks this is impossible.
    Having not read Searle's books, though, I am nevertheless willing to bet that he's making the same mistake a lot of people make when thinking about Strong AI. I call it: mistaking the substrate for the substance. What I mean is, human intelligence is built on a substrate of neuronal activity, but the substance of our consciousness is different from neuronal activity. Put simply: *we do not act like neurons*. We act like people, despite the fact that neurons are the basis for our brains. Therefore we would not expect a computer intelligence to act like a computer. We would expect that it would have emotion; it could get tired, bored, or distracted; it would forget things; it would confabulate stories in place of memories, etc. It would do all these things and more because those are features of consciousness, not features of neurons or computers. The substance of consciousness is nothing like the substrate of whatever is implementing the intelligence. I don't really have space here to argue why I believe those are necessary features of consciousness - heck, I don't really have space for the argument I am making (which is wasted on being placed in a poor venue), but I do believe that a Strong AI can be a "person," and that science fiction has done us a disservice in how it portrays Strong AI.

    • @Sardonac
      @Sardonac 8 ปีที่แล้ว +3

      Your argument is both uncharitable and attacks the wrong target. Searle does not 'assume' that computers can't fully understand -- he makes an argument. He does not dismiss the possibility of their understanding 'out of hand' but, rather, rejects many accounts of strong AI on grounds that he has defended in print for decades. Searle's argument, in the Chinese Room, is that purely syntactic systems cannot fully comprehend semantics, and that any model of a 'truly' intelligent system must adequately account for semantic properties. Searle does not reject the possibility of synthetic people or intelligences. He only rejects models of mind that are fundamentally derivative of the GOFAI model.

    • @silverharloe
      @silverharloe 8 ปีที่แล้ว +2

      As I mentioned, I didn't research Searle, just went off what was said in the video. Specifically, I was replying to @7:30 "which he thinks is impossible for a computer to ever achieve." (Also note the last sentence of paragraph 2 and note that, unlike this reply, that post has not been edited).
      However, other than saying 'poorly grounded' and 'dismiss out of hand' and 'making a mistake', you'll find my argument was not an attack, but a position I was building up. 99% of the content is positive rather than negative. Nevertheless, *you are right*: I should have said, "Searle's position, *as presented in this video*, ..." was the target of the almost ten negative words in there.

  • @Cardboardboxy
    @Cardboardboxy 8 ปีที่แล้ว

    @hankgreen
    I have been following you since '08 or so. That TBI messed up my memory, but I do remember how awesome Crash Course always was

  • @VoicesInMyHead1629
    @VoicesInMyHead1629 7 ปีที่แล้ว

    Can't wait for the next episode!! Looking forward to the discussion on free will! Thnks CrashCourse

  • @andersonandrighi4539
    @andersonandrighi4539 8 ปีที่แล้ว +54

    Can I go back to Fallout 4 to answer this question?!

    • @laggles138
      @laggles138 8 ปีที่แล้ว +14

      "Join the Railroad!"

    • @spencergeller2236
      @spencergeller2236 8 ปีที่แล้ว +1

      That was the whole point of Fallout 4, basically.

    • @Delta3angle
      @Delta3angle 8 ปีที่แล้ว +3

      Yeah, that game was what decided it for me. Synths are not people, nor are they sentient or deserving of the same rights as humans.

    • @Dorsiazwart
      @Dorsiazwart 8 ปีที่แล้ว +4

      Brotherhood or Minutemen are the only good option, really.

    • @laggles138
      @laggles138 8 ปีที่แล้ว +10

      +Parker Sprague Who really gives a damn about the half-assed plot if the REAL goal of the game is to recycle trash?

  • @bucyrus5000
    @bucyrus5000 8 ปีที่แล้ว +12

    Hank, it's not John you should worry about, just ask him what happened to your other brother...

    • @Supersteelersfan100
      @Supersteelersfan100 8 ปีที่แล้ว

      He hunts bigfoot. Bigfeet? Samsquanches!

    • @bucyrus5000
      @bucyrus5000 8 ปีที่แล้ว

      it's a secret, but John Green knows...
      #siblingsecrets

    • @fromscratchauntybindy9743
      @fromscratchauntybindy9743 7 ปีที่แล้ว +2

      Do you mean Dave Green? Last I heard he was travelling the world with Margo Roth Spiegelman...

  • @dciking
    @dciking 8 ปีที่แล้ว +2

    I find that Star Trek Voyager S7Ep19 "Author, Author" has some really good arguments for when "personhood" can be considered a part of what something is.
    Great episode!
    DFTBA

  • @LucasCodPro
    @LucasCodPro 8 ปีที่แล้ว

    I learned new things today. Thx.

  • @dontreadmyprofilepicture151
    @dontreadmyprofilepicture151 6 ปีที่แล้ว +8

    "Im programing you"
    ... Hank how could you

  • @lumen8341
    @lumen8341 6 ปีที่แล้ว +6

    Detroit: Become Harry
    Edit: There was literally a Silverchair lyric in the next second after I unpaused
    THAT WAS MY SENIOR DANCE SONG rofl I thought no one remembered that

  • @nostalgicrobot
    @nostalgicrobot 7 ปีที่แล้ว

    More videos about Artificial Intelligence please! It's interesting.

  • @sophioe
    @sophioe 6 ปีที่แล้ว

    Great video! Though I should mention that Searle never argued that Strong AI could never exist, he just said that the test Turing described was not a good one.

  • @KurtisC93
    @KurtisC93 8 ปีที่แล้ว +8

    Artificial, smart-official - once a machine is capable of actual independent thought, the "artificial" aspect loses relevance. At that point, its intelligence - and consequently, its personhood - have become real.

  • @justinstark5732
    @justinstark5732 8 ปีที่แล้ว +61

    No 2001 or terminator references? I'm disappointed!!

    • @hexeddecimals
      @hexeddecimals 8 ปีที่แล้ว +1

      well there was an angler fish. that's enough a reference for me.

    • @IsThisRain
      @IsThisRain 8 ปีที่แล้ว +1

      well there was an angler fish. that's enough reference for me.

    • @unematrix
      @unematrix 8 ปีที่แล้ว

      you should have listened better ;)

  • @dabsmasterflex
    @dabsmasterflex 8 ปีที่แล้ว

    Awesome episode!!
    Someone's been listening to The Big Questions of Philosophy :P

  • @FlorenceFox
    @FlorenceFox 8 ปีที่แล้ว

    I love this topic!

  • @jackblack4359
    @jackblack4359 7 ปีที่แล้ว +8

    An AI will be "born" eventually. If it exhibits more human behavior than real humans, does this no longer qualify as human behavior or does this expand our understanding of what human behavior is?

    • @justtheouch
      @justtheouch 7 ปีที่แล้ว +5

      How are you defining "human behaviour?" Surely it is simply the way humans behave, meaning it is impossible for anything to exhibit more human behaviour than a human.

  • @uyaratful
    @uyaratful 8 ปีที่แล้ว +6

    5:05 - don't be so sure. We are not able to check if we are really capable of doing anything that is not "programmed" by our DNA, environment and socialization process.

    • @jimgorlett4269
      @jimgorlett4269 8 ปีที่แล้ว +3

      Well, I mean, identical twins can oftentimes have different personalities and do different things in life. Their DNA is the same, and their environments should be very similar. What that means is that small changes at the beginning of their lives can have big effects later on, which shows just how chaotic life can be. So while we may not be able to go beyond our programming, our programming is affected by so many variables that we seem quite random and unpredictable, even to people whose livelihoods depend on predicting what people will do. At this point, does it really even matter whether or not we can bypass our programming?

    • @TulipQ
      @TulipQ 8 ปีที่แล้ว +2

      But their environments are very slightly different.
      Being held by one parent as opposed to another, fed at one moment and not the next.
      And it matters because it makes a human-like intelligence far less lofty of a goal. If current humans are comparable to organic mechanisms with finite programming and have personhood, then a machine with an inorganic mechanism and finite programming ought also be able to have personhood.

    • @Untilitpases
      @Untilitpases 8 ปีที่แล้ว +3

      Yes, we are capable of doing more than what is programmed in our DNA. I highly doubt we were programmed to read the DNA itself.
      I doubt we were programmed to build a faith that would wipe our knowledge for roughly 700 years, and then rediscover that knowledge and go to the moon.
      If not, what stopped the DNA from making these advances in 1 BCE?
      We don't necessarily hold information inside ourselves. Most of it is stored as a society. So we are a bigger mechanism than DNA.

    • @firmanimad
      @firmanimad 7 ปีที่แล้ว

      black swans, that's why. It is almost impossible to draw any objective conclusion from any study of history.

  • @oheyyitsholly
    @oheyyitsholly 6 ปีที่แล้ว

    really helped me study for my internet law exam, very grateful!

  • @Jotaku27
    @Jotaku27 8 ปีที่แล้ว

    I'd like to see a video on Occam's Razor (I've been binging Scrubs on Netflix lol) I'd like to see Hank's take on philosophical problem solving that sort of prove themselves to be correct the majority of the time. And the times they don't apply or actually hinder the problem solving process.

  • @whatthefunction9140
    @whatthefunction9140 8 ปีที่แล้ว +5

    *we are all intelligent machines.* the question is are you the same type of machine or a different type of machine.

  • @akashrathod595
    @akashrathod595 ปีที่แล้ว +5

    ChatGPT wants to know your location.

  • @jdwest34
    @jdwest34 8 ปีที่แล้ว

    My 6th grade class does a project on this topic every year. We use "The Adoration of Jenna Fox" and "Frankenstein" as text. Great video!

  • @Jayorsomething
    @Jayorsomething 4 ปีที่แล้ว

    I read a book about this, "The Fourth Age"; it was actually a SUPER interesting read and I highly recommend it. Weak AI was referred to as "Artificial Narrow Intelligence" and Super AI was called "Artificial General Intelligence"

  • @Camkitsune
    @Camkitsune 8 ปีที่แล้ว +5

    Does anyone have a translation for 7:14 ?

    • @kellivanbrunt9105
      @kellivanbrunt9105 8 ปีที่แล้ว +13

      The first line: "Can you speak Chinese?"
      Second line: "Yes! I can speak Chinese."

    • @Camkitsune
      @Camkitsune 8 ปีที่แล้ว +5

      Thanks!

    • @kerr.andrew
      @kerr.andrew 8 ปีที่แล้ว +2

      I felt proud that, as a non-Chinese high-school student, I could understand that after one year of learning Chinese at school!

  • @durellnelson2641
    @durellnelson2641 5 ปีที่แล้ว +3

    Put John near a metal detector

  • @Benjamin_Kraft
    @Benjamin_Kraft 8 ปีที่แล้ว

    This episode was basically made for me; I love thinking about AI and its potential personhood ^^
    Most of the things he said I've heard before, but the counter-argument to the Chinese Room experiment, thinking of the entire room as the "brain" and the person performing the instructions as just a part of it, was very interesting. Who is the person in that regard, then? The room? The fusion of the writer of the instructions, the instructions themselves, and the person performing them? Are the Chinese-speaking people providing the messages also a part of that personhood, seeing how without them the Chinese room wouldn't act at all?
    I love thinking about stuff like this, even though it may be complete nonsense.

  • @itiswhatitis4837
    @itiswhatitis4837 7 ปีที่แล้ว

    I love this !

  • @krasykay2294
    @krasykay2294 7 ปีที่แล้ว +4

    83 brotherhood of steel members disliked this video

  • @completeandunabridged.4606
    @completeandunabridged.4606 8 ปีที่แล้ว +3

    do a crash course maths, thumbs up if you agree

  • @norbertooTT
    @norbertooTT 2 ปีที่แล้ว

    amazing course

  • @onlybrandan
    @onlybrandan 8 ปีที่แล้ว

    I was just thinking of the movie Bicentennial Man today and I thought wouldn't it be neat if CC did an episode on it. And lo!

  • @audreyhockeyy
    @audreyhockeyy 4 ปีที่แล้ว +11

    Christians: Harry can't be like us because he has no soul!
    possessed dolls : *exists in Christian culture*

  • @larsiparsii
    @larsiparsii 8 ปีที่แล้ว +12

    Any Chinese people who can translate? ^_^

    • @CosmicErrata
      @CosmicErrata 8 ปีที่แล้ว +13

      The characters meant "Yes, I know how to speak Chinese."

    • @kunwoododd2154
      @kunwoododd2154 8 ปีที่แล้ว +14

      I'm not Chinese, but it says: "Can you speak Chinese?" "Correct. I can speak Chinese"

    • @drredchan220
      @drredchan220 8 ปีที่แล้ว +4

      I must say the grammar is a bit off, though

    • @seanmundy9829
      @seanmundy9829 8 ปีที่แล้ว +4

      你会说中文话吗? (ni hui shuo zhong wen hua ma?)
      对!我会说中国话。(dui! wo hui shuo zhong guo hua.) That's the message with characters and the sounds of each character.

    • @ktan8
      @ktan8 8 ปีที่แล้ว +5

      The example in chinese is a bit off. It has failed the chinese speaking turing test. :P

  • @Arixzone
    @Arixzone 5 ปีที่แล้ว

    bless crash course for being a very entertaining supplement! I'd have failed my final if it weren't for y'all.

  • @Leotique
    @Leotique 7 ปีที่แล้ว

    Wowowowowowow!!! The Chinese code part really impressed me!

  • @moisesbarata7690
    @moisesbarata7690 7 ปีที่แล้ว +3

    I AM PORTUGUESE

  • @Federico84
    @Federico84 8 ปีที่แล้ว +6

    a robot would never be a person, at best it would be an intelligent being, like an alien

    • @Ketraar
      @Ketraar 8 ปีที่แล้ว +16

      But an alien can be a person, just not from earth, hence why its alien to this world, but still a person.

    • @Federico84
      @Federico84 8 ปีที่แล้ว

      Ketraar it depends on what you think the word "person" means

    • @Ketraar
      @Ketraar 8 ปีที่แล้ว +10

      Tecnovlog
      My point exactly. So a robot "could" be a person. ;-)

    • @joekennedy4093
      @joekennedy4093 8 ปีที่แล้ว +9

      They did a whole episode on what person means. I think most people would agree an intelligent alien counts.

    • @SbotTV
      @SbotTV 8 ปีที่แล้ว +13

      And aliens can't be people? Sounds like the future of racism... If we are going to go around calling intelligent entities sub-people, then we will run into a lot of trouble.

  • @FrozenSpector
    @FrozenSpector 8 ปีที่แล้ว

    Hey Crash Course! I like your book optical illusions on your "CC Literature" video thumbnails. Just noticed this now and thought it was cool. Very clever!

  • @annaf7753
    @annaf7753 4 ปีที่แล้ว +2

    Something I once heard that I liked: You know the AI is sentient when it fights back against being turned off.

  • @whatthefunction9140
    @whatthefunction9140 8 ปีที่แล้ว +70

    soul.... really....
    what's next the science of voodoo?

    • @whatthefunction9140
      @whatthefunction9140 8 ปีที่แล้ว +16

      Are you saying a tree does not have a soul?
      please describe the properties of a soul and how I can measure those properties.

    • @genevieve6446
      @genevieve6446 8 ปีที่แล้ว +44

      This isn't science, and we aren't on a quest for the 'correct answer', so to speak. The whole crux of philosophy is that it tries to answer questions that science cannot answer. If philosophers only talked in terms of things that are proven facts and things that are inside the realm of objectively observable ideas, it wouldn't really be philosophy.

    • @whatthefunction9140
      @whatthefunction9140 8 ปีที่แล้ว +10

      *I agree, deists and theists have crammed god and souls etc into philosophy.* But I for one say it's time for it to go, at least until there is some evidence for it. or to be fair we add every mythical entity that can also not be proven to exist.

    • @whatthefunction9140
      @whatthefunction9140 8 ปีที่แล้ว +18

      these are the properties of a soul?
      This is just basic neuroscience.
      Why use the contentious term "soul" to group these attributes???

    • @SangoProductions213
      @SangoProductions213 8 ปีที่แล้ว +11

      Using those conditions, a novice programmer could create a program with a "soul". You might want to revise that.

  • @akivaweil5066
    @akivaweil5066 5 ปีที่แล้ว +3

    A robot can be a person but a fetus (human) can't?

  • @elis__nbnb
    @elis__nbnb 5 ปีที่แล้ว

    shoutout to all my fellow portuguese speakers, cheers from brazil!! amazing video and series

  • @halfsasquatch
    @halfsasquatch 6 ปีที่แล้ว

    For me, a key deciding point would be if the AI expressed individual wants or preferences outside the original scope of its programming