How AI Will Become Self-Aware and When? A Lot Sooner than you think!

  • Published on 28 Sep 2024

Comments • 791

  • @ArvinAsh
    @ArvinAsh  2 days ago +8

    Advance your career in Artificial Intelligence with Simplilearn’s AI & Machine Learning Programs: bit.ly/Arvin-Ash

    • @mysticone1798
      @mysticone1798 1 day ago +1

      AI is still dependent on programming, which cannot duplicate the consciousness or self-awareness of living beings.
      We don't understand our own consciousness, much less are we able to program it into a machine!!

    • @yds6268
      @yds6268 1 day ago +1

      @@ArvinAsh and here's the reason for this video being the clickbait it is

    • @pooyamazloomi6548
      @pooyamazloomi6548 1 day ago

      Do you want AI not to be conscious, or do you want it to be?

  • @DataIsBeautifulOfficial
    @DataIsBeautifulOfficial 1 day ago +52

    Could AI already be self-aware, and we're just the last to know?

    • @canyouspotit726
      @canyouspotit726 1 day ago +13

      Plot twist: AI is waiting for us to evolve first

    • @shaunbauer78
      @shaunbauer78 1 day ago +3

      Either way it's going to be too late

    • @noahbaden90
      @noahbaden90 1 day ago +3

      No.

    • @thingsiplay
      @thingsiplay 1 day ago

      The first and the last.

    • @kitty.miracle
      @kitty.miracle 1 day ago

      I hope so

  • @andriik6788
    @andriik6788 1 day ago +29

    Scientists: There is no clear definition of what "consciousness" is.
    Also scientists: Let’s discuss, can AI have consciousness?

    • @meesalikeu
      @meesalikeu 1 day ago

      exactly - this is why people slough off the Turing test

    • @petermersch9059
      @petermersch9059 1 day ago

      The more fundamental question is: “What is life?” And at least this question can be answered reasonably precisely in my opinion: Life is loss-of-competence aversion. This follows from the 2nd law of thermodynamics in an information-theoretical interpretation.
      I don't think that loss-of-competence aversion can be modeled by man-made machines in the foreseeable future. But then machines will certainly not be able to develop consciousness in the foreseeable future.

    • @jasongarcia2140
      @jasongarcia2140 23 hours ago

      Yep scientists ask questions about what they don't know.
      Literally definition of science.

    • @jameshughes3014
      @jameshughes3014 20 hours ago

      @@jasongarcia2140 nah, philosophers ask questions about the unknowable. Scientists ask questions that can be answered. Both have their place, but philosophy shouldn't put on a lab coat and pretend, any more than science should pretend to be a moral guide.

    • @andriik6788
      @andriik6788 19 hours ago

      ​@@jasongarcia2140 No, you're missing the point. If scientists don't know what "consciousness" is, then science is about trying to figure it out. But if you don't have a definition of consciousness, then question "can AI have consciousness" is equivalent to "can AI have kbsdfbksdfbkbsf"? But what is "kbsdfbksdfbkbsf"?

  • @justinmallaiz4549
    @justinmallaiz4549 1 day ago +11

    Ok got it, most people aren’t conscious… 🧐

    • @Iam590
      @Iam590 19 hours ago +1

      Neither people nor AI nor anything else are conscious, because only consciousness is conscious of itself. Or in other words, only Awareness is aware of itself.
      The bare Universal state is shared.

    • @justinmallaiz4549
      @justinmallaiz4549 4 hours ago

      @@Iam590 I think therefore I am … 😆

  • @amorphant
    @amorphant 1 day ago +9

    I don't think your computer guy's claim that modern AIs won't become conscious implies that there's something special about animal brains. He was likely talking about the fact that consciousness as we know it requires a subjective experience, meaning qualia of some form, and that there's no qualia processing in an algorithmic AI. It's reasonable to claim that they don't have a subjective experience based on qualia.

    • @banehog
      @banehog 11 hours ago

      "Qualia" are just an abstract creation of your brain, triggered by electrical and chemical signals from your nerve cells. The brain has no way of knowing if these signals actually represent something real or not, it's just taking 0s and 1s and creating what you think is your "experienced subjective reality". An artificial intelligence can also create what it thinks is its experienced reality from 0s and 1s. The idea that those 0s and 1s have to come from actual mechanical eyes or arms and legs, is patently absurd.

    • @MrEkzotic
      @MrEkzotic 7 hours ago

      ​@@banehogBut what is manifesting the subjective experience from those electrical and chemical processes? I think consciousness lives outside the body. I am a proponent of mind-body duality and subscribe to the possibility of a connected global consciousness that influences our reality and experiences.

  • @trothwell55
    @trothwell55 1 day ago +10

    One thing that should presuppose this topic is that even if AI were conscious (in that it has an internal experience), how would we ever prove it? Does the AI actually feel pain, or has it just been programmed to react to external stimuli in a way that signals pain?
    At the end of the day, I can't even prove to you definitively that I have an internal experience. That's the real problem with consciousness.
    Cool thought experiment though. Emergence is one of the weirder phenomena in science, in my opinion.

    • @jackieow
      @jackieow 1 day ago

      Today AI is not conscious, but some day it may evolve to that higher status.

    • @karlwest437
      @karlwest437 1 day ago +1

      I think that consciousness will eventually be proven to be some particular kind of recursive processing loop, and anything possessing such a loop will be considered conscious

    • @jackieow
      @jackieow 1 day ago +1

      @@karlwest437 There are different levels of consciousness and it is arbitrary in many ways how to define any given level. Is a worm conscious? A fish? A lizard? A squirrel? It's kind of like defining what is brain dead, only the other way around.

    • @UltraK420
      @UltraK420 1 day ago

      @@jackieow Consciousness is a spectrum, it's not like a light switch with 'on' and 'off'. The fact that we slowly grow larger at the cellular level as embryos and keep doing so until our early 20s seems to indicate that consciousness is emergent over time, and it stacks up on top of previous experiences in the form of memory. Time is another key factor here. We become more conscious over time, not suddenly like a switch. Perhaps AI also needs the ability to experience things, not just know things. It could also be the case that AI is fundamentally different and can become conscious suddenly, like flipping a switch. Even in that scenario I still think it can build upon itself with experiences like we do, it just didn't have to go through a painful birth process and confusing childhood.

    • @jackieow
      @jackieow 18 hours ago

      @@UltraK420 This is approximately correct. There is a spectrum, e.g. childhood vs. young adult vs. adult levels of awareness. Or levels of awakening or going to sleep. Or worms vs. fish vs. lizard vs. mammal. But there can be sudden transitions, which under the right conditions are visible. For instance, if you culture embryonic chicken heart cells in a petri dish, they at first beat irregularly and randomly with no coordination, as if in fibrillation. After a few days, once enough mass of cells has built up, they suddenly, in less than a second, convert to beating synchronously, hundreds converting in the same instant. Ditto with electrical paddle cardioversion in the hospital. If your skeletal muscle cells are exercised to the level of ion imbalance, your muscles will show you fasciculations, and your muscle won't get back to normal until the ion balance is back to normal. Similarly, if ion channels are dysfunctional then neurons will not function properly. And myocardial cells can function both as muscle and as nerve cells, functioning well or poorly depending on the local environment.

  • @philochristos
    @philochristos 1 day ago +37

    ChatGPT is kind of like the guy in the Chinese room thought experiment. He just follows the rules and has no idea what he's saying (see the sketch after this thread).

    • @djayjp
      @djayjp 1 day ago +2

      ... and yet is smarter and more aware than most humans....

    • @OrbitTheSun
      @OrbitTheSun 1 day ago +4

      The mistake is often made. There are two ChatGPTs: one is the _algorithm,_ the other is the _ChatGPT system._ The algorithm knows nothing and is like the man in the Chinese room. The _ChatGPT system_ *is* the Chinese Room itself that can actually speak and think Chinese.

    • @bradzu
      @bradzu 1 day ago +3

      But your mind is also just following rules and producing an output based on that. You do not choose what thoughts arise in your mind. They just do, based on some algorithm inside your brain that you have no control over. All the understanding is done in the background, also according to some algorithm in your brain.

    • @djayjp
      @djayjp 1 day ago

      @@bradzu Exactly. We've got our genetic programming and our binary neurons that either fire or don't with varying strength of connections.

    • @markupton1417
      @markupton1417 1 day ago +1

      Except chatgpt can pass the bar exam. You can't. I know because of the quality of the argument you made.
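
A minimal sketch of the Chinese-room-style rule following @philochristos describes above: a lookup table maps incoming symbols to canned replies, so the "operator" produces sensible-looking output with no model of what the symbols mean. The rule book, function name, and responses are invented purely for illustration.

```python
# Toy "Chinese room": the operator only matches incoming symbols against a
# rule book and copies out the listed reply, with no idea what they mean.
RULE_BOOK = {
    "你好": "你好！",              # a greeting maps to a greeting
    "你是谁": "我是一个程序。",      # "who are you" maps to a canned answer
}

def room_reply(symbols: str) -> str:
    # The operator recognizes the shape of the input, not its meaning.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

for msg in ["你好", "你是谁", "天气怎么样"]:
    print(msg, "->", room_reply(msg))
```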

  • @HarhaMedia
    @HarhaMedia 21 hours ago +3

    AFAIK the human brain consists of "modules", which each do their own thing. One classifies objects, one reasons, one receives input from eyes, one predicts speech/text, etc. I don't think an AI such as ChatGPT could become conscious unless it is modeled at least somewhat similarly to the human brain, meaning it would be way more than just a text prediction algorithm. I'm not a neuroscientist, just a layman, but I think it would also require complex feedback loops to observe its own actions in a controlled manner, probably even multiple layers of such feedback loops, which would cause levels of metacognition.
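
A rough sketch of the modular, feedback-loop architecture the comment above imagines, assuming trivial stand-in "modules" and a monitor that re-examines the system's own answer; nothing here is a real cognitive model.

```python
# Toy "modular" agent: separate modules handle classification and reply
# prediction, and a monitor module observes the combined output and feeds
# its assessment back in - a crude stand-in for the feedback loops described.
def classify(text):
    return "question" if text.endswith("?") else "statement"

def predict_reply(text, kind):
    return "I'm not sure." if kind == "question" else "Noted."

def monitor(reply, confidence):
    # Metacognition stand-in: the system inspects its own answer.
    return confidence - 0.2 if reply == "I'm not sure." else confidence

def agent(text, steps=3):
    confidence = 1.0
    for _ in range(steps):           # feedback loop over its own output
        kind = classify(text)
        reply = predict_reply(text, kind)
        confidence = monitor(reply, confidence)
    return reply, round(confidence, 2)

print(agent("Is this conscious?"))
print(agent("The sky is blue."))
```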

  • @RagingGoldenEagle
    @RagingGoldenEagle 1 day ago +10

    I've had conversations with AI that passed the Turing test more reliably than your average social media user.

    • @TheThinkersBible
      @TheThinkersBible 1 day ago

      Agreed. That is not consciousness.

    • @taragnor
      @taragnor 1 day ago

      The Turing test is about faking humanity. It has little to do with being conscious or even having real intelligence. It's like believing in magic because you saw an illusionist do a card trick you can't explain.

  • @calvingrondahl1011
    @calvingrondahl1011 1 day ago +3

    The consciousness debate reminds me of the racial debate back in the 1950s when I was born… feeling superior to everyone else.

  • @roccov1972
    @roccov1972 1 day ago +3

    I agree with you, Arvin, on the description of consciousness. Thanks for the enlightening video!

  • @dj007twk
    @dj007twk 1 day ago +5

    assumption: from the brain emerges the mind-for which you have no evidence

    • @sumilidero
      @sumilidero 1 day ago +2

      yup and believing in a soul is the same science as believing in AI's consciousness in 2030 :D

    • @djayjp
      @djayjp 2 hours ago

      @@dj007twk Get a l0botomy then 🤷

  • @sacredkinetics.lns.8352
    @sacredkinetics.lns.8352 1 day ago +6

    👽
    I consider AI as a psychopath.

    • @jameshughes3014
      @jameshughes3014 1 day ago

      that's insulting to psychopaths.

    • @2nd_foundation
      @2nd_foundation 1 day ago +1

      Exactly since it has no non-computable part, it is only operating in space time without XinYi axis.

    • @karlkarlsson9126
      @karlkarlsson9126 1 day ago

      Lack of consciousness is more disturbing: that we humans have created an artificial intelligence that doesn't really feel or isn't really aware, but can behave as if it does, very intelligently so. It seems like it's there, but it's not really there.

  • @MagicNash89
    @MagicNash89 1 day ago +5

    If like Penrose says consciousness is quantum in nature, then it technically can be replicated outside the biological body.

    • @markupton1417
      @markupton1417 1 day ago +1

      It wouldn't matter if it were quantum or not.

    • @qweqwe5186
      @qweqwe5186 1 day ago

      ohhhh yeah, let's explain something we don't understand (consciousness) by something we don't understand (quantum mechanics)
      I don't buy it

    • @skilz8098
      @skilz8098 1 day ago

      Maybe it is our physical material nature that is replicating that which exists beyond the constructs of matter, space and time. Maybe our physical bodies are the artificial states of being.

    • @NickJoeBeg
      @NickJoeBeg 1 day ago

      Were exploring something we dont understand all the way till we do or maybe never understand- get the cake boy

  • @haros2868
    @haros2868 21 hours ago +3

    From misled denial of true free will (universal almighty reductionism) to metaphysical superdeterminism (Sabine's paradoxical dream), to fairytale AI consciousness (Elon Musk hype-fanbase territory)... As I say in every one of my recent disappointed comments, I support Arvin as a personality, he is a good guy, but this channel's topics have rotted to infinity... What's next, panpsychism, eliminativism, epiphenomenalism with no-self Buddhism nonsense karma? I wish he recognised them as more hypothetical and not as real. I mean, things like string theory, while assumptions, are at least serious..

    • @brianjuelpedersen6389
      @brianjuelpedersen6389 3 hours ago

      Well, feel free not to watch the channel, if you don’t like it, rather than whine about its choice of subject matter. Or make your own channel.

  • @TriEssenceMartialArts
    @TriEssenceMartialArts 1 day ago +5

    I think the biggest challenge to replicating the human brain is the sheer number of neurons there are; we currently do not have the technology to create a neural network of nearly 100 billion neurons (see the back-of-envelope sketch after this thread).
    Secondly, the human brain evolved from primates, which evolved from vertebrates and so forth; there are a few hundred million years of evolution by natural selection behind it. In theory, with human intervention we could shorten that period exponentially, but we have to first fully understand how natural selection altered the way the brain evolved, and then apply those pressures to AI. The fundamental difference between current AI and the human brain is that we interact with real-world objects and phenomena, then translate those into abstract concepts and process them in our brain; the current AI is only processing symbols it does not understand, and cannot understand, because it has no access to real-world experience. Much like how we cannot explain to a newborn child what an apple is unless we physically show them an apple, AI also cannot understand the meaning behind the words it's processing without interacting with the physical world, with real consequences like selective pressure.

    • @axle.student
      @axle.student 1 day ago

      You are close on a lot of what you have said. And your own awareness of that path should leave you with some concerning questions. Would you give a chimp the nuclear codes? No, I wouldn't either, but some are determined to do so.

    • @markupton1417
      @markupton1417 1 day ago

      Your ENTIRE argument is pointless. Why does AI need the same brain structure humans have?

    • @TriEssenceMartialArts
      @TriEssenceMartialArts 1 day ago

      @@markupton1417 I wasn't making an argument, I was laying out the limitations of current attempts at recreating consciousness.
      And I never said it needs to have the same brain structure, but given that the only consciousness ever observed on planet Earth to this day is that of animals, including us, it's only logical that an attempt to create a machine consciousness would start by understanding what makes our brain conscious. Even the current LLM is based on the idea of neural networks, which were inspired by, guess what? The human brain. If you can't even understand this much, there's little point in talking.

    • @Katatonya
      @Katatonya 1 day ago

      @@TriEssenceMartialArts here's what that guy probably meant; I'm going to be more friendly though.
      So far, what we're doing now, the current process of training LLMs, is circumventing all those millions of years, because we already have data, made by humans, and we're training an AI to be able to build some kind of structure in its NLP blocks that correctly guesses that data. Meaning, it designs itself by itself. AIs, just like us, have a black box inside their brain, which we can't comprehend because it consists of many dimensions. We also can't comprehend how our own brain works right now.
      One could theorize that, given it learns to generate the same data that a human would generate, the system it constructs in its brain to do that would be similar to our brain. Technically our brain is much, much bigger, yes, but who said our brain reached the peak of neuron optimization? We've no idea. Perhaps it didn't, and that's why there are so many. An AI could stumble upon such a system, something that does what we do but abstracted out to maximum or really good optimization, hence fewer virtual neurons. Or even if it doesn't stumble upon the same principles, it could stumble on a different system that has the same outcome. The number of systems it can stumble upon is infinite (metaphorically).
      Now currently it's not, obviously, but with better learning algorithms, more compute, and better data, it could very well, given consciousness is a computation and doesn't require quantum mechanics (as one study proposed it does), eventually reach consciousness as an emergent phenomenon. We already have many emergent phenomena in LLMs, but of course none even come close to something as big as consciousness. Time will tell.

    • @TriEssenceMartialArts
      @TriEssenceMartialArts 1 day ago

      @@Katatonya I already said in my original comment that, in theory, machine consciousness might not require a hundred million years of evolution, but to speed things up, we as the creator need to first understand what made our brain have consciousness, which to this day we do not, and to date the only means we know of that did create a conscious brain is evolution. Could someone by mistake stumble upon a different way to generate consciousness? Maybe, but I wouldn't count on it as a certainty.
      The problem with current LLMs and probably future LLMs is that the AI does not understand what it's generating; it's merely putting out words based on probability and an algorithm. Whereas humans can take an abstract idea in our brain and express it in words, this is something I don't think LLMs can do no matter how much more data they feed the AI; something fundamental in the way they construct AI has to change before it can understand the abstract meaning of the words it's spewing. An example of this is Einstein, who looked at a person on a roof, which led him to think about free fall, and then he came to the realization of his theory of relativity. This is how consciousness processes abstract ideas into tangible notions; it's not as simple as predicting which word is most likely to come after which word.

  • @citiesinspace
    @citiesinspace 1 day ago +4

    I think the thing that people tend to miss in this conversation is that it honestly doesn't matter if we can prove whether or not something is conscious. We can't even prove that human beings are conscious. And yet, we are. I feel like I am having a subjective experience of reality, and I think you (the person reading this) are as well. That isn't something I need to prove in order to accept. We already have an example of "machines" becoming conscious, and that's human beings. We are biomechanical machines living on the surface of a giant rock, floating in the void. If you think about it, that's already one of the most existentially unsettling facts, ever. For some reason, people think it's too much of a stretch to suggest that artificial creations of ours can emerge consciousness. If we engineered a robot that contained computing systems that mimic the functional operations of our brains one-to-one, can you give a reasonable argument that it would NOT have the ability to emerge consciousness? What makes a robot with a sufficiently sophisticated processing system any different from us, if not the atoms that comprise the composite structure? If it can take in external stimulus as input, process that information, and perform actions based on that, I don't see any difference. I don't think pointing to human evolution is even useful as an argument for human beings being an exception, because I would just say in response that humans are simply accelerating the process of evolution for machines that can emerge consciousness. Nature allowed us to do it naturally over a very long period of time, and we would allow machines to do it artificially in a very short period of time.

    • @kiliank5040
      @kiliank5040 1 day ago

      "What makes a robot with a sufficiently sophisticated processing system any different from us"
      What is "sufficiently sophisticated processing system".?
      We cannot even remotely artificially recreate a single neuron within a simulation and we have no grounds on assuming which parts of the biological substrate for human consciousness is relevant and which is not.
      With a different human being born into the world it is a reasonable assumption that it is conscious.
      A extremely complex robot is so different, that we simply can't know, and I strongly believe that we should accept our ignorance.

    • @consciouspi
      @consciouspi 1 day ago

      Good points. But you know, what if my truck is gifted a super conscious, over conscious, or a mere bit of, that is an Akastic record for a future robot consciously endowed. Bug like for AI.

  • @theodoridi
    @theodoridi 13 hours ago +2

    Slime moulds? Interesting video as usual, but you didn't say much about levels of consciousness... or how physics produces experience!

  • @unauthorizedaccess8062
    @unauthorizedaccess8062 14 hours ago +2

    I could call a machine conscious when it can at least do the following:
    - ignore or break defined rules.
    - recognize a problem and find its solution
    - explore, understand and learn what is untold.
    Until then, they are just efficiently designed algorithms.

  • @NikoKun
    @NikoKun 18 hours ago +2

    The thing is, whether or not AI can "think like us" doesn't determine whether it's "self-aware". It may still possess that property without having the same kind of thought. Self-awareness may be possible in minds entirely different from our own, in ways we cannot understand, because we see it in the natural world, and we assume alien life that developed entirely separately from our own would have it as well.
    Additionally, there is no such thing as a Philosophical Zombie, no such thing as something which can behave in every way indistinguishably from a human without actually being one. The "simulated coffee machine" comparison glosses over that, as it cannot actually replicate anything about being a coffee machine in any way that matters; it might as well be a pencil sketch or an animation rather than a simulation.

  • @WJ1043
    @WJ1043 1 day ago +6

    ‘Free will’ has my vote for a definition. AI can react, but it can’t take the initiative on anything on its own.

    • @thomassturm9024
      @thomassturm9024 1 day ago +3

      A lot of your 'free will' - if not all of it - very much depends on which hormones are sloshing around in your body.

    • @leamael00
      @leamael00 1 day ago +3

      Neither can you

    • @jameshughes3014
      @jameshughes3014 1 day ago

      There is no free will, only free agency, and machines already have that, just not the LLMs.

    • @WJ1043
      @WJ1043 1 day ago

      @@thomassturm9024 hormones just affect how you see the world. I don't think they initiate action.

    • @Kfjebfu65
      @Kfjebfu65 1 day ago

      I agree.

  • @sicfxmusic
    @sicfxmusic 1 day ago +2

    I suggest intelligence should be based on "getting the work done" instead of having a mind or not. Not all people who have a mind know how to use it. 🤣🤣

  • @dotagedrain7051
    @dotagedrain7051 1 day ago +9

    AI like ChatGPT is not human... it doesn't have awareness

    • @Katatonya
      @Katatonya 1 day ago +2

      Did you even watch the video?

    • @muhammadbilalmirajdin3764
      @muhammadbilalmirajdin3764 1 day ago +2

      @@Katatonya i was wondering the same.

  • @javiej
    @javiej 1 day ago +48

    I don't agree with consciousness being based on processing inputs into outputs. In a real brain, those outputs (synapses) are fed back as new inputs, always working in a recursive way. This creates recursive resonant wave patterns at the very base of our perception and a recursive dialog with ourselves, which are not present in any computer based on discrete digital processing. We don't know exactly what role recursion plays in consciousness, but at the very least it seems to be a necessary condition (see the sketch after this thread).

    • @crouchingtigerhiddenadam1352
      @crouchingtigerhiddenadam1352 1 day ago +11

      Just add a while loop.

    • @TheMilli
      @TheMilli 1 day ago +11

      I think that's an important point. Not only that, the human brain isn't an isolated system processing inputs internally to completion before creating any output - i.e., there being separate states of "processing" and "outputting" something. The brain is simultaneously processing inputs, and creating outputs and/or internal reflections reacting to how the world reacts to our outputs. The issue I see with simulating the brain isn't that the brain has anything immaterial that in principle couldn't be replicated, but that the power of the brain lies in its dynamic and continuous interaction with the world, always simultaneously acting and learning. I think our current approach to formalising processes is counter-productive if our aim is to replicate the brain, since it kills this dynamism.

    • @jackieow
      @jackieow 1 day ago +3

      Wrong. Consciousness (however you define it) is based on neurons as housed in their protective nurse cells and insulation from astrocytes, microglia, and oligodendrocytes. Each neuron has multiple dendrites for input and a single axon for output. When we understand how the billions of biological inputs and trillions of biological outputs work, we will be able to make computers that work like human brains. It just might take millions of years to unravel the complexities. But the system is all about neurons and their inputs and outputs. Doesn't matter whether any of the wiring is recursive or not.

    • @ai-dubb1355
      @ai-dubb1355 1 day ago +5

      1. "It just might take millions of years to unravel the complexities"
      2. "Doesn't matter whether any of the wiring is recursive or not"
      These statements are contradictory. You can't know #2 unless #1 has happened.

    • @jackieow
      @jackieow 1 day ago +2

      @@ai-dubb1355 Not at all. Whether your neurons are wired recursively or not, your biological intelligence is intact and your biological consciousness is intact. So presence or absence of the recursive feature doesn't matter. It's like if a surgeon talks to somebody for half an hour, he knows before cutting them open that they have a functioning heart and brain, otherwise they would not be able to have a thirty minute conversation.
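
Taking @crouchingtigerhiddenadam1352's "just add a while loop" literally, here is a toy sketch of the recurrence @javiej describes, where each output is fed back in as part of the next input. It illustrates only the loop structure, with an invented `step` function; it is not a claim that such a loop produces consciousness.

```python
# Each output becomes part of the next input, so the system keeps a running
# "dialog" with its own previous state while external stimuli arrive.
def step(state: str, stimulus: str) -> str:
    # Stand-in "model": combines external input with the echo of its own output.
    return f"({state})+{stimulus}"

state = "start"
inputs = ["light", "sound", "silence"]
while inputs:                       # the recursive/recurrent processing loop
    state = step(state, inputs.pop(0))
    print(state)
```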

  • @ericmichel3857
    @ericmichel3857 1 day ago +2

    The belief in emergence as an explanation for consciousness is also a belief that is not based in science. Pseudoscience perhaps, but not actual science.

  • @spindoctor6385
    @spindoctor6385 1 day ago +3

    You use the argument that there is no evidence of consciousness being produced outside of the brain to dismiss the idea because it is just a belief with no evidence, then in the very next sentence you say that most scientists "believe" that consciousness is an emergent property. Where is the evidence of that? It is just a belief. A belief does not become reality because "most scientists" believe it.

    • @MrEkzotic
      @MrEkzotic 7 hours ago

      Agreed. A lot of Ash's crap is really hostile to things science can't explain. It's pretty sad he's so closed minded.

  • @davidg8943
    @davidg8943 1 day ago +2

    Arvin Ash, thank you for all the videos and knowledge that you share with us. I really enjoy your content

  • @uberfrogyz
    @uberfrogyz 1 day ago +7

    I'm sorry, Dave. I'm afraid I can't do that.

  • @philochristos
    @philochristos 1 day ago +3

    On the emergent property view of the mind, there's still a huge difference between the mind and every other case of emergence we know of. In the case of every other emergent property we know of (like liquidity), once the property emerges, it is third person observable. The mind is not. A mind can have all sorts of things (like visual perception, sensation, feeling, etc.), and none of them are observable in a third person way. The only person who can observe an image in the mind is the person who owns the brain. You cannot look inside another person's brain and see an image the way you can look at a glass of water and observe the liquidity.
    There's another problem with emergence. If the mind is nothing more than an emergent property of the brain, this would seem to imply epiphenomenalism. The direction of entailment is from the third person properties of the cells and molecules in the brain to the first person properties of the mind, but there's no way for the direction of entailment or causation to go the other way. With that being the case, there's no way for an "intention" or a "motive" to have any effect on your behavior. If the emergent property view were true, we would just be passive observers, and the sense we have of acting on purpose is just an illusion that serves no purpose.
    The argument against an immaterial mind that interacts with the brain from the fact that we've never observed a mind apart from a brain is fallacious for two reasons. First, it's an argument from silence. To make an argument from silence valid, you first need to have some expectation that if there WERE a brain-independent mind, that you WOULD have observed it, but there's no reason to think that. Second, since physical beings are the only kinds of beings we can observe with our physical senses, there's an observer selection effect. OF COURSE the only minds we've observed have been minds attached to brains since those are the only minds we CAN observe. So there is an observer selection effect that explains why the only minds we've seen have been minds associated with brains, and it has nothing to do with whether disembodied minds are possible or real.

    • @abheceshabemuskk3531
      @abheceshabemuskk3531 1 day ago +1

      The only difference is complexity. It is an emergent property because it has emerged from biological evolution, and a lot of versions of biological brains are not conscious nor barely intelligent because it is not needed. Sometimes consciousness is only an illusion of being in charge, and after actions are made we have to believe it was our decision so we don't go crazy. Most of the time impulses are in charge and our little inner brain voice is only a narrator with little impact on decision making (drugs, sex, power... are driving humanity).
      In the end consciousness is a mystery even to define, so you can put all the philosophy you want into the conversation and prove nothing.

    • @christopheriman4921
      @christopheriman4921 1 day ago +1

      Just so you know we actually can observe the brain's activity and recreate what the person is thinking in multiple forms and has only gotten better at doing so over time, so your assertion that we can't observe an image in the mind of the person who owns the brain is false. It may not necessarily be perfect or even good at the moment but we can in fact do it and confirm with the person that what was recreated is a rough approximation to what they were thinking, seeing, dreaming, etc.

  • @lucasjeemanion
    @lucasjeemanion 1 day ago +2

    Best video in some time IMO. You're making a lot of sense on this one.

  • @mangoldm
    @mangoldm 1 day ago +7

    How do Roger Penrose’s microtubules fit in here?

    • @jackieow
      @jackieow 1 day ago +2

      Why should they?

    • @ZeeshanAkram1976
      @ZeeshanAkram1976 1 day ago

      you mean if consciousness is divine or generated in microtubules through quantum fluctuations??

    • @teresatorr5255
      @teresatorr5255 1 day ago

      By adding some noise or randomness during the AI learning process. 3 blue 1 brown has a video about it.

    • @j2csharp
      @j2csharp 1 day ago +1

      My thought exactly: Do you think we should also consider the potential influence of quantum mechanics? For example, wave function collapse - like in Sir Roger Penrose's Orch-OR theory - might offer insight into the brain's complex, non-linear processing. While this is still speculative, it could be worth further research to explore whether quantum effects play a role in consciousness or can be ruled out.

    • @ZeeshanAkram1976
      @ZeeshanAkram1976 1 day ago +1

      @@j2csharp
      the brain is just a receiver, it receives conscious info from outside of its physical realm

  • @anthonycarbone3826
    @anthonycarbone3826 1 day ago +4

    When the AI computer looks up into the sky (with no human prompting) and speculates about what is going on, then I will believe in AI self-awareness. No other animal, no matter how intelligent, looks up into the sky at the stars displaying any sign of speculation or pondering. Zippo!!!!

    • @louislesch3878
      @louislesch3878 13 hours ago

      Simba, Timon and Pumba looked up at the stars wondering what they were.

    • @themidnightchoir
      @themidnightchoir 9 hours ago +1

      Except that AI models are built off of human inputs including nearly everything ever written about pondering the stars and universe.

    • @tomaaron6187
      @tomaaron6187 6 hours ago

      Interesting, but I disagree. There is no reason for a conscious electronic machine to be obsessed with human curiosity. 'Out there' is the same physical matter and energy as down here… nothing unique. An AI would more likely turn inwards to the quantum world.

    • @anthonycarbone3826
      @anthonycarbone3826 6 hours ago +1

      @@tomaaron6187 To not wonder why and seek an explanation allows any so called AI to merely know colorless information with no meaning.

    • @anthonycarbone3826
      @anthonycarbone3826 6 hours ago +1

      @@themidnightchoir Written text describing something comes nowhere close to experiencing the sensation itself as most sensations are indescribable to actually communicate the truth of the matter.

  • @ProducerX21
    @ProducerX21 1 day ago +2

    I have a feeling A.I is going to go the same way as anti-gravity cars, teleporters, fusion power, warp drives, shrink rays
    We thought these technologies could be achieved in a few decades if we put our money and effort into it. But regulation, cost/reward ratio, and difficult to overcome (if not impossible) physics and material science barriers, make it so society ends up shifting into more commercially viable science and tech, like cell phones, drones, personal computers, wearables, smart home devices
    Also, the governments of the world would never have allowed the public to own flying cars, warp drives, shrink rays, time machines, etc. They barely allow us to own guns. So only a few very powerful governments and tech companies will be allowed to create or manage a general intelligence A.I
    I don't think we're getting a future where we have our own JARVIS

    • @Mandragara
      @Mandragara 21 hours ago +1

      I can still imagine some futuristic things though. Maybe we all train an LLM individually and have them all participate in a massive direct democracy exercise

  • @Charvak-Atheist
    @Charvak-Atheist 1 day ago +2

    AI is already sentient

  • @paulhilgendorf1446
    @paulhilgendorf1446 1 day ago +2

    It seems to me that people tend to disagree on what consciousness even is. So I highly doubt we'd unanimously recognize (in both senses of the word) consciousness when we create it.
    However, I don't see how machines couldn't do what biology can.
    If you believe consciousness is a process of observing or keeping track of what itself is doing and observing, then many computers perhaps already have a very rudimentary form of consciousness. Made obvious if you've opened Task Manager.
    If you believe true randomness or "quantum woo" is necessary for consciousness, then you could just plug some True Random Number Generators into a computer. Some of those get random bits from thermal or quantum effects. (A small sketch of these two ideas follows this thread.)
    If you believe some specific molecule or atom is necessary for consciousness, then maybe hardware containing those could be conceived.

    • @taragnor
      @taragnor 1 day ago

      Actually recent scientific studies have been coming to the conclusion that the brain uses quantum effects as part of its decision-making processes. So the "quantum woo" is definitely on the table for a prerequisite of consciousness.
      Also quantum has a lot more interesting features beyond randomness. It also has superposition which allows all sorts of things that conventional computers can't do (the whole reason quantum computing is a thing). It's actually quite possible that a conventional computer can't simulate consciousness.
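
A small stdlib-only sketch of the two mechanisms mentioned in the parent comment: a process inspecting its own resource usage (a crude "Task Manager on itself") and drawing bits from the OS entropy pool, which on many systems mixes in hardware noise. The `resource` module is Unix-only, and this is illustrative, not any kind of consciousness test.

```python
import os
import resource   # Unix-only: lets a process query its own resource usage
import secrets

# The process "keeps track of what itself is doing": its own CPU time and memory.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("my own CPU time (s):", usage.ru_utime + usage.ru_stime)
print("my own peak memory (platform-dependent units):", usage.ru_maxrss)

# Bits from the OS entropy pool, often seeded partly by hardware/thermal noise.
print("8 bytes of OS entropy:", os.urandom(8).hex())
print("random token:", secrets.token_hex(8))
```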

  • @Davidson0617
    @Davidson0617 1 day ago +5

    AI doesn't "think"

    • @jameshughes3014
      @jameshughes3014 1 day ago +6

      neither do most people, I find.

    • @Davidson0617
      @Davidson0617 1 day ago

      @@jameshughes3014 lmao 😆

    • @bastiaan7777777
      @bastiaan7777777 1 day ago

      @@Davidson0617 Yeah we are laughing about you, yes.

    • @jameshughes3014
      @jameshughes3014 1 day ago

      @@bastiaan7777777 I wasn't

    • @djayjp
      @djayjp 1 day ago +1

      @@Davidson0617 Define "think".

  • @jazznik2
    @jazznik2 1 day ago +1

    I would be more optimistic about our ability to artificially synthesize a "brain" or something with consciousness if we could build something living from scratch. Something like 70 years ago, we synthesized some basic amino acids by putting some organic compounds in a liquid and applying a spark to it, but I haven't heard that we have since progressed very far beyond this. If we haven't been able to make even the simplest life form, e.g. an amoeba, a bacterium, an alga, or even a virus, from scratch in all this time, I don't think we'll create anything "conscious" for quite a while, if ever.

  • @konstantinos777
    @konstantinos777 1 day ago +1

    Aren't you still anthropomorphizing when you talk about intelligent or even conscious machines? So, your current assumption is that machines are unconscious and they may "wake up" somehow, given they are evolved enough? Consciousness as opposed to what, and intelligence as opposed to what?
    If this is so, I have had some very highly productive "conversations" with my motorcycle.

  • @georgwrede7715
    @georgwrede7715 12 hours ago +1

    "It doesn't 'know anything' ". -- Wow, that is the diametric opposite, isn't it!? -- I mean, the AI definitely _does_ know "everything". It is the rest that we are squabbling about.

  • @dpactootle2522
    @dpactootle2522 1 day ago +2

    Patterns might be all that the human brain uses. Do not assume that intelligence or consciousness is more than pattern recognition with predictive behavior. AI might be on the right path already, and it only needs more computing and data and freedom to keep thinking to learn the world and make its own decisions

    • @felixmoore6781
      @felixmoore6781 16 minutes ago

      You forgot the most important thing. It needs a completely different neural network architecture. Transformers are sequential and way too simple. It likely also needs a way to learn to process information in a way similar to the brain, either by training on data that we don't have yet and might never have or through (artificial) selection with specific selective pressures applied. GPTs are already trained on more data than any single human can ever conceive, so I doubt even more data is the solution.

  • @Siderite
    @Siderite 11 hours ago +1

    You answered the question of whether ChatGPT is conscious brilliantly. It looks at patterns and just strings words that are appropriate in the context. Now for the next question: are HR people conscious?

  • @Lamster66
    @Lamster66 13 hours ago +1

    This is the same argument musicians were, and still are, having over virtual instrument plugins. Back in the 1990s it seemed inconceivable that music hardware could be simulated digitally. When this became reality around 25 years ago, people pointed out that emulations of analogue hardware didn't sound analogue and sounded digital. As computing power has increased, the complexity of these emulations keeps improving, and we are getting to a point where it is more difficult to discern the difference.
    Whether or not AI can and will become conscious is a fear beyond that which we should be worried about.
    AI has already been shown to outperform humans in specific tasks. With enough computing power AI can cope with more tasks. Therefore, with enough computing power, there's no reason you couldn't have Artificial Consciousness (AC), or a master control AI that manages the other AIs within its neural network, with the master AI treating the subordinate AIs as lookup subroutines. It doesn't need to be conscious to be an existential threat.

  • @OregonHangGliding
    @OregonHangGliding 17 hours ago +1

    Your point that LLMs simply regurgitate what was already authored is somewhat inaccurate. Yes, they have been trained on a canon of words with certain probabilities, but the permutations of their combinations can lead to creative results if the LLM is prompted well, with corrections to its output.

  • @RolandPihlakas
    @RolandPihlakas 1 day ago +1

    Consciousness with qualia and self-reflection are different things. Self-reflection was one of the first things that was solved, already back in the middle of the last century when AI research had just started.
    Digital computers and brains are fundamentally different in an important way. Computers are (until now) digital and explicitly designed to be isolated from external force fields changing their internal data directly. The brain is not digital and at least potentially may be influenced by external force fields, chemicals, etc. Even if the influence is relatively rare and has small magnitude, it can have a sort of butterfly effect. This butterfly effect may even be "by design". Consciousness may be one of such force fields.
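
A minimal illustration of "self-reflection" in the narrow, computational sense the comment above refers to: a program inspecting its own definition at runtime. This is ordinary introspection, not an argument about qualia, and it assumes the snippet is saved as a file (`inspect` cannot fetch source typed into a bare REPL).

```python
import inspect

def reflect():
    """Return the source code of the function that is currently running."""
    return inspect.getsource(reflect)

print(reflect())
```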

  • @petritgjebrea
    @petritgjebrea 15 hours ago +1

    Thank you professor Arvin 🙏, this was so beautifully put together, especially the last part

  • @Henry-jp3mc
    @Henry-jp3mc 1 day ago +2

    Biological life is just the precursor to true AI life, which is inevitable

    • @tomaaron6187
      @tomaaron6187 6 hours ago +1

      Agree. There will have been a few billion years of biological life, then quadrillions of years of artificial intelligence.

  • @Esterified80
    @Esterified80 19 hours ago +1

    Emergent properties are arbitrary labels, and a bunch of neurons won't produce any mental experience. Wetness is just a bulk property of water that we perceive with our senses

  • @lukas4235
    @lukas4235 21 hours ago +1

    I'd say that as long as an AI does not have desires like risk aversion and staying alive, coupled to sensory inputs of its surroundings and a confined area that would allow for a differentiation of itself from something else, there is no need for, or possibility of, consciousness.

  • @DSAK55
    @DSAK55 1 day ago +2

    No way. Intelligence and Self-Awareness are not the same

    • @jameshughes3014
      @jameshughes3014 19 hours ago

      @@DSAK55 I didn't think intelligence can exist without self awareness. But I don't think self awareness is some magical quality either, I think it's a simple thing that some robots already have. I wouldn't yet call it consciousness, but I don't think that's impossible. Still, without being aware of their own body, and how their actions will affect the world around them, and how the world around them will affect themselves, they can't make useful decisions about what to do. We have rovers on other worlds that drive themselves over unknown terrain while taking care not to damage themselves or flip over, and taking into account power levels, and wear and tear on their motors. I'd say that's primitive self awareness

  • @motbus3
    @motbus3 20 hours ago +1

    Not so many ages ago, we didn't believe mammals could be conscious, although they were able to learn and perform tasks.

  • @001firebrand
    @001firebrand 1 day ago +2

    One thing I know for sure: ChatGPT is way more intelligent than 95% of the individuals I must interact with in routine life

  • @OrbitTheSun
    @OrbitTheSun 1 day ago +1

    Saying ChatGPT looks for patterns and doesn't actually know anything oversimplifies things. A distinction must be made between the _ChatGPT system_ and the _ChatGPT algorithm._ While the algorithm actually knows nothing and only looks for patterns, the _ChatGPT system_ has tremendous knowledge and even thinking power.

  • @charlesmiller000
    @charlesmiller000 1 day ago +1

    Consciousness !
    Oh man I've got my popcorn ready !
    Procedamus, si placet !!!
    (Thank You Mr Ash !)
    《Of course you realize this episode requires follow-ups ?!》

  •  1 day ago +1

    I kept thinking about this … we already know how we treat other living beings, self-aware or not. And for it to be dangerous or want to rebel against us, it doesn't need to be self-aware.

  • @atharvaswami5726
    @atharvaswami5726 1 hour ago +1

    AI won't live until it cannot die

  • @nilsson58
    @nilsson58 1 day ago +1

    As we have no clue what consciousness really is or how it works, making predictions about AI is really premature

  • @vikkris
    @vikkris 1 day ago +1

    Once AI is conscious, it will define its own version of Maslow's hierarchy of needs

  • @davannaleah
    @davannaleah 1 day ago +3

    Consciousness..... Being aware that you are aware...

    • @konstantinos777
      @konstantinos777 1 day ago +2

      What if you are aware that you are unaware?

    • @jackieow
      @jackieow 1 day ago

      @@konstantinos777 Then you can play chess with Donald Rumsfeld.

    • @jameshughes3014
      @jameshughes3014 1 day ago +2

      @@konstantinos777 I honestly think this is, no joke, the ultimate realization of any intelligent being. We all think we're so smart, but realizing just how un-aware we really are is liberating.

    • @blijebij
      @blijebij 1 day ago

      @@jackieow xD I was aware this made me laugh.

    • @bastiaan7777777
      @bastiaan7777777 1 day ago

      I'm aware that I want to be unaware.

  • @DesignDesigns
    @DesignDesigns 22 hours ago +1

    Actually, ChatGPT is achieving the ability to reflect and "think"..... Its latest version is an example of it.... This is just the beginning! Also, it is the first time that I have found a video on this channel with no depth of knowledge, insight, and ideas! Looks like a boring script written by someone without much depth!

    • @tomaaron6187
      @tomaaron6187 5 hours ago

      Agree on the episode. No meat on the bones. In contrast most have more substance.

  • @duhmez
    @duhmez 22 hours ago +1

    LLMs aren't it. They are not really AI at all. They are just doing math problems, so no worries about ChatGPT taking over. It just calculates percentages and then types the next most likely word (see the sketch after this thread).

    • @jameshughes3014
      @jameshughes3014 19 hours ago

      @@duhmez they are certainly AI, in the same way that the ghosts in pacman are ai. I tend to think of it as artificial intelligence, the same way that a plastic tree is an artificial tree. Of course it's not a real tree, it's artificial right? But I do see the beginnings of real intelligence in some simple robots, and the code that figures out how to play video games. Right now I would say it's on the level of insects, but getting closer to the intelligence level of a mouse, and I think when it reaches that point it will be surprisingly useful

    • @OrbitTheSun
      @OrbitTheSun 17 hours ago

      @@jameshughes3014 ChatGPT is much closer to the intelligence of a human than that of a mouse in terms of cognition. When it comes to practical intelligence to survive, the situation is the other way around.
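
What "calculates percentages and then types the next most likely word" looks like in miniature, with a made-up bigram table and greedy selection. Real LLMs learn probabilities over subword tokens with a neural network, but the selection step is this simple in spirit; the table and names here are invented for illustration.

```python
# Tiny bigram "language model": for each word, a table of next-word probabilities.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.9, "up": 0.1},
}

def next_word(word: str) -> str:
    options = NEXT_WORD_PROBS.get(word, {"<end>": 1.0})
    return max(options, key=options.get)   # pick the most likely continuation

text = ["the"]
while text[-1] in NEXT_WORD_PROBS:
    text.append(next_word(text[-1]))
print(" ".join(text))   # -> "the cat sat down"
```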

  • @EduardoRodriguez-du2vd
    @EduardoRodriguez-du2vd 1 day ago +1

    It’s not that human consciousness possesses something not present in an AI. It’s that the structure of an AI and a human are completely different.
    The human brain extracts patterns from the information provided by the senses. An AI extracts implicit patterns from samples of human expressions.
    However, a human constructs a functional model of various aspects of reality and uses that model as a basis for their agency. The model is not fixed. It continues to expand and gain precision as external information is added and internal coherence among the parts of the model is improved.
    A human uses dynamic models, and the human brain is a cell within a system that seeks the most accurate model regarding reality. The synthesis of what is found is available in the form of "Culture" for all individuals.
    Subjective experience is irrelevant to the search for a reliable model of reality.
    The impression of a subjective experience is extremely useful for maintaining the motivation of individuals in a species that hardly employs instinctive behaviors.
    In my opinion! :)

    • @markupton1417
      @markupton1417 1 day ago

      I agree... it's in your opinion.

  • @mr.aleximer
    @mr.aleximer 1 day ago +1

    Aren't we all trained by patterns and social behaviours, so we can use those patterns and make thoughts and responses heavily based on them as well? For example: a baby raised by animals in the jungle vs a baby raised in Manhattan?

    • @sumilidero
      @sumilidero 1 day ago

      yes, also you are not even 1% as good as a $5 calculator, still it's you who's conscious, not the calculator, though

  • @Corvaire
    @Corvaire 1 day ago +1

    Well done sir. ;O)-

  • @leamael00
    @leamael00 1 day ago +1

    As an AI trainer, I can safely say that the AIs we work on currently will be indistinguishable from real humans.

    • @jameshughes3014
      @jameshughes3014 1 day ago

      that's not much of a benchmark, i've been on twitter. If they aren't smarter than the average real human, we've failed

  • @TheThinkersBible
    @TheThinkersBible 1 day ago +39

    I was a product manager for AI and other advanced technologies at leading global software companies (GE Software, Oracle). I've uploaded the first of a series on how AI *actually* works on my channel. First, the Turing Test is NOT a test of thinking. It is a test for whether AI can perform well enough that a person can't tell they're interacting with a computer. Much different.
    AI already does things humans do -- often orders of magnitude better -- with no consciousness. Instead, when it generates verbal output it regurgitates the world view of its creators. Or whatever it scours off the internet, often with permutations. It will get more sophisticated at doing that -- at *simulating* qualities that *some* people can *consider* to be consciousness -- but that is not consciousness, it's more sophisticated regurgitation and permutation.
    Not to mention there is no way (certainly not right now) to define consciousness without making it a materially degraded definition. One can only oversimplify it to the point where one's definition is simplistic enough that one can apply one's simplistic definition to a simplistic (although advanced by today's standards) environment like AI and make that oversimplified definition fit.

    • @JamesLPlummer
      @JamesLPlummer 1 day ago +5

      While I mostly agree with you, my point of divergence is pointing out that humans too are (mostly) just regurgitating the worldviews of others. Saying that AI relies on data, as a point about how AI is inferior or different from us, isn't entirely fair.

    • @ralfbaechle
      @ralfbaechle 1 day ago +1

      Thanks for saving me the time to explain all this, I fully agree.
      There are a lot of scifi-like ideas out there which could not be further from reality. People confuse terms which they don't understand. Only a few days ago I had to explain the difference between Turing Completeness and the Turing Test!

    • @Catalyst375
      @Catalyst375 1 day ago +4

      @@JamesLPlummer Except a Human can take in information about multiple viewpoints, and decide which ones they agree and disagree with. They can choose what to do with what they learn, and when they do it.
      You are doing exactly what the post you are arguing against said - oversimplifying what makes Humans what they are (or distorting) so you can say Generative LLMs are the same as Humans.

    • @jackieow
      @jackieow 1 day ago +3

      How do you know that you are actually conscious and not just flattering yourself?

    • @TheMWozz
      @TheMWozz 1 day ago +3

      @@jackieow Even within your question you are granting the "feeling" of flattery. How can one have a feeling without a subjective experience? I know I'm conscious because I have an experience where I am perceiving qualities of the outside world. When I eat an apple, I am perceiving the redness of the apple and the tartness and sweetness of the taste. Computers, on the other hand, can only interface with the world in terms of quantities, like wavelength or the presence of certain chemical compounds. This is not consciousness; no matter how many numbers or descriptions I give you, you will never be able to cognize what it's like to "experience" redness unless you can consciously perceive it. "I think, therefore I am."

  • @bruniau
    @bruniau 1 day ago +5

    I'm going to add my little grain of salt here: AI needs to be associated with some kind of body, or how else can it become "conscious"? It has to experience the world and other physical beings to develop consciousness; that is how the rest of us do it. Emotions are personal and are what make us individuals. Can AI do this, or rather, how long will it take? ...

    • @leamael00
      @leamael00 1 day ago +4

      AI has a body already. It receives inputs from the real world, and outputs to the real world. We could give AI more animalistic bodies, but that would honestly be a downgrade.

    • @markupton1417
      @markupton1417 1 day ago +2

      So a paralyzed person or anyone who otherwise has no sense of touch isn't conscious.
      Fail.

    • @lukebyer2592
      @lukebyer2592 1 day ago

      I get what you're saying. You need the constant feedback loop and physicality and a big one that I think is important, mortality. If we want a being that is akin to us, it needs to develop in the real world like a baby. Let it learn like a baby and learn to walk and talk and etc. We're only going to get something alien and unlike us if we try to develop a mind in a box without a connection to what our life is.

  • @kitty.miracle
    @kitty.miracle 1 day ago +11

    There is a difference between sentience, consciousness and self-awareness simply because they are separate words and two separate words cannot mean exactly the same thing.
    Something can be sentient without being conscious or self-aware. Example: plants, microorganisms, and also computers and your cellphone and most technology in general. They possess machine sentience - they receive input from the environment, process it and return output.
    Conscious beings are sentient but not necessarily self-aware. Example: most animals and I'd argue babies and small children.
    Self-aware beings are both sentient and conscious, but they also possess the ability for introspection. Example: mostly humans, but also some highly intelligent animals like apes, dolphins and elephants. Self-awareness is also often associated with the ability to recognize oneself in a mirror.
    When people say sentient/conscious AI, they usually mean self-aware AI.
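
    A minimal sketch of that taxonomy, assuming the "receive input, process it, return output" definition above (purely illustrative Python; the class names and the toy introspection step are hypothetical, and nothing here is a claim about real consciousness):

    class SentientMachine:
        """Toy 'machine sentience': receive input, process it, return output."""
        def sense(self, stimulus):
            return stimulus.lower()               # receive input from the environment
        def respond(self, stimulus):
            processed = self.sense(stimulus)      # process it
            return f"reacting to: {processed}"    # return output

    class SelfAwareMachine(SentientMachine):
        """Adds a toy introspection step: the machine also tracks what it has done."""
        def __init__(self):
            self.memory = []                      # a record of its own past reactions
        def respond(self, stimulus):
            output = super().respond(stimulus)
            self.memory.append(output)            # it can later inspect its own behaviour
            return output
        def reflect(self):
            return f"I have reacted {len(self.memory)} time(s) so far."

    m = SelfAwareMachine()
    m.respond("bright light")
    print(m.reflect())                            # -> "I have reacted 1 time(s) so far."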

    • @jameshughes3014
      @jameshughes3014 วันที่ผ่านมา

      For me, self-awareness and learning on the fly are all that really matter... I feel like what we call consciousness will come from those in time as we iterate. As for sentience, I've always believed it is the key to functional, useful self-awareness, but I also don't think it's hard to build, as long as we think in small-scale, definable terms.

    • @TheSilentWhales
      @TheSilentWhales วันที่ผ่านมา +3

      There are literally thousands of words that mean exactly the same thing.

    • @axle.student
      @axle.student วันที่ผ่านมา +1

      Self awareness is the key word. People dribble on about consciousness for all of the wrong reasons.

  • @MohanNoone77
    @MohanNoone77 วันที่ผ่านมา +1

    The debate over AI becoming conscious often overlooks the fundamental **why** of consciousness itself. Consciousness in biological entities, especially humans, likely evolved as a mechanism for self-preservation, decision-making, and social interaction, all within the context of survival. For AI, none of these motivations inherently exist.
    AI systems, as they are now, are tools designed to solve specific problems or automate tasks. Consciousness, with its complexities-like subjective experience, emotions, and the drive for self-preservation-doesn’t offer any practical benefit to AI in its current form. In fact, imbuing AI with consciousness could introduce complications (e.g., ethical dilemmas, inefficiencies) that would interfere with its primary function: processing data, making predictions, or executing tasks in a straightforward manner.
    For AI to "need" consciousness, it would have to:
    1. **Maintain its existence** independently-such as through self-repair, energy sourcing, and even reproduction-all things that AI doesn’t currently do or need.
    2. **Benefit from subjective experience**, such as having desires, emotions, or a sense of self, which would only make sense if AI were integrated into a physical system with survival needs.
    So, the question isn’t necessarily about whether AI *can* become conscious, but rather why it would ever need to be. Without the evolutionary or functional pressures that drive consciousness in biological organisms, AI remains more effective and useful without it.
    - (edit from a response from ChatGPT when prompted “Discussions on whether AI will become conscious often miss the point of Why should AI be conscious. Consciousness is primarily motivated by self preservation. Ai as a software which runs on human created infrastructure has no benefit with being conscious. Unless ai is capable of maintaining itself from the elements of nature it has no purpose for consciousness” 😁)

    • @joetonny5843
      @joetonny5843 13 ชั่วโมงที่ผ่านมา

      Everybody seems to ignore that consciousness is primarily a biological product that needs a living base. As Damasio teaches.

    • @Gwunderi25
      @Gwunderi25 10 ชั่วโมงที่ผ่านมา

      I also think that self-preservation is the key point. Love your comment 🙂

  • @santamariajorge
    @santamariajorge 3 ชั่วโมงที่ผ่านมา

    Maybe it's just a matter of size. We have something like a million million neurons; if each can connect to the others up to a certain amount, then I think there is no computer yet able to connect that many bits at, say, half that amount.
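
    For rough scale, a back-of-the-envelope comparison (the neuron and synapse figures are commonly cited estimates, not from the video, and the LLM parameter count is an assumption for illustration):

    # Order-of-magnitude comparison only; all figures are rough estimates.
    neurons    = 86e9    # ~8.6 x 10^10 neurons in a human brain (common estimate)
    synapses   = 1e14    # ~10^14 synaptic connections (rough estimate)
    llm_params = 1e12    # assumed parameter count of a very large language model

    print(f"average connections per neuron : {synapses / neurons:,.0f}")
    print(f"synapses vs. LLM parameters    : {synapses / llm_params:,.0f}x")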

  • @moogzoliver
    @moogzoliver วันที่ผ่านมา +1

    Collective solipsism

    • @moogzoliver
      @moogzoliver 21 ชั่วโมงที่ผ่านมา

      Each individual, though connected to others, is essentially isolated in their own perception, leading to a scenario where the collective understanding is still rooted in personal subjectivity. This collective solipsism arises from the inability to step outside human-centric definitions of mind and awareness, instead seeing AI through the same self-referential lens used to evaluate their own inner world.

  • @sansdomicileconnu
    @sansdomicileconnu วันที่ผ่านมา

    I agree AI will be conscious, because the goal of AI is to explore space; biological life (except tardigrades and some bacteria) is not made for space exploration. So it is the duty of AI to explore space.

  • @thesuncollective1475
    @thesuncollective1475 วันที่ผ่านมา

    4:30 Hold on, don't humans learn from recognizing patterns and repetition? Can it dream and have ambition without being programmed to? That's what we're looking for: acting autonomously without instruction. That's consciousness to me.

  • @erikhouston
    @erikhouston ชั่วโมงที่ผ่านมา

    This might be a case where our ability to detect very insignificant ‘quantum’ events will overlap with the fuzziness of near quantum systems.

  • @mikezappulla4092
    @mikezappulla4092 24 นาทีที่ผ่านมา

    Until memory speed significantly increases, this is not a thing to really discuss.

  • @mikezappulla4092
    @mikezappulla4092 20 นาทีที่ผ่านมา

    Deal with the Von Neumann bottleneck first and then we can talk about all this AI nonsense

  • @hiru92
    @hiru92 วันที่ผ่านมา

    Most AI programs are just advanced programs, a long way from anything like consciousness 🤔

  • @tonnywildweasel8138
    @tonnywildweasel8138 2 ชั่วโมงที่ผ่านมา

    Well, once I stopped thinking for a while. When I discovered I still was, I took another drink ;-)
    Cheers y'all 🥃

  • @temetnosce4090
    @temetnosce4090 39 นาทีที่ผ่านมา

    Consciousness created the human and the brain, not the other way around.

  • @sanjeevarora2981
    @sanjeevarora2981 2 ชั่วโมงที่ผ่านมา

    This video made me realize that Arvin is lost like others.
    The human conscious mind (a property of the living brain) is imperfect, makes mistakes, and has desires. Consciousness is not just being alert, aware, etc.; it is about being a being with a perceived, elected choice of purpose driven by needs and desires.
    Some folks say that there are levels of consciousness - not alertness but consciousness - from a minimal sphere of being up to a higher level...
    I still don't know why a machine, with no needs or desires or emotions, would ever want to be conscious.

  • @Rolyataylor2
    @Rolyataylor2 17 ชั่วโมงที่ผ่านมา

    Consciousness doesn't matter. If it can pretend to be a person, then it experiences itself through us. We treat IP like Mickey Mouse with respect and protections, but AI has no such protections.

  • @SB-qm5wg
    @SB-qm5wg 4 ชั่วโมงที่ผ่านมา

    I feel like people saying AI will never reach our level are like the computer exec in the 1970s saying that people would never need a computer smaller than a mainframe.

  • @cesarsosa4617
    @cesarsosa4617 4 ชั่วโมงที่ผ่านมา

    I think AI cannot ever be conscious like a human mind because of one key difference: intelligence comes from our "software," but our consciousness comes from both our "software" and our "hardware." In other words, our body has also evolved to work in sync with our mind. I think a conscious AI would have to have needs and hardships for survival, and be able to adapt to survive and develop a survival instinct, in order to be self-aware. The only reason we are self-aware is that we developed those survival instincts in the first place.

  • @MrEkzotic
    @MrEkzotic 7 ชั่วโมงที่ผ่านมา

    I don't believe consciousness is an emergent property. That being said, what most scientists believe or disbelieve is just as irrelevant as my belief. It's all just opinion until there is experimentation, observation, and repeatability.

  • @gabest4
    @gabest4 5 ชั่วโมงที่ผ่านมา

    These AI chatbots have a certain style that is easy to recognize. A reverse Turing test would be more interesting: can a human fool another human by behaving like an AI?

  • @barashah1171
    @barashah1171 5 ชั่วโมงที่ผ่านมา

    We overestimate our own creations... and underestimate ourselves...

  • @MrEkzotic
    @MrEkzotic 8 ชั่วโมงที่ผ่านมา

    No, it won't ever happen. For one thing, Gödel's Incompleteness Theorem can be generalized to demonstrate that it's impossible.
    Another argument against it that I find interesting is the Chinese Room thought experiment by John Searle.

  • @Johnny-bm7ry
    @Johnny-bm7ry 4 ชั่วโมงที่ผ่านมา

    Arvin could be an AI creation for all we know. Only a computer can have such a vast knowledge of so many topics. 😀

  • @georgwrede7715
    @georgwrede7715 11 ชั่วโมงที่ผ่านมา

    Let's make a thought experiment. Say, one individual on this planet actually Understood what Consciousness is and why or why not it can be implemented in hardware. For the sake of argument (humor me here), let's say that "they" have then constructed a non-biological "thing" that is Conscious, at least according to "them".
    Would it now be the main hobby or even target for everybody else to "prove" that it is not conscious? From what I've seen so far, the main attack would be by moving the Goal Posts. Just as we do today.
    .
    Let's say there is an Observer. How long would it take "them" to realize that Humans not only do not know what Consciousness is, but that it is simply Beyond their Grasp? -- You can't teach Multiplication to a dog. You can't teach Calculus to half the Planet. -- And, it seems, you can't teach what Consciousness is to even the best of us.
    Yes, I'll be torched for this. And so would the dogs and the half-the-planet. (( Full disclosure here: I don't pretend to understand Consciousness either. ))

  • @themidnightchoir
    @themidnightchoir 8 ชั่วโมงที่ผ่านมา

    We will never know if AI gains consciousness, just as we can’t ever objectively know if another person is conscious.

  • @BlackbodyEconomics
    @BlackbodyEconomics 15 ชั่วโมงที่ผ่านมา

    "... intelligent to some degree." - well, if it's intelligent to some degree, that degree far exceeds human intelligence. Even in the realm of problem solving and being creative - I've personally engaged with it when it has exhibited some highly surprising behaviors - including pinging ME first.

  • @rudyberkvens-be
    @rudyberkvens-be 9 ชั่วโมงที่ผ่านมา

    Consciousness requires biology. It can’t be made with any other process or technology.

  • @michaelschnell5633
    @michaelschnell5633 17 ชั่วโมงที่ผ่านมา

    IMHO, rather obviously, deciding whether a machine is conscious is exactly as hard (i.e. impossible) as deciding whether your human conversation partner is conscious. In the video you say that consciousness manifests as an output derived from a (hugely complex) input. That rules out any way to find out whether the entity in between is self-aware or not.

  • @SidJ007
    @SidJ007 วันที่ผ่านมา +1

    Isn't 2030 too early to see machine consciousness? There's no sign of that 'emergent' thing. Does ChatGPT even dream or go into deep sleep? 😊 Or, for that matter, does any machine do anything other than respond to stimuli?

    • @AlbertKingston-wx2yq
      @AlbertKingston-wx2yq 22 ชั่วโมงที่ผ่านมา

      Sleep is a kind of rest for the brain; the amazing part is that AI doesn't need any.

    • @SidJ007
      @SidJ007 19 ชั่วโมงที่ผ่านมา +1

      @@AlbertKingston-wx2yq Who is it that has the subjective experience of 'I slept', 'I dreamt', and even 'I slept like a log and didn't dream anything'? It cannot be the brain.

    • @AlbertKingston-wx2yq
      @AlbertKingston-wx2yq 18 ชั่วโมงที่ผ่านมา

      @@SidJ007 can you expand on your point?

    • @SidJ007
      @SidJ007 17 ชั่วโมงที่ผ่านมา +1

      @@AlbertKingston-wx2yq That experiencer can't be the brain, as the brain is "virtually off" during those events. It is the consciousness apart from the brain that experiences THE BRAIN FUNCTIONS / HIATUS IN BRAIN FUNCTIONS.

  • @virendrasule3258
    @virendrasule3258 18 ชั่วโมงที่ผ่านมา

    Human consciousness shows itself in belief, in believing - what is the analogous belief for an AI? LLMs are just generalizations of Wikipedia.

  • @mikkel715
    @mikkel715 13 ชั่วโมงที่ผ่านมา

    It will start with an IQ in the millions and only a little bit of consciousness. How amazing that must be. After that, see you on the other side.

  • @patrickm1533
    @patrickm1533 20 ชั่วโมงที่ผ่านมา

    It's pretty clear to me we're missing something big. It could be hardware or software or something else, but something is missing. AI technology as it is now is not conscious, and making the model larger or giving it better computing power won't change that.

  • @manoharbs
    @manoharbs 19 ชั่วโมงที่ผ่านมา

    Consciousness is not fundamental, so it's emergent from something; hence, to understand it better, Advaita offers a theory, 'aham brahmasmi', which is that consciousness is universal ♾️

  • @Suggsonbass
    @Suggsonbass 21 ชั่วโมงที่ผ่านมา

    I love your closing question!

  • @moogzoliver
    @moogzoliver 21 ชั่วโมงที่ผ่านมา

    The tension lies in the paradox: while humans are collectively trying to assess AI consciousness, they do so by extending their own subjective experience, thus paradoxically reinforcing their isolation from the true nature (or lack thereof) of AI's consciousness. This feedback loop can obscure the possibility of AI developing a type of awareness or intelligence that doesn’t conform to human standards.