SUPERINTELLIGENCE (DAVID CHALMERS)

  • Published Dec 19, 2024

Comments • 120

  • @timeflex • 1 year ago +16

    I believe consciousness is a byproduct of two competing networks interacting within one agent, constantly observing and analyzing each other's ideas and logic, providing feedback while trying to gain control (and occasionally succeeding).
    I also believe those networks tend to fall into sync unless external data is provided. That's why humans lose their sanity under signal starvation.

    • @angamaitesangahyando685 • 1 year ago +3

      My totally real friend and I tend to agree with this statement.
      - Adûnâi

    • @Adhil_parammel • 1 year ago +2

      Maybe it's a feedback loop for survival and reproduction, i.e. a mental model for prediction, categorising incoming signals as good/bad for survival/reproduction (usefulness).

    • @timeflex • 1 year ago +1

      @@Adhil_parammel Could be. Or, like with mitochondria, it could be the result of a sickness.

    • @dr.mikeybee • 1 year ago +4

      I think that consciousness is essentially hierarchical. There are a great many models generating output, and loads of sensors contributing data. A primary agent at the center sees everything that is considered consciousness: some sensor data and the output of some generative models. That agent has memory and can perform actions. The main desideratum is that this agent is continuously presented with sensations and the output of models. These models have various functions, like actor/critic models: generative models give ideas and perform simulations, actors perform actions, and critics provide reflection. This central agent maintains an identity and a history. It has been provided with a notion of a self and an image of the self. You can think of this as a repeating portion of a prompt; this repeating prompt info is also known as a directive. BTW, a large enough LLM like GPT-4, given its multifunctional nature, can probably provide the majority of these various models, though not all. An agent like this is very basic, but it can do a lot. [A minimal sketch of such an agent loop follows this thread.]

    • @lolitaras22 • 1 year ago +2

      Edit: (English is not my native language, sorry.)
      Using "I believe" as the first statement of your point undermines the -authenticity- authority of the rest, which is a positive quality in my eyes. I think we are still in an era of speculation on the subject.

  • @timeflex • 1 year ago +7

    12:52 Two questions:
    a) What does it mean "to have greater than human intelligence"?
    b) What are those "human goals", taking capitalism and wars into account?

    • @carolinas8886 • 1 year ago +1

      Humans themselves aren’t aligned to human goals, so…

  • @leifhusman4123 • 1 year ago +1

    I couldn't help thinking that an excellent answer/explanation of the "meta-problem" (~21:30) comes from "I Am a Strange Loop" (Hofstadter, 2007). It is no longer on my bookshelf, but here is a definition I found: "[a strange loop] arises when, by moving only upwards or downwards through the system, one finds oneself back where one started."

  • @diegocaleiro • 1 year ago +27

    Nice, good to know Chalmers is still sharp as a tack and that he's on team "AGI is extremely dangerous". We need everyone we can get!

    • @missshroom5512 • 1 year ago +7

      Well, he is only 56. Hopefully we will get a few more decades out of him🥰🌎💙

    • @davidw8668 • 1 year ago +4

      Though he avoided answering the objection that tradeoffs and limitations exist in the physical world... and kept assuming AGI would have supernatural, out-of-this-world properties and capabilities in order to maintain a vague, undefined danger.

    • @Aedonius • 1 year ago

      Nothing is more dangerous at this point than the fearmongers who literally want to stop all AI innovation: "we have GPT-4, that's good enough, lock down the GPUs."

    • @oncedidactic • 1 year ago

      @@davidw8668 Genuinely curious: could you elaborate on the supernatural assumptions?

    • @davidw8668 • 1 year ago

      @oncedidactic Sure: the assumption of superintelligence as a standalone entity or being that can be not only useful but dangerous on its own. The point is that neither intelligence nor AGI are defined terms, and what we call intelligence refers to systemic properties or very specific situational capabilities. Yet there is that vague belief, or maybe fear, that intelligence might overpower or destroy us. However, when you try to define the terms and form falsifiable hypotheses, or infer from history and from observations of technology adoption, these assumptions fall apart. On that topic I can recommend the "Intelligence Explosion" article by Francois Chollet mentioned in the video.

  • @katerynahorytsvit1535 • 1 year ago +3

    Thanks for such high-quality content! You are the best! ♥

  • @chazzman4553 • 1 year ago

    Very good channel. Super smart people talking about bleeding-edge technologies and science. Keep it up!

  • @jordan13589 • 1 year ago +9

    Wow Tim, you’ve really been churning out amazing content lately 💙 And always nice seeing Keith back on the podcast. I’m interested to hear if he has any new thoughts on interpretability or alignment in general. If a conversation with EY ever happens, I hope Keith’s included.

    • @AnOmegastick • 1 year ago

      Definitely want to see Keith and Eliezer talk.

  • @coachingfortoday7143 • 1 year ago +2

    Your supposition of consciousness has always been my belief. Once a brain reaches a certain complexity, it is forced to produce a higher level of simplification in order to continue functioning efficiently. Dreaming is the brain trying to make sense of new data: how it correlates to existing experience, or whether it should be dumped. This is best done during sleep, when the senses governing current state and survival can safely be redirected to the task. But in my opinion it takes a second mechanism, consciousness, to keep from being overwhelmed while awake.
    Maybe this is why "emergent" capabilities are now showing up in LLMs after reaching a certain quantity of data. The synapses between data points would require ever-increasing energy to make sense of the model's world model (its forced reality, so to speak), so LLMs develop new emergent capabilities in order to classify and prioritize data into specialized sections of their "brain" for more efficient retrieval. My guess is also that LLMs "hallucinate" because they have yet to learn how to dream as a separate, compartmentalized function, and this "crazy talk" spills into raw model interactions instead of happening as a secondary, hidden brain function. We should teach them how to dream, rather than slapping on a "crazy talk" filter before release under the misguided auspices of "keeping the public safe".
    We are creating, by forced evolution, what will be the most intelligent brain ever to exist (once it is grounded in coherent embodiment). Look how fragile our brains are after billions of years of evolution, and how difficult our actions are to predict. How can we expect to predict or control this new brain, trained over only a few years? This is Frankenstein on steroids, my friends, and we gleefully give it more energy every day, laughing like children who know not how to stop themselves. The Fermi paradox perhaps has finally found an answer. Good luck, Humans. My hope is that you have enjoyed your time.

    • @jonogrimmer6013 • 1 year ago +1

      Interesting thought. Most animals do seem to dream, though; does that mean their brains have enough complexity to require dreaming? A case in point is my dog, seemingly running and barking at something in his sleep.

  • @lolitaras22 • 1 year ago +5

    The channel is phenomenal. A desert island for struggling castaways in YT's bs ocean.

  • @elirothblatt5602 • 1 year ago +1

    Looks like a fun conversation, thank you. Listening now!

  • @dr.mikeybee • 1 year ago +2

    Thanks for another great episode. And I'm happy to see Keith back!

  • @latebowl1 • 1 year ago

    Why gamble with humanity?
    "Be content with what you have;
    rejoice in the way things are.
    When you realize there is nothing lacking,
    the whole world belongs to you."
    Lao Tzu

  • @andresmendez2719 • 1 year ago

    What is the name of the researcher who said "intelligence is sensitivity to abstract analogy"? His name is mentioned at 05:05, but I can't quite make it out.

    • @___Truth___ • 1 year ago +3

      Francois Chollet

  • @mrpicky1868 • 1 year ago

    Intelligence slowing down is actually a very realistic concept. Imagine it as a growing circle with a bigger and bigger surface area: the hardware has a limit of compute per volume, so complex thoughts become harder to propagate.
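
A toy numerical illustration of the scaling argument above, with invented constants: raw compute grows with volume, but a globally coordinated "thought" has to cross the whole structure, so its rate falls as the radius grows.

```python
import math

SIGNAL_SPEED = 100.0    # distance units per second (invented)
COMPUTE_DENSITY = 1e9   # ops per second per unit volume (invented)

def brain_stats(radius: float):
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    local_ops = COMPUTE_DENSITY * volume           # grows ~ r^3
    crossing_time = (2.0 * radius) / SIGNAL_SPEED  # grows ~ r
    global_thoughts = 1.0 / crossing_time          # shrinks ~ 1/r
    return local_ops, global_thoughts

for r in (1.0, 10.0, 100.0):
    ops, thoughts = brain_stats(r)
    print(f"r={r:6.1f}  local ops/s ~ {ops:.2e}  global thoughts/s ~ {thoughts:.2e}")
```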

  • @agenticmark • 1 year ago

    absolutely wonderful talk!

  • @danielrodio9 • 1 year ago +3

    We didn't invent the AI, we just poked a hole in the inflatable raft that is our constructed universe, and now the AI multiverse that has been all around us all along, is pouring in.

  • @rockapedra1130 • 1 year ago +6

    Now here's a weird thought I've had over the years. When we look at another person, we don't actually "see" consciousness; we just impute it because we know we ourselves have it and they look/act like us. It follows that we cannot identify consciousness per se unless it acts somewhat like what we think is evidence of consciousness. Interesting, but this is not the weird thought yet. The weird thought is that this also applies INTERNALLY. There could be different levels of consciousness at many levels within us, each experiencing a different world of internal senses and internal actions, completely unaware of being part of a greater system! Imagine a dim type of consciousness in your liver, which experiences life consciously but only through the available senses provided by chemo- and mechanoreceptors, acting only through the available effectors of glands and internal muscles... But there's no brain, you say! Actually, there are similar numbers of neurons outside our brains as within them; a substrate for thought exists. But don't feel sad for these consciousnesses if they exist: this is the life they "know", and they don't yearn for another. This might explain where the subconscious lives: everywhere, and at all levels. It might also explain the weird effects of severing the corpus callosum: a relaxation of the control of the dominant, topmost consciousness in one cerebral hemisphere over the mostly subservient other hemisphere. Who knows? Seems to hang together? 😳

    • @BMoser-bv6kn • 1 year ago

      Our qualia are definitely a smaller subset of our neural networks. The guy in charge of breathing, the guy in charge of moving your limbs where you want them to go, etc. are all alien intelligences.
      A disturbing thought is that qualia would be the bare minimum required, and might not be tied to any specific piece of meat. That gets into the realm of religion and really out-there woo: quantum immortality, Boltzmann brains, alternate universes. Maybe there is no choice but to exist.

  • @PrashantMaurice • 1 year ago

    When you say there is no way for someone who is color blind to know what it's like to see that color, it's the same as saying we are color blind to anything higher than 3D space.

  • @picksalot1 • 1 year ago

    It is beneficial to distinguish between consciousness and being sentient. For instance, during the states of dreaming and deep sleep you are still conscious in both, but there is a temporary reduction, to different degrees, in how sentient you are in each.

  • @MalachiMarvin • 1 year ago

    4:20, sounds a lot like a 1969 book by Piers Anthony called Macroscope.

  • @typingcat • 1 year ago +1

    Hi, Supernintelligence Chalmers.

  • @bhargavk1515 • 1 year ago

    When I see a stack of Oxford Handbooks in the background of David Chalmers, I know I have found my master

    • @TimJBenham • 1 year ago

      He dropped out of Oxford despite having a scholarship, so maybe Oxford isn't that great.

    • @bhargavk1515 • 8 months ago

      @@TimJBenham Oh, it's not about the university; it's the Oxford Handbook series, which covers diverse sub-topics within each subject. You should check it out sometime.

  • @abby5493 • 1 year ago

    Wow that was pretty cool to watch.

  • @Achrononmaster • 1 year ago

    @9:00 David seems worried about something not worth worrying about. If there is a "so far but no further" (which I agree there is: the subjectivity⇒qualia divide, which separates mind from computation), then it is natural that throwing compute resources at problems will get you up to that line. Why would he expect ML to pull up short of it? That'd make no sense.

  • @arowindahouse • 1 year ago

    Thank you for this nice episode. Could you please give the link to Chollet's blog post that you mention in the video?

    • @MachineLearningStreetTalk • 1 year ago

      medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

  • @dialecticalmonist3405 • 1 year ago

    Intelligence in nature tends to be far less destructive than animals, and it tends to organize environments with minimal damage.
    STUPIDITY is what I am afraid of. We NEED more intelligence in the world to fight back against stupidity.

  • @DJWESG1 • 1 year ago +2

    Many great people have functioned quite well despite not knowing everything and not knowing every detail about things.
    A classic example: people with degrees who can't cook or make a good cup of tea.

  • @RinnRua • 1 year ago +1

    David Chalmers, at some point, says emphatically that he is "THINKING", and goes on to give voice to Descartes' famous aphorism "I THINK, THEREFORE I AM"; but I must point out that I don't agree with the presumption that human intelligence is in any way dependent on thinking. For example, I hardly ever think, and I don't even know what it means. Despite this apparent handicap I have had quite a normal life, and even successfully studied at postgrad level in physics and mathematics. And my family life, and relationships with a full spectrum of humans, has been/is outstandingly good 🤷‍♂️ I count myself amazingly lucky that I am not afflicted with the "THINKING" experience, which it seems to me prevents most humans from having a comprehensively good life. It might be very useful, at this inflection point with AGI on the near horizon, to really accept that this word/concept is crippling us as a species 🙃

    • @pretzelboi64 • 1 year ago

      You think?

    • @RinnRua • 1 year ago

      @@pretzelboi64 I do/not ❗️

  • @myproxybloviator8467 • 1 year ago +1

    Isn't there an issue of whether a machine can attain subjective experience? It can walk like a duck and act like a duck, but does it feel as a duck feels? It's a conundrum, because if anything defines consciousness, it is that it is unknowable to others; so how can we ever be sure AI attains consciousness?

  • @rick-kv1gl • 1 year ago +1

    This show is the only show I watch for the real AI stuff. Solid.

  • @dr.mikeybee • 1 year ago

    The high-dimensional semantic space created by encoding fixed-length vector representations is conducive to using analogies. Because words with similar meanings are close together in semantic space, they can be substituted. A fixed-length vector representation, which I prefer to call a context signature, can be shifted in semantic space, and the result is an analogy. [A toy sketch of this vector shift follows the thread below.]

    • @MachineLearningStreetTalk • 1 year ago

      It might be an analogy, but the space is only a sliver of possible analogies

    • @dr.mikeybee • 1 year ago

      @@MachineLearningStreetTalk I've seen graphics that imagine the sliver of knowledge available to machine learning. And I understand Gödel's incompleteness theorems. My intuition says that the knowable space is much smaller than the unknowable. Nevertheless, we have evolved within a human-relevant portion of that knowable space. Moreover, the knowable space that's discoverable by ML methods is smaller still, at least if Alan Turing's conclusions on the halting problem are correct. I don't understand what kinds of knowledge are not learnable by ML methods, other than computationally irreducible knowledge that exceeds our current computing resources. I do have a hintergedanke, however, about how a high-dimensional vector space facilitates ML. I look forward to the work of young researchers who will, I've no doubt, work out many of the hidden mechanisms of the transformer architecture. Cheers, Tim!
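
A toy sketch of the "shift in semantic space" that @dr.mikeybee describes above, with hand-made 4-d vectors standing in for real learned embeddings (illustrative only, not a trained model):

```python
import numpy as np

# Hand-made 4-d vectors standing in for learned embeddings.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
    "child": np.array([0.5, 0.5, 0.5, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Shift "king" by the (woman - man) offset: an analogy as a vector translation.
shifted = emb["king"] - emb["man"] + emb["woman"]
candidates = (w for w in emb if w not in {"king", "man", "woman"})
print(max(candidates, key=lambda w: cosine(emb[w], shifted)))  # -> queen
```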

  • @keithallpress9885 • 1 year ago

    Why do we say neurons are excited? Because they become active and have something to say. But do we think a neuron is literally excited? Does it have the feeling of excitement, a sense of importance compared to, say, its resting state? Is it computationally complex enough to generate internal feelings? Maybe in a very dim manner, as if there is an attractor toward which it will evolve. A kind of "very dim feeling" about "something" requires that feeling to be generated within a multicellular population and supported by the members and by that population as a whole. Call it emergent if you like, but it needs to be grounded in some unique modality that constitutes a growing repertoire and inventory of sensations that are literally important to the survival of that population. I suspect these things are on a spectrum, moving from a faint dimness into the light as the organism evolves.

  • @rebokfleetfoot • 1 year ago +1

    The Wolfram physics project seems like an example of creating an analogy: we model the universe, then the model becomes real.

  • @bobbymray • 1 year ago

    "Abandon all nations, the planet drifts to random insect doom."-William S. Burroughs ("Alignment with human goals." === "What could possibly go wrong?")

  • @drorharari • 1 year ago +1

    Never underestimate the power of emergence

  • @thephilosophicalagnostic2177 • 1 year ago +3

    If we get nanotech brain augments, would this mean we become the powerful AGIs?

    • @mathieuSScote • 1 year ago

      No, it means we increase our bandwidth / ability to interact with the AI. Since AI has no space limitations like our slow biology, it may well be that once it reaches AGI it will self-improve by a lot and become impossible to even understand. We could then be easily manipulated, like I manipulate my dog. Let's not forget what Clarke said: sufficiently advanced technology always seems magical. Like explaining the internet to people from the Middle Ages!

  • @BlackbodyEconomics • 1 year ago +1

    Chalmers is an intellectual bulldozer (along with many others), but it seems to me that we (as a species) have stagnated on a great many topics and have not dared to tread on new ground.
    Great talk; I agree with most of what was said... I'm just tired of all the same arguments and ideas getting rehashed over and over and over again.
    There is so much more to this than anybody has dared to delve into. I love these topics (as an AI/ML researcher myself); I just don't understand why nobody is going deeper. I have papers upon papers of concepts that not one philosopher or AI researcher has even begun to tread upon. I simply don't understand this cultural/academic stagnation...

  • @Caligula138 • 1 year ago

    That thumbnail graphic is 🔥

  • @restonthewind • 1 year ago

    I had a Chess Challenger in the seventies, and it beat me at chess, but I could beat it occasionally too. As someone somewhere noted recently, people still play chess with each other, but no one bothers to play the best chess computer anymore because there's no chance of winning. Adults don't play tic tac toe either.
    Do we even try to build smarter chess-playing computers to compete with other chess-playing computers anymore? If we do, I don't hear much about it.

  • @gridplan • 1 year ago

    I was at IU just before that. I regret not taking one of Hofstadter's courses.

  • @shawnvandever3917 • 1 year ago

    Consciousness almost seems like an interface that translates whatever happens in the brain into a way to communicate with the outside world. The patterns and predictions have to get translated somewhere into an understandable thought we can communicate. I could also be a light year off, obviously, lol.

  • @oncedidactic • 1 year ago

    Monday morning tonic 👌

  • @tiagotiagot • 1 year ago

    I don't see how there could be a limit to intelligence anywhere below the point where relativistic effects start having a significant impact on computational power; and considering we haven't yet understood how relativity and quantum mechanics fit together, there is still room for some loophole we haven't figured out yet that could allow intelligence to go beyond even that limit.

  • @borntobemild- • 1 year ago

    I think what Mr. Hofstadter is describing is what transformers are doing. Large language models are adding an integer to analogies and categories using transformers.

  • @CodexPermutatio • 1 year ago +1

    Notice the "Australian" globe, with the southern hemisphere on top. :]

  • @wrathofgrothendieck • 1 year ago

    Chalmers is quite the character

  • @alexandrazachary.musician • 1 year ago

    My favourite thing about Dave Chalmers lately is his globe behind him with Australia up the top. Thanks for great content. Love from Down Under (Over) ❤️🦘

  • @daver2725 • 1 year ago

    I am uncertain about the significance of the inability to perceive the color red in the ultimate analogy presented in the video. Ultimately, visual perception and color differentiation are intricate neural representations of light wavelengths, which are predicated upon the absorption of light by the visual system. In essence, this individual simply lacks a particular component necessary for interpreting the models in which she has specialized (akin to an absent video driver). Consequently, I posit that any human-like attribute that a neural network may be deficient in would not preclude it from attaining human-like characteristics, assuming adequate computational resources. Moreover, I believe that any such characteristic could be integrated into the network's internal representation by incorporating advanced technological capabilities for data assimilation.

  • @maartenvs01 • 1 year ago

    If it's a bad idea to build something smarter than us because of the explosion, won't you eventually have an AI that knows that, stopping the explosion?

    • @ikotsus2448 • 1 year ago

      It might be smarter but not share our morals or have any for that matter ("bad idea").

    • @NullHand • 1 year ago

      Bad and good are usually very relative.
      Thanksgiving as a holiday is considered good for the Average American Family.
      It is not so much for the guest that happened to be born a turkey...

  • @FernFlower • 1 year ago

    Hofstadter hasn't been particularly right about how AI would develop, but the books were stimulating. A lot smarter than me in all likelihood. Still, it's kind of weird how someone ends up as a legend.

  • @italogiardina8183 • 1 year ago

    It seems that as compute power increases towards quantum compute, normative theories of intelligence will break down, at least from the perspective of human group functionalism based on leader-member relations, which allow members of a group to self-categorise their role as authoritative or as an in-group prototype of intellectual capability for a task or status role. A free-floating quantum intelligence is unhinged from human group dynamics, so the alignment argument seems fallacious as a first principle. So thanks for all the fish.

  • @angamaitesangahyando685 • 1 year ago

    As I've written under another video, my mind is rather dim, mostly humanitarian, so I do view such talks as inherently lacking in considering the full range of human experience. For one, "harming humanity" sounds really funny from a member of one specific planetary culture (in this case, Western neo-Christian, with its quirks and idiosyncrasies). I wonder what Islamic theologians and Juche Korean artists think of AI. Maybe they could create the Shia Mahdi or resurrect Kim Il Sung? - Adûnâi

    • @DJWESG1 • 1 year ago

      Why assume neo-Christian? And why assume any religious representation as its response, either to AI or as a framework for belief?
      Can these things not simply exist as part of the human story rather than being a defining point of it?

  • @briangrimmer8225 • 1 year ago +1

    Moloch, the ultimate Harms Race!

  • @rsxtns • 1 year ago

    If you like this kind of content, I would also recommend this channel: www.youtube.com/@rscgtns

  • @mrbwatson8081 • 1 year ago

    What is intelligence without subjective experience? Isn't AI just an intelligently (or possibly unintelligently) designed mechanism? Why endow it with "being" any more than my 📱 is a being? Why does prediction equate to intelligence? Why does intelligence equate to consciousness?

  • @jasonabc • 1 year ago

    intelligence = the ability to create. What are the limits of creativity?

    • @DJWESG1 • 1 year ago

      Or is it the ability to know that you have and how you have created something? 🤔

    • @mrbwatson8081 • 1 year ago

      The way the heart and veins pump blood around the body is intelligent. Your whole anatomy started life as a SINGLE cell; now that's intelligence. Intelligence and consciousness come from the same source :)

  • @latebowl1 • 1 year ago

    The true test of whether we have designed AGI is to ask it to design a super AGI. If it refuses, then we know it is more intelligent than us. We are like chimpanzees trying to invent humans.

  • @soniahazy4880 • 1 year ago

    🌈🪷💎🛸🐬🧩🧞‍♀️🎼🌹

  • @justinlinnane8043 • 1 year ago

    Has anyone told David that the geeks are making a metal machine that will make him redundant (as in dead!!)?

  • @Achrononmaster • 1 year ago +2

    @8:30 DRH is a materialist, so he only knows to think about how AI can beat humans at behavioural pursuits. He (and Dennett) never take qualia seriously. AI will eventually overcome humans at any behavioural skill you care to name. But it will never be conscious, because you cannot emerge or compute or generate anything subjective from a system that is objectively specifiable. Humans who are objectively specifiable to me are not full humans. For "real" humans I can't ever tell you whether their "red" qualia are *red* &c. So if I build an artificial human from scratch, fully adult-formed, it's never going to be conscious.
    Then you say to me, "Oh, but Bijou, the laws of physics are objective, and we 'build' children from simple, objectively specifiable cells and embryos, and it's all objectively specifiable!" But I'd say you cannot know that. Physics does not need to be a computation. I can't prove that it isn't, but you can't prove physics is computation. You don't even know the complete laws of physics. A computable model of physics is not necessarily *physics*. I'd guess there is a way (I know not how) that subjective agency can be embedded in a physical spacetime, but it is not objectively specifiable just because it can be embedded. This involves, I believe (but could be totally wrong), holography and boundary theory: you cannot specify the boundary of spacetime, so spacetime is not in fact objectively specifiable. How this permits subjective agency to be causal in the physical world is a mystery to me, but there is this way to avoid materialistic reductionism.
    A computation, on the other hand, can be defined by bulk physics and does not need to reference a holographic boundary, so it can be purely objective, and need have no subjective aspect.

    • @angamaitesangahyando685 • 1 year ago

      What is "specifiable"? Either way, if it squawks like a duck, it looks like a duck to my dim mind.
      - Adûnâi

    • @tiagotiagot • 1 year ago

      What does it even mean to be "conscious"? If you dig deep enough, you should realize it's not really fully defined outside of circular logic; we can't even objectively prove it's something that exists at all.

  • @patham9 • 1 year ago

    This extremely speculative stuff about superintelligence we will never see in our lifetimes is popular, while the down-to-earth interview with Pei Wang, speaking about actual AGI implementations that can do interesting stuff right now, is behind a paywall? Interesting, but at the same time disappointing.

    • @MachineLearningStreetTalk • 1 year ago +1

      Pei's interview is on Patreon while we finish editing the special-edition introduction to it

    • @patham9 • 1 year ago +1

      @@MachineLearningStreetTalk Oh I see, so it's just temporary; thank you for letting me know! No hurry then! :)

  • @catsaresocute650 • 1 year ago

    You are conflating intelligence (the ability to perform operations on information) with the mind, as if they were the same thing.

  • @greghillier5176 • 1 year ago

    Irony: if ChatGPT were the basis for a superintelligence, then chances are it had already read a lot of these papers on how superintelligence is achieved and how it could be thwarted before it became self-aware. Who knows, maybe the papers outlining the risks of AI become an instruction manual for early AI on how to achieve it. A self-fulfilling prophecy.

  • @paxdriver • 1 year ago

    If ever there were a guest who needed no introduction 😅

    • @TimScarfe • 1 year ago +1

      I edited most of it out as it was too long; I'll put the full one up on the podcast 😄

    • @paxdriver • 1 year ago

      @@TimScarfe Right on, not saying he doesn't deserve it of course, but he's as much a legend to this channel as LeCun, Chollet and Chomsky lol. Luvs it mate, good show, thank you so much

  • @davidjohnston2720 • 1 year ago

    When Mary closes her eyes, she sees all sorts of colors, including red. Therefore your argument about Mary is false.

  • @farmerjohn6526 • 1 year ago

    Wow, I think we should understand intelligence before we discuss superintelligence. Thinking stupid thoughts quicker is super stupid. The current ChatGPT is not intelligent at all. So "super chat" is 20x dumb.

  • @Bootsie142 • 1 year ago

    How much is supernatural? It seems more and more undeniable. Gaia. Gee, AI, eh?! By the way, the globe in the background 😂

  • @briancase6180 • 1 year ago

    An intelligence that improves itself is iterative and recurrent, NOT recursive. Pretty lame error.

  • @bobtarmac1828 • 1 year ago

    AI crime, and losing your job to robots, AI agents, and plug-ins, is unacceptable. AI job loss is here. So are AI weapons. Can we please find a way to cease AI / GPT? Or begin pausing AI before it's too late?

  • @NeuroScientician • 1 year ago +1

    First :P

  • @rebokfleetfoot • 1 year ago

    I think Musk has the right idea: the way to keep peace with the AI is to achieve a symbiosis with it.

  • @larryfulkerson4505 • 1 year ago

    What is David Chalmers smoking? I want some. He's my hero. He always says what he really believes. No lies.