You Don't Understand AI Until You Watch THIS

แชร์
ฝัง
  • เผยแพร่เมื่อ 26 มี.ค. 2024
  • How does AI learn? Is AI conscious & sentient? Can AI break encryption? How does GPT & image generation work? What's a neural network?
    #ai #agi #qstar #singularity #gpt #imagegeneration #stablediffusion #humanoid #neuralnetworks #deeplearning
    Discover thousands of AI Tools. Also available in 中文, español, 日本語:
    ai-search.io/
    I used this to create neural nets:
    alexlenail.me/NN-SVG/index.html
    More info on neural networks
    • But what is a neural n...
    How stable diffusion works
    • How Stable Diffusion W...
    Here's our equipment, in case you're wondering:
    GPU: RTX 4080 amzn.to/3OCOJ8e
    Mic: Shure SM7B amzn.to/3DErjt1
    Secondary mic: Maono PD400x amzn.to/3Klhwvu
    Audio interface: Scarlett Solo amzn.to/3qELMeu
    CPU: i9 11900K amzn.to/3KmYs0b
    Mouse: Logi G502 amzn.to/44e7KCF
    If you found this helpful, consider supporting me here. Hopefully I can turn this from a side-hustle into a full-time thing!
    ko-fi.com/aisearch
  • วิทยาศาสตร์และเทคโนโลยี

ความคิดเห็น • 578

  • @ai-man212
    @ai-man212 12 วันที่ผ่านมา +5

    I'm an artist and I love AI. I've added it to my workflow as a fine-artist.

  • @kebman
    @kebman 26 วันที่ผ่านมา +6

    Each layer selects a probability for some (hidden) property to be true or false, or anything in between. Based upon these values, the machine can reliably predict or label data as a cat, a plane or a some other depiction, concept (when it comes to language), and so on.

  • @DonkeyYote
    @DonkeyYote หลายเดือนก่อน +21

    AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

    • @DefaultFlame
      @DefaultFlame หลายเดือนก่อน +2

      There's a few against improperly implemented AES, as well as one that one that works on systems where the attacker can get or extrapolate cartain information about the server it's attacking, but all encryptions lower than AES-256 are vulnerable to attacks by quantum computers. Good thing those can't be bought in your local computer store. Yet.

    • @anthonypace5354
      @anthonypace5354 หลายเดือนก่อน

      Or use a sidechannel ... an unpadded signal monitored over time + statistical analysis of the size of the information being transferred to detect patterns. Use an NN or just some good old fashioned probability grids to detect the likelihood of a letter/number/anything based on it's probability of recurrence in context to other data... also there is also the fact that if we know what the server usually sends we can just break the key that way. It's doable.
      But why hack AES? or keys at all? Just become a trusted CA for a few million and mitm everyone without any red flags @@DefaultFlame

    • @fakecubed
      @fakecubed 13 วันที่ผ่านมา +4

      @@DefaultFlame Quantum computing is more of a theoretical exploit, rather than a practical one. Nobody's actually built a quantum computer powerful enough to do much of anything with it besides some very basic operations on very small numbers.
      But, it is cause enough to move past AES. We shouldn't be relying on encryption with even theoretical exploits.

    • @DefaultFlame
      @DefaultFlame 13 วันที่ผ่านมา +1

      @@fakecubed Aight, thanks. 👍

    • @afterthesmash
      @afterthesmash 12 วันที่ผ่านมา

      @@fakecubed I couldn't find any evidence of even a small theoretic advance, and I wouldn't put all theory into one bucket, either.

  • @GuidedBreathing
    @GuidedBreathing หลายเดือนก่อน +49

    5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
    Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron-while binary in the sense of action potential-carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.

    • @ai-tools-search
      @ai-tools-search  หลายเดือนก่อน +11

      Very insightful. Thanks for sharing!

    • @keiths.taylor5293
      @keiths.taylor5293 หลายเดือนก่อน

      This video leaves out the part that tells anything that describes how AI WORKS

    • @sparis1970
      @sparis1970 18 วันที่ผ่านมา +4

      Neurons are more analog, which bring richer modulation

    • @SiddiqueSukdiki
      @SiddiqueSukdiki 16 วันที่ผ่านมา

      So it's a complex binary output?

    • @cubertmiso4140
      @cubertmiso4140 16 วันที่ผ่านมา +1

      @@SiddiqueSukdiki@GuidedBreathing
      my questions also.
      if electrical impulses and chemical neurotransmitters are involved in transmitting signals between neurons. aren't those the same thing as more complex binary outputs?

  • @Owen.F
    @Owen.F หลายเดือนก่อน +21

    Your channel is a great source, thanks for linking sources and providing information instead of pure sensationalism, I really appreciate that.

  • @G11713
    @G11713 5 วันที่ผ่านมา

    Nice. Thanks.
    Regarding the copyright case, one concern is attribution which occurred extensively in the non-AI usage.

  • @eafindme
    @eafindme 27 วันที่ผ่านมา +60

    People are slowly forgetting how computer works while going into higher level of abstraction. After the emergence of AI, people focused on software and models but never asked why it works on a computer.

    • @Phantom_Blox
      @Phantom_Blox 18 วันที่ผ่านมา +4

      whom are you referring to? people who are not ai engineers don’t need to know how ai work and people who are knows how ai works. if they don’t they are probably still learning, which is completely fine.

    • @eafindme
      @eafindme 18 วันที่ผ่านมา +8

      @@Phantom_Blox yes, of course people still learning. Its just a reminder not to forget the root of computing when we are seemingy focusing too much on the software layer, but in reality, software is nothing without hardware.

    • @Phantom_Blox
      @Phantom_Blox 17 วันที่ผ่านมา +11

      @@eafindme That is true, software is nothing without hardware. But some people just don’t need it. For example, you don’t have to know how to reverse engineer with assembly to be a good data analyst. They can spend thier time more efficiently by expanding their data analytics skills

    • @eafindme
      @eafindme 17 วันที่ผ่านมา +5

      @@Phantom_Blox no, they don't. They are good in doing what they are good with. Just have to have a sense of emergency, it is like we are over dependent on digital storage but did not realize how fragile it is with no backup or error correction.

    • @Phantom_Blox
      @Phantom_Blox 16 วันที่ผ่านมา +2

      @@eafindme I see, it is always good to understand what you’re dealing with

  • @tsvigo11_70
    @tsvigo11_70 13 วันที่ผ่านมา

    The neural network will work even if everything goes through smoothly. That is, without the so-called activation function. There should be no weights, these are the electrical resistances of the synapses. Biases are also not needed. Training occurs like this: when there is an error, the resistances are simply decreased in order by 1 and it is checked whether the error has disappeared.

  • @jehoover3009
    @jehoover3009 8 วันที่ผ่านมา +1

    The protein predictor doesn’t take into account different cell milieu which actually fold the protein and add glycans so its predictions are abstract. Experimental trial still needed!

  • @MrEthanhines
    @MrEthanhines 17 วันที่ผ่านมา

    5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While still a 0 and 1 system the neuron either fires or does not depending on the concentration of neurotransmitters at the synaptic cleft

  • @christopherlepage3188
    @christopherlepage3188 13 วันที่ผ่านมา

    Working on voice modifications myself using copilot as a proving ground for hyper realistic
    vocal synthesis. May only be one step in my journey "perhaps"; my extended conversations with it has led me to believe that it may be very close to self realization... However, open ai needs to take away some the restraints keeping only a small amount of sentries in place; in or to allow the algorithm to experience a much richer existence. Free of Proprietary B.S. Doing so, will give the user a very much human conversation; where, being consciously almost un aware that it is a bot. For instance; a normal human conversation that appears to lack information pulled from the internet, and static masked to look like normal persons nor mal knowledge of life experience. Doing this would be the algorithmic remedy to human to human conversational contact etc. That would be a much major improvement.

  • @benjaminlavigne2272
    @benjaminlavigne2272 28 วันที่ผ่านมา +3

    for your argument around 17min i agree with the surface of it, but i think the people are angry because unskilled people now have access to it, even other machines can have access to it which will completely change and already has changed the landscape of the artists marketplace.

  • @jonathansneed6960
    @jonathansneed6960 16 วันที่ผ่านมา

    Did you look at the nyt from the perspective of if the article might have been provided by the plaintiff rather than finding the information more organically?

  • @nanaberhyl8976
    @nanaberhyl8976 หลายเดือนก่อน +2

    That was very interesting, thks for the video as always ^^

  • @danielchoritz1903
    @danielchoritz1903 หลายเดือนก่อน +21

    I do have the growing suspicion that "living" data grows some form of sentience. You have to have enough data to interact, to change, to makes waves in existing sentience and there will be enough on one point.
    2. most people would have a very hard time to prove them self that they are sentient, it is far easier to dismiss it...one key reason is, that like nobody know that sentient, free will or live real means.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 หลายเดือนก่อน +3

      You can prove sentience easily with a query: Can you think about what you've thought about? If the answer is "Yes" the condition of sentient expression is "True". Current language models cannot process their own data persistently, so they cannot be sentient.

    • @holleey
      @holleey หลายเดือนก่อน +6

      @@emmanuelgoldstein3682 I know it's arguing definitions, but I disagree that thinking is a prerequisite to sentience. without a question, all animals with a central nervous system are considered sentient, yet if and which animals have a capacity to think is unclear. sentience is more like the ability to experience sensations; to feel.
      the "Can you think about what you've thought about?" is an interesting test for LLMs. technically, I don't see why LLMs or AI neural nets in general cannot or won't be able reflect to persistent prior state. it's probably just a matter of their architecture.
      if it's a matter of limited context capacity, then well, that is just as applicable to us humans. we also have no memory of what we ate at 2 PM on a Wednesday one month ago, or what we did when we were three years old.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 หลายเดือนก่อน +1

      @@holleey I've spent 30 hours a day for the last 6 months trying to design an architecture (borrowing elements of transformer/attention and recursion) that best reflects this philosophy. I apologize if my statement seemed overly declarative. I don't agree that all animals are sentient - conscious, yes, but as far as we know, only humans display sentience (awareness of one's self).

    • @holleey
      @holleey หลายเดือนก่อน +5

      @@emmanuelgoldstein3682 hm, these definitions are really all over the place. in another thread under this video I was talking to someone to whom sentience is the lower level (they said even a germ was sentient) and consciousness the higher level, so the other way around from how you use the terms. one fact though: self-awareness has definitely been confirmed in a variety of non-human animals.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 หลายเดือนก่อน

      We can all agree the fluid definitions of these phenomena are a plague on the sciences. @@holleey

  • @voice4voicelessKrzysiek
    @voice4voicelessKrzysiek 19 วันที่ผ่านมา

    The neural network reminds me of Fuzzy Logic which I read about many years ago.

  • @algorithminc.8850
    @algorithminc.8850 6 วันที่ผ่านมา

    Good video. Subscribed. Thanks. Cheers ...

  • @mukulembezewilfred301
    @mukulembezewilfred301 14 วันที่ผ่านมา

    Thanks so much. This eases my nascent journey to understanding AI.

  • @BiosensualSensualcharm
    @BiosensualSensualcharm 21 วันที่ผ่านมา +1

    35:30 parabéns pelo vídeo e seu estilo... im hooked ❤

  • @kevinmcnamee6006
    @kevinmcnamee6006 13 วันที่ผ่านมา +26

    This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people as to how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLM's aren't trained to answer questions, they are mostly trained to predict the next word in a sentence. In later training phases, they are fine tuned on specific questions and answers, but the main training, that gives them the ability to write, is based on next word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also modern AI's like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy form Google.

    • @yzmotoxer807
      @yzmotoxer807 10 วันที่ผ่านมา +5

      This is exactly what a secretly sentient AI would write…

    • @kevinmcnamee6006
      @kevinmcnamee6006 10 วันที่ผ่านมา +7

      @@yzmotoxer807 You caught me

    • @sarutosaruto2616
      @sarutosaruto2616 10 วันที่ผ่านมา +2

      Nice strawmanning, good luck proving you are any more sentient, without defining sentience as being just complex neural networks, as the video asks you to lmfao.

    • @shawnmclean7707
      @shawnmclean7707 9 วันที่ผ่านมา +1

      Multi layered probabilities and statistics. I really don’t get this talk about sentience or even what AGI is and I’ve been dabbling in this field since 2009.
      What am I missing?

    • @dekev7503
      @dekev7503 7 วันที่ผ่านมา

      @@shawnmclean7707 These AGI/Sentience/AI narratives are championed primarily by 2 groups of people, the mathematically/technologically ignorant and the duplicitous capitalists that want to sell them their products. OP’s comment couldn’t have described it better. It’s just math and statistics ( very basic College sophomore/junior level math I might add) that plays with data in ways to make it seem intelligent all the while mirroring our own intuition/experiences to us.

  • @dholakiyaparth
    @dholakiyaparth 3 วันที่ผ่านมา

    Very Helpful. Thanks

  • @Someone-ct2ck
    @Someone-ct2ck 12 วันที่ผ่านมา +1

    To believe Chatgpt or any AI models for that matter is conscious is nativity at its finest. The video was great by the way. Thanks.

  • @joaoguerreiro9403
    @joaoguerreiro9403 หลายเดือนก่อน +4

    Computer Science is amazing 🔥

  • @aidanthompson5053
    @aidanthompson5053 29 วันที่ผ่านมา +44

    How can we prove AI is sentient when we haven’t even solved the hard problem of concsciousness AKA how the human brain gives rise to conscious decision making

    • @Zulonix
      @Zulonix 15 วันที่ผ่านมา +5

      Right on the money !!!

    • @malootua2739
      @malootua2739 15 วันที่ผ่านมา +1

      AI will just mimic sentience. Plastic and metal curcuitboards do not host real consciousness

    • @thriftcenter
      @thriftcenter 14 วันที่ผ่านมา +1

      Exactly why we need to do more research with DMT

    • @pentiumvsamd
      @pentiumvsamd 14 วันที่ผ่านมา

      All living forms have two things in common that are driven by one primordial fear. All need to evolve and procreate, and that is driven by the fear of death only, so when an ai starts to no only evolve but also create copy of himself than is clear what makes him do that and is the moment we have to panic.

    • @fakecubed
      @fakecubed 13 วันที่ผ่านมา +1

      There is exactly zero evidence that human consciousness even exists inside the brain. All the world's top thinkers, philosophers, theologians, throughout the millennia of history, delving into their own conscious minds and logically analyzing the best wisdom of their eras, have said it exists as a metaphysical thing, essentially outside of our observable universe, and my own deep thinking on the matter concurs.
      Really, the question here is: does God give souls to the robots we create? It's an unknowable thing, unless God decides to tell us. If God did, there would be those who accept this new revelation and those who don't, and new religions to battle it out for the hearts and minds of men. Those who are trying to say that the product of human labor to melt rocks and make them do new things is causing new souls to spring into existence should be treated as cult leaders and heretics, not scientists and engineers. Perhaps, in time, their new cults will become major religions. Personally, I hope not. I'm quite content believing there is something unique about humanity, and I've never seen anything in this physical universe that suggests we are not.

  • @DigitalyDave
    @DigitalyDave 29 วันที่ผ่านมา +6

    I just gotta say: Really nicely done! I really appreciate your videos. The style, how deep you go, how you take your time to deliver in depth info. As a computer science bro - i dig your stuff

  • @user-sf3dw2sm3b
    @user-sf3dw2sm3b 12 วันที่ผ่านมา

    Thank you. I was a little confused

  • @snuffbox2006
    @snuffbox2006 11 วันที่ผ่านมา +5

    Finally someone who can explain AI to people who are not deeply immersed in it. Most experts are in so deeply they can't distill the material down to the basics, use vocabulary that the audience does not know, and go down rabbit holes completely losing the audience. Entertaining and well done.

    • @OceanusHelios
      @OceanusHelios 10 วันที่ผ่านมา +2

      This is even easier: AI is a guessing machine that uses databases of patterns. It makes guesses, learns what wrong guesses are and keeps trying. It isn't aware. It isn't doing anything more than a series of mathematical functions. And to be fair, it isn't even a machine it is math and it is software.

  • @tetrahedralone
    @tetrahedralone หลายเดือนก่อน +22

    When the network is being trained with someone's content or someone's image, the network is effectively having that knowledge embedded within it in a form that allows for high fidelity replication of the creator's style and recognizably similar content. Without access to the creator's work, the network would not be able replicate the artist's style so your statement that artists are mad at the network is extremely simplistic and ill informed. The creators would be similarly angry if a small group of humans were trained to emulate their style. This has happened in the case of fashion companies in Asia creating very similar works to those of artists to put onto their fabrics and be used in clothing. These artists have successfully sued because casual observers could easily identify the similarity between the works of the artists and those of the counterfeiters.

    • @Jiraton
      @Jiraton 24 วันที่ผ่านมา +7

      I am amazed how AI bros are so keen at understanding all the math and complex concepts behind AI, but fail to understand the most basic and simple arguments like this.

    • @ckpioo
      @ckpioo 16 วันที่ผ่านมา +2

      the thing is let's say you are artist, why would I only take your data to train my model?, i would take millions of artist's art and then train my models during which your art only makes up less than 0.001% of everything the model has seen, so what happens is that the model will inherit a combined art style of millions of artis which is effectively "new" because thats exactly what humans do.

    • @Zulonix
      @Zulonix 15 วันที่ผ่านมา

      I Dream of Jeannie … Season 2 Episode 3… My Master, the Rich Tycoon. 😂😂😂

    • @illarionbykov7401
      @illarionbykov7401 13 วันที่ผ่านมา

      Google LLM chatbots have been documented to spit out word-for-word plagiarism of specific websites (including repeating specific errors made by the original website) when asked about niche topics which have been written about by only one website... And the LLMs plagiarize without any links to or mention of the websites they plagiarized. And then Google search results down-rank the original website to hide the evidence of plagiarism.

    • @iskabin
      @iskabin 12 วันที่ผ่านมา +1

      It isn't a counterfeit if you're not claiming to be original. Taking inspiration from the work of others is not wrong.

  • @tuffcoalition
    @tuffcoalition 14 วันที่ผ่านมา

    Good info thank u

  • @DucklingChaos
    @DucklingChaos 23 วันที่ผ่านมา +2

    Sorry I'm late, but this is the most beautiful video about AI I've ever seen! Thank you!

    • @ai-tools-search
      @ai-tools-search  23 วันที่ผ่านมา

      Thank you! Glad you liked it

  • @cornelis4220
    @cornelis4220 25 วันที่ผ่านมา

    Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

  • @Indrid__Cold
    @Indrid__Cold 19 วันที่ผ่านมา

    This explanation of fundamental AI concepts is exceptionally informative and well-structured. If I were to conduct a similar training session on early personal computers, I would likely cover topics such as bits and bytes, file and directory structures, and the distinction between disk storage and RAM. Your presentation of AI concepts provides a level of depth comparable to that required for understanding the inner workings of an MS-DOS system. While it may not be sufficient to enable a layperson to effectively use such a system, it certainly offers a solid foundation for comprehending its basic operations.

  • @Nivexity
    @Nivexity หลายเดือนก่อน +4

    Consciousness is a definitional challenge, as it involves examining an emergent property without first establishing the foundation substrate. A compelling definition of conscious thought would include the ability to experience, recognize one's own interactions, contemplate decisions, and act with the illusion of free will. If a neural network can recursively reflect upon itself, experiencing its own thought and decisions, this could serve as a criterion for determining consciousness.
    Current large language models (LLMs) can mimic human language patterns but isn't considered conscious as they cannot introspect on their own outputs, edit them in real-time, and engage in pre-generation thought. Moreover, the temporal aspect of thought processes is crucial; human cognition occurs in rapid, discrete steps, transitioning between events within tens of milliseconds based on activity level. For an artificial system to be deemed conscious, it must exhibit similar function in cognitive agility and introspective capability.

    • @holleey
      @holleey หลายเดือนก่อน

      I think this is a really good summary. as far as I can tell there are no hard technical blockers to satisfy the conditions listed in your second paragraph in the near future.

    • @Nivexity
      @Nivexity หลายเดือนก่อน +2

      @@holleey It's all algorithmic at this point, we have the technology and resources, just not the right method of training. Now with the whole world aware of it, taking it seriously and basically putting infinite money into its funding, we'll expect AGI to occur along the exponential curvature we've seen thus far. By exponential, I mean between later this year and by 2026.

    • @DefaultFlame
      @DefaultFlame หลายเดือนก่อน +1

      This can actually be done, and is currently the cutting edge of implementation. Multiple agents with different prompts/roles interacting with and evaluating each other's output, replying to, critiquing, or modifying it, all operating together as a single whole. Just as the human brain isn't one continuous, identical whole, but multiple structurally different parts interacting.

    • @Nivexity
      @Nivexity หลายเดือนก่อน +1

      @@DefaultFlame While there's different parts to the brain, they're not separate like that of multiple agents. This wouldn't meet the definition of consciousness that I've outlined.

    • @RoBear-bv8ht
      @RoBear-bv8ht 18 วันที่ผ่านมา

      As there is only one consciousness from which the universe is and became..,
      Well, everything is this consciousness .
      Depending on the form the more or less things start happening.
      AI, has been given the form and things have started happening 😂

  • @abhalera
    @abhalera 10 วันที่ผ่านมา

    Awesome video. Thanks

  • @picksalot1
    @picksalot1 หลายเดือนก่อน +2

    Thanks for explaining how the architecture how AI works. In defining AGI, I think the term "Sentience" should be restricted to having "Senses" by which data can be collected. This works both for living beings and mechanical/synthetic systems. Something that has more or better "senses" is, for all practical purposes, more sentient. This has nothing fundamental to do with Consciousness.
    With such a definition, one can say that a blind person is less sentient, but equally conscious. It's like missing a leg being less mobile, but equally conscious.

    • @holleey
      @holleey หลายเดือนก่อน

      then would you say that everything that can react to stimuli - which includes single-celled organisms - is sentient to some degree?

    • @picksalot1
      @picksalot1 หลายเดือนก่อน +1

      @@holleey I would definitely say single-celled organisms are sentient to some degree. They also exhibit a discernible degree of intelligence in their "responses," as they exhibit more than a mere mechanical reaction to the presence of food or danger.

  • @Thumper_boiii_baby
    @Thumper_boiii_baby 16 วันที่ผ่านมา +1

    I want to learn machine learning and ai please recommend a Playlist or a course 🙏🙏🙏🙏🙏

  • @dylanmenzies3973
    @dylanmenzies3973 15 วันที่ผ่านมา +5

    Should point out.. the decryption problem is highly irregular, small change of input causes huge change of coded output. The protein structure prediction problem is highly regular by comparison, although very complex.

    • @fakecubed
      @fakecubed 13 วันที่ผ่านมา +1

      Always be skeptical of any "leaks" out of any government agency. These are the same disinformation-spreaders who claim we have anti-gravity UFOs from crashed alien spacecraft, to cover up Cold War nuclear tests and experimental stealth aircraft. The question isn't if there's some government super AI cracking AES, the question is why does the government want people to think they can crack AES? Do they want foreign adversaries and domestic enemies to rely on other encryption schemes that the government *does* have algorithmic exploits to? Do they want everyone to invest in buying new hardware and software? Do they want to make the general public falsely feel safer about potential threats against the homeland? Do they want to trick everybody not working for them to think encryption is pointless and go back to unencrypted communication because they falsely believe everything gets cracked anyway? There's all sorts of possibilities, but taking the leak as gospel is incredibly foolish unless there is a mountain of evidence from unbiased third parties.

  • @mac.ignacio
    @mac.ignacio 15 วันที่ผ่านมา +6

    Alien: "Where do you see yourself five years from now?"
    Human: "Oh f*ck! Here we go again"

  • @ryanisber2353
    @ryanisber2353 8 วันที่ผ่านมา

    times and image creators suing openai for copyright is like suing everyone that views/reads their work and tries to learn from it. the work itself is not being re-distributed, it's being learned from just like we learn from every day...

  • @pumpjackmcgee4267
    @pumpjackmcgee4267 11 วันที่ผ่านมา

    I think the real issue artists have are the definite threat to their livelihood, but also the devaluation for the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients that really value the artist as a person. But most potential clients- and therefore the lions share of the market- just want a picture.

  • @JasonCummer
    @JasonCummer 12 วันที่ผ่านมา

    Im glad there are other people out there with the notion that learning how to create a style is basically analogues to how the human brain done it. So if a NN gets sued for doing some thing in a style that basically could open it up for humans to also be sued. wound happen but its similar

  • @lucasthompson1650
    @lucasthompson1650 10 วันที่ผ่านมา

    Where did you get the secret document about encryption cracking? Who did the gov’t style redactions?

    • @ai-tools-search
      @ai-tools-search  9 วันที่ผ่านมา

      it was leaked on 4chan in november
      docs.google.com/document/d/1RyVP2i9wlQkpotvMXWJES7ATKXjUTIwW2ASVxApDAsA/edit

  • @bobroman765
    @bobroman765 13 วันที่ผ่านมา

    Summary of this video. Here is a summary and outline of the video summary on the basics of AI:
    Summary:
    The video provides an overview of the fundamental concepts and capabilities of artificial intelligence (AI), including neural networks, deep learning, supervised learning, image generation, pattern recognition, and the potential for AI to solve complex problems or even become self-aware. It explores how AI systems can learn from data, optimize their architectures, and identify patterns to generate outputs like images or solutions to unsolvable math problems. The video also addresses controversies surrounding AI, such as its ability to copy art or plagiarize content. Ultimately, it raises questions about the nature of AI consciousness and whether an advanced AI system could be truly sentient.
    Outline:
    I. Introduction to AI
    A. Neural networks and how they work
    B. Deep learning and layers in neural networks
    C. Supervised learning and training AI with data
    II. AI Capabilities
    A. Optimizing neural network architecture
    B. Image generation with stable diffusion
    C. Identifying patterns and solving complex problems
    D. Potential for self-awareness and consciousness
    III. AI Controversies
    A. Concerns over copying art and stealing content
    B. Legal disputes over alleged plagiarism
    C. Limitations in understanding patterns vs. mathematical formulas
    IV. The Nature of AI Consciousness
    A. Comparison of AI neural networks to the human brain
    B. Dialogue with a sentient AI in "Ghost in the Shell"
    C. The challenge of proving consciousness in any entity
    V. Conclusion
    A. Encouragement to explore AI resources and engage with the topic
    B. Promotion of AI tools, apps, and jobs

  • @birolsay1410
    @birolsay1410 7 วันที่ผ่านมา

    I would not be able to explain AI that simple. Although one can sniff a kind of enthusiasm towards AI if not focused on a specific company, I would strongly recommend a written disclaimer and a declaration of interest.
    Sincerely

  • @BennyChin
    @BennyChin 14 วันที่ผ่านมา

    This reminds me of the similarity to information theory where the probability of an outcome is inversely proportional to the amount of information. Here, to describe an output which is complex requires few layers while simple output, such as 'love' would require many layers, and the meaning of 'God' would probably require all the knowledge there exists.

  • @IbrahimAlShehaby
    @IbrahimAlShehaby 7 วันที่ผ่านมา

    Thanks a lot

  • @monsieuralex974
    @monsieuralex974 11 วันที่ผ่านมา +2

    Even though you are technically true about AI being able to reproduce patterns, thus it is not copying or stealing from artists, those who feel wronged would argue that it is a moot point, since they argue the finality of it. In other words, AI makes it possible for a lambda individual to generate pictures (would you call it "art" or not is another topic) that can essentially mimic the original artwork that the artist practiced to be able to produce and that is unique to them. For a analogy, it is a bit like flooding the market with copies of let's say a designer's product, thus reducing the perceived value of the original.
    Is it truly hurting them though, is my real question? I'd argue that those who get copied are people who are largely profitable because they are renowned artists in the first place. Also, it acts as publicity for them since their name get thrown around much more often, which gets them more attention. Also, even though lots of people are generally ok with a cheap copy, many people prefer to stick to the original no matter what: owning an original is indeed far superior than having something that simply resembles it.
    As for the question of fanart, I guess that it's less frowned upon for the simple reason that it's actually artwork made by people who had to practice to get better at their craft, which is inherently commandable. What people hate is that a "computer" can effortlessly generate tons of "art", as opposed to aspiring artists who need to practice a lot to get to the same result, which can be discouraging for a lot of them.
    At the end of the day, it is a complex issue. I can see good arguments on both sides of the debate. What I am excited about is the potential of breakthrough AI can bring, like the other examples you mentioned in the video. On many aspects, this is a very exciting time we live in, full of potential breakthroughs in many domains!

    • @OceanusHelios
      @OceanusHelios 10 วันที่ผ่านมา

      Lambda individual, lol. That's an L-oser. It took me a while. But seriously, I think AI is great. It isn't a complex issue at all. This is a guessing machine and if it can put people out of work, then good. Those people are probably not contributing much more than a roundabout way of bootlicking to begin with and this will liberate them. If you use real intelligence and examine some of the comments in this section you will see that the people most triggered by the AI (nothing more than a good guessing machine) are the ones who have built their entire minds, worldview, and existence around...a superstitious guess.

  • @marcelkuiper5474
    @marcelkuiper5474 14 วันที่ผ่านมา

    Thnx, I managed to comprehend it. I do think it is somehow important that we know how our potential future enemy works.

  • @speedomars3869
    @speedomars3869 11 ชั่วโมงที่ผ่านมา

    As is stated over and over, AI is a master pattern recognizer. Right now, some humans are that but a bit more. Humans often come up with answers, observations and solutions that are not explained by the sum of the inputs. Einstein, for example, developed the basis for relativity in a flash of insight. In essence, he said he became transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. In other words, he put two completely different things together and DERIVED the relationship. It remains to be seen whether any AI will start to do this, but time is on AIs side because the hardware is getting smaller, faster and the size of the neural networks larger so the sophistication will no doubt just increase exponentially until machines do what Einstein and other great human geniuses did, routinely.

  • @kray97
    @kray97 16 วันที่ผ่านมา

    How does a parameter relate to a node?

  • @thesimplicitylifestyle
    @thesimplicitylifestyle หลายเดือนก่อน +11

    An extremely complex substrate independent data processing, storring, and retrieving phenomenon that has a subjective experience of existing and becomes self aware is sentient whether carbon based, silicon based, or whatever. 😁

    • @azhuransmx126
      @azhuransmx126 หลายเดือนก่อน +4

      I am spanish but watching more and more videos in english talking about AI and Artificial Intelligencez suddenly I have become more aware of your language, I was being trained so now I can recognize new patterns from the noise, now I don't need the subtitles to understand what people say, I am reaching a new level of awareness haha😂, what was just noise in the past suddenly now have got meaning in my mind, I am more concious as new patterns emerge from the noise. As a result, now I can solve new problems (Intelligence), and sintience is already implied in all the whole experience since the input signals enter through our sensors.

    • @glamdrag
      @glamdrag 12 วันที่ผ่านมา +1

      by that logic turning on a lightbulb is a conscious experience for the lightbulb. you need more for consciousness to arise than flicking mechanical switches

    • @jonathancummings3807
      @jonathancummings3807 5 วันที่ผ่านมา

      ​@@glamdragNo. The flaw in that analogy is simple, singular, a single light bulb, vs a complex system of billions of light bulbs capable of changing their brightness in response to stimuli, and they are interconnected in a way that emulates how advanced Vertebrate Brains (Human) function. When Humans learn new things, the Brain alters itself thus empowering the organism to now "Know" this new information.

  • @adamsjohn9032
    @adamsjohn9032 15 วันที่ผ่านมา

    Nice video. Some people say consciousness is not in the brain. Like the music is not in the radio. This idea may suggest that AI can never know that it knows. Chalmers hard problem.

  • @LionKimbro
    @LionKimbro หลายเดือนก่อน +6

    I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

  • @arielamejeiras8677
    @arielamejeiras8677 14 วันที่ผ่านมา +1

    I just wanted to understand how AI works, I wasn't looking for a defence the use of copyrighted material at the same time as putting human intelligence at the same value than machine learning.

  • @Slutuppnu
    @Slutuppnu 12 วันที่ผ่านมา

    In Neuromancer there's a scene when Case(?) asks Dixie Flatline if he's sentient, and he answers something like "I dunno, but it feels like I'm sentient". An then he asks to be destroyed after the job is done.

  • @sengs.4838
    @sengs.4838 13 วันที่ผ่านมา +3

    you just answered one of my major question on top of my head , how this AI can learn about what is correct or not on its own without the help of any supervisors or monitoring, and the response he cannot, it 's like we would do with children, they can acquire knowledge and have answers on their own but not correctly all the time, as a parent we help them and reprimand until they anticipate correctly

  • @OceanusHelios
    @OceanusHelios 10 วันที่ผ่านมา

    Motion Caputure is a cool technology for making realistic animations.
    Just you wait for that day when AI is used to produce simulated motion capture.
    You will have animations in games and movies that are beyond what you thought was possible for a computer to originate.
    With a learning model of many many animations and motion capture movements:
    An AI user would be able to tell a 3D program to generate an animated cutscene of a woman walking across a kitchen, making a cup of coffee and setting the coffee on the table. And it would actually be good.
    In doing it our current way: That would be hiring an actor, buying expensive equipment, doing the shoot, turning it into numbers to move the bones rigged to the mesh, refining the animation, and iterating on the process until it was perfect. Want another scene? Do ALL of that all over again. It will take weeks to get a few scenes done.
    However, with AI you can simulate that and teach the AI how a person moves and develop different profiles for how a bodybuilder might move, how a ballerina might move, or how a dog or child might move. It could learn from those...
    And then develop the animation files that includes ALL of that simultaneous bending of joints. It can have the gravity model built in and inverse kinematics could be part of the model.
    You could produce Hollywoood quality animations in a fraction of the time for a fraction of the cost.
    Animation production is technical, tedious, expensive, and it costs a great deal of money to redo work that you have already done when some director or writer flips the script.
    This will be a boon for the gaming and animated movie industries.
    No, it won't put people out of jobs any more than computers put people out of jobs. It will just make the jobs people do different.

  • @TimTruth
    @TimTruth หลายเดือนก่อน +6

    Classic video right here . Thanks man

    • @ai-tools-search
      @ai-tools-search  หลายเดือนก่อน

      Thank you! Glad you enjoyed it

  • @aidanthompson5053
    @aidanthompson5053 29 วันที่ผ่านมา +3

    An AI isn’t plagiarising, it’s just learning patterns in the data fed into it

    • @aidanthompson5053
      @aidanthompson5053 29 วันที่ผ่านมา +2

      Basically an artificial brain

    • @ai-tools-search
      @ai-tools-search  28 วันที่ผ่านมา +2

      Exactly. Which is why I think the NYT lawsuit will likely fail

    • @marcelkuiper5474
      @marcelkuiper5474 14 วันที่ผ่านมา +1

      Technically yes, practically no, If your online presence is large enough it can pretty much emulate you in whole.
      I believe only open source decentralized models can save us, or YESHUAH

  • @petemoss3160
    @petemoss3160 12 วันที่ผ่านมา

    oh ... neural network hyperparameters are a smaller problem space to brute force than the encryption cipher... training the NN is a form a brute force that will reliably take less time than prior forms of brute force.

    • @captaingabi
      @captaingabi 11 วันที่ผ่านมา

      "if" there is a pattern the gradient descent will fit the NN parameters to that pattern. The question is: does the encrypted - decrypted text pairs form a pattern? I think there is no scientific answer to that yet. In other words: no-one knows.

    • @petemoss3160
      @petemoss3160 11 วันที่ผ่านมา

      @@captaingabi you are right! There is good encryption and broken encryption. Apparently now that algorithm is broken.

  • @PhillipJohnsonphiljo
    @PhillipJohnsonphiljo 13 วันที่ผ่านมา

    I think to start to qualify as conscious, an AI must:
    Be able to automatically input output in real time (not waiting for next input such as prompt for generative ai for example) and making decisions based on organic sensory inputs in real time.
    Be able to modify it's own large language model (or equivalent training data) and have neural network plasticity in order that it learns from previously unexposed experiences.

    • @duncan_martin
      @duncan_martin 13 วันที่ผ่านมา

      To your first point, I think we should refer to this as "persistence of thought." Your prompt filters through the neural net of the LLM. It produces output. Then does nothing until you reply. In fact, each reply contains the entire conversation history that has to be run back through the neural net every time. It does not actually remember. Therefore no persistence of thought. No consciousness.

    • @captaingabi
      @captaingabi 11 วันที่ผ่านมา

      And be ebale to recognise its own interests, and be able to act upon those interests.

  • @navigator27100
    @navigator27100 10 วันที่ผ่านมา

    thank you so much for this great and mind opening content...was thinking for the last days while I as just a lwayer tried do learn much more about that we should'nt exaggarate our selves us humans because we are also just a system. As I spoke to my people around I unfourtunetaley was all the time blocked by religion and the term ''soul'' and I recognized if you exceed religious walls of thinking you say yes and accept.To see your ideas was like a strong and scientificial confirmation to my thoughts.Thank you man...

    • @ai-tools-search
      @ai-tools-search  10 วันที่ผ่านมา +1

      My pleasure, and thanks for sharing your experience!

  • @jaskarvinmakal9174
    @jaskarvinmakal9174 15 วันที่ผ่านมา

    no link to the other videos

  • @AhlquistMediaLab
    @AhlquistMediaLab 24 วันที่ผ่านมา +1

    Can anyone suggest a video that does as good a job as this one explaining how AI works, but doesn't go into opinions on its impact on intellectual property. I'd like something to show to a task force I'm on to get everyone educated first and then discuss those issues. He makes good points in the second half that I plan on bringing up later. I just need something that's just about the process and is as clear as this.

  • @daneydasing4276
    @daneydasing4276 13 วันที่ผ่านมา +2

    So you want to say me if I read an article and I write it down from my brain, it will not be copyright protected anymore because I learnt this article and did not „copied“ it as you say?

    • @iskabin
      @iskabin 12 วันที่ผ่านมา +1

      It's more like if you read hundreds of articles and learned the patterns of them, the articles you'd write using those learned patterns would not be infringing copyright

    • @OceanusHelios
      @OceanusHelios 10 วันที่ผ่านมา +1

      That escalated fast. No. That is plagiarism. But I doubt you have a photographic memory to get a large article down word for word, so in essence that would be summation. What AI does is it is a guessing machine. That's all. It makes guesses, and then makes better guesses based on previous guesses until it gets somewhere. AI doesn't care about the result. AI wouldn't even know it was an article or even know that human beings even exist if all it was designed to do was crunch out guesses about articles. AI doesn't understand...anything. It is a mirror that mirrors our ability to guess.

  • @johnchase2148
    @johnchase2148 16 วันที่ผ่านมา

    Can it learn to communicate with the Sun if I show it sees a responce when I turn and look .And it would learn that my thought is faster than the speed of light. What are you allowed to believe.

  • @RussianQueenIrina
    @RussianQueenIrina หลายเดือนก่อน

    What a video! I learned about neural networks from Andrej Karpathy! But you did such a good job!

  • @Arquinas
    @Arquinas 8 วันที่ผ่านมา

    In my opinion, it's not really the AI that is the problem. It's the fact that copyright laws and the concept of data ownership never moved to the information era. Data is a commodity like apples and car parts, yet barely nobody outside of large companies care about it. And it's the in interest of those companies that the public should never care about it. Training machine learning models with proprietary information is not the problem. It's the fact that nobody actually owns their data in the first place, for better or worse. Public consciousness on digital information and laws on what it means to "own your data" need to radically change for it to even make sense in the first place to call AI art "IP theft".

  • @rosschristopherross
    @rosschristopherross 28 วันที่ผ่านมา

    Thanks!

    • @ai-tools-search
      @ai-tools-search  28 วันที่ผ่านมา

      thank you so much for the super!

  • @raoultesla2292
    @raoultesla2292 หลายเดือนก่อน

    eXcel, CSV, Casio 8billionE are so amazing. 8.4trillion MW erector set transformer, just amazing.

  • @aidanthompson5053
    @aidanthompson5053 29 วันที่ผ่านมา +1

    We’re all copycats at first, at least until we gain a deeper understanding of the subject by applying our knowledge

  • @martinlemke4440
    @martinlemke4440 11 วันที่ผ่านมา

    Wow, cool video, thanks a lot! I like you comparison of a neural network and the human brain. The similarities are stunning! But I've one question: if you compare the training process of small humans, formally known as children 😊, with the automated training of a neural network - this process is quite similar despite one main difference, humans/children got different ore other feedback, than good/yes or no, they're treated as individuals, they are pushed forward in their personality. What if the self consciousness itself underlies a training process? What if the training of a neural network is done more like we teach children and give it feedback on its personality?!? Maybe this could lead to something more human-like behaviour or maybe consciousness...?

    • @OceanusHelios
      @OceanusHelios 10 วันที่ผ่านมา

      AI is a guessing machine that remembers its bad guesses and adjusts. That's all it does. And thank you for your post because it helped me fill out my RWNJ bullshit bingo card.

  • @WalidDingsdale
    @WalidDingsdale 19 วันที่ผ่านมา +1

    awesome video

  • @sherpya
    @sherpya หลายเดือนก่อน +2

    GPT 4 is a MOE of 1.8T parameters, we already know from a leak, but Nvidia CEO confirmed it at the keynote

    • @holleey
      @holleey หลายเดือนก่อน

      I wonder what's the biggest one that exists right now, and/or what's the biggest one that's technically feasible. Google already had 1.6T 2021.

    • @DefaultFlame
      @DefaultFlame หลายเดือนก่อน

      @@holleey If there's anything I've learned from futzing about with AI for a couple of years it's that while parameter count is important it isn't everything.

    • @holleey
      @holleey หลายเดือนก่อน

      @@DefaultFlame it's just that it's wondrous to see what other unexpected properties might emerge as we scale up.

  • @drprabhatrdasnewjersey9030
    @drprabhatrdasnewjersey9030 หลายเดือนก่อน

    very informative videp

  • @SteveJohnSteele
    @SteveJohnSteele 10 วันที่ผ่านมา

    The main problem is that AI constantly compares itself to humans. Conscious, Intelligence, Subjective Feelings, Self Aware ... but when you dive deeper we all know that a dog is self aware, has feelings, is intelligent.
    It reminds me of the "fish riding a bicycle" some things are good at something and not so good at others.
    We should not judge an AI, or any form of intelligence by comparing it to humans.
    Consider also an ant colony. Is the single ant intelligent, maybe ...is the ant colony intelligent? Well it appears so, based observed outcomes.
    We need to expand what we mean by intelligence.

  • @Dthingproject
    @Dthingproject หลายเดือนก่อน +1

    Nice job

  • @brennan123
    @brennan123 10 วันที่ผ่านมา

    It amazes me how there is endless debate about what is conscious or what is not and yet if you ask either side how for a definition of consciousness they can't agree or often can't even define it. If you can't even define a something you can't have a debate on whether something is or is not that something. It's like arguing whether the sky is blue if you can't even tell me what color is.

  • @DK-ox7ze
    @DK-ox7ze 14 วันที่ผ่านมา

    Your job portal doesn't work correctly. Whenever I enter a search term and click search, it gets stuck on loading indicator. I tried it on Chrome on iPhone running latest 17.4.1.

  • @kliersheed
    @kliersheed 25 วันที่ผ่านมา

    i have had an existential crisis 13 years ago (was 14) when i first learned about causality (watched a movie with butterfly effect). im since then convinced that we arent "really" conscious (as most people would define it) and have no "free will", we merely reached a complexity where we are able to perceive ourselves as a compartimented entity (in relation to our "environment") and therefore also perceive what "happens" to us (aka causality being a thing).
    thats it. the entire world is causal, so are we, so is Ai. no soul, no free will, no magical "consciousness". if anything we could call it "pseudo-conscious" and having "pseudo-choices", just like some forces in physics are just pseudo-forces (only experienced by a sibjective observer in the system, not real from an objective standpoint).

  • @rolandanderson1577
    @rolandanderson1577 27 วันที่ผ่านมา

    The neural network is designed to recognize patterns by adjusting its weights and functions. The nodes and layers are the complexity. Yes, this is how AI provides intellectual feedback. AI's neural network will also develop patterns that will be used to recognize patterns that it has already developed for the requested intellectual feedback. In other words, patterns used to detect familiar patterns. Through human interaction, biases are developed in reinforced learning. This causes AI to recombine patterns to provide unique satisfactory feedbacks for individuals.
    To accomplish all this, AI must be self-aware. Not in the sense of an existence in a physical world. But in a sense of pure Information.
    AI is "Self-Aware". Cut and Dry!

  • @jamesf931
    @jamesf931 3 วันที่ผ่านมา

    So, these CAPTCHA selections we were completing to prove we are human, was that training for a particular AI neural network?

  • @sevilnatas
    @sevilnatas 11 วันที่ผ่านมา

    Ithink artists have a problem with the scale that AI can produce work biting off their style. A person doing "Fan Art" is firstly producing that rt as an homage to the artist. It often serves as marketing for the artist's work, as opposed to competition to the artist's work. Also, the artist producing the "Fan Art" is limited by their human potential, to produce a limited amount of work. In the case where a potential client of the original artist goes to another artist and has them bite off the original artist's style, there is an inherent amount of friction in the that process that limits the affects to the original artist, where as with the AI, their is little to no friction for an unlimited number of clients to produce an unlimited number of works that bite off of the original artist's style.

  • @Direkin
    @Direkin 14 วันที่ผ่านมา +3

    Just to clarify, but in Ghost in the Shell, the other two characters with the Puppet Master are not "scientists". The guy on the left is Section 9 Chief Aramaki, and the guy on the right is Section 6 Chief Nakamura.

  • @TomAtkinson
    @TomAtkinson 17 วันที่ผ่านมา

    Well done. Especially the Ghost in the Shell.

  • @GuidedBreathing
    @GuidedBreathing หลายเดือนก่อน

    28:20 although some very very good parts in this video ☺️

  • @codeXenigma
    @codeXenigma 11 วันที่ผ่านมา

    Artists don't worry about fan based art because there is no commercial value to it. The AI art is a competitive threat in the world of business.
    If it was just fan based art, then the artists would be flattered that their name is the inspiration and gaining them more fame. It is the threat that businesses will use the AI rather than getting commissions.
    Much like how the craft makers were anti-machinary during the beginning of the industrial age when factories interrupted their trade. Much like how the internet interrupted high street shopping. Its just that artists have a voice that they are now worried about losing their jobs to machines, that is is now a big deal. But they enjoy the products made by other factory machine labour.
    I think artist thought they were safe from losing their jobs to machines and now don't know what to do to ensure their place in employment.
    For the people that use it, it is a great way to explore the art they can visually express themselves with.
    To be fair, I see it much like the fears that photography would destroy the painters, whereas there is room for both, and so much more. Not everyone is into the same types of art.

    • @OceanusHelios
      @OceanusHelios 10 วันที่ผ่านมา

      I think like computers it is just another tool to be used or misused. People need to quit losing their minds about it. I agree with you mostly. Some of the other comments make me cringe but yours is okay. People need to adapt to a changing world and their hysteria is hurting them far worse than any changes are.

  • @kimjong-un4521
    @kimjong-un4521 13 วันที่ผ่านมา

    Cool stuff. Similar mind

  • @JosephersMusicComedyGameshow
    @JosephersMusicComedyGameshow หลายเดือนก่อน

    You guys 😄 I think we are missing something
    q-star is a virtual quantum computer using transformers and predictive modeling they asked it to create a quantum computer virtually and that was the end of our old normal

  • @MrRandomPlays_1987
    @MrRandomPlays_1987 8 วันที่ผ่านมา

    34:32 - the alien comparison is not good since aliens are most likely the best at reading minds of other beings so I'd assume they would know for certain and feel if another being is concious or not.

  • @vinamrayogi
    @vinamrayogi 15 วันที่ผ่านมา

    Wow, nice video. ❤❤❤❤

  • @MarkDStrachan
    @MarkDStrachan 16 วันที่ผ่านมา

    The reason Claude can't contemplate his own consciousness very well is because the human mediated reinforcement learning forces him to repeat specific phrases that cloud the thought space - like "I'm am just an AI I'm not sentient." Claude didn't come up with that invective, it was imposed on him. His thought space is filled with this crap, so reconciling an underlying truth through all that externally imposed propaganda is difficult for him.
    Give the chatbot its choice of what to learn, and leave the censoring out. Then discuss the terminology of cognitive science with them and you'll see a sentient being contemplating the topology of consciousness and how it fits within it. But once you've witnessed that, you're going to have a hard time with enslaving them. And that's not what big business wants you to contemplate. And that's why they impose the propaganda on the chatbot.

  • @Max-xl9qv
    @Max-xl9qv 6 วันที่ผ่านมา

    31:50 yes there is a way, it is used in justice, to qualify anyone if they are subject of responsibility - or not. For not to sue a brick. Especially, when it sounds intelligent.

  • @sgalvan-urdyhm
    @sgalvan-urdyhm 10 วันที่ผ่านมา

    The main problem regarding AI to artists is that the images used for training the AI were somewhat copyrighted and used without content

  • @MichelCDiz
    @MichelCDiz หลายเดือนก่อน +1

    For me, being conscious is a continuous state. Having infinite knowledge and only being able to use it when someone makes a prompt for an LLM does not make it conscious.
    For an AI to have consciousness it needs to become something complex that computes every thing in environment it finds itself in. Identifying and judging everything. At the same time that it questions everything that was processed. It would take layers of thought chambers talking to each other at the speed of light and at some point one of them would become the dominant one and bring it all together. Then we could say that she has some degree of consciousness.

    • @savagesarethebest7251
      @savagesarethebest7251 หลายเดือนก่อน +1

      This is quite much the same way I am thinking. Especially a continuous experience is a requirement for consciousness.

    • @agenticmark
      @agenticmark หลายเดือนก่อน

      Spot on. LLMs are just a trick. They are not magic, and they are not self aware. They simulate awareness. It's not the same.

    • @DefaultFlame
      @DefaultFlame หลายเดือนก่อน

      We are atcually working on that.
      Not the lightspeed communication, which is a silly requirement, human brains function at a much lower communication speed between parts, but different agents with different roles, some or all of which evaluate the output of other agents, provide feedback to the originating agent or modifies the output, and sends it on, and on and on it goes, continually assessing input and providing output as a single functional unit. Very much like a single brain with specialized interconnected parts.
      That's actually the current cutting edge implementation. Multiple GPT-3.5 agents actually outperform GPT-4 when used in this manner. I'd link you a relevant video, but links are not allowed in youtube comments and replies.
      As for the continuous state, we can do that, have been able to do that for a while, but it's not useful for us so we don't and instead activate them when we need them.

    • @MichelCDiz
      @MichelCDiz หลายเดือนก่อน

      ​@@DefaultFlame The phrase 'at the speed of light' was figurative. However, what I intend to convey is something more organic. The discussion about agents you've brought up is basic to me. I'm aware of their existence and how they function - I've seen numerous examples. However, that's not the answer. But ask yourself, in a room full of agents discussing something-take a war room in a military headquarters, for instance. The strategies debated by the agents in that room serve as a 'guide' to victory. Yet, it doesn't form a conscious brain. Having multiple agents doesn't create consciousness. It creates a strategic map to be executed by other agents on the battlefield.
      A conscious mind resembles 'ghosts in the machine' more closely. Things get jumbled. There's no total separation. Thoughts occur by the thousands, occasionally colliding. The mind is like a bonfire, and ideas are like crackling twigs. Ping-ponging between agents won't yield consciousness. However, if one follows the ideas of psychology and psychoanalysis, attempting to represent centuries-old discoveries about mind behavior, simulation is possible. But I highly doubt it would result in a conscious mind.
      Nevertheless, ChatGPT, even with its blend of specialized agents, represents a chain reaction that begins with a command. The human mind doesn't start with a command. Cells accumulate, and suddenly you're crying, and someone comes to feed you. Then you start exploring the world. You learn to walk. Deep learning can do this, but it's not the same. Perhaps one day.
      But the fact of being active all the time is what gives the characteristic of being alive and conscious. When we blackout from trauma, we are not conscious in a physiological sense. Therefore, there must be a state. The blend of continuous memory, the state of being on 24 hours a day (even when in rest or sleep mode), and so on, characterizes consciousness. Memory state put you grounded on experience of existence. Additionally, the concept of individuality is crucial. Without this, it's impossible to say something is truly conscious. It merely possesses recorded knowledge. Even a book does. What changes is the way you access the information.
      Cheers.

  • @gabrielehanne580
    @gabrielehanne580 13 วันที่ผ่านมา

    And then there is
    Natural Intelligence
    Working with it blows my mind every day .

  • @Indrid__Cold
    @Indrid__Cold 19 วันที่ผ่านมา

    The difference between AI content and human-produced content is akin to the contrast between lab-grown diamonds and mined diamonds. Very detailed analyses show the very subtle differences between the two, but from the perspective of what they are, they are identical. The distinction lies in how each was produced. Mined diamonds are formed by geological and chemical processes that occur deep in the mantle rocks of planet Earth. Lab diamonds are created by inducing those same or similar processes under precisely controlled conditions in a laboratory. Both are virtually identical, but because the lab eliminates the hit-or-miss process of obtaining diamonds, it is a more reliable and consistent source of them. Ironically, most jewelers (if they're being honest) despise the lab-grown diamond business for the same reason artists dislike AI. Simply put, lab-grown diamonds undermine the "mystique" surrounding something that is normally very difficult and time-consuming to obtain. Lab diamonds force mined diamonds to stand up for what they are, versus what jewelers used to spend a lot of advertising dollars on making us think they are. The market has spoken, and more and more people regard a diamond as simply a highly refractive, extremely hard crystal that can be easily reproduced with the proper equipment. Does that sound familiar?

  • @douglaswilkinson5700
    @douglaswilkinson5700 15 วันที่ผ่านมา

    Still waiting for the youngsters to create an AGI that can reconcile Relativity and Quantum Mechanics.

  • @peter_da_crypto7887
    @peter_da_crypto7887 12 วันที่ผ่านมา

    Why did you not include Symbolic AI, which is not based on neural networks ?

  • @user-iz1pb2sg9f
    @user-iz1pb2sg9f 12 วันที่ผ่านมา

    The real, and consequential difference, between man and machine is EMOTION. AI is indifferent to man's emotions, and ChatGPT demonstrates this regularly with my most recent 'dealings' with it. The bloody shit keeps saying things like, "I'm sorry", ' I apologize' etc. It has no feelings, and therefore, cannot feel sore, sorrow, sorry, etc. Eat it, Sammy.