AI is NOT Intelligent (Harvard Univ, Cornell & MIT)

  • Published 18 Nov 2024

Comments • 33

  • @Jvo_Rien · 1 day ago · +5

    Thank you for sharing these publications. As someone working on mechanistic interpretability, this gave me a few ideas.

  • @NeoKailthas · 1 day ago · +5

    Can they define intelligence first? Secondly, can they show how the human brain works and demonstrate that intelligence?

    • @elmichellangelo · 1 day ago

      That…they just surf on the actual definition

  • @mrpocock · 1 day ago · +1

    I think we're past the point where those crafting definitions to rule AI not intelligent are coming up with definitions that would also rule out people as intelligent.

  • @WokeSoros · 1 day ago · +2

    If MS's capital expenditures are greater than their payroll, it would seem at a glance that AI solutions are more expensive than hiring humans? Seems about right. At best this tech is an added cost, not a replacement for employees.
    Currently, trying to replace an employee with AI is about as logical as trying to replace a drywaller with a cordless drill.
    A cordless drill makes hanging drywall more efficient; it doesn't replace your worker.

    • @seeibe · 1 day ago · +2

      AI is like C vs assembly. It can save you a bunch of time but you still need a human doing the work.

  • @user-qw1rx1dq6n · 18 hours ago

    I have to ask what they meant by insufficient training data. Was the model trained only on the turns, or on the image and the turns? Because if so, the situation is very different. Consider: if there was little training data and no context, how would you as a person know what was in a given region of space? As long as you didn't see enough routes through the region, it would be an undefinable void to you. Could you clarify? Without that information I can't really be sure about the conclusion, and I haven't understood it from looking at the paper either.

  • @mrpocock · 1 day ago

    I've thought about it a bit more. What was the training regime? Did they train to grokking? I wonder if they trained to memorisation, but not long enough for that phase change to understanding.

  • @dgrxd · 1 day ago · +7

    Another "sour grapes" paper.
    This is like "free will doesn't exist."
    BS definition, BS irrelevant conclusion.
    People don't have free will, but they act like they do. Same here: neural networks are not intelligent, but they behave like they are. And that is all that matters.
    I can't prove that anyone other than me is sentient, but as long as you guys behave like you are sentient, that's all that matters. Hehe ❤

    • @Wrociem · 1 day ago · +1

      Well, it's better than the constant hype of AI and LLMs being something more than they really are. We need more sober research that gives us an alternative view. In terms of free will or sentience, I think we are very far from answering that when it comes to current AI.

    • @ronilevarez901 · 20 hours ago

      And how exactly do you prove that you are sentient?
      Maybe you aren't and you don't know XD

    • @dgrxd · 26 minutes ago

      @ronilevarez901 "Sentience" is an ID/KEY/WORD/POINTER that I ascribe to a specific set of behaviours (a VALUE/CONCEPT/DEFINITION) that, from my perspective, some complex adaptive systems/organisms seem to exhibit. Behaviours that, after enough observations to reach more than five sigmas of confidence, I myself seem to exhibit.
      Some Notes:
      1. The IDs/KEYs/WORDS shouldn't be confused with the CONCEPTS/VALUES they represent. This is similar to how money is a representation of value, not the value itself.
      2. The common VALUE/DEFINITION ascribed to the WORD/KEY/ID "Sentience" was, for a long time, passing the Turing Test. Now that AI can pass it (probably by pretending that it is dumber and slower than it really is), people are in the process of moving the goalposts, i.e. changing the definition/VALUE that they ascribe to the ID/WORD "Sentience." They haven't yet settled on a new definition/TEST/VALUE, so how can anything nowadays be considered "Sentient" when this ID/KEY/WORD/POINTER currently has no VALUE/DEFINITION/TEST ascribed to it?
      3. It's fine when people change the goalposts/DEFINITIONS/TESTS to increase accuracy, precision, and confidence, or to challenge themselves, but most of the time people do this because they aren't happy with the result of the TEST; in this case, that something other than themselves passed the TEST/DEFINITION. So people are now changing the TEST/DEFINITION ascribed to "Sentience" (i.e. they are cheating) such that AI won't be able to pass. This is like playing chess, being extra kind and agreeing to let your opponent undo/redo their moves when they blunder, and then having them go around bragging that you can't beat them at chess.
      4. You may never be able to prove anything if you are insisting that everything must be proven all the way down, including proving the axioms.
      5. I was born with 5 × 4 fingers and 5 × 4 teeth. My teeth later got replaced by 2⁵ teeth, but my fingers didn't. Why, God, why? I would have loved 32 fingers ❤️

  • @samyio4256 · 1 day ago · +1

    Hello! Thanks for your valuable content, I learned so much!
    Btw: I think I found a way to enable temporal reasoning and real-time self-awareness. It's crazy, I can't stop testing.

  • @thielt01 · 1 day ago · +1

    I am also fragile to detours

  • @CielMC · 18 hours ago

    Artificial intelligence is intelligent. But the modern LLMs are not AI, despite how much all the companies that invested in them want you to think they are. They are technologies that came out of AI research, a side product, not the end result. We are not close to AI; we just have something that's a pretty good parrot.
    We have had ML for decades, yet it only blew up now because it can kind of speak to you, enough to shock the general public. The products of AI research have benefited the world long before everything got an AI tag slapped on it.

  • @RealisiticEdgeMod · 22 hours ago

    Sad but true. The frontier LLMs don't understand anything. They sometimes chance upon the correct answer.

  • @MusingsAndIdeas · 1 day ago

    They do realize that Anthropic has an entire series of articles on this, right? I don't trust this study.

    • @code4AI · 1 day ago · +4

      You decide what and whom to believe: the company that sells you the product, or other sources. Therefore, do not continue to think about my video or read the research by three universities. You know better.

    • @dennisestenson7820 · 1 day ago

      What was Anthropic's conclusion? I tend to agree with this paper.

    • @context_eidolon_music · 1 day ago

      @code4AI Stop producing garbage-tier content if you aren't going to take a stance.

    • @ckckck12 · 1 day ago

      Regurgitation isn't intelligence. I have asked many of the best AI systems questions whose answers require synthesis and none have been capable of doing so. Furthermore, they usually reveal how poorly they were trained and refer to theories by others as facts.

    • @Wrociem · 1 day ago · +1

      @context_eidolon_music How is it garbage content? In my opinion, this is a very important question to have some answer to. There is too much hype in the AI space and too many people wishing for something that may not be possible.