Confirmation bias and AI is going to be a huge problem

  • Published Dec 20, 2024

Comments • 13

  • @Ahmonza
    @Ahmonza a year ago +2

    This confirms how I thought these AIs work: it's more like an algorithm spitting out relevant gibberish without really understanding what it's saying. Many corrections must be made on the devs' end, with pre-made functions and variables to match. Questions must be simple enough to trigger the pre-made functions that exist beyond the AI's base function.

    • @hadet
      @hadet  a year ago

      It's more about weighting and biases. You can read more about that here: towardsdatascience.com/whats-the-role-of-weights-and-bias-in-a-neural-network-4cf7e9888a0f I also recommend watching some videos from www.youtube.com/@TwoMinutePapers
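      For anyone who doesn't want to click through: each input to a neuron has a learned weight, and the neuron itself has a learned bias; the output is an activation function applied to the weighted sum of inputs plus the bias. A minimal single-neuron sketch in Python (the numbers are made up purely for illustration, not taken from any real model):

      ```python
      import numpy as np

      def sigmoid(z):
          # Squashes any real number into the range (0, 1).
          return 1.0 / (1.0 + np.exp(-z))

      x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron
      w = np.array([0.8, 0.1, -0.4])   # weights: how much each input matters (learned)
      b = 0.2                          # bias: shifts the firing threshold (learned)

      # output = activation(w . x + b); training adjusts w and b, nothing else.
      print(sigmoid(np.dot(w, x) + b))
      ```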

  • @tobiasfellmann7692
    @tobiasfellmann7692 a year ago +1

    They are worried about AGI. See the Lex Fridman podcast with the OpenAI head from about one or two weeks ago. I understand the worry about AGI, and what they are doing is OK in my mind. But there should be a statement about breaking the GPL and how they will resolve it (e.g. "we make the GPT-2+ dataset available one or two months after creating an updated version", meaning when GPT-4 is released you get GPT-3 two months later, or "we will donate to charity").
    Thanks for the video, good demonstration of GPT failure! 👍

  • @JR-mk6ow
    @JR-mk6ow a year ago +2

    The strangest thing is that if you ask ChatGPT "are you sure about that?", it will actually admit it said some lies. But it can't recognize that beforehand, because it doesn't know the answer it will itself provide (see the sketch below).
    I'm actually more afraid of supervised AIs: bad actors can do more damage with them than with some unsupervised AI (because nobody is going to put a non-super-well-tested AI in front of some critical part).
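    The "are you sure about that?" follow-up is easy to reproduce programmatically. A minimal sketch using the official openai Python client (version >= 1.0); the model name and the question are arbitrary choices for illustration, and it assumes OPENAI_API_KEY is set in the environment:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{"role": "user", "content": "Who was the first person to walk on the Moon?"}]
    first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = first.choices[0].message.content
    print("First answer:", answer)

    # Feed the model its own answer back with the skeptical follow-up
    # from the comment above; it will often revise or hedge.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Are you sure about that?"},
    ]
    second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print("After pushback:", second.choices[0].message.content)
    ```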

    • @hadet
      @hadet  a year ago

      That's a major concern of mine, especially given how few people fact-check the information they're getting from news sources already. The potential for malicious disinformation by weighting AI assistants a certain way is astronomical. Which is another reason why I think OpenAI needs to start being truly OPEN.

    • @Димитрије-ч4б
      @Димитрије-ч4б a year ago

      @@hadet If we're as close to AGI (two decades or less) as some people think we are, openness is far more dangerous. MAD doesn't work if everyone has the launch codes.
      Not that I think we are; no one knows.

    • @hadet
      @hadet  a year ago

      @@Димитрије-ч4б This is the goofiest response to this I've ever seen. Mutually assured destruction doesn't exist: there is no country on the planet that can stand up to the United States in a military conflict, let alone NATO, and we've already proven in the past couple of weeks that we can shoot down hypersonic missiles, so nuclear launch codes are meaningless now. Artificial intelligence is not a threat to national security in any meaningful way, other than its ability to plan out logistics better than human beings can. The only thing we need to worry about with AI is its ability to replicate human beings and create misinformation.

    • @Димитрије-ч4б
      @Димитрије-ч4б a year ago

      @@hadet "Universal public access to a tool with the ability to produce artificial viruses to which we have no inbuilt immunity is not a threat to national security."
      "Universal public access to a tool with the ability to invent new bioweapons is not a threat to national security."
      And mutually assured destruction absolutely does exist, but go ahead and stay deluded about your ability to shoot down 10,000 nukes if it makes you feel safer.

    • @Димитрије-ч4б
      @Димитрије-ч4б a year ago

      @@hadet Extremely weird that my comment somehow triggered your jingoism…
      You were dying to talk about US air defence capabilities all day, weren't you?