How ChatGPT 5 Will Transform the World: Here’s What You Must Know

  • Published on Sep 21, 2024

Comments • 8

  • @Architect172 12 days ago +2

    Anything huh

  • @orwhat24 12 days ago +4

    Test your favorite LLM by asking it questions about mildly controversial issues.
    So far I find the obvious bias and misinformation disturbing. It just reflects the social biases, politics, religious beliefs, and "wokeness" of the builders.
    Mind control.

  • @matthiaswille8641 12 days ago +1

    I'm using GPT-4 and Google Gemini Advanced. Both have serious flaws in understanding simple questions that build on context. Very often both go in explanatory circles when I point out inconsistencies or information that is clearly wrong. Maybe someone with a below-average IQ is impressed. I'm certainly not.

    • @TheAIRoundup 12 days ago

      Thank you for sharing your insights!
      Claude Sonnet is much better because of its larger context window and input token limit.
      AI hallucination is an area where even top models like GPT-4 and Claude can struggle. Checking for consistency across a few samples at lower temperatures can help mitigate hallucination (rough sketch below), but no model is completely immune. It will be interesting to see how both models evolve in handling complex, nuanced queries in the future!
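      Here's a minimal sketch of what I mean, assuming the OpenAI Python SDK (openai>=1.x) and a placeholder model name; it asks the same question a few times at a low temperature and flags disagreement between the samples:

      from collections import Counter
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def ask_with_consistency(question: str, temperature: float = 0.2, samples: int = 3) -> str:
          """Sample the same question several times at a low temperature and return
          the most common answer; disagreement hints at hallucination, it doesn't prove it."""
          answers = []
          for _ in range(samples):
              resp = client.chat.completions.create(
                  model="gpt-4o",  # placeholder model name
                  messages=[{"role": "user", "content": question}],
                  temperature=temperature,
              )
              answers.append(resp.choices[0].message.content.strip())
          # Exact-match counting is crude (wording can differ between samples),
          # but it is enough to show the idea of a consistency check.
          answer, count = Counter(answers).most_common(1)[0]
          if count < samples:
              answer += "\n[Samples disagreed; verify against a primary source.]"
          return answer

      print(ask_with_consistency("How large is GPT-4 Turbo's context window?"))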

    • @JohnDoe-t4q 12 days ago +1

      I feel like you can use them as a base, then search for more from that base, accumulate the information, find a connection that makes sense, and reach a conclusion. So yesterday I searched for an explanation of a formula; Claude gave me some explanation, but sometimes it also got confused and lost track. Then I searched in a book and on YouTube, where I found the core concept. Adding Claude's explanation, I then added the other information I had gathered, and boom... Claude got it and I got it.

    • @matthiaswille8641 12 days ago

      @TheAIRoundup So this is called AI hallucination 😂😂😂. I can put that much less euphemistically: hallucinations can eventually be fixed; however, AI is not intelligent at all. It is simply advanced pattern-recognition software. It lacks real understanding of what it is responding to, and it certainly doesn't understand its own reply. And that main problem can't be fixed with current models.

  • @RukhsarManzoor 14 days ago +1

    What about GPT Next?

    • @TheAIRoundup 14 days ago +1

      That’s not a new model, just an improved version.