Test your favorite LLM by asking questions about mildly controversial issues. So far I find the obvious bias and misinformation disturbing. It just reflects the social biases, politics, regional beliefs, and "wokeness" of the builders. Mind control.
I'm using GPT-4 and Google Gemini Advanced. Both have serious flaws in understanding simple contextual follow-up questions. Very often both go in explanatory circles when I point out inconsistencies or information that is clearly wrong. Maybe someone with a below-average IQ is impressed. I'm certainly not.
Thank you for sharing your insights! Claude Sonnet does better here because of its larger context window and input token limit. Hallucination is an area where even top models like GPT-4 and Claude struggle. Running at lower temperatures can make outputs more consistent and help mitigate hallucination, but no model is completely immune. It will be interesting to see how both models evolve in handling complex, nuanced queries!
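On the temperature point: lowering the temperature sharpens the model's next-token distribution, which is why outputs become more consistent (though not necessarily more truthful). A minimal sketch of how temperature scaling works, using toy logits rather than any particular model's API:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.

    Lower temperature sharpens the distribution, so sampling
    becomes more deterministic: the model picks its top token
    more consistently.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token scores (hypothetical, for illustration only).
logits = [2.0, 1.0, 0.5]

cool = softmax_with_temperature(logits, 0.2)  # low temperature
warm = softmax_with_temperature(logits, 1.0)  # default temperature

# The top token's probability grows as temperature drops,
# so repeated samples agree more often.
print(cool[0] > warm[0])  # True
```

Note that this only reduces output variance; a confidently wrong top token stays wrong at any temperature.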
I feel like you can use them as a base, then search for more from that base, accumulate the information, find a connection that makes sense, and reach a conclusion. Yesterday I searched for an explanation of a formula. Claude gave me some explanation, but sometimes it also gets confused and loses track. So I searched in a book and on YouTube, where I found the core concept. I combined that with Claude's explanation, added the other information I had found, and boom... Claude got it and I got it.
@TheAIRoundup So this is called AI hallucination 😂😂😂. I can express that much less euphemistically: hallucinations can eventually be fixed, but AI is not intelligent at all. It is simply advanced pattern-recognition software; it lacks real understanding of what it is responding to, and it certainly doesn't understand its own reply. And that core problem can't be fixed with current models.
Anything huh
what about GPT Next?
That’s not a new model, just an improved version.