Prevent AI hallucinations with these prompts

  • Published Dec 21, 2024
  • Does your LLM keep making stuff up? 🤔 Stop the nonsense. Get precise answers by asking more specific questions. Hallucinations happen when you wing it.
    Learn how to get confidently correct information from your LLMs with these simple fixes from Shawn Dawid. 👇
    Remember: Specific prompts = solid answers. 👏
    Keep your LLM grounded with this effective flow: clear prompt → grounded context → accurate answer (see the sketch after this description).
    Things to avoid:
    🚫 Vague prompts.
    🚫 Missing or poorly defined context.
    🚫 No way to confirm the validity of the answer.
    Watch the full video for a deeper dive into recursive prompting and get AI outputs you can trust.
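
A minimal sketch of the clear prompt → grounded context → accurate answer flow described above, assuming a hypothetical call_llm() helper in place of any specific LLM client. The function names, prompt wording, and example data are illustrative assumptions, not material from the video.

```python
# Sketch of the flow: clear prompt -> grounded context -> accurate answer.
# call_llm() is a hypothetical stand-in for whatever LLM client you use;
# it is not an API shown in the video.

def build_grounded_prompt(question: str, context: str) -> str:
    """Combine a specific question with source context and an explicit
    instruction to admit uncertainty instead of guessing."""
    return (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, reply 'I don't know'.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Cite the sentence from the context that supports your answer."
    )


def ask(question: str, context: str, call_llm) -> str:
    # Vague prompt to avoid: "Tell me about our refund policy."
    # A specific, grounded prompt is built here instead.
    prompt = build_grounded_prompt(question, context)
    return call_llm(prompt)


if __name__ == "__main__":
    # Toy stand-in for a real model so the sketch runs on its own.
    fake_llm = lambda prompt: (
        "Customers have 30 days from purchase to request a refund. "
        "(Supported by: 'Refunds are accepted within 30 days of purchase "
        "with a receipt.')"
    )
    context = "Refunds are accepted within 30 days of purchase with a receipt."
    question = "How long do customers have to request a refund?"
    print(ask(question, context, fake_llm))
```

The "I don't know" escape hatch and the citation request give you a way to confirm the validity of the answer, which addresses the last item on the avoid list.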

Comments • 2

  • @ThePlayerOfGames · 19 days ago · +1

    Prompts won't stop hallucinations because that's not how the statistical weighting machine works. The computer doesn't understand context, only statistical correlation.