Scott and Mark learn responsible AI | BRK329

  • Published on Jan 16, 2025
  • Join Mark Russinovich and Scott Hanselman as they explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection), examining each risk's origins, potential impacts, and mitigation strategies, and how to harness the immense potential of LLMs while responsibly managing their inherent risks.
    To learn more, please check out these resources:
    aka.ms/TCL/Mic...
    learn.microsof...
    aka.ms/IgniteA...
    aka.ms/Ignite2...
    Speakers:
    Mark Russinovich
    Scott Hanselman
    Session Information:
    This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at ignite.microso...
    BRK329 | English (US) | Security
    #MSIgnite
