NIST AI Risk Management Framework & Generative AI Profile | Lunchtime BABLing 36

  • Published Aug 4, 2024
  • 🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.
    🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.
    📑 This episode takes its title from the companion document, "NIST AI Risk Management Framework: Generative AI Profile," which focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.
    🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies.
    🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.
    🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.
    🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!
    0:00 - Intro
    1:15 - What is the NIST AI Risk Management Framework and the Generative AI Profile?
    3:16 - What is generative AI?
    6:28 - How does generative AI affect misinformation/disinformation?
    13:18 - How do we get companies to adopt the NIST AI RMF?
    16:33 - Will the NIST AI RMF ever become required by law?
    19:11 - Building trust and improving the bottom line.
    20:34 - Can NIST compliance be like SOC 2?
    27:48 - What is AI assurance?
    31:27 - US AI Safety Institute Consortium work
    36:00 - What is NIST?
    39:13 - How do you start implementing the NIST AI RMF?
  • Science & Technology

Comments • 5

  • @WandaBarquinG • 3 months ago +1

    Thank you!

  • @devsuniversity • 3 months ago +1

    Great stream!

  • @learnwithashishtripathi • 1 month ago

    How is the industry market, in terms of revenue, for setting up a consulting firm that specialises in AI risk management based on the laws governing various regions across the globe? For example, NIST in the US, while the EU has its own, and so on...

  • @devsuniversity • 3 months ago +1

    Is there an alignment between ISO 42001 (AI Management) and NIST? @bablai

    • @bablai • 3 months ago

      The NIST AI RMF does have quite a bit of overlap with ISO 42001: many elements of the Govern, Map, Measure, and Manage functions can be mapped onto ISO 42001 controls. However, it's not a perfect mapping, and NIST is both more high-level and very specific at the same time. For example, the Generative AI guidelines that NIST released are not present at all in ISO 42001.