Finding a Balance: LLMs, Innovation, and Security

  • Published Oct 16, 2024
  • The MLSecOps Podcast | Season 2 Episode 6
    With Guest Sandy Dunn
    And Hosts from Protect AI: Daryan Dehghanpisheh and Dan McInerney
    In this episode of The MLSecOps Podcast, special guest Sandy Dunn joins us to discuss the dynamic world of large language models (LLMs) and the balance between innovation and security. Co-hosts Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risk.
    Exploring the swift pace of innovation alongside the imperative of robust security measures, the trio examines the critical need for organizations to adapt their security posture management to account for AI usage.
    Thanks for listening! Find more episodes and transcripts at bit.ly/MLSecOp....
    Additional MLSecOps and AI Security tools and resources to check out:
    Protect AI Radar (bit.ly/Protect...)
    Protect AI’s ML Security-Focused Open Source Tools (bit.ly/Protect...)
    LLM Guard - The Security Toolkit for LLM Interactions (llm-guard.com)
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform (bit.ly/aimlhuntr)

Comments • 1

  • @Waitwhat469 • 7 months ago

    On agentic execution, I personally think it's not there yet. But as a next step, once the competency gap narrows, I see treating agentic LLMs as continuous contributors to a project, in the same way open source allows anyone to contribute code and audit new code. Basically, you don't trust the code to work, to be secure, or to be run in prod/delivery, just as you don't trust anonymous merge requests. With a robust enough CI/CD system, it should be possible to filter out straightforward attacks, known CVE introductions, and code that fails unit, integration, and e2e tests, but the maintainer role is still too high a position of accountability to hand over quite yet.
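
The comment describes gating agentic-LLM contributions with automated checks while keeping merge authority with a human maintainer. Below is a minimal, hypothetical sketch of such a gate, assuming a Python project that uses pytest, bandit, and pip-audit; the specific checks and script structure are illustrative assumptions, not a tool or workflow mentioned in the episode.

```python
"""Sketch of an automated gate for untrusted (e.g., agentic-LLM) merge requests.

Assumptions: a Python project with pytest, bandit, and pip-audit installed.
The gate only reports pass/fail; merging remains a human maintainer's decision.
"""
import subprocess
import sys

# Each check is (label, command). Any non-zero exit code fails the gate.
CHECKS = [
    ("unit/integration/e2e tests", ["pytest", "-q"]),
    ("static security scan", ["bandit", "-r", "src", "-q"]),
    ("known-CVE dependency audit", ["pip-audit"]),
]


def gate_untrusted_contribution() -> bool:
    """Run all checks; return True only if every check passes."""
    all_passed = True
    for label, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "PASS" if result.returncode == 0 else "FAIL"
        print(f"[{status}] {label}")
        if result.returncode != 0:
            all_passed = False
    return all_passed


if __name__ == "__main__":
    if gate_untrusted_contribution():
        # Checks passed: flag for maintainer review; do NOT auto-merge.
        print("Automated gate passed -- awaiting maintainer approval.")
        sys.exit(0)
    print("Automated gate failed -- contribution rejected.")
    sys.exit(1)
```

In practice this would run as a CI job on each merge request, mirroring the comment's point: automation filters out obvious failures, but accountability for accepting the change stays with the maintainer.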