Intro to LLM Security - OWASP Top 10 for Large Language Models (LLMs)

แชร์
ฝัง
  • เผยแพร่เมื่อ 24 ก.ค. 2024
  • Workshop links:
    WhyLabs Sign-up: whylabs.ai/free
    LangKit GitHub (give us a star!): github.com/whylabs/langkit
    Colab Notebook: bit.ly/whylabs-OWASPLLM10
    Join the Responsible AI Slack Group: join.slack.whylabs.ai/
    Join our workshop designed to equip you with the knowledge and skills to use LangKit with Hugging Face models. Guided by WhyLabs CEO Alessya Visnjic, you'll learn how to assess the security risks of your LLM application and how to protect your application from adversarial scenarios.
    This workshop will cover how to tackle the OWASP Top 10 security challenges for Large Language Model Applications (version 1.1).
    LLM01: Prompt Injection
    LLM02: Insecure Output Handling
    LLM03: Training Data Poisoning
    LLM04: Model Denial of Service
    LLM05: Supply Chain Vulnerabilities
    LLM06: Sensitive Information Disclosure
    LLM07: Insecure Plugin Design
    LLM08: Excessive Agency
    LLM09: Overreliance
    LLM10: Model Theft
    What you’ll need:
    A free WhyLabs account (whylabs.ai/free)
    A Google account (for saving a Google Colab)
    Who should attend:
    Anyone interested in building applications with LLMs, AI Observability, Model monitoring, MLOps, and DataOps! This workshop is designed to be approachable for most skill levels. Familiarity with machine learning and Python will be useful, but it's not required to attend.
    By the end of this workshop, you’ll be able to implement security techniques to your large language models (LLMs) .
    Bring your curiosity and your questions. By the end of the workshop, you'll leave with a new level of comfort and familiarity with LangKit and be ready to take your language model development and monitoring to the next level.
  • วิทยาศาสตร์และเทคโนโลยี

ความคิดเห็น • 2

  • @charlieir1
    @charlieir1 14 วันที่ผ่านมา

    Very helpful, thanks Alessya

  • @TheAckdog
    @TheAckdog หลายเดือนก่อน

    Great intro to LLM Security. Thank you! :)