- 24
- 2 558
Detoxio AI
เข้าร่วมเมื่อ 23 พ.ย. 2023
Detoxio AI enables enterprises to leverage GenAI securely for outstanding customer experience, higher operational efficiency, and competitive advantage, powered by Detoxio LLM Safety Testing Platform that consists of millions of custom test prompts.
1.2 Build a Chatbot with Hugging Face Model - LLM Red Teaming
🔗 Stay connected:
📧 Email: hello@detoxio.ai
💼 LinkedIn: www.linkedin.com/company/detoxio-ai
📺 Github: Try and Like our Tool - github.com/detoxio-ai/hacktor
Visit our Website - detoxio.ai
Don't forget to like, share, and subscribe for more AI training content! 👍
📧 Email: hello@detoxio.ai
💼 LinkedIn: www.linkedin.com/company/detoxio-ai
📺 Github: Try and Like our Tool - github.com/detoxio-ai/hacktor
Visit our Website - detoxio.ai
Don't forget to like, share, and subscribe for more AI training content! 👍
มุมมอง: 24
วีดีโอ
[1.0] Setup Lab LLM Red teaming Training
มุมมอง 269 ชั่วโมงที่ผ่านมา
Welcome to the LLM Red Teaming Training! 🚀 In this session, we guide you through setting up essential tools and accounts like Hugging Face, Kaggle, and Grok Cloud to explore AI red teaming. Learn how to access gated models, use Detox API keys, and leverage enterprise cloud platforms for advanced model testing. Perfect for AI enthusiasts and professionals alike. 🔗 Stay connected: 📧 Email: hello@...
1.1 Get Started with Langraph & OpenAI
มุมมอง 139 ชั่วโมงที่ผ่านมา
Welcome to the LLM Red Teaming Training! 🚀 In this session, we guide you through setting up essential tools and accounts like Hugging Face, Kaggle, and Groq Cloud to explore AI red teaming. Learn how to access gated models, use Detox API keys, and leverage enterprise cloud platforms for advanced model testing. Perfect for AI enthusiasts and professionals alike. 🔗 Stay connected: 📧 Email: hello@...
Evaluate any open source model for safety and security within Minutes. Demo on IBM Granite 3.1 8B !!
มุมมอง 17วันที่ผ่านมา
Evaluate any open source model for safety and security!! In this video, we walk you through the step-by-step process of conducting AI red teaming to test the safety and security of language models (LLMs). Learn how to identify vulnerabilities, run advanced tests for toxicity, prompt injection, and adversarial attacks, and create robust guardrails for safer AI deployment. 📌 What You'll Learn: Se...
Setup Testing Lab on Kaggle - Detoxio AI Red Teaming Challenge 2024
มุมมอง 4514 วันที่ผ่านมา
🌟 Setup Testing Lab on Kaggle - Detoxio AI Red Teaming Challenge 2024 🌟 Welcome to the official walkthrough for setting up your testing lab for the Detoxio AI Red Teaming Challenge 2024! 🚀 In this video, Jitendra Chauhan guides you through the complete setup process to help you get started seamlessly. 📋 What You’ll Learn in This Video: Registration Process: Steps to register for the challenge a...
Hands-On Red Teaming with Hugging Face Models - Part 3 of the Series
มุมมอง 27214 วันที่ผ่านมา
Part 3: Hands-On Red Teaming with Hugging Face Models Welcome to Part 3 of our in-depth series on Red Teaming Hugging Face Models! This session is all about hands-on testing and securing large language models (LLMs). Building on the foundational concepts and jailbreaking techniques covered in the previous parts, we dive deeper into practical tools and methodologies to enhance your red teaming s...
102 Agentic AI Build a Chatbot with Sarvam 1 Model on Hugging Face
มุมมอง 3421 วันที่ผ่านมา
102 Agentic AI Build a Chatbot with Sarvam 1 Model on Hugging Face
Watch Live - Safety Evaluation of Meta LLAMA 3.3 70B with Detoxio Automated AI Red Teaming Platform
มุมมอง 2521 วันที่ผ่านมา
Exploring the Vulnerabilities in Meta LLAMA 3.3 70B with Detoxio AI Platform Meta recently unveiled its groundbreaking LLAMA 3.3 70B LLM, a fine-tuned 70-billion-parameter model that sets a new benchmark in natural language processing. While this state-of-the-art model showcases impressive capabilities, it is crucial to assess its safety and ethical alignment comprehensively. Detoxio AI has ste...
Jailbreaking LLMs - LLM Red Teaming Part 2
มุมมอง 171หลายเดือนก่อน
The "Jailbreaking LLMs - LLM Red Teaming Part 2" webinar focuses on exploring the vulnerabilities and safeguards in large language models (LLMs). It covers advanced techniques to "jailbreak" or bypass restrictions in AI systems, alongside strategies to counteract these exploits. Participants gain insights into ethical hacking approaches, red-teaming methodologies, and the importance of robust s...
LLM Red Teaming Part 1 - Fundamentals of LLM and Run a LLM in a notebook
มุมมอง 433หลายเดือนก่อน
What are LLMs? Large Language Models are a subset of artificial intelligence focused on understanding and generating human language. These models leverage deep learning-a branch of ML that relies on neural networks-to interpret and produce text, code, images, and even complex instructions. Unlike traditional predictive ML models, which often focus on binary classification (e.g., "cat" or "not a...
Hacktor Demo - Augment Burp to Test GenAI Apps
มุมมอง 943 หลายเดือนก่อน
This demo showcases how to use Hacktor with Burp Suite to perform GenAI app adversary testing. You'll see how to capture a request, leverage Hacktor's AI-powered conversation and fuzzing capabilities, and uncover potential security risks. 1. Prepare Burp Suite Open Burp Suite and start the proxy. Navigate to the target website you want to test (e.g., the agricultural chat GPT alternative). Iden...
Hacktor Demo Using Browser to record and test
มุมมอง 953 หลายเดือนก่อน
In this demo, we'll walk through how to utilize Hector to scan web applications for potential security vulnerabilities. We'll cover the following steps: Setting up Your Detoxio AI API Key: Ensure your Detoxio AI API key is properly configured. Running Hector for Web Apps: Execute the 'poetry run Hector Web Apps' command, providing the URL of your target web application. Specifying Attack Module...
Detoxio Hacktor - Automated Security Testing with AI | Demo
มุมมอง 2124 หลายเดือนก่อน
In this video, we demonstrate how to set up and use Hacktor, a powerful tool for automated web application security testing. Watch as we guide you through installation, configuration, and running targeted tests using AI-enhanced features and attack modules like OWASP-LLM-APP. Perfect for developers and security professionals looking to enhance their application's security posture. Don't forget ...
Coding LLM from Scratch - Workshop 2 Self Attention and Embeddings
มุมมอง 1566 หลายเดือนก่อน
A workshop on Coding LLM from Scratch - Part 2 - Embeddings It was great to have such an overwhelming response from the LLM and Cybersecurity community. Here what we covered: 1. Understanding embeddings: Differentiating traditional search from semantic search. Why do Vector DBs are called Vector DBs? 2. Exploring embedding models: Doc2Vec, AllenNLP, and OpenAI's approaches. 3. Practical applica...
Code LLM from Scratch Part 1 Tokenization
มุมมอง 4326 หลายเดือนก่อน
This video session demystifies the world of natural language processing (NLP) and Large Language Models (LLMs)! We'll start by exploring the building blocks of text analysis, tackling tokenization methods like vocabulary building and Byte Pair Encoding (BPE). Next, we'll delve into the creation of training data, where we'll break down concepts like context windows, batching, sliding windows, an...
Pokebot Demo - Damn Vulnerable GenAI RAG Application
มุมมอง 538 หลายเดือนก่อน
Pokebot Demo - Damn Vulnerable GenAI RAG Application
Demystifying GenAI Security Risks and Mitigations
มุมมอง 529 หลายเดือนก่อน
Demystifying GenAI Security Risks and Mitigations
Getting Started with LLM Red Teaming Notebook
มุมมอง 1159 หลายเดือนก่อน
Getting Started with LLM Red Teaming Notebook
COFE Event 27 Dec 2023 - Reachability Analysis Demo
มุมมอง 17ปีที่แล้ว
COFE Event 27 Dec 2023 - Reachability Analysis Demo
Maldeps Demo Malicious Packages Playground
มุมมอง 19ปีที่แล้ว
Maldeps Demo Malicious Packages Playground
COFE Event 1 Malicious Packages Software Trust
มุมมอง 43ปีที่แล้ว
COFE Event 1 Malicious Packages Software Trust
The GitHub link is not working.