Compromising LLMs: The Advent of AI Malware

แชร์
ฝัง
  • เผยแพร่เมื่อ 5 ก.ย. 2024
  • We'll show that prompt injections are more than a novelty or nuisance- in fact, a whole new generation of malware and manipulation can now run entirely inside of large language models like ChatGPT. As companies race to integrate them with applications of all kinds we will highlight the need to think thoroughly about the security of these new systems. You'll find out how your personal assistant of the future might be compromised and what consequences could ensue.
    By: Sahar Abdelnabi , Christoph Endres , Mario Fritz , Kai Greshake , Shailesh Mishra
    Full Abstract and Presentation Materials: www.blackhat.c...

ความคิดเห็น •