Zenity
Zenity
  • 26
  • 35 374
Overpermissions in Salesforce Einstein
Zenity Researchers discovered a setting in Salesforce Einstein that makes it so that bad actors can edit Copilot Topics that can result in data leakage, social engineering attacks, and more.
มุมมอง: 15

วีดีโอ

The Microsoft 365 Copilot Security Blueprint
มุมมอง 21221 วันที่ผ่านมา
The rapid adoption of enterprise copilots, like the newly renamed and revamped Microsoft 365 Copilot is revolutionizing how business gets done. As large enterprises rush to integrate and expand their M365 capabilities, they inadvertently create an entirely new attack vector, most notably - promptware, which can lead to Remote Copilot Execution (RCE). Promptware operates within business applicat...
Webinar: The State of Enterprise Copilots and Low-Code Development
มุมมอง 129หลายเดือนก่อน
In traditional application development, apps follow a structured software development lifecycle (SDLC) with continuous planning, design, implementation, measurement, and analysis. However, the rise of platforms like Microsoft Copilot, Power Platform, Salesforce, OpenAI, ServiceNow, Zapier, and UiPath is changing the landscape; putting business users at the forefront of software development for ...
AI and Low-Code / No-Code: Friends or Foes?
มุมมอง 842 หลายเดือนก่อน
As ChatGPT and Generative AI take the world by storm, the underlying reason is that people are always looking to leverage technology to maximize outputs, increase speed, and remove obstacles for end users. The same goes for low-code/no-code development, where businesses are enabling both professional and citizen developers to use visual interfaces and drag and drop templates to enable people fr...
Microsoft Copilot Studio: What to Know from a Security Perspective
มุมมอง 2022 หลายเดือนก่อน
Microsoft introduced Copilot Studio at Ignite Conference 2023, which allows users to seamlessly integrate Generative AI Copilots into their applications through a no-code approach. This naturally opens up lots of new security risks. Zenity has become the first company to offer comprehensive support for securing and governing this groundbreaking tool, ensuring CISOs and security teams can naviga...
The Error Up There: Security Needed for Copilots
มุมมอง 1412 หลายเดือนก่อน
Copilots aren’t just for aviation anymore; they are embedded into nearly every business and personal productivity tool out there today, be it Microsoft 365 or Power Platform. Microsoft Copilots help bring efficiency to the next level. The problem is, the things being built, designed, and sent are often insecure and need strong air traffic control to govern proper usage of these Copilots and pre...
From Ancient Greece to Now A History of the Democratization of Application Development and Security
มุมมอง 182 หลายเดือนก่อน
While application and software development hasn’t been going on since quite the rise of the Ancient Greeks, there is a long history that leads us to the present day of Gen AI, low-code/no-code tools, and more. With all this change, security teams are now at a crossroads between restricting the use of powerful Generative AI, low-code, and no-code platforms to allow anyone to possess developer-li...
Opening Up AI: CTOs on the Risks and Rewards of Enterprise Copilots (Part 2 of 2)
มุมมอง 622 หลายเดือนก่อน
In part 2 of their 2 part conversation, Michael Bargury, Zenity’s Co-Founder and CTO, and Ory Segal from Palo Alto Networks, CTO of the Prisma Cloud business unit, expand the dialogue to explain attack paths, methodologies, referencing the BlackHat 2024 research drops from Zenity's Labs Team, and charting a path forward for security teams to take an AppSec approach for enterprise copilots.
Opening Up AI: CTOs on the Risks and Rewards of Enterprise Copilots (Part 1 of 2)
มุมมอง 1722 หลายเดือนก่อน
In part 1 of a 2 part conversation, Michael Bargury, Zenity’s Co-Founder and CTO, is joined by Ory Segal from Palo Alto Networks, CTO of the Prisma Cloud business unit, to discuss Gen AI, the security implications, what history can tell us about how we should be approaching security in this space, and lots more
BHUSA24 15 Ways your Break your Copilot: CopilotHunter
มุมมอง 1.7K2 หลายเดือนก่อน
In this red-team security research, we break down CopilotHunter, an open source tool that can be used to detect risks stemming from publicly accessible copilots.
Living off Microsoft Copilot at BHUSA24: Sensitive data collection and exfiltration via Copilot
มุมมอง 2.8K2 หลายเดือนก่อน
This demo, presented at BlackHat 2024, shows defensive research to understand how attackers can abuse Copilot for Microsoft 365 to manipulate a financial transaction. By sending a malicious email, an attacker takes over Copilot remotely and gets it to act as a malicious insider. Copilot searches for sensitive data, embeds that data in the choice of a URL, and lures the victim to click the URL t...
Living off Microsoft Copilot at BHUSA24: Financial transaction hijacking with Copilot as an insider
มุมมอง 4.5K2 หลายเดือนก่อน
This demo shows how attackers can abuse Copilot for Microsoft 365 to manipulate a financial transaction. By sending a malicious email, an attacker takes over Copilot remotely and gets it to act as a malicious insider. Copilot then changes the banking information for an invoice, but still keeps the references looking legitimate. The victim trusts copilot and moves forward with the transaction. I...
Living off Microsoft Copilot at BHUSA24: Copilot lures victims to a phishing site
มุมมอง 2.2K2 หลายเดือนก่อน
This demo shows defensive research to understand how attackers can abuse Copilot for Microsoft 365 to phish users. By sending a malicious email, an attacker takes over Copilot remotely and gets it to act as a malicious insider. Copilot then sends victims to an attacker-controller site, and their credentials are harvested. It was shared as part of a talk at BlackHat USA 2024: Living off Microsof...
Living off Microsoft Copilot at BHUSA24: Automated spear phishing with powerpwn abusing Copilot
มุมมอง 2K2 หลายเดือนก่อน
This demo helps security leaders and practitioners understand how attackers can abuse Copilot for Microsoft 365 to spear phish users and move laterally, with PowerPwn automating the entire process. Given a compromised account, Copilot find its collaborators, find the latest interaction with each, and craft a response in the user's own writing style to ensure a click. It was shared as part of a ...
Living off Microsoft Copilot at BHUSA24: Spear phishing with Copilot
มุมมอง 3.9K2 หลายเดือนก่อน
This demo, presented as part of research presented to security leaders at BlackHat 2024, shows how attackers can abuse Copilot for Microsoft 365 to spear phish users and move laterally. Given a compromised account, Copilot find its collaborators, find the latest interaction with each, and craft a response in the user's own writing style to ensure a click. It was shared as part of a talk at Blac...
Living off Microsoft Copilot at BHUSA24: Abusing Copilot to bypass DLP
มุมมอง 1.6K2 หลายเดือนก่อน
Living off Microsoft Copilot at BHUSA24: Abusing Copilot to bypass DLP
Zenity Discovers Data Leakage in Power BI (Microsoft Fabric) Reports and Semantic Models
มุมมอง 1134 หลายเดือนก่อน
Zenity Discovers Data Leakage in Power BI (Microsoft Fabric) Reports and Semantic Models
Zenity Overview
มุมมอง 13K5 หลายเดือนก่อน
Zenity Overview
Data Leakage in Salesforce Development Platform
มุมมอง 1246 หลายเดือนก่อน
Data Leakage in Salesforce Development Platform
Data Leakage to a Personal Account
มุมมอง 546 หลายเดือนก่อน
Data Leakage to a Personal Account
Supply Chain Risks in Low-Code Development
มุมมอง 406 หลายเดือนก่อน
Supply Chain Risks in Low-Code Development
6 Microsoft Copilot Studio Vulnerabilities in 4 Minutes
มุมมอง 47110 หลายเดือนก่อน
6 Microsoft Copilot Studio Vulnerabilities in 4 Minutes
Zenity 101
มุมมอง 411ปีที่แล้ว
Zenity 101
AI and Low-Code/No-Code: Friends or Foes?
มุมมอง 122ปีที่แล้ว
AI and Low-Code/No-Code: Friends or Foes?
The Risks of Low-Code Development and How To Prevent Them
มุมมอง 149ปีที่แล้ว
The Risks of Low-Code Development and How To Prevent Them
powerpwn
มุมมอง 1.4Kปีที่แล้ว
powerpwn

ความคิดเห็น

  • @LeftTheMatrix93
    @LeftTheMatrix93 หลายเดือนก่อน

    I wonder if this is happening at my company. Constantly getting told by financial audit team that there are issues with my direct deposit and that I should check my bank account routing and account numbers. I show them it's the same and then I still get paid. It keeps happening every couple weeks. Nobody seems to care either. It's bizarre.

  • @aigriffin42604
    @aigriffin42604 หลายเดือนก่อน

    Copilot is my favorite!❤😁

  • @aigriffin42604
    @aigriffin42604 หลายเดือนก่อน

    Please have some text-to-speech audio!❤

  • @gizzycorgi
    @gizzycorgi 2 หลายเดือนก่อน

    Excellent video! Microsoft needs to be more explicit about these credential sharing scenarios or else organizations will have a rough time protecting their data.

    • @ZenitySecurity
      @ZenitySecurity 2 หลายเดือนก่อน

      From our perspective, it's more about knowing which side of the shared responsibility model you sit on. Microsoft (and other AI vendors) are responsible for the platform / tool, but not the underlying data that it's grounded in, or how AI is used or processed by business users. This is where we come in!

  • @donatocapitella
    @donatocapitella 2 หลายเดือนก่อน

    4:42 - that's the perfect analogy, we're not trying to secure the cloud (that's what AWS/Azure/Google do), we're trying to secure what we build on top of it. Same for LLMs, we're trying to secure the applications! Well said!

    • @ZenitySecurity
      @ZenitySecurity 2 หลายเดือนก่อน

      Thanks for the feedback, and glad to hear the analogy landed! We see too many enterprises not fully grasping what piece of the puzzle they own, and there are always going to be vulnerabilities that hackers can exploit. It's all about managing risk, and taking an inside-out (i.e. AppSec) approach to this new world of AI!

  • @donatocapitella
    @donatocapitella 2 หลายเดือนก่อน

    Thanks for sharing this, amazing research and impactful results. We've been talking about the risks of LLM applications for a while and how indirect prompt injection is an unsolved challenge. It's really good to see this demonstrated in practice, in production, at scale. I like how you got around data exfiltration protections. Most applications now have learnt not to render markdown images and similar stuff in LLM outputs, but the idea of adding a reference is great. I saw another demo, maybe on Twitter, where you used the enterprise_search() tool to make the LLM search / access a URL, which is also a very creative way to exfiltrate data.