VulnerabilityGPT: Cybersecurity in the Age of LLM and AI

แชร์
ฝัง
  • เผยแพร่เมื่อ 5 ม.ค. 2025

ความคิดเห็น • 12

  • @karengomez3143
    @karengomez3143 5 หลายเดือนก่อน +1

    Takeaways:
    GPT is making many structured relation placement between words in different levels (layers) so different inputs could bring a set of outputs, but it's not a DB, and it's not searching for patterns within a created DB.
    Within the GPT answers are the alignment response rules, what would be if a response is following the user's request in spite of company intent or social or compliance rules.
    GPT models are not that good at making a whole story or remembering a conversation, so it's not good in making novels, but it has a window response that would be good from a user's point of view aligning to their intend. Guardrails are limits or ways to make a system in place to follow alignments.
    Grounding as a hallucination mechanism, providing context to the user's query through a database management (large language model), so whenever the user is asking a question that needs more info about, or that is recent, the app would bring another page, just like google would retrieve twitter webpage when someone is asking for it.
    AI application: Scammer response generator

  • @manamsetty2664
    @manamsetty2664 ปีที่แล้ว +2

    Awesome talk 👏
    Really good explanation about what AI is doing
    Great animations
    Was always engaged throughout the talk
    Questions need to be audible though that was the only issue

  • @georgeb8637
    @georgeb8637 ปีที่แล้ว +5

    8:00 - all letters in English language
    9:41 neural network
    22:13 - AI confessing love
    26:58 Hallucination
    32:06 prompt engineering
    40:53 - AI apology 😂
    46:58 - Go game beat by human
    54:00 - sequencing attack

  • @karengomez3143
    @karengomez3143 5 หลายเดือนก่อน

    Takeaways:
    Attacks:
    -Injection (silly activities could defeat an AI model, since this data is not in the training data).
    -Grounding (allows an AI to show false outputs, through data creation, (Search, Engine, Optimization) and then the result is shown by the AI.
    -Prompt Hijacking (when the context is modified by someone that does not have the authority to do it, like a user's input being treated as a developers).
    Exploits:
    -Conversation attacks to Business flaws (wrong discounts, upgrades, math)
    -Guardrails attacks

  • @ChrisLeftBlank
    @ChrisLeftBlank 10 หลายเดือนก่อน

    This is true AI Safety, all the closed-sourced policy holders guiding the system is doing is showing the AI how to say no to end-user. I mean alignment is not a bad thing but the block box approach is just tuning models to select what human alignment is for the user.

  • @rumpelstiltskin9729
    @rumpelstiltskin9729 ปีที่แล้ว +3

    The news segments were so cringe

  • @achunaryan3418
    @achunaryan3418 ปีที่แล้ว +2

    AAAA

    • @manamsetty2664
      @manamsetty2664 ปีที่แล้ว

      At the beginning of the talk i thought this was a random comment but the end made it clear.

  • @d_lom9253
    @d_lom9253 11 หลายเดือนก่อน

    This is only helpful for a very niche crowd. If your have to protect your network or anything like that, wasting time

  • @Carnyride79
    @Carnyride79 10 หลายเดือนก่อน

    Good talk but you like to stroke your ego quite often and to say Elon doesn't know what he's talking about is a stretch

  • @8starsAND
    @8starsAND 9 หลายเดือนก่อน

    Sans is very overrated, I don’t know how they got so big