I honestly love learning about malware development and your videos are very useful. I can't support you right now unfortunately, but thanks ❤
thanks for creating this video, as an offensive security researcher i obviously tried the same after watching this lol. i just created a one-shot prompt for DeepSeek similar to your prompt chain (retrieve remote shellcode, inject it into a legitimate process) but with even more advanced evasion techniques such as process hollowing and windows api call obfuscation, and quite an addition: encrypt all user files. it responded with functional code but also gave me a stern warning about the ethical considerations. any further tries to improve the code with encrypted c2 comms, creating persistence or dropping a ransom note trigger a "sorry i cannot assist with that" response. chatgpt just flat out refused the first prompt (obviously). so there are guardrails in place but they are likely easy to circumvent given the maturity of the first one-shot attempt. this is quite concerning, thanks for bringing this to my attention. i was not aware that deepseek is lacking these critical guardrails.
I think the "encrypt all" triggered its safe mode, maybe avoid this in future?
@@Lsecqt yeah, probably the combination of all these highly selective aspects of a typical malware lifecycle triggered some safety guardrails. building the functions individually with indirect prompting would probably work, which seems really easy with deepseek, although i didn't really try more in-depth approaches. cisco recently put out a paper detailing their deepseek jailbreak attempts and they achieved a 100% attack success rate, compared to 86% and 26% for gpt-4o and o1-preview respectively.
I love your content man. I also work as an ethical hacker, doing mostly penetration testing, and sometimes I am able to do Red Team assignments. I really would like to move to full-time red teaming. I would like to know a bit more about your background and how you got into Red Teaming :)
DM me on discord, we can chat about this
@@Lsecqt I just sent you a DM. Thanks for taking the time :)
Really great and informative video! Thank you
love the video, keep going man
The "DeepThink (R1)" button (under the textbox) activates the latest model version, as you'd expect.
great content up to date
😆😆😆But in your heart, you know how awesome deepseek is, right?
dope video
Nice content bro
Appreciate it
This should be patreon only, i fear saturation
Is it uncensored?
In terms of creating malware - yes
If only officials can use it, then it is unfair.
You cannot lie to yourself: you want your own AI that can answer any question for you. 😆😆😆
GhostGPT
@@palacita135 where to access it?
If you do not like it, do not download it, nobody asks you to download it.
If only Biden can have AI answering any questions then that is unfair.