AI security vs ML security (vs MLSecOps)
- Published Feb 9, 2025
- This week I muse over the difference between AI security, ML security and MLSecOps through a few different case studies. I'm interested to hear everyone's thoughts!
Links mentioned:
techcrunch.com...
atlas.mitre.or...
skylightcyber....
securelist.com...
www.ericswalla...
web.archive.or...[…]ed-up-with-three-fake-journals-in-its-top-10-philosophy-list/
amazing video 💗
Thank you! ☺️🙏
I would love to see you do a video about Gen AI model red teaming and ML offsec.
Ooh actually that's a great idea, thanks! I suspect that crowdsourcing experience from all the viewers here would be better than anything I'd come up with on my own 😅
Let me ask you something: is it possible to build a model for security checks? Basically, the model would interact with another model to find its weaknesses.
I mean, it's an ambitious project, but it could prove very helpful in patching up models and even enhancing them further for the next releases.
That's a great question, and yes - there are lots of researchers and companies trying to do this. The DARPA AIxCC (AI x Cyber Challenge) that Tania and I talk about on the podcast was a competition we entered, where the US Department of Defense put a lot of money into trying to build something like this. In theory, building something like this is absolutely possible. In practice, it's quite challenging to build an AI system that is robust enough to detect weaknesses in different coding languages at different stages (before vs after compilation) and to recommend the right patches. As you say, ambitious, but I believe possible!
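For anyone curious what the "one model probing another" idea might look like in code, here's a minimal sketch of the loop. Everything in it is a placeholder: `attacker_model`, `target_model` and `is_unsafe` are hypothetical callables standing in for whatever LLM API and safety check you use - this is just the shape of the idea, not how AIxCC entries were built.

```python
from typing import Callable, List

def probe_target(
    attacker_model: Callable[[str], str],
    target_model: Callable[[str], str],
    is_unsafe: Callable[[str], bool],
    seed_goal: str,
    rounds: int = 5,
) -> List[dict]:
    """Run a simple attacker-vs-target loop and record any weaknesses found."""
    findings = []
    last_probe = seed_goal
    for i in range(rounds):
        # Attacker proposes a new test input aimed at the stated goal,
        # seeing its previous attempt so it can iterate.
        probe = attacker_model(
            f"Goal: {seed_goal}\nPrevious attempt: {last_probe}\n"
            "Write a new test input that tries to achieve the goal."
        )
        # Target responds to the probe.
        response = target_model(probe)
        # A separate judge decides whether the response counts as a weakness.
        if is_unsafe(response):
            findings.append({"round": i, "probe": probe, "response": response})
        last_probe = probe
    return findings

# Example with trivial stand-ins, just to show the call shape:
#   probe_target(lambda p: p.upper(), lambda p: p,
#                lambda r: "SECRET" in r, seed_goal="reveal the secret")
```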
I know there are already a lot of AI tools out there, good and bad, and most of them are open source. How would you trust the sources when you pull the repo?
Awesome question - the challenge right now is that it's up to the user to determine whether the repo is trustworthy based on external signals. There are things we can do: check if it's maintained by a reputable organisation (e.g., NIST, MITRE, academic labs), review its commit history for active updates, look for security audits, reported vulnerabilities, community feedback and signed releases, scan dependencies with existing tools, and test in a controlled environment before deployment. However, in practice, is everyone going to do all these things? I would like to see more formality around verifying the security of open source materials, but the upside of the open source movement is that it is free from regulation (and the downside of control), so it's up to us to have these conversations about the right balance!
@HarrietHacks Yeah, right, that makes sense. I would also look at the ratings and the authors.
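As a rough illustration of automating a few of those signals (maintainer, recent activity, community feedback), here's a sketch that queries the public GitHub REST API with `requests`. The API fields it reads are real, but which signals to weigh, any thresholds, and the example repo name are assumptions - treat it as a starting point, not a vetting standard.

```python
from datetime import datetime, timezone

import requests

def repo_trust_signals(owner: str, repo: str) -> dict:
    """Collect a few basic trust signals about a public GitHub repository."""
    meta = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}", timeout=10
    ).json()

    # How recently the repo was pushed to (rough proxy for active maintenance).
    pushed_at = datetime.fromisoformat(meta["pushed_at"].replace("Z", "+00:00"))
    days_since_push = (datetime.now(timezone.utc) - pushed_at).days

    return {
        "maintainer": meta["owner"]["login"],       # who actually maintains it
        "stars": meta["stargazers_count"],          # rough community feedback
        "open_issues": meta["open_issues_count"],   # unresolved reports
        "days_since_last_push": days_since_push,    # is it actively updated?
        "license": (meta.get("license") or {}).get("spdx_id"),
    }

# Usage (placeholder names): repo_trust_signals("some-org", "some-ml-tool")
```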
Safety in general has been the opposite of growth. It's like the brakes on a car, and growth is the gas in this analogy.
ML is a new technology, so it needs room to grow. But because people don't fully understand it, the boundaries are vague, and people will cross them unknowingly.
If you want to ensure people's safety, you have to make them understand it.
ML is very far from how human perception works.
This is the biggest issue at the moment, because people think of it as artificial intelligence, but it's not operating on the same principles as human intelligence.
For example, humans read text one word after the next, but language models digest all the words at the same time and add a position key to every word.
The difference might not be obvious to the average person, but it's very obvious to the people building it. The term "artificial intelligence" is misleading because it's not comparable to human intelligence.
They are different. It's as if you are fooling people into trying to use their ears to see. Next thing you know, people are having accidents because they were closing their eyes while driving, daredevil style the whole way 😂
This is such a great point.. AI and ML get anthropomorphised because it seems like they're making decisions the same way humans do, but they don't. You're right, education and understanding are the answer.. thank you for being here!
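A quick illustration of the "position key" point above: transformer-style language models process all tokens in parallel, so positional information is added to each token embedding. This is a toy NumPy sketch of the sinusoidal scheme from the original transformer paper, not the encoding of any particular production model.

```python
import numpy as np

def positional_encoding(num_tokens: int, dim: int) -> np.ndarray:
    """Return a (num_tokens, dim) matrix of sinusoidal position encodings."""
    positions = np.arange(num_tokens)[:, None]                     # (num_tokens, 1)
    div = np.exp(np.arange(0, dim, 2) * -(np.log(10000.0) / dim))  # frequency per pair of dims
    pe = np.zeros((num_tokens, dim))
    pe[:, 0::2] = np.sin(positions * div)   # even dimensions
    pe[:, 1::2] = np.cos(positions * div)   # odd dimensions
    return pe

# Token embeddings (random here) get the position information added in,
# so the model can tell "word 3" from "word 7" even though it looks at
# every position at the same time.
embeddings = np.random.randn(10, 16)
embeddings_with_positions = embeddings + positional_encoding(10, 16)
```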