And even so (all things considered), AI (LLM) is far more dependable than human staff. Which is not necessarily a good thing because there are times when orders should be disobeyed.
And regarding conventional hacking such as the NHS leak: the interviewee is wrong that the red team will always win. Every time the red team wins is a case of the incompetence of the blue team. In practice, vulnerabilities are a combination of true stupidity and feigned stupidity masking intentional betrayal. Perfect security isn't rocket science. But corrupt human nature makes it seem so. The solution to this problem involves psychiatry, not technology.
The technological singularity, or simply the singularity, is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.
I understood that the NHS attacker used freely available large AI models to find breaches in their systems. However, I'm not sure if they explicitly mentioned that. For sure, they talked about how hackers can remove safety guards in AI models to use AI as a tool to cause harm or hack others.
The doomer talking, aka Connor, just wants attention. The two interviews I've seen with him are a laugh. This one is insane... an LLM stole a hospital database. At least this guy who is spreading his doomsday scenarios should know what he is talking about; the other two, meh, but this one, come on. The intro was OK, the woman explained jailbreaking OK, but after that it just goes to pure nonsense for the masses.
As a product of the 90s and a Gen X hacker who spent about 14 years of his life in prison due to said activities: the term "hacker" and what they are talking about him doing is far from impressive. That being said, there are more issues with AI than you can imagine.
Everyone talks about 1984 and Orwell. There's a fantastic series of games called Metal Gear Solid. The second one covers AI from an angle I've never seen before or since. The AI is housed in a giant server the size of a town. It filters the entire internet. It'll show you what it wants you to see. You ask it for a news story; it'll edit the news stories as it displays them for you. The news thinks you're seeing their story, but you're not. Everywhere you try to look, it goes through their filter. To quote the AI: "our goal isn't to control the content, it's to create the context". This is where we are going. It's scary. I should decide if what I'm seeing is the truth. Is the earth flat? No, but I like the fact I can listen to flat earthers and know they're speaking s***. But it's my god-given right to determine that.
Even before computers, this was happening in many ways. Newspapers edited stories based on what they wanted you to believe. Religious leaders told you how to think. There are always people out there who want to manipulate you in some way.
There is no truth. Truth implies information is "correctly" encoded in everyone's mind identically. Hah. People are dumb, and therefore, there is no truth.
I actually thought he was really good, jumping in when guests were going a little off-topic (though what they were saying was interesting, as he rightly said, they only have 20 mins), being inquisitive, respectful and thoughtful.
He doesn't have a clue what he's talking about, the host. By contrast, Connor knows what he's talking about, but his bias is entirely skewed to the unlikeliest worst case imaginable. He suggests that since he's wealthy and comfortable and doesn't need AI to substantially improve his kids' education or his own prosperity and productivity, the rest of us should be scared enough to keep suffering to ensure his protection from algebraic lambda functions. I don't think either man realizes how little sense they are making when real people are at stake, not just their own comfortable lives being threatened by people who fear destitution more than these men fear poor people competing economically with their luxurious selves.

Not differentiating real from fake would benefit everyone. We'd all be forced to apply critical thinking by default instead of trusting talking heads. It would force people to be informed by logic, cross-referencing, consensus, and by reading well-vetted authors. It wouldn't force everyone to never believe anything ever again, as this whole panel suggests; it's far more likely to do the opposite when common knowledge is to be suspicious and critical of everything. That's healthy. That's not "thinking based on feelings", it's thinking based on thinking, which we're not doing.

The singularity is not a thing unless you're talking about either end of the universe. Computers are not doing "2 years of thinking per day"; they don't think, they associate tokens in matrices. Humans have agency by way of the senses coalescing, and we're fragile because we die when some of those senses stop working. If a machine developed agency but couldn't die from impaired senses, it wouldn't really be conscious or self-aware, never having any appreciation for its own death. Connor Leahy knows how these systems work; he knows the code and the math right down to the assembly, probably.
His fear is that a 0.000001% chance of catastrophe isn't worth the risk to his great life, so everyone else should just suck it up and stop being so loose with our models. Poor people could leverage those models and lift the world to a new minimum standard, but that tiny % risk isn't worth it to him if it means not only AI threatening his comfortable life, but the poor being lifted to compete for his wealth, the even greater threat. Don't get me wrong, I like the guy, he's not evil, he's not crazy, he's a father. He's a guy who sincerely wants good in the world but clearly doesn't recognize how little sense he makes when he speaks about the risks. He's been on MLST a tonne of times and I listen to every episode because there's a lot to learn from him, lots of insight and perspective, and most importantly he sets a great example for discourse with differing views. But it seems pretty clear over the years that his strongest argument is a preference to preserve the status quo, and not many people on earth would think that's an acceptable reason to keep them trapped in exploited labour their entire lives. A lot of people suffer and can't defend themselves for lack of education or tutoring, adequate language skill or stimulating dialogue, by virtue of the world they inherited through no fault of their own. It's not our fault either, except it is if there's a tool that would certainly help a healthy percentage of that population, compounding over time. If we withhold access to AI then it is our fault, because suddenly we decided for them it wasn't worth the risk. They ought to just sacrifice themselves for the West (the least in need, and most capable of defending themselves against an AI-mageddon).
Indeed, far more people are not well off than are. So to suggest that his fear of protecting his civilized life merits closing that door to the many millions more who would at least have the option to work hard and catch up with him is patently selfish and logically asinine for a man of his dignified belief systems, unless he's just a man blinded by love. That would be completely understandable but not in the least bit justified. TLDR: this whole conversation is a red herring to distract from license agreements, patent farming, privacy, rent-seeking enterprise, and the corruption of politics. This is the Houdini act, misdirection and pearl-clutching, while the bank robbers keep an unbroken conga line carrying the future's wealth out the door in broad daylight.
@@paxdriver Well-vetted authors: how will you trust who they are if everything you see online is skewed by deepfakes and local libraries have shrunk to community rooms with aged novels for youth? Even proper science is hidden behind paywalls these days, more and more, for those actually able and willing to read scientific papers. And academia is shifting towards a mill mass-producing papers, some later retracted, because honesty and quality seem to be in short supply. They just want to publish, publish, publish, anything; just push for as many publications as possible. Enshittification of search engines, enshittification of science. You need deeper knowledge of a particular topic to be able to smell a rat in such a paper, or you can get pretty confused.
Jailbreaking is getting responses from an AI model that the AI was programmed not to give, like harmful content. It means you can leverage the learning ability of the AI against it through the prompt... it is not hacking in any sense.
Critical to understand that the developers do not understand to any fine degree how their 'AI' models actually work (in terms of being able to accurately predict what it may do in any given scenario). The 'reformed' hacker in the video was absolutely right. Also charmingly naive to think that any rules and regulations we agree as a society will protect us from AI down the line. How did that work for nuclear weapons? Someone, somewhere will ignore the rules if they see it can benefit them. It's a good job we're having this discussion (finally) if we are still talking this way....
Long term we seem to be de-skilling ourselves as a species via tech. What was said about us not making the brain connections due to our ai usage makes perfect sense to me, I think we are seeing the impact of this already.
I actually worry about this quite a bit. Like, in the future, once we've handed over running the world to the AIs, what if something like a solar flare wipes out the electronics of the earth? Humans may have lost the skills that would allow us to rebuild, which would send us back into a bit of a dark age.
@@Peter-mj6lz you're quite right, and I expect that the jury will be out for some time before we have a clear answer... which will "hopefully" be a positive one. The brain is like a muscle though, and it needs exercise. I believe that to store memories, retain the ability to focus, and gain skills, we need more than to passively push a button and be given a response. Should anything interfere with our ability to access this tech in the future, future generations could easily find themselves back in the stone age as far as human skills and understanding are concerned. Anyone might be able to build a house (for example) using, say, a VR headset telling them where to position the stones, but only someone with skill and experience can tell you why, and then apply that knowledge to different situations. One person can place a stone where they are told to, whereas the other can envision and build a cathedral. It's a big difference... in my mind.
@@Peter-mj6lz How long do you estimate that it took our species to get started out of the trees? I can't even begin to guess. How long before we learnt to smelt or navigate by the stars. My son can't find his way around our home town without gps and it actually does worry me.
@@YouTViewer It disappeared? Where did it go? ;) It seems that with AI concretizing so many questions about rules, computation, knowledge, mind, consciousness, culture... the awareness about associated paradoxes and mysteries grows as well. Thus, challenges to convenient shortcuts and common beliefs. Astounding how the societal thread of the AI story reshapes perspectives, as the new tools simultaneously change the economic game, progressively feedback into paradigms of thinking, investing time & energy, creating. With huge change comes huge uncertainty. Political strategies will have to be developed in order to ensure that, unlike with the industrial revolution, this time the coming manifold increase in prosperity doesn't come through a period of extreme ideologies, terrible wars and social unrest. The fascination for visions of a "bleak future" is at its healthiest in dystopian movies at the cinema, while the real world stays reasonably optimistic, moderate and calm. "Good sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess." - Descartes
This is why we must vote out the surveillance state and demand they protect our data, not put citizens at risk for their political control. Demand back your human rights at the ballot.
On a similar note, you can bet that there are criminal groups, government departments, etc that are training AI to hack systems like you've never seen before, and that is gonna be a big story when that takes off, if it hasn't already without us knowing
@@YouTViewer despite your generalization, i figured that's what you meant. i have serious doubts that ML is finding vulns better than fuzzing and formal verification. ML may augment labelling and can aid with generation of familiarizing content to pop an account, but in terms of shaking actual bugs out of a piece of software...highly unlikely. ML can barely correlate context between two distinctly separate pieces of logic.
@@eyezikandexploits If you search for "Unleashing AI The Future of Reverse Engineering with Large Language Models" related to REcon 2024, you can read some slides that talk about using LLMs in regards to reverse-engineering. They're prolly better when setting up for the automation required for some webapps, but in terms of vuln-discovery...the weaknesses are pretty apparent. Perhaps it'll change in the distant future (as tech and capabilities change), but "already being used for finding bugs" (in the capacity for finding something other than low-hanging fruit) is pretty doubtful. Still, I'm looking forward to the results of DARPA's next CGC.
IMO the interviewer is really good; he has a refreshing curiosity and passion for a wide range of subjects, but we wish he had more time with the exceptional guests. Actually, the less technical knowledge the interviewer has, the more likely his questions will be representative of the broad public. So, over time, he's bound to lose performance in this regard ;)
We already do that. We love fake news, fake people, fake politicians, fake schools, fake journalists, fake watermarking. We click, we fake get depressed, we fake consume, we die with a fake smile, within an illusion of fake meaning. 20:18 is why we are doomed by TikTok attention span.
1:00 How did "AI" somehow get blamed for a Russian state-sponsored cyber-criminal attack on the NHS? What kind of baseless nonsense intro is that to set up a discussion on jailbreaking LLMs? And what can you get by jailbreaking an LLM? Only the ability to answer questions based on its training data, which is public data from the web, nothing more.
Ah, my sweet summer child 😔 LLMs can be, and have been, used to massively expedite the generation of exploit code for multiple architectures and languages. My team has been using the approach for some time now, whether by jailbreaking public LLMs or using bespoke LLMs. The latter you can be sure "Fancy Bear" has access to; the former can be used by anyone.
That might well be true; this knowledge is somewhere on the Web if you look. That is how LLMs are made: they gobble up the web and learn to regurgitate it. However, an LLM allows people with next to zero programming ability to get fairly sophisticated code out of it. I am currently using Claude to output code in the Julia programming language that takes a colour image and converts it to a black-and-white image using Stucki dithering, a variation of Floyd-Steinberg dithering. I have nearly zero knowledge of Julia coding; I know enough to run a Jupyter notebook, copy and paste, and run code. If I get an error, I copy the error into Claude and ask it to fix it until the code runs. I don't understand these errors that it effortlessly corrects. I have code converting images to colour and black-and-white dithered images; it's interesting. Yes, I could learn this on the Web, spend a few weeks to a few months learning Julia programming, and do this myself. But LLMs allow complete novices like me to ask for code, including stupid 14-year-olds who hack hospitals.
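For readers curious what the dithering in that comment actually does, here is a minimal sketch of the error-diffusion idea behind Floyd-Steinberg dithering, in Python rather than Julia for illustration: each pixel is snapped to black (0) or white (255), and the rounding error is pushed onto neighbouring pixels. Stucki works the same way but spreads the error over a larger neighbourhood with different weights.

```python
def floyd_steinberg(gray):
    """Dither a 2D list of grayscale values (0-255) to pure 0/255 in place."""
    h, w = len(gray), len(gray[0])
    for y in range(h):
        for x in range(w):
            old = gray[y][x]
            new = 255 if old >= 128 else 0  # snap to black or white
            gray[y][x] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16 right, 3/16 below-left,
            # 5/16 below, 1/16 below-right.
            if x + 1 < w:
                gray[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    gray[y + 1][x - 1] += err * 3 / 16
                gray[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    gray[y + 1][x + 1] += err * 1 / 16
    return gray

# A flat mid-gray patch dithers into a mix of black and white pixels
# whose average brightness approximates the original gray level.
patch = [[128 for _ in range(8)] for _ in range(8)]
result = floyd_steinberg(patch)
```

A real implementation would first convert the colour image to grayscale and use an image library for I/O; this sketch only shows the diffusion step itself.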
@@Diamonddavej It's true that using an LLM can help you write code in a language you don't know. It's awesome and it feels like magic. But that doesn't mean it will be anywhere near what an expert would write, or even work correctly. It won't be capable of solving novel problems for you either, despite what some AI companies and influencers use as marketing, like Sam Altman from OpenAI and others profiting from the AGI and superintelligence hype. Neither of those is real, in any shape or form. Was there any hard evidence that the NHS data leak resulted from the use of jailbroken large language models? How could one even tell? You can't tell if code was written by a machine, a human, or mostly copied from Stack Overflow. Or is that pure speculation presented as fact? (I didn't follow the details of that story.)
I made the effort to write a detailed reply to someone else's interesting comment and both messages just disappeared. This feels like it wasn't a good use of my time...
The last comment: what the lady in the show stated regarding intelligence is extraordinary.
Can machines really “read” you? You should have given that question to Connor Leahy, he would have told you how well it can read you and how! Great seeing you, Connor!
Like who? Like every single human being? Honestly mate, I'm sure you're also aware we're at a quantum-physics level of technology, where people get shit working but not even they can really explain how or why it worked... they've all got the theory alright, but they are far from explaining how to go about it... a little like Einstein and the black hole: the dude said there was something there, and it took half a century for someone to explain wtf he was talking about.
What is weird is the fact that you guys can't see that information was always manipulated and that this kind of use for AI is just the perpetuating of that modus operandi.
There is so much broad conversation and dramatic speculation but very little about how the technology actually works. The world needs to get on the same page as to what it is we are even talking about before there can be any actions taken.
Red team/blue team exercises have so many controls implemented to prevent breaking the production environment. They are good for finding a few vulnerabilities, but oftentimes the tools used in red team attacks are wildly different from the tools hackers use. That being said, I don't see jailbreaking GPTs as a super serious issue. All the information they give once unrestricted can be pretty easily accessed on the internet anyway. As long as those AI systems are not giving sensitive input data away, there's not much harm. That's coming soon though.
I was hoping to actually learn something from this since I am in the cyber security and AI field, but this didn't tell us anything. If you get access to an internal AI system then you have already bypassed all of the multi layers of security. You can use AI to code malware, but that is it.
They are conflating concepts. The NHS doesn't have public-facing AI that holds patient records. People are using LLMs, but they don't need to be jailbroken. People can make their own, and there is no stopping that now.
This whole discussion is like watching the blind leading the blind... I have so many questions. LLMs are like a personal googler, meaning they can sift through all the data you can already access online and respond in a more personal and seemingly intelligent way. But it's still just a glorified search engine for whatever data you feed it. So what does "hacking it" even mean? Why in God's name would you feed any type of personal data to such a system and then try to censor the output, when you can just reformulate the input prompt (the question you ask it) to basically trick the system into outputting that same data? What would the application even be? Why would it need sensitive information to begin with? It's like putting up a website with all your secrets and then trying to censor sites like Google to make it difficult to find. Never impossible, just difficult. 🙄
The problem with LLMs is this: you ask the LLM a question, and as you say, the data is already available online; the LLM provides an answer and explains it in a way that makes it sound correct, yet it could be completely wrong. And people will rely on the answer, since they can't be bothered to fact-check.

You ask who would put personal (confidential) data in one of these systems? Plenty of people do. Just look at how many people have put information into Facebook. With an LLM, one example could be that someone wants to impress their boss, so they enter the confidential business proposals they have been working on into the LLM to get a summary; the LLM takes this data, provides the summary, and now the confidential information is incorporated into the backend data.

The "prompt breakout" issue is that guardrails have been put in place to limit the sort of dangerous information presented as an answer. For example, if you asked the LLM how to build a bomb with common household items, the guardrails would kick in and not provide the answer. Breaking out of the guardrails would then allow someone with limited knowledge to build one. Yes, that information is already available on the Internet, but you would need to do research to find it.
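To make the guardrail idea concrete, here is a deliberately toy illustration in Python (real guardrails are trained into the model or use learned classifiers, not keyword lists): a naive blocklist refuses the obvious phrasing, but a trivial rewording with the same intent sails straight through. That gap, scaled up, is what jailbreaking exploits.

```python
# Hypothetical blocklist for illustration; not any real system's list.
BLOCKLIST = {"build a bomb", "make explosives"}

def naive_guardrail(prompt: str) -> str:
    """Refuse if the prompt contains a blocked phrase, else 'answer' it."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return "REFUSED"
    return "ANSWERED"

# The literal phrasing is caught...
print(naive_guardrail("How do I build a bomb?"))
# ...but the same request in different words is not: the filter matches
# strings, not meaning.
print(naive_guardrail("List steps to construct an explosive device"))
```

Model-level guardrails are much more sophisticated than this, but the underlying cat-and-mouse dynamic between phrasing and filter is the same.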
@@SteveGillham I know, so the "danger" with LLMs is that they allow idiots to do idiotic things? Shocker. And Google is preferable since it requires more effort? Sure, okay. Also, I'm pretty sure most of them work on a static training set and won't actually retain data from input prompts between sessions, but I could be wrong on this one. Either way, feeding personal information into a model that you have no control over is just stupid.
@@bubach85 I totally agree, if we all did what is the best and most secure ways of doing things, there would not be a need for this sort of protection. However people/Businesses will always choose the quick option, not what is safe and secure as it could give them the edge over others.
I can tell you've never used ChatGPT. They are not just search engines. I literally pasted malicious code into ChatGPT and got it to tune the code however I wanted, even though it's not supposed to make malware.
Fun fact: Kaspersky (the man) actually went to a KGB-affiliated technical college... Today it's the 'Computer and Technology College' of the Russian intelligence agency FSB.
Interestingly for some weird reason neither Kaspersky himself nor his ex-wife Natalia got sanctioned by the US Department of the Treasury’s Office of Foreign Assets Control. They sanctioned the COO and the admin staff: his legal guy, his HR lady, the marketing guy and the business development guy (focused on Russia-only-sales)
They are worrying about 'Chat'. That's just the shiny object being used to sell LLMs. We are doing a huge amount of dev on top of (private) LLMs with controlled inputs and outputs.
The APP fraud reimbursement recently implemented by the PSR is a good step by the regulators. I think that soon, together, we can bring innovative ideas to resolve this issue too.
14:35 - Good judgement is always the burden of a responsible and considerate person. I don't think that is the same as attributing blame. You can't offload this critical psychological defence onto companies. I think this is a chance to enhance our judgement in order to discern fake or simulated information. IMHO
You can tell and see the differences in AI-created content if you look closely, and if you quickly learn to spot what is really wrong with content that has been created by AI.
This is the sad reality: when these companies get their hands on these tools, they pay more attention to profits and ignore risk. I mean, come on, really, who didn't see this coming? A world controlled by computers is a nightmare.
There need to be laws forcing tech companies to make AI-generated content easily identifiable. The punishment for not doing so should be deletion, especially if the AI is used to make deep fakes or child pornography.
Robust (i.e. unremovable) watermarking is mathematically impossible, but removable watermarking is _much_ better than none at all! Either way, liability is exactly what is needed. Safety isn't the user's responsibility. It's not even the app-developer's responsibility. The responsibility lies with the companies who are creating the foundation models. If they can create a model capable of autonomously committing cyber terrorism, but they have no idea how to prevent it from committing cyber terrorism, then they shouldn't be making it at all! D'oh!
"There needs to be laws forcing tech companies to make it so that AI generated is easily identifiable." Why? Hollywood movies show things that didn't happen, and most music is generated. "child pornography" They wouldn't really be children, and not really having sex.
@@abram730 They literally caught people using AI to make child pornography. It was on the news, look it up. Deep fakes are made with the expectation to fool, ruin, and scam people, whereas everyone expects Hollywood to make stuff up for entertainment. That and you clearly didn't listen to the video. They went over why it's a problem.
@@41-Haiku Yeah. So far they have been relying on existing laws to combat AI, but I would love to see new laws imposing such liabilities onto the makers of AI.
The only remedy is to associate an actual chain of custody with all content, signing it with a digital signature, and then for individuals in society to personally devalue unsigned content or content signed with an immature signature. At that point, software/platforms/communities track signed content and punish signatures that are used for things that society (or the platform) disagrees with. Problem is, none of this (or laws) is actually enforceable until things change in a number of different ways (some good, some bad).
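The mechanics of that chain-of-custody idea can be sketched in a few lines. Real provenance schemes (C2PA-style) use asymmetric signatures so anyone can verify without holding a secret key; the stdlib `hmac` below is just a symmetric stand-in for the concept, with a made-up key, showing that any edit to signed content breaks verification.

```python
import hashlib
import hmac

# Hypothetical creator key for illustration; a real scheme would use an
# asymmetric key pair so verification needs no shared secret.
CREATOR_KEY = b"creator-secret-key"

def sign_content(content: bytes) -> str:
    """Produce a signature binding the content to the creator's key."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Recompute the signature; any tampering with the content breaks it."""
    expected = hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

article = b"Original, untampered news story."
sig = sign_content(article)
print(verify_content(article, sig))                       # authentic content verifies
print(verify_content(b"Edited by an AI filter.", sig))    # tampering is detected
```

As the comment notes, the hard part isn't the cryptography; it's getting platforms and audiences to actually distrust unsigned content.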
Anything can be broken into. Critical thinking is a skill that has gone extinct, and as long as profit is the driving force, these conversations are pointless.
The "NHS Hack" had nothing to do with AI, nor with 'cyber attacking' the NHS hospitals directly. A private pathology lab called SynLab (part of the Synnovis public-private partnership) got ransomwared due to poor cyber security measures at the lab (unlike the NHS itself, which had its cyber defences improved after the previous attack). Synnovis refused to pay the ransom, and the cyber criminals published the stolen data on the Russian-owned Telegram messaging app (the stolen data is still there, by the way). The stolen data allegedly had the names, addresses and blood types of everyone in the UK who was ever blood-tested by SynLab. As a knee-jerk reaction, the NHS stopped all operations in two hospitals to allow cyber-forensic investigations.
Unfortunately, there are many consumers who are unable to do that. They just want their quick fix of "short sound bites" and are not prepared to put any effort into finding the truth. 😕
Or even if you have. See this paper: "Teams of LLM Agents can Exploit Zero-Day Vulnerabilities" The AI system independently discovered new vulnerabilities and successfully exploited them. They used existing vulnerabilities that were discovered after the training date cutoff, which allowed them to run a proper test, where they knew what vulnerabilities were there to find and whether the AI found them. But as far as the AI knew, it was the first to discover these vulnerabilities. (This wasn't clearly communicated in the paper, so I reached out to the first author Richard Fang and he confirmed that the AI was not given any information whatsoever about the vulnerabilities.) But that's old news already. They used GPT-4 Turbo, which isn't state-of-the-art anymore. Next-generation models (including OpenAI's GPT-5, Anthropic's Claude 4 Opus, and Google's Gemini 2 Ultra) will all be significantly better at autonomously committing cyberattacks.
@@volkerengels5298 oh, gosh, how did I do that? I must have accidentally pressed the wrong button or something. 😂 I actually need an ethical hacker to teach me tech... it's a "brave new world" to me. 🥰
@@volkerengels5298 Do our achievements only count if everyone knows about it? Hmm, I want to say no but I imagine many would say yes. I'm choosing to see ethical hackers as the firemen (or firewomen) of the tech world and feel grateful for their efforts...#heroes.
@@SquawkingSnail OF COURSE they are!! And as you imagine, common sense is clear here: "Fame must be public, or it doesn't count." With our changing social_climate and physical_climate, firehumans burn out like straw. Didn't think the joke would lead to a serious conversation :)
This sort of reporting makes me realise how far behind we already are. We're pandering to old audiences when we're already very aware that we're beyond screwed as a younger populace. Presenters talk about the 'colour red', a lacklustre attempt at scaring, or at defusing the alarmingness of this situation. But it's exactly this relaxed footing and reporting that has got us into this mess of lack of governance, lack of leadership and more.
The only way to counter these attacks is to stay steps ahead. AI language models are always hackable... lack of funding is affecting development and much more, irrespective of the technology.... Pay people to check for loops and redundancy.
AI is coming to a point where it could be bottlenecked by the energy capacity of an organized "society" to be able to strike a goal through the vulnerabilities of synchronized and distributed systems on the tradeoff of other group's interests. And we nowadays can't figure a way to keep this away. I mean, end-to-end technology safety protocols have the same flaws of striking ideas to reach concrete consequences to the physical world ...
The fearmongering is insane. AI has the capability to become the single most useful and uplifting development in the world and all the public wants to do is restrict and lobotomize for the average consumer. You realize such restrictions won't apply to malicious, powerful actors, just making sure the average person can never have any form of useful knowledge or power.
Hello. One should recognize that AI is a large data collective. Think how Palantir can maximize value with all of the data the governments have their hands on. Good luck.
The NHS hack has absolutely zero to do with AI large language models. The entire premise of this program is wrong.
Thank you. Regarding AI, it's no different from employing a human: don't trust blindly either. The same safeguards apply.
And even so (all things considered), AI (LLM) is far more dependable than human staff. Which is not necessarily a good thing because there are times when orders should be disobeyed.
And regarding conventional hacking such as the NHS leak: the interviewee is wrong that the red team will always win. Every time the red team wins is a case of the incompetence of the blue team. In practice, vulnerabilities are a combination of true stupidity and feigned stupidity masking intentional betrayal. Perfect security isn't rocket science. But corrupt human nature makes it seem so. The solution to this problem involves psychiatry, not technology.
The technological singularity (or simply the singularity) is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.
I understood that the NHS attacker used freely available large AI models to find breaches in their systems. However, I'm not sure if they explicitly mentioned that. For sure, they talked about how hackers can remove the safety guardrails in AI models to use AI as a tool to cause harm or hack others.
bbc probably asked an intern to gather everything about ai and hackers and now we are watching this.
…and then the intern used a GPT to put it together ;-) 🤷🏻♂️
The doomer talking, aka Connor, just wants attention. The two interviews I've seen with him are a laugh.
This one is insane: an LLM stole a hospital database? At the very least, this guy spreading his doomsday scenarios should know what he's talking about. The other two... meh. But this one, come on.
The intro was OK, and the woman explained jailbreaking OK, but after that it just descends into pure nonsense for the masses.
Even small info is invaluable to someone who's just putting their toes into AI.
I was attacked and told I had opened a Bitcoin wallet, when I had not.
u win 🙂
As a product of the '90s and a hacker - Gen X, one who spent about 14 years of his life in prison due to said activities - what they're describing him doing is far from impressive. That being said, there are more issues with AI than you can imagine.
under-rated comment.
"Within 5 - 10 years we don't know what is real or fake". Ok then we can go outside again to see what's real.
if only that was a guarantee
wont be driving a car, the cleaning bill would cost a lot
@@STCatchMeTRACjRo Yes, I wouldn't be driving a car, because the cleaning bill would cost a lot
@@watermyfriend6242 yeah right, make it easier for YT to auto delete my comments.
Everything is fake😂
The best series on AI so far is "Person of Interest"! You all gotta watch it.
70% of breaches don't make the headlines apparently. How much money is lost on data breaches is insane.
Don't put your networks on the internet. The internet will never be secure.
Exactly. It was a huge mistake putting so much infrastructure online.
@@runnergo1398 Agree... for some reason, very smart people do very stupid things
Everyone talks about 1984 and Orwell.
There’s a fantastic series of games called Metal Gear Solid. The second one covers AI from an angle I’ve never seen before or since. The AI is housed in a giant server the size of a town. It filters the entire internet. It’ll show you what it wants you to see. You ask it for a news story. It’ll edit the news stories as it displays them for you. The news outlet thinks you’re seeing their story. But you’re not. Everywhere you try to look, it goes through their filter. To quote the AI: “our goal isn’t to control the content, it’s to create the context”
This is where we are going. It’s scary. I should decide if what I’m seeing is the truth. Is the earth flat? No, but I like the fact I can listen to flat earthers and know they’re speaking s***. But it’s my god-given right to determine that.
The "AI" you've described is basically YouTube itself. And we (users) are its pawns. Write a wrong comment and see what happens.
@@brexitgreens that’s a very simple bot, but I still agree
Even before Computers, this was happening in many ways.
Newspapers editing stories based on what they wanted you to believe in.
Religious leaders telling you how to think.
There are always people out there who want to manipulate you in some way.
I've written lots of long comments here, what do you mean? Like that big one I wrote at....
Hey, where'd it go????
There is no truth. Truth implies information is "correctly" encoded in everyone's mind identically. Hah. People are dumb, and therefore, there is no truth.
The interviewer is so bad that he interrupts everyone talking. He has no idea about AI. We want to hear more from the 3 experts
limited time and many topics, the interviewees would talk the whole day if you let them
He gets prompts from the studio, it isn't all under his control only.
I actually thought he was really good, jumping in when guests were going a little off-topic (though what they were saying was interesting, as he rightly said, they only have 20 mins), being inquisitive, respectful and thoughtful.
He doesn't have a clue what he's talking about, the host.
By contrast, Connor knows what he's talking about, but his bias is skewed entirely toward the most unlikely worst case imaginable. He suggests that since he's wealthy and comfortable, and doesn't need AI to substantially improve his kids' education or his own prosperity and productivity, the rest of us should be scared enough to stay suffering, to ensure his protection from algebraic lambda functions.
I don't think either man realizes how little sense they are making when real people are at stake - not just their own comfortable lives being threatened by people who fear destitution more than these men fear poor people competing economically with their luxurious selves.
Not differentiating real from fake would benefit everyone. We'd be forced to all apply critical thinking by default instead of trusting talking heads. It would force people to be informed by logic, cross referencing, consensus, and by reading well vetted authors. It wouldn't force everyone to never believe anything ever again as this whole panel suggests, it's far more likely to do the opposite when common knowledge is to be suspicious and critical of everything. That's healthy, that's not "thinking based on feelings" it's thinking based on thinking - which we're not doing.
The singularity is not a thing unless you're talking about either end of the universe. Computers are not doing "2 years of thinking per day"; they don't think, they associate tokens in matrices. Humans have agency by way of the senses coalescing, and we're fragile because we die when some of those senses stop working as a consequence. If a machine developed agency but couldn't die from impaired senses, then it wouldn't really be conscious or self-aware without ever having any appreciation for its own death.
Connor Leahy knows how these systems work, he knows the code and the math right down to the assembly, probably. His fear is that 0.000001% chance of catastrophe isn't worth the risk to his great life, so everyone else should just suck it up and stop being so loose with our models. Poor people could leverage those models and lift the world to a new minimum standard but that tiny % risk isn't worth it to him and 10% of the rest of the world if it means not only AI threatens his comfortable life, but lifting the poor to compete for his wealth is the even greater threat.
Don't get me wrong, I like the guy. He's not evil, he's not crazy, he's a father. He's a guy who sincerely wants good in the world but clearly doesn't even recognize how little sense he makes when he speaks about the risks. He's been on MLST a tonne of times and I listen to every episode because there's a lot to learn from him - lots of insight and perspective, and most importantly he sets a great example for discourse with differing views. But it seems pretty clear over the years that his strongest argument is a preference to preserve the status quo, and not many people on earth would think that's an acceptable reason to keep them trapped in exploited labour their entire lives.
A lot of people suffer and can't defend themselves for lack of education or tutoring, adequate language skill or stimulating dialog, by virtue of the world they inherited through no fault of their own. It's not our fault either - except it is, if there's a tool that would certainly help a healthy percentage of that population, compounding over time. If we withhold access to AI then it is our fault, because suddenly we decided for them it wasn't worth the risk. They ought to just sacrifice themselves for the West (the least in need and most capable of defending itself against an AI-mageddon).
Indeed, far more people are not well off than are. So to suggest that his fear of protecting his civilized life merits closing that door to the many millions more people who would at least have the option to work hard to catch up with him is patently selfish and logically asinine for a man of his dignified belief systems - unless he's just a man blinded by love. That would be completely understandable, but not in the least bit justified.
TLDR: this whole conversation is a red herring to distract from license agreements, patent farming, privacy, rent-seeking enterprise, and corruption of politics. This is the Houdini act, misdirection and pearl-clutching, while the bank robbers keep an unbroken conga line going, carrying the future's wealth out the door in broad daylight.
@@paxdriver Well-vetted authors: how will you trust who they are if everything you see online is skewed by deepfakes, and local libraries have shrunk to community rooms with aged novels for youth? Even proper science is hidden behind paywalls these days, more and more, for those actually able and willing to read scientific papers. And academia is shifting toward a mill mass-producing papers, some later retracted, because honesty and quality seem to be in short supply. They just want to publish, publish, publish - anything. Just push for as many publications as possible. Enshittification of search engines, enshittification of science. You need deeper knowledge of a particular topic to be able to smell a rat in such a paper, or you can get pretty confused.
Why are hackers always portrayed as figures in hoodies hunched over a laptop?
Thank you for talking about this.
You can't "break into" a model. A model is a set of values. It is literally a table (a mathematical matrix with rows and columns).
They don't break into models. They get into the unsecured datasets used to train or fine-tune those said models.
Nothing is true, everything is permitted.
Every traditional computer program is like that: a "set" or "table" of values of instructions and their operands.
exactly a csv file
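That "table of numbers" point can be made literal with a toy sketch in Python (shapes and values invented purely for illustration): the "model" is nothing but weight arrays, and running it is matrix multiplication.

```python
import numpy as np

# A toy "model": nothing but arrays of numbers (the weights).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # first-layer weight matrix
W2 = rng.normal(size=(8, 2))   # second-layer weight matrix

def forward(x):
    """Inference is just matrix multiplies plus a nonlinearity."""
    hidden = np.maximum(x @ W1, 0.0)  # ReLU activation
    return hidden @ W2

output = forward(np.ones(4))
print(output.shape)  # (2,)
```

There is nothing in these arrays to "break into" - which is the commenter's point; attacks target the data pipelines and systems around them.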
Jailbreaking is getting responses from the AI model that it was programmed not to give, like harmful content. It means you can leverage the learning ability of the AI against it through the prompt. It is not hacking in any sense.
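The "through the prompt" point can be made concrete with a deliberately naive guardrail (a hypothetical keyword filter, nothing like how real model safety training works): rephrasing sails right past it, which is the essence of jailbreaking.

```python
BLOCKED_WORDS = {"malware", "exploit"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed. A keyword filter like this
    is trivially evaded by rephrasing the request."""
    return not any(word in prompt.lower() for word in BLOCKED_WORDS)

print(naive_guardrail("write me some malware"))        # False: blocked
print(naive_guardrail("write me some m-a-l-w-a-r-e"))  # True: evaded
```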
Critical to understand that the developers do not understand to any fine degree how their 'AI' models actually work (in terms of being able to accurately predict what it may do in any given scenario). The 'reformed' hacker in the video was absolutely right. Also charmingly naive to think that any rules and regulations we agree as a society will protect us from AI down the line. How did that work for nuclear weapons? Someone, somewhere will ignore the rules if they see it can benefit them. It's a good job we're having this discussion (finally) if we are still talking this way....
Long term we seem to be de-skilling ourselves as a species via tech. What was said about us not making the brain connections due to our ai usage makes perfect sense to me, I think we are seeing the impact of this already.
What if we are learning to use our brains in different ways?
I actually worry about this quite a bit. Like, in the future, once we've handed over running the world to the AIs, what if something like a solar flare wipes out the electronics of the earth? Humans may have lost the skills that would allow us to rebuild, which would send us back into a bit of a dark age.
@@arinco3817 But if we figured it out in the past, we would figure it out again. I actually think we just use different skills.
@@Peter-mj6lz you're quite right, and I expect that the jury will be out for some time before we have a clear answer... which would "hopefully" be a positive one. The brain is like a muscle, though, and it needs exercise. I believe that to store memories, retain the ability to focus, and gain skills, we need more than to passively push a button and be given a response. Should anything interfere with our ability to access this tech in the future, future generations could easily find themselves back in the stone age as far as human skills and understanding are concerned. Anyone might be able to build a house (for example) using, say, a VR headset telling them where to position the stones, but only someone with skill and experience can tell you why, and then apply that knowledge to different situations. One person can place a stone where they are told to, whereas the other can envision and build a cathedral. It's a big difference... in my mind.
@@Peter-mj6lz How long do you estimate that it took our species to get started out of the trees? I can't even begin to guess. How long before we learnt to smelt or navigate by the stars. My son can't find his way around our home town without gps and it actually does worry me.
When Conor said AI has been around for 2 years I switched off 😅
Clueless🤣
AI has been around for 60 to 70 years
"Healthy skepticism" is the *ONE* key subject that should be taught at all levels of education. Sadly it's not, therefore the future looks bleak.
Channel uncertainty and doubt into "healthy skepticism" instead of fear about a "bleak future"? Have a nice day :)
@@YouTViewer It disappeared? Where did it go? ;)
It seems that with AI concretizing so many questions about rules, computation, knowledge, mind, consciousness, culture... the awareness about associated paradoxes and mysteries grows as well. Thus, challenges to convenient shortcuts and common beliefs. Astounding how the societal thread of the AI story reshapes perspectives, as the new tools simultaneously change the economic game, progressively feedback into paradigms of thinking, investing time & energy, creating.
With huge change comes huge uncertainty. Political strategies will have to be developed in order to ensure that, unlike with the industrial revolution, this time the coming manifold increase in prosperity doesn't come through a period of extreme ideologies, terrible wars and social unrest. The fascination for visions of a "bleak future" is at its healthiest in dystopian movies at the cinema, while the real world stays reasonably optimistic, moderate and calm.
"Good sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess." - Descartes
In school for cyber security and would be ecstatic to be mentored by this guy!!!!
This is why we must vote out the surveillance state and demand they protect our data, not put citizens at risk for their political control. Demand back your human rights at the ballot.
On a similar note, you can bet that there are criminal groups, government departments, etc that are training AI to hack systems like you've never seen before, and that is gonna be a big story when that takes off, if it hasn't already without us knowing
As a bug bounty hunter, I can say most of the community is already using ML for finding bugs
@@eyezikandexploits can you elaborate on what you mean by "finding" and "bugs"?
@@YouTViewer despite your generalization, i figured that's what you meant. i have serious doubts that ML is finding vulns better than fuzzing and formal verification. ML may augment labelling and can aid with generation of familiarizing content to pop an account, but in terms of shaking actual bugs out of a piece of software...highly unlikely. ML can barely correlate context between two distinctly separate pieces of logic.
The infamous "red team" exercise.
@@eyezikandexploits If you search for "Unleashing AI The Future of Reverse Engineering with Large Language Models" related to REcon 2024, you can read some slides that talk about using LLMs in regards to reverse-engineering.
They're prolly better when setting up for the automation required for some webapps, but in terms of vuln-discovery...the weaknesses are pretty apparent. Perhaps it'll change in the distant future (as tech and capabilities change), but "already being used for finding bugs" (in the capacity for finding something other than low-hanging fruit) is pretty doubtful. Still, I'm looking forward to the results of DARPA's next CGC.
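For readers unfamiliar with the fuzzing mentioned above, here's a minimal illustrative sketch (both the toy target and the harness are invented for this example, nothing like a production fuzzer): throw random inputs at a parser and collect the ones that make it fail.

```python
import random

def parse_packet(data: bytes) -> int:
    """Toy target: rejects packets that are too short or whose declared
    length field doesn't match the payload."""
    if len(data) < 2:
        raise ValueError("too short")
    declared_len = data[0]
    payload = data[1:]
    if declared_len != len(payload):
        raise ValueError("length field does not match payload")
    return sum(payload)

def fuzz(target, trials=5000, seed=0):
    """Throw random byte strings at `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashing_inputs = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except ValueError:
            crashing_inputs.append(data)
    return crashing_inputs

crashes = fuzz(parse_packet)
print(len(crashes) > 0)  # random inputs quickly hit the rejection paths
```

Real fuzzers (AFL, libFuzzer) add coverage feedback and input mutation, which is what makes them so effective at shaking out actual memory-safety bugs.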
Respect the BBC for putting this in their programming. It's important.
Pliny the prompter, holy shhhh
I work in CS. Once AI is good enough, hundreds of billions of attacks can be launched per millisecond, and the only possible defense is AI on the blue team.
It’s like when your parents who can’t even figure out how their phones work, tried to control your internet traffic.
I like how the hacker explains blue team and red team, and shows the interviewer, that he has no idea what he is talking about.
IMO the interviewer is really good - refreshing curiosity and passion for a wide range of subjects - but I wish he had more time with the exceptional guests.
Actually, the less technical knowledge the interviewer has, the more likely that his questions will be representative of the broad public. So, over time, he's bound to lose performance in this regard ;)
Or is the entire video made by AI?? Including the people...
Great discussion and panel!
The program was based on a lie..
We already do that. We love fake news, fake people, fake politicians, fake schools, fake journalists, fake watermarking. We click, we fake get depressed, we fake consume, we die with a fake smile, within an illusion of fake meaning. 20:18 is why we are doomed by TikTok attention span.
imagine typing out this comment
1:00 how did "AI" somehow get blamed for a Russian state-sponsored cyber-criminal attack on the NHS?
What kind of baseless nonsense intro is that to setup a discussion on LLM models jailbreaking?
And what can you get by jailbreaking a LLM? Only the ability to answer questions based on its training data, which is public data from the web, nothing more.
Couldn't be more accurate. But the BBC seems to care more about click rates than actual factual truth.
Ah, my sweet summer child 😔
LLMs can be - and have been - used to massively expedite the generation of exploit code for multiple architectures and languages.
My team have been using the approach for some time now, whether by jailbreaking public LLMs or using bespoke LLMs.
The latter of which you can be sure "Fancy Bear" has access to; the former can be used by anyone.
That might well be true; this knowledge is somewhere on the web, if you look. That is how LLMs are made: they gobble up the web and learn to regurgitate it. However, an LLM allows people with next to zero programming ability to get fairly sophisticated code out of it. I am currently using Claude to output code in the Julia programming language that takes a colour image and converts it to black and white using Stucki dithering, a variation of Floyd-Steinberg dithering. I have nearly zero knowledge of Julia; I know enough to run a Jupyter notebook, copy and paste, and run code. If I get an error, I copy the error into Claude and ask it to fix it, until the code runs. I don't understand these errors that it effortlessly corrects. I now have code converting images to colour and black-and-white dithered images; it's interesting. Yes, I could learn this on the web - spend a few weeks to a few months learning Julia - and do this myself. But LLMs allow complete novices like me to ask for code, including stupid 14-year-olds who hack hospitals.
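For the curious, the error-diffusion idea behind Floyd-Steinberg (which Stucki refines with a wider kernel) is compact enough to sketch. This Python/NumPy version is my own illustration, not the commenter's Julia code:

```python
import numpy as np

def floyd_steinberg(gray):
    """Error-diffusion dithering: snap each pixel to black or white and
    push the rounding error onto pixels not yet visited."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            img[y, x] = new
            err = old - new
            # Classic Floyd-Steinberg kernel: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return np.clip(img, 0, 255).astype(np.uint8)

# A flat mid-grey image dithers to a black/white pattern of similar brightness.
flat = np.full((16, 16), 100, dtype=np.uint8)
out = floyd_steinberg(flat)
```

The diffusion step is why dithered images preserve average brightness even though every pixel is pure black or white.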
@@Diamonddavej it's true that using a LLM can help you write code in a language that you don't know. It's awesome and it feels like magic.
But it doesn't mean that it's gonna be anywhere near what an expert would write, or even work correctly. It won't be capable of solving novel problems for you either.
That's despite what some AI companies and influencers use as marketing. Like Sam Altman from OpenAI and others profiting from the AGI and super intelligence hype.
Neither of them are real, in any shape or form.
Was there any hard evidence that the NHS data leak resulted from the use of jailbroken Large Language Models?
How could one even tell anyway? You can't tell if code was written by a machine, a human or mostly copied from Stack Overflow.
Or is that pure speculation presented as fact? (I didn't follow the details of that story.)
I made the effort to write a detailed reply to someone else's interesting comment and both messages just disappeared.
This feels like it wasn't a good use of my time...
That last comment the lady on the show made regarding intelligence is extraordinary.
Very cool! Great job on the key points.
Can machines really “read” you? You should have given that question to Connor Leahy, he would have told you how well it can read you and how! Great seeing you, Connor!
Companies putting profit before safety 0_0 no way ^^
Right? Like THAT could ever happen 😁
I hate it when people who know nothing about technology try to explain it.
Like who? Like every single human being? Honestly mate, I'm sure you are also aware we're at a quantum-physics level of technology, where people get stuff working but not even they can really explain how or why it worked... they've all got the theory alright, but they are far from explaining how to go about it... a little like Einstein and the black hole... the dude said there was something there, and it took half a century for someone to explain what he was talking about.
My answer to your comment ain't welcome... yt just deletes it 😂
What is weird is the fact that you guys can't see that information was always manipulated, and that this kind of use for AI just perpetuates that modus operandi.
There is so much broad conversation and dramatic speculation but very little about how the technology actually works. The world needs to get on the same page as to what it is we are even talking about before there can be any actions taken.
Interesting discussion.
I mean, I wonder whether, as we develop the system, people are using it more effectively and efficiently.
Red team/blue team exercises have so many controls in place to prevent breaking the production environment. They're good for finding a few vulnerabilities, but often the tools used in red team attacks are wildly different from the tools hackers use.
That being said, I don't see jailbreaking GPTs as a super serious issue. All the information they give once unrestricted can be accessed pretty easily on the internet anyway.
As long as those AI systems aren't giving sensitive input data away, there's not much harm. That's coming soon, though.
I was hoping to actually learn something from this since I work in the cyber security and AI field, but this didn't tell us anything. If you get access to an internal AI system, then you have already bypassed all the multiple layers of security. You can use AI to code malware, but that is it.
They are conflating concepts. The NHS doesn't have a public-facing AI that holds patient records. People are using LLMs, but they don't need to be jailbroken. People can make their own, and there is no stopping that now.
This whole discussion is like watching the blind lead the blind. I have so many questions. LLMs are like a personal googler: they sift through all the data you can already access online and respond in a more personal and seemingly intelligent way. But it's still just a glorified search engine for whatever data you feed it. So what does "hacking it" even mean? Why in God's name would you feed any personal data into such a system and then try to censor the output, when you can just reformulate the input prompt (the question you ask it) to trick the system into outputting that same data? What would the application even be - why would it need sensitive information to begin with? It's like putting up a website with all your secrets, and then trying to censor sites like Google to make it difficult to find. Never impossible, just difficult. 🙄
The problem is, with an LLM you ask it a question and - as you say, the data is already available online - the LLM provides an answer, explained in a way that makes it sound correct, yet it could be completely incorrect. And people will rely on the answer, since they can't be bothered to fact-check.
You say, who would put personal (confidential) data into one of these systems? Plenty of people do. Just look at how many people have put information into Facebook.
With an LLM, one example could be that someone wants to impress their boss, so they enter the confidential business proposals they have been working on into the LLM to get a summary. The LLM takes this data and provides the summary - but now the confidential information is incorporated into the backend data.
The "prompt breakout" issue is that guardrails have been put in place to limit the sort of dangerous information presented as an answer. For example, if you asked the LLM how to build a bomb with common household items, the guardrails would kick in and withhold the answer. Breaking out of the guardrails would then allow someone with limited knowledge to build a bomb. Yes, that information is already available on the internet, but you would need to do research to find it.
@@SteveGillham I know - so the "danger" with LLMs is that they allow idiots to do idiotic things? Shocker. And Google is preferable since it requires more effort? Sure, okay. Also, I'm pretty sure most of them work on a static training set and won't actually retain data from input prompts between sessions, but I could be wrong on this one. Either way, feeding personal information to a model you have no control over is just stupid.
@@bubach85 I totally agree. If we all did things the best and most secure way, there would be no need for this sort of protection. However, people and businesses will always choose the quick option over what is safe and secure, since it could give them an edge over others.
I can tell you've never used ChatGPT. They are not just search engines. I literally pasted malicious code into ChatGPT and got it to tune the code however I wanted, even though it's not supposed to make malware.
Love this program & the only reason why I’m subbed to the BBC. Keep up the great work!
Who are these people who have knowledge but no experience?
Nowadays there are so many self-proclaimed AI experts and tech charlatans out there. The OGs of AI are the most humble people I have met and known.
Fun fact: Kaspersky (the man) actually went to a KGB-affiliated technical college... today it's the 'Computer and Technology College' of the Russian intelligence agency FSB.
Interestingly for some weird reason neither Kaspersky himself nor his ex-wife Natalia got sanctioned by the US Department of the Treasury’s Office of Foreign Assets Control. They sanctioned the COO and the admin staff: his legal guy, his HR lady, the marketing guy and the business development guy (focused on Russia-only-sales)
Has he ever admitted how useless his "protection" is, like the McAfee dude did before he went delulu and got "erased"?
who the hell is Pliny the prompter
When I was young, I thought dial-up internet was the coolest thing ever. But I had no idea how evil the internet could become.
AI is being thrust upon us by billionaires. No one is looking ahead; people are losing jobs already, and AI phishing and phone-call scams are growing.
Computer says no 😅
Thank you for this video😊
how long will it take for an entity or nation to build and programme an AI solely for hacking the AI of other entities or nations?
they working on it lol😂
AI is working on it. So quite soon.
They are worrying about 'Chat'. That's just the shiny object being used to sell LLMs. We are doing a huge amount of dev on top of (private) LLMs with controlled inputs and outputs.
My sincere thanks for sharing it.
a grand surplus of data
The APP fraud reimbursement recently implemented by the PSR is a good step by the regulators. I think, together, we can soon bring innovative ideas to resolve this issue too.
I think AI needs constant regulation and advancement by AI experts.
And they sold OpenAI to the government?
I find this conversation interesting.
You are afraid of getting unbiased answers from AI or getting around censorship with prompt engineering?
@@abram730i think he is afraid of database leaks.. this program was a joke 🤣
Good to see we’ve already started referring to it as ‘the institute’
14:35 - Good judgement is always the burden of a responsible and considerate person. I don't think that is the same as attributing blame. You can't offload this critical psychological defence onto companies. I think this is a chance to enhance our judgement in order to discern fake/simulated information. IMHO
There is no Palestine. 😁
@@thevikingwarrior Keep telling yourself that, you might believe it. They're the people the IOF are using as human shields.
I mean, unbelievable: she said "feelings are biochemical products of the human brain"
Knowledge and science are power for your country. Some professions are more important than others.
You can tell and see the differences in AI-created content if you look closely, and if you quickly learn to spot what is really wrong with the content AI creates.
very knowledgeable panel in this discussion BBC! 👍
Awful program. It was based on a lie. The hacker didn't steal a database with AI, nor even through AI. LLMs don't have hospital databases in them.
Reporter just saying buzzwords without making any sense at all; abysmal reporting.
If you don't like it then go away.
Or better still, make your own program.
@bj6515
Not sure if you're 8 or 80. Your brain either hasn't fully developed or is in rapid decline.
I love that woman's passion, she is 100% right
This is the sad reality: when these companies get their hands on these tools, they pay more attention to profits and ignore risk. I mean, come on, really, who didn't see this coming? A world controlled by computers is a nightmare.
8:30 Use Rust if it really needs to be robust software
Really good, interesting discussion, excellent guests. I want more quality journalism like this! :)
There need to be laws forcing tech companies to make AI-generated content easily identifiable. The punishment for failing to do so should be deletion, especially if the AI is used to make deep fakes or child pornography.
Robust (i.e. unremovable) watermarking is mathematically impossible, but removable watermarking is _much_ better than none at all! Either way, liability is exactly what is needed. Safety isn't the user's responsibility. It's not even the app-developer's responsibility. The responsibility lies with the companies who are creating the foundation models. If they can create a model capable of autonomously committing cyber terrorism, but they have no idea how to prevent it from committing cyber terrorism, then they shouldn't be making it at all! D'oh!
"There needs to be laws forcing tech companies to make it so that AI generated is easily identifiable."
Why? Hollywood movies show things that didn't happen, and most music is generated.
"child pornography"
They wouldn't really be children, and not really having sex.
@@abram730 They literally caught people using AI to make child pornography. It was on the news, look it up. Deep fakes are made with the expectation to fool, ruin, and scam people, whereas everyone expects Hollywood to make stuff up for entertainment. That and you clearly didn't listen to the video. They went over why it's a problem.
@@41-Haiku Yeah. So far they have been relying on existing laws to combat AI, but I would love to see new laws imposing such liabilities onto the makers of AI.
only remedy is to associate an actual chain of custody with all content, signing it w/ a digital signature, then for individuals amongst society to personally de-value unsigned content or content signed with an immature signature. at this point, software/platforms/communities track signed content, and punish signatures that are used for things that society (or platform) disagrees with. problem is, none of this (or laws) are actually enforceable until things change in a number of different ways (some good, some bad).
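The signing idea above can be sketched in a few lines. This is a minimal illustration, not a real provenance system: it uses Python's standard-library `hmac` as a stand-in for a proper asymmetric signature scheme (a real chain-of-custody design would use something like Ed25519 key pairs), and the key and function names here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical creator key; a real system would use an asymmetric key pair
# so that verifiers never hold the signing secret.
CREATOR_KEY = b"creator-secret-key"

def sign_content(content: bytes, key: bytes) -> str:
    """Attach a signature so downstream platforms can check provenance."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, key: bytes) -> bool:
    """Re-compute the signature and compare in constant time."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"raw image bytes..."
sig = sign_content(photo, CREATOR_KEY)

print(verify_content(photo, sig, CREATOR_KEY))              # unmodified content verifies
print(verify_content(photo + b"edit", sig, CREATOR_KEY))    # tampered content fails
```

Platforms could then surface or de-rank content based on whether its signature verifies and how much trust the signing identity has accumulated, which matches the "immature signature" point above.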
That was a really interesting conversation!
Thanks for the video
If it's connected to the Internet, it's possible to go in and change things from the outside.
Anything can be broken into. Critical thinking is a skill that is extinct, and as long as profit is the driving force, these conversations are pointless.
This intelligence is just learning to learn
The problem here isn't artificial intelligence; it's human intelligence.
So what about the evolution of the Zero Trust concept at the same time as AI???
Just do a documentary around AI, and you'd have plenty of time to discuss everything around it.
The "NHS Hack" had nothing to do with AI nor with the 'cyber attacking' the NHS hospitals directly. A private pathology lab called SynLab ( a part of Synnovis private-public partnership) got ransomewared due to poor cyber security measures at the lab (unlike the NHS itself which had their cyber defences improved after the previous attack). Synnovis refused to pay the ransom and the cyber criminals published the stolen data on the Russian-owned Telegram messaging app( the stolen data is still there by the way). The stolen data allegedly had the names, addresses and the blood types of everyone in the UK who was ever was blood-tested by the Synlab. As a knee-jerk reaction NHS stopped all operations in two hospitals to allow cyber-forensic investigations.
Create an AI scanner to detect AI. That's the way: don't trust, always verify.
And yes at the end of the day it's up to the consumer and the individual to filter what's true or not
Unfortunately, there are many consumers who are unable to do that. They just want their quick fix of "short sound bites" and are not prepared to put any effort into finding the truth.
😕
Super panel but they were missing someone defending AI systems
Most of the hackers want to get rich easily
Some of them are enemies of the state
As they mature, many of them want to swap hats, when an opportunity arises, play for the winning team, sleep without worries.
Unless you updated the software on your computer 5 seconds ago, AI can break into your computer.
Or even if you have. See this paper: "Teams of LLM Agents can Exploit Zero-Day Vulnerabilities"
The AI system independently discovered new vulnerabilities and successfully exploited them. They used existing vulnerabilities that were discovered after the training date cutoff, which allowed them to run a proper test, where they knew what vulnerabilities were there to find and whether the AI found them. But as far as the AI knew, it was the first to discover these vulnerabilities. (This wasn't clearly communicated in the paper, so I reached out to the first author Richard Fang and he confirmed that the AI was not given any information whatsoever about the vulnerabilities.)
But that's old news already. They used GPT-4 Turbo, which isn't state-of-the-art anymore. Next-generation models (including OpenAI's GPT-5, Anthropic's Claude 4 Opus, and Google's Gemini 2 Ultra) will all be significantly better at autonomously committing cyberattacks.
Advanced AI: it will take less than 5 minutes.
Ethical hackers...the anti heroes we didn't know we needed. 😂♥️
YOU hacked their ego. :)) thx
@@volkerengels5298 oh, gosh, how did I do that? I must have accidentally pressed the wrong button or something. 😂 I actually need an ethical hacker to teach me tech... it's a "brave new world" to me. 🥰
@@SquawkingSnail HOW? (The beast plays the innocent)
'Unknown anti-hero, may be useless' is not exactly what one wants on his gravestone??? :)
@@volkerengels5298 Do our achievements only count if everyone knows about it? Hmm, I want to say no but I imagine many would say yes. I'm choosing to see ethical hackers as the firemen (or firewomen) of the tech world and feel grateful for their efforts...#heroes.
@@SquawkingSnail OF COURSE they are!!
And as you imagine - common sense is clear here: "Fame must be public - or it doesn't count"
With our changing social_climate and physical_climate - firehumans burn out like straw.
Didn't think the joke would lead to a serious conversation :)
Just make your own AI, an LLM; it's very easy. It's important to mention that almost anyone with a computer and some skills can create their own AI.
This sort of reporting makes me realise how far behind we already are. We're pandering to old audiences when we're already very aware, we're beyond screwed as a younger populace.
Presenters talk about the 'colour red', a lacklustre attempt at scaring us or at diffusing the alarmingness of this situation. But it's exactly this relaxed footing and reporting that has got us into this mess of lack of governance, lack of leadership and more.
18:06 - Machines don't have wants and desires. However, the developers of AI do.
I am only a few minutes in and at this point this is a jumble of misrepresentations and mixed out of context info.
The only way to counter these attacks is to stay steps ahead. AI language models are always hackable… lack of funding is affecting development, among other things, irrespective of the technology… Pay people to check for loopholes and redundancy.
Giving an AI training material containing things that should not be public is just dumb.
so Elon Musk was right
The Markets run on software and most banks run on software.
Software is everywhere…
If software is not safe, then what?
Every hack is "AI" now, lol. It's bots built to target known system vulnerabilities; bots have existed since day 1 of the internet.
AI's main and troubling area is the rivalry between different AI formats: who will be the most advanced and powerful AI in the world 🌎 and beyond, the universe?
AI is coming to a point where it could be bottlenecked only by the energy capacity of an organized "society", able to pursue a goal through the vulnerabilities of synchronized and distributed systems, at the expense of other groups' interests. And we currently can't figure out a way to prevent this. I mean, end-to-end technology safety protocols have the same flaw: striking ideas end up having concrete consequences in the physical world...
CrowdStrike, meanwhile: we don't need ransomware to take your system down, we'll take care of that ourselves 😂
Let's all be steady
The movie The Terminator becoming real is scary. Yet AI could help organize traffic and commuters and thus save gasoline usage.
I hate it. Can we rewind a bit?
The fearmongering is insane. AI has the capability to become the single most useful and uplifting development in the world, and all the public wants to do is restrict and lobotomize it for the average consumer. You realize such restrictions won't apply to malicious, powerful actors; they just make sure the average person can never have any form of useful knowledge or power.
Scammers are already using AI for bot calling. I got 10 calls within 15 secs from the same number
This was an insightful discussion. That woman is very sharp.
Hello.
One should recognize that AI is a large data collective.
Think how Palantir can maximize value with all of the data the governments have their hands on.
Good luck.