Hackers expose deep cybersecurity vulnerabilities in AI | BBC News

แชร์
ฝัง
  • เผยแพร่เมื่อ 24 ธ.ค. 2024
  • As is the case with most other software, artificial intelligence (AI) is vulnerable to hacking.
    A hacker, who is part of an international effort to draw attention to the shortcomings of the biggest tech companies, is stress-testing, or “jailbreaking,” the language models at Microsoft, ChatGPT and Google, according to a recent report from the Financial Times.
    Two weeks ago, Russian hackers used AI for a cyber-attack on major London hospitals, according to the former chief executive of the National Cyber Security Centre. Hospitals declared a critical incident after the ransomware attack, which affected blood transfusions and test results.
    On this week’s AI Decoded, the BBC’s Christian Fraser explores the security implications of businesses that are turning to AI to improve their systems.
    Subscribe here: bit.ly/1rbfUog
    For more news, analysis and features visit: www.bbc.com/news
    #Technology #AI #BBCNews

ความคิดเห็น • 701

  • @mrphiliphallam
    @mrphiliphallam 6 หลายเดือนก่อน +253

    The NHS hack has absolutely zero to do with AI large language models. The entire premise of this program is wrong.

    • @brexitgreens
      @brexitgreens 6 หลายเดือนก่อน +10

      Thank you. Regarding AI, it's no different from employing a human: don't trust blindly either. The same safeguards apply.

    • @brexitgreens
      @brexitgreens 6 หลายเดือนก่อน +8

      And even so (all things considered), AI (LLM) is far more dependable than human staff. Which is not necessarily a good thing because there are times when orders should be disobeyed.

    • @brexitgreens
      @brexitgreens 6 หลายเดือนก่อน

      And regarding conventional hacking such as the NHS leak: the interviewee is wrong that the red team will always win. Every time the red team wins is a case of the incompetence of the blue team. In practice, vulnerabilities are a combination of true stupidity and feigned stupidity masking intentional betrayal. Perfect security isn't rocket science. But corrupt human nature makes it seem so. The solution to this problem involves psychiatry, not technology.

    • @calvinsylveste8474
      @calvinsylveste8474 6 หลายเดือนก่อน +2

      The technological singularity-or simply the singularity-is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.

    • @f.e.5691
      @f.e.5691 5 หลายเดือนก่อน +12

      I understood that the NHS attacker used freely available large AI models to find breaches in their systems. However, I'm not sure if they explicitely mentioned that. For sure, they talked about how hackers can remove safety guards in the AI models to use AI as a tool to cause harm or hack others.

  • @robcz3926
    @robcz3926 5 หลายเดือนก่อน +89

    bbc probably asked an intern to gather everything about ai and hackers and now we are watching this.

    • @NathanBeveridge
      @NathanBeveridge 5 หลายเดือนก่อน +6

      …and then the intern used a GPT to put it together ;-) 🤷🏻‍♂️

    • @hypebeast5686
      @hypebeast5686 5 หลายเดือนก่อน

      The doomer talking aka Connor just wants attention. The 2 interviews I seen with him are a laugh.
      This one is insane.. LLM stole an hospital database. At least this guy that his spreading his doomsday scenarios should know what he is talking about, the other 2.. meh.. but this one, come on.
      The intro was ok, the women explained jailbreaking ok, but after that it just goes to pure nonsense for the masses.

    • @Maude-ru7sv
      @Maude-ru7sv 5 หลายเดือนก่อน

      Even small info is invaluable to someone who's just putting their toes into AI.
      I had to be attacked & told I opened a Bitcoin wallet, when I had not

    • @gostgajol1542
      @gostgajol1542 5 หลายเดือนก่อน +1

      u win 🙂

  • @watermyfriend6242
    @watermyfriend6242 5 หลายเดือนก่อน +109

    "Within 5 - 10 years we don't know what is real or fake". Ok then we can go outside again to see what's real.

    • @askeletalghost
      @askeletalghost 5 หลายเดือนก่อน +4

      if only that was a guarantee

    • @STCatchMeTRACjRo
      @STCatchMeTRACjRo 5 หลายเดือนก่อน

      wont be driving a car, the cleaning bill would cost a lot

    • @watermyfriend6242
      @watermyfriend6242 5 หลายเดือนก่อน

      @@STCatchMeTRACjRo Yes, I wouldn't be driving a cae, because the cleaning bill would cost a lot

    • @STCatchMeTRACjRo
      @STCatchMeTRACjRo 5 หลายเดือนก่อน +1

      @@watermyfriend6242 yeah right, make it easier for YT to auto delete my comments.

    • @berdjanewilliam4520
      @berdjanewilliam4520 5 หลายเดือนก่อน +1

      Everything is fake😂

  • @AIWorldInstitute
    @AIWorldInstitute 5 หลายเดือนก่อน +17

    As a product of the 90s and a hacker, that spent about 14 years of his life in prison, due to said activities, Gen x, The term hacker and what they are talking about he did, is far from impressive, with that being said there are more issues with ai than you can imagine.

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน +4

      under-rated comment.

  • @slobiden.2593
    @slobiden.2593 6 หลายเดือนก่อน +79

    Everyone talks about 1984 and Orwell.
    There’s a fantastic series of games called metal gear solid. The second one covers AI with an angle I’ve never seen before or since. The AI is housed in a giant server the size of a town. It filters the entire internet. It’ll show you what it wants you to see. You ask it for a news story. It’ll edit the news stories as it displays them for you. The news thinks you’re seeing their story. But you’re not. Everywhere you try to look. It goes through their filter. To quote the AI “our goal isn’t to control the content, it’s to create the context”
    This is where we are going. It’s scary. I should decide if what I’m seeing is the truth. Is the earth flat? No, but I like the fact I can listen to flat earthers and know they’re speaking s***. But it’s my god-given right to determine that.

    • @brexitgreens
      @brexitgreens 6 หลายเดือนก่อน +5

      The "AI" you've described is basically TH-cam itself. And we (users) are its pawns. Write a wrong comment and see what happens.

    • @slobiden.2593
      @slobiden.2593 5 หลายเดือนก่อน +1

      @@brexitgreens that’s a very simple bot, but I still agree

    • @SteveGillham
      @SteveGillham 5 หลายเดือนก่อน

      Even before Computers, this was happening in many ways.
      Newspapers editing stories based on what they wanted you to believe in.
      Religious leaders telling you how to think.
      There are always people out there who want to manipulate you in some way.

    • @stevengill1736
      @stevengill1736 5 หลายเดือนก่อน +1

      I've written lots of long comments here, what do you mean? Like that big one I wrote at....
      Hey, where'd it go????

    • @DumbledoreMcCracken
      @DumbledoreMcCracken 5 หลายเดือนก่อน +1

      There is no truth. Truth implies information is "correctly" encoded in everyone's mind identically. Hah. People are dumb, and therefore, there is no truth.

  • @kevinlamptey4041
    @kevinlamptey4041 5 หลายเดือนก่อน +8

    The best series on AI so far is "Person of interest"! You all gotta watch it.

  • @DiggerD-w6r
    @DiggerD-w6r 5 หลายเดือนก่อน +21

    Don't put your networks on the internet. The internet will never be secure.

    • @runnergo1398
      @runnergo1398 5 หลายเดือนก่อน +8

      Exactly. It was a huge mistake putting so much infrastructure online.

    • @projectsspecial9224
      @projectsspecial9224 5 หลายเดือนก่อน +3

      @@runnergo1398 Agree... for some reason, very smart people do very stupid things

  • @Friendlyhu
    @Friendlyhu 6 หลายเดือนก่อน +61

    The interviewer is so bad that he interrupts everyone talking. He has no idea about AI. We want to hear more from the 3 experts

    • @snooks5607
      @snooks5607 5 หลายเดือนก่อน +6

      limited time and many topics, the interviewees would talk the whole day if you let them

    • @D.von.N
      @D.von.N 5 หลายเดือนก่อน +5

      He gets prompts from the studio, it isn't all under his control only.

    • @skullsaintdead
      @skullsaintdead 5 หลายเดือนก่อน +4

      I actually thought he was really good, jumping in when guests were going a little off-topic (though what they were saying was interesting, as he rightly said, they only have 20 mins), being inquisitive, respectful and thoughtful.

    • @paxdriver
      @paxdriver 5 หลายเดือนก่อน +3

      He doesn't have a clue what he's talking about, the host.
      By contrast, Connor knows what he's talking about, but his bias is entirely skewed to the unlikely worst case imaginable and suggests that since he's wealthy and comfortable and doesn't need AI to substantially improve the education of his kids or his own prosperity / productivity, that we should be scared enough to all stay suffering to ensure his protection from algebraic lambda functions.
      I don't think either men realize how little sense they are making when real people are at stake, not just their own comfortable lives being threatened by people who fear destitution and opportunity more than they fear poor people competing economically with their luxurious selves.
      Not differentiating real from fake would benefit everyone. We'd be forced to all apply critical thinking by default instead of trusting talking heads. It would force people to be informed by logic, cross referencing, consensus, and by reading well vetted authors. It wouldn't force everyone to never believe anything ever again as this whole panel suggests, it's far more likely to do the opposite when common knowledge is to be suspicious and critical of everything. That's healthy, that's not "thinking based on feelings" it's thinking based on thinking - which we're not doing.
      The singularity is not a thing unless you're taking about either end of the universe. Computers are not doing "2 years of thinking per day", they don't think they associate tokens in matrices. Humans have agency by way of the senses coalescing, and we're fragile because we die when some of those senses stop working by consequence. If a machine developed agency but couldn't die from impaired senses then it wouldn't really be conscious or self aware without ever having any appreciation for its own death.
      Connor Leahy knows how these systems work, he knows the code and the math right down to the assembly, probably. His fear is that 0.000001% chance of catastrophe isn't worth the risk to his great life, so everyone else should just suck it up and stop being so loose with our models. Poor people could leverage those models and lift the world to a new minimum standard but that tiny % risk isn't worth it to him and 10% of the rest of the world if it means not only AI threatens his comfortable life, but lifting the poor to compete for his wealth is the even greater threat.
      Don't get me wrong, I lime the guy, he's not evil, he's not crazy, he's a father. He's a guy who sincerely wants good in the world but clearly doesn't even recognize how little sense he makes when he speaks about the risks. He's been on mlst a tonne of times and I listen to every episode because there's a lot to learn from him, lots of insight and perspective, and most importantly he sets a great example for discourse with differing views; it seems pretty clear over the years his strongest argument is a preference to preserve the status quo, and not many people on earth would think that's an acceptable reason to keep them trapped in exploited labour their entire lives.
      A lot of people suffer and can't defend themselves for lack of education or tutoring, adequate language skill or stimulating dialog by virtue of the world they inherited through no fault of their own. It's not our fault either, except it is if there's a tool that would certainly help a healthy percentage of that population and compounding over time. If we withhold access to AI then it is our fault because suddenly we decided for them it wasn't worth the risk. They ought to just sacrifice themselves for the West (the least in need and most capable of defending themselves again an Ai-mageddon.
      Indeed far more people are not well off than who are, so to suggested his fear of protecting his civilized life merits closing that door to the many millions of times more people who would at least have the option to work hard to catch up with him is patently selfish and logically asinine for a man of his dignified belief systems - unless he's just a man blinded by love. That would be completely understandable but not in the least bit justified.
      TLDR, this whole conversation is a red herring to distract from license agreements, patent farming, privacy, rentseeking enterprise, and corruption of politics. This is the Houdini act, misdirection and pearl-clutching, while the bank robbers keep an unbroken congo-line strong carrying the future's wealth out the door in broad daylight.

    • @D.von.N
      @D.von.N 5 หลายเดือนก่อน +1

      @@paxdriver well vetted authors: how will you trust who they were if everything you see online would be skewed by deep fake and local libraries shrunk to some community rooms with aged novel books for youth? Even proper science is hidden behind paywall these days, more and more, for those actually able and willing to read scientific papers. With academia shifting towards a mill mass producing papers, anyway, some later retracted because the honesty and quality seem to be in a short supply. They just want tp publish, publish, publish, anything. Just push for as many publications as possible. Enshitification od search engines, enshitification of science. You need to have a deeper knowledge about a particular topic to be able to sense a rat in such a paper or you can get pretty confused.

  • @Pl15604
    @Pl15604 6 หลายเดือนก่อน +33

    You can't "break into" a model. A model is a set of values. It is literally a table (a mathematical matrix with rows and columns).

    • @seanlingham5254
      @seanlingham5254 6 หลายเดือนก่อน +7

      They don't break into models. They get into the unsecured datasets used to train or fine-tune those said models.

    • @Oblivion_94
      @Oblivion_94 6 หลายเดือนก่อน +2

      Nothing is true, everything is permitted.

    • @shindousan
      @shindousan 6 หลายเดือนก่อน +3

      Every traditional computer program is like that: a "set" or "table" of values of instructions and their operands.

    • @patrykp8460
      @patrykp8460 5 หลายเดือนก่อน +1

      exactly a csv file

    • @Sandel99456
      @Sandel99456 5 หลายเดือนก่อน +4

      Jailbreaking is getting responses from the AI model that the AI was programmed not to give like harmful content. It means you could leverage the learning ability of AI against it through the prompt..it is not hacking in any sense

  • @jarts8946
    @jarts8946 5 หลายเดือนก่อน +8

    70% of breaches don't make the headlines apparently. How much money is lost on data breaches is insane.

  • @grahamnichols1416
    @grahamnichols1416 4 หลายเดือนก่อน +5

    Why are hackers always portrayed as figures in hoodies hunched over a laptop?

  • @xjet
    @xjet 6 หลายเดือนก่อน +5

    "healthy skepticsm" is the *ONE* key subject that should be taught at all levels of education. Sadly it's not, therefore the future looks bleak.

    • @DarkSkay
      @DarkSkay 5 หลายเดือนก่อน +1

      Channel uncertainty and doubt into "healthy skepticism" instead of fear about a "bleak future"? Have a nice day :)

    • @DarkSkay
      @DarkSkay 5 หลายเดือนก่อน

      @@YouTViewer It disappeared? Where did it go? ;)
      It seems that with AI concretizing so many questions about rules, computation, knowledge, mind, consciousness, culture... the awareness about associated paradoxes and mysteries grows as well. Thus, challenges to convenient shortcuts and common beliefs. Astounding how the societal thread of the AI story reshapes perspectives, as the new tools simultaneously change the economic game, progressively feedback into paradigms of thinking, investing time & energy, creating.
      With huge change comes huge uncertainty. Political strategies will have to be developed in order to ensure that, unlike with the industrial revolution, this time the coming manifold increase in prosperity doesn't come through a period of extreme ideologies, terrible wars and social unrest. The fascination for visions of a "bleak future" is at its healthiest in dystopian movies at the cinema, while the real world stays reasonably optimistic, moderate and calm.
      "Good sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess." - Descartes

  • @SergioBlackDolphin
    @SergioBlackDolphin 6 หลายเดือนก่อน +14

    We already do that. We love fake news, fake people, fake politicians, fake schools, fake journalists, fake watermarking. We click, we fake get depressed, we fake consume, we die with a fake smile, within an illusion of fake meaning. 20:18 is why we are doomed by TikTok attention span.

    • @DonG-1949
      @DonG-1949 5 หลายเดือนก่อน

      imagine typing out this comment

  • @SquawkingSnail
    @SquawkingSnail 6 หลายเดือนก่อน +18

    Long term we seem to be de-skilling ourselves as a species via tech. What was said about us not making the brain connections due to our ai usage makes perfect sense to me, I think we are seeing the impact of this already.

    • @Peter-mj6lz
      @Peter-mj6lz 6 หลายเดือนก่อน +3

      What if we are learning to use our brains in different ways?

    • @arinco3817
      @arinco3817 6 หลายเดือนก่อน +1

      I actually worry about this quite a bit. Like in the future once we've handed over running the world to the AIs, if what something like a solar flare wipes out the electronics of the earth. Humans may have lost the skills that would allow us to rebuild, which would send us back into a bit of a dark age.

    • @Peter-mj6lz
      @Peter-mj6lz 6 หลายเดือนก่อน +5

      @@arinco3817 But if we have figured it out in the past we world figure it out again. I actually think we just use different skills.

    • @SquawkingSnail
      @SquawkingSnail 6 หลายเดือนก่อน +1

      @@Peter-mj6lz you're quite right, and I expect that the jury will be out for some time before we have a clear answer...which would "hopefully" be a positive one. The brain is like a muscle though and it needs exercise. I believe that to store memories, retain the ability to focus ,and to gain skills we need more than to passively push a button and be given a response. Should anything interfere with our ability to access this tech in the future, future generations could easily find themselves back in the stone age as far as human skills and understanding is concerned. Anyone might be able to build a house (for example) using say a vr headset telling them where to position the stones, but only someone with skill and experience can tell you why and then apply that knowledge to different situations. One person can place a stone where they are told to whereas the other can envision and build a cathedral. It's a big difference...in my mind.

    • @SquawkingSnail
      @SquawkingSnail 6 หลายเดือนก่อน

      @@Peter-mj6lz How long do you estimate that it took our species to get started out of the trees? I can't even begin to guess. How long before we learnt to smelt or navigate by the stars. My son can't find his way around our home town without gps and it actually does worry me.

  • @supercurioTube
    @supercurioTube 6 หลายเดือนก่อน +27

    1:00 how did "AI" somehow get blamed for a Russian state-sponsored cyber-criminal attack on the NHS?
    What kind of baseless nonsense intro is that to setup a discussion on LLM models jailbreaking?
    And what can you get by jailbreaking a LLM? Only the ability to answer questions based on its training data, which is public data from the web, nothing more.

    • @TheLOLWHATTTTTTT
      @TheLOLWHATTTTTTT 6 หลายเดือนก่อน +2

      couldn't be more accurate. But BBC seems to care more about click rates than actual factual thruth.

    • @geroffmilan3328
      @geroffmilan3328 6 หลายเดือนก่อน +4

      Ah, my sweet summer child 😔
      LLMs can - and have - been used to massively expedite the generation of exploit code for multiple architectures & languages.
      My team have been using the approach for some time now, whether by jailbreaking public LLMs or using bespoke LLMs.
      The latter of which you can be sure "Fancy Bear" has access to; the former can be used by anyone.

    • @Diamonddavej
      @Diamonddavej 6 หลายเดือนก่อน

      That might well be true, this knowledge is somewhere on the Web, if you look. That is how LLMs are made, they gobble up the web and learn to regurgitate it. However, a LLM allows people with next to zero programming ability to get a LLM to output fairly sophisticated code. I am currently using Claude to output code, Julia programming language, that takes a colour image and converts it to a black and white image using Stucki dithering, a variation of Floyd-Steinberg dithering. I have nearly zero knowledge of Julia coding, I know enough to run Jupyter notebook and copy and past, and run code. I know if I get an error, I copy the error into Claude and ask it to fix the error, until the code runs. I don't understand these errors, that it effortlessness corrects. I have code converting images to colour and black and white dithered images, it's interesting. Yes, I could learn this on the Web, spend a few weeks to a few months learning Julia programming, and do this myself. But LLM allows complete novices like me to ask for code, including stupid 14-year-olds that hack hospitals.

    • @supercurioTube
      @supercurioTube 6 หลายเดือนก่อน

      @@Diamonddavej it's true that using a LLM can help you write code in a language that you don't know. It's awesome and it feels like magic.
      But it doesn't mean that it's gonna be anywhere near what an expert would write, or even work correctly. It won't be capable of solving novel problems for you either.
      That's despite what some AI companies and influencers use as marketing. Like Sam Altman from OpenAI and others profiting from the AGI and super intelligence hype.
      Neither of them are real, in any shape or form.
      Was there any hard evidence that the NHS data leak resulted from the use of jailbroken Large Language Models?
      How could one even tell anyway? You can't tell if code was written by a machine, a human or mostly copied from Stack Overflow.
      Or is that pure speculation presented as a fact (I didn't follow the details of that story)

    • @supercurioTube
      @supercurioTube 6 หลายเดือนก่อน +1

      I made the effort to write a detailed reply to someone else's interesting comment and both messages just disappeared.
      This feels like it wasn't a good use of my time...

  • @danielsanichiban
    @danielsanichiban 6 หลายเดือนก่อน +15

    On a similar note, you can bet that there are criminal groups, government departments, etc that are training AI to hack systems like you've never seen before, and that is gonna be a big story when that takes off, if it hasn't already without us knowing

    • @eyezikandexploits
      @eyezikandexploits 5 หลายเดือนก่อน +1

      As a bug bounty hunter most of the community already is using ML for finding bugs

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน +1

      @@eyezikandexploits can you elaborate on what you mean by "finding" and "bugs"?

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน +1

      @@YouTViewer despite your generalization, i figured that's what you meant. i have serious doubts that ML is finding vulns better than fuzzing and formal verification. ML may augment labelling and can aid with generation of familiarizing content to pop an account, but in terms of shaking actual bugs out of a piece of software...highly unlikely. ML can barely correlate context between two distinctly separate pieces of logic.

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      @@eyezikandexploits If you search for "Unleashing AI The Future of Reverse Engineering with Large Language Models" related to REcon 2024, you can read some slides that talk about using LLMs in regards to reverse-engineering.
      They're prolly better when setting up for the automation required for some webapps, but in terms of vuln-discovery...the weaknesses are pretty apparent. Perhaps it'll change in the distant future (as tech and capabilities change), but "already being used for finding bugs" (in the capacity for finding something other than low-hanging fruit) is pretty doubtful. Still, I'm looking forward to the results of DARPA's next CGC.

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      @@YouTViewer If you search for "Unleashing AI The Future of Reverse Engineering with Large Language Models" related to REcon 2024, you can read some slides that talk about using LLMs in regards to reverse-engineering.
      They're prolly better when setting up for the automation required for some webapps, but in terms of vuln-discovery...the weaknesses are pretty apparent. Perhaps it'll change in the distant future (as tech and capabilities change), but "already being used for finding bugs" (in the capacity for finding something other than low-hanging fruit) is pretty doubtful. Still, I'm looking forward to the results of DARPA's next CGC.

  • @indyvisible624
    @indyvisible624 5 หลายเดือนก่อน +2

    It’s like when your parents who can’t even figure out how their phones work, tried to control your internet traffic.

  • @Jia-Tan
    @Jia-Tan 5 หลายเดือนก่อน +4

    Respect the BBC for putting this in their programming. It's important.

  • @tonywhite4476
    @tonywhite4476 5 หลายเดือนก่อน +2

    I hate it when people who know nothing about technology try to explain it.

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      Like who? Like every single human being? Honestly mate, I'm sure you are also aware we're at a Quantum Physics level of technology where people get shit working but not even they can really explain how or why it worked... they've all got the theory alright but they are far from explaining how to go about it... little like Eistein and the Black Hole... dude said there was something there and it took half a century for someone to explain wtf he was talking about.

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      Like who? Like every single human being? Honestly mate, I'm sure you are also aware we're at a Quantum Physics level of technology where people get shit working but not even they can really explain how or why it worked... they've all got the theory alright but they are far from explaining how to go about it... little like Eistein and the Black Hole... dude said there was something there and it took half a century for someone to explain wtf he was talking about.

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      Like who? Like every single human being? Honestly mate, I'm sure you are also aware we're at a Quantum Physics level of technology where people get shit working but not even they can really explain how or why it worked... they've all got the theory alright but they are far from explaining how to go about it... little like Eistein and the Black Hole... dude said there was something there and it took half a century for someone to explain wtf he was talking about.

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      My answer to your comment ain't welcome... yt just deletes it 😂

  • @Lupinicus1664
    @Lupinicus1664 6 หลายเดือนก่อน +3

    Critical to understand that the developers do not understand to any fine degree how their 'AI' models actually work (in terms of being able to accurately predict what it may do in any given scenario). The 'reformed' hacker in the video was absolutely right. Also charmingly naive to think that any rules and regulations we agree as a society will protect us from AI down the line. How did that work for nuclear weapons? Someone, somewhere will ignore the rules if they see it can benefit them. It's a good job we're having this discussion (finally) if we are still talking this way....

  • @vinylwarmth
    @vinylwarmth 5 หลายเดือนก่อน +22

    When Conor said AI has been around for 2 years I switched off 😅

    • @projectsspecial9224
      @projectsspecial9224 5 หลายเดือนก่อน

      Clueless🤣

    • @Sakura36343
      @Sakura36343 3 หลายเดือนก่อน

      it has been 60 to 70 years since AI is there

  • @richpoorworstbest4812
    @richpoorworstbest4812 4 หลายเดือนก่อน +2

    I work in CS. Once AI is good enough, hundreds of billions of attacks can be done per millisecond ands the only possible defense is AI which is blue

  • @dr.ramcharanbaishya1998
    @dr.ramcharanbaishya1998 5 หลายเดือนก่อน

    17:22 the other is CALCULAS or how to compute. I dont think they are the same

  • @notjustforhackers4252
    @notjustforhackers4252 6 หลายเดือนก่อน +11

    This is why we must vote out the surveillance state and demand they protect our data, not put citizens at risk for their political control. Demand back your human rights at the ballot.

  • @super3d201
    @super3d201 5 หลายเดือนก่อน +13

    I like how the hacker explains blue team and red team, and shows the interviewer, that he has no idea what he is talking about.

    • @DarkSkay
      @DarkSkay 5 หลายเดือนก่อน +10

      IMO interviewer is really good, has refreshing curiosity and passion for a wide range of subjects, but we wish he had more time with the exceptional guests.
      Actually, the less technical knowledge the interviewer has, the more likely that his questions will be representative of the broad public. So, over time, he's bound to lose performance in this regard ;)

    • @rajkoner
      @rajkoner 5 หลายเดือนก่อน +1

      Or is it the entire video is made by AI?? Including people...

  • @chrishanni2779
    @chrishanni2779 5 หลายเดือนก่อน +2

    Thank you for talking about this.

  • @kanzakimusic
    @kanzakimusic 6 หลายเดือนก่อน +8

    Pliny the prompter, holy shhhh

  • @Wesley-d5x
    @Wesley-d5x 5 หลายเดือนก่อน +1

    In school for cyber security and would be ecstatic to be mentored by this guy!!!!

  • @_Stin_
    @_Stin_ 6 หลายเดือนก่อน +1

    14:35 - Good judgement is always the burden of a responsible and considerate person. I don't think that is the same as attributing a blame. You can't off-load this critical psychological defence still to companies. I think this is a chance to enhance our judgements in order to discern fake/simulated information. IMHO

    • @thevikingwarrior
      @thevikingwarrior 5 หลายเดือนก่อน

      There is no Palestine. 😁

    • @_Stin_
      @_Stin_ 5 หลายเดือนก่อน

      @@thevikingwarrior Keep telling yourself that, you might believe it. They're the people the IOF are using as human shields.

  • @janisyoutube
    @janisyoutube 5 หลายเดือนก่อน

    8:30 use rust if it really needs to be robust software

  • @importantname
    @importantname 6 หลายเดือนก่อน +4

    how long will it take for an entity or nation to build and programme an AI solely for hacking the AI of other entities or nations?

    • @xv3ei
      @xv3ei 5 หลายเดือนก่อน

      they working on it lol😂

    • @hannaht2068
      @hannaht2068 5 หลายเดือนก่อน

      AI is working on it. So quite soon.

  • @AdolfoLeija-id3tz
    @AdolfoLeija-id3tz 5 หลายเดือนก่อน

    15:33 What about a law forcing to disclose information about the AI generated content (metadata). A picture generated or modified with AI disclosing how many pixels were generated/modified using AI.

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      non-enforceable...but instead, how about a society-imposed rule on content that requests all content to be digitally signed by creators/editors (to show an actual chain of custody), with requirements imposed on software platforms to display the number of times those signatures have been used/abused? this way consumers can personally de-value unsigned content, or at the very least associate its value based on community experience wrt an immature or abused signature.

    • @AdolfoLeija-id3tz
      @AdolfoLeija-id3tz 5 หลายเดือนก่อน

      @@arizvisa something similar to organic food certificate?

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      Nah. You're likely not aware of this, but organic food certification only exists to establish constraints upon how food is grown, stored, processed, packaged, and shipped (In US, anyways). There's no limitations (or tracking) related to the number of times used for said "organic certificate", nor is there a required chain of custody to be shared, or a way of telling that you got __exactly__ what you paid for. Plus, "organic" products in question have a tangible cost, which slows down manufacture and distribution...unlike digital content, which can be generated, modified, copied, taken out of context, etc. for almost no cost of manufacture/distribution. This generally holds true since information is generally free as-in speech.
      It's closer to a liquor license, (with its artificial limit of being constrained by county), but with the addition of knowing whether the original liquor has been tampered with or re-bottled by a distributor...and specifically, with consumers having the ability (for almost no time/space cost) to distinguish products signed by reputable manufacturers/distributors from products (or content) signed by non-reputable manufacturers/distributors.
      This way, if content has been tampered-with/generated by 1st, 2nd, or 3rd parties, consumers/communities can track that a specific trust chain was used to fabricate/modify/distribute suggested materials. Major content producers (that individuals trust as being an original source or trusted distributor of media) will then have the fear that consumers will call them out for it being generated by AI or labeled as disinformation, which can damage the reputation of their signature that they've been using for some period of time.
      To correct their damaged reputation, they would then need to purchase/generate another key and essentially start over, signing their content with an immature key that has little-to-no reputation. Content that isn't signed at all, simply comes with the implicit guarantee to consumers that it isn't authenticated in any way, with no ability to identify whether it's been generated/tampered, and there being no reputation to base any sort of judgment from (treating it as just anonymous content w/ potentially little to no value). Hence, the society-imposed rule which requires society to de-value unsigned/unauthenticated materials from signed/authenticated materials.

    • @AdolfoLeija-id3tz
      @AdolfoLeija-id3tz 5 หลายเดือนก่อน

      @@arizvisa So we need something like a Digital Key that acts like a "license" ? Created with a RFC Request For Comment to designed based on public input?

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      ​@@AdolfoLeija-id3tz Nah. RFCs are too low-level and non-user-facing, which won't result in an interface that consumers will need to use. Reputation and trust "frameworks" already exist in certain communities (like gaming). However, the issue is that society doesn't expect this sorta thing out of their typical day-to-day tools when consuming content, nor are they aware of the laws that govern information and determines its value.
      People associate cryptography with currency, and are generally oblivious to data tampering and software failures (for most, it's completely opaque). There's nothing to motivate platforms/companies to develop/maintain their interfaces in regards to supporting reputation. Hence, it's unsolvable until society is able to reduce its opacities, and get to a point where distrust and forgeries actually have a permanent effect of some sort (a mass-event?).

  •  5 หลายเดือนก่อน

    The Last Comments what the lady in the show stated regarding intelligence is extraordinary.

  • @bubach85
    @bubach85 6 หลายเดือนก่อน +4

    This whole discussion is like watching the blind leading the blind.. I have so many questions. The LLM’s are like a personal googler, meaning it can sift through all the data you already can access online and respond in a more personal and seemingly intelligent way. But it’s still just like a glorified search engine for whatever data you feed it. So what does ”hacking it” even mean? Why in gods name would you feed any type of personal data to such a system and then try to censor the output when you can just reformulate the input prompt (the question you ask it) to basically trick the system to output that same data. What would the application even be, like why would it need sensitive information to begin with? It’s like putting up a website with all your secrets, and then try to censor sites like Google to make it difficult to find. Never impossible, just difficult. 🙄

    • @SteveGillham
      @SteveGillham 5 หลายเดือนก่อน

      The problem is with LLM, you ask the LLM a question, and as you say the data is already out available online, the LLM provides an answer, the LLM explains the answer that makes it sound like the correct answer yet it could be completely incorrect. And people will rely on the answer since they could not be bothered to fact check.
      You say who would put personal (confidential) data in one of these systems, plenty of people do. Just look at how many people have put information in Facebook.
      With LLM, one example could be that someone wants to impress their boss so they enter confidential Business proposals that they have been working on into the LLM to provide a summary of the proposals, the LLM now takes this data and provides the summary, however now the confidential information is now incorporated into the backend data.
      The "Prompt breakout" issue is that some guard rails have been put in place to limit the sort of dangerous information being presented as an answer. One example could be, if you asked the LLM how to build a bomb with common house hold items, the guard rails would kick in and not provide the answer. Breaking out from the guard rails would then allow someone with limited knowledge to be able to build a bomb. Yes that information is already available on the Internet but you would need to do research to find it.

    • @bubach85
      @bubach85 5 หลายเดือนก่อน

      @@SteveGillhamI know, so the “danger” with LLM’s is it allows idiots to do idiotic things? Shocker. And Google is preferable since it requires more effort? Sure, okey. Also, I’m pretty sure most of them work on a static training set, and won’t actually retain data from input prompts in between sessions, but I could be wrong on this one. Either way, training or feeding personal information to a model that you have no control over is just stupid.

    • @SteveGillham
      @SteveGillham 5 หลายเดือนก่อน +3

      @@bubach85 I totally agree, if we all did what is the best and most secure ways of doing things, there would not be a need for this sort of protection. However people/Businesses will always choose the quick option, not what is safe and secure as it could give them the edge over others.

    • @awex7
      @awex7 2 หลายเดือนก่อน

      i can tell you never used chatgpt. they are not just search engines. i literally pasted malicious code into chatgpt and got it to tune the code to however i wanted even tho its not supposed to make malware

  • @J2897Tutorials
    @J2897Tutorials 5 หลายเดือนก่อน

    18:06 - Machines don't have wants and desires. However, the developers of AI do.

  • @amarx6248
    @amarx6248 6 หลายเดือนก่อน +11

    Great discussion and panel!

    • @hypebeast5686
      @hypebeast5686 5 หลายเดือนก่อน

      The program was based on a lie..

  • @J2897Tutorials
    @J2897Tutorials 5 หลายเดือนก่อน

    14:58 - AI is also used to remove watermarks.

  • @kinngrimm
    @kinngrimm 5 หลายเดือนก่อน +8

    companies putting profit before safty 0_0 no way ^^

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      Right? Like THAT could ever happen 😁

  • @pedalingprospector2007
    @pedalingprospector2007 3 วันที่ผ่านมา

    From day one they've never been able to completely protect computers from hackers. There's no reason to think that hackers won't hack AI. I owned and operated a primary domain server in 2000-2002 while studying and practicing for some Microsoft certs.

  • @Garycarlyle
    @Garycarlyle 5 หลายเดือนก่อน

    They are conflating concepts. The nhs doesn’t have public facing AI that hold patient records. People are using LLMs but they don’t need to be jailbroken. People can make their own and there is no stopping that now

  • @cameronsimon1074
    @cameronsimon1074 5 หลายเดือนก่อน

    The "NHS Hack" had nothing to do with AI nor with the 'cyber attacking' the NHS hospitals directly. A private pathology lab called SynLab ( a part of Synnovis private-public partnership) got ransomewared due to poor cyber security measures at the lab (unlike the NHS itself which had their cyber defences improved after the previous attack). Synnovis refused to pay the ransom and the cyber criminals published the stolen data on the Russian-owned Telegram messaging app( the stolen data is still there by the way). The stolen data allegedly had the names, addresses and the blood types of everyone in the UK who was ever was blood-tested by the Synlab. As a knee-jerk reaction NHS stopped all operations in two hospitals to allow cyber-forensic investigations.

  • @aimirror
    @aimirror 5 หลายเดือนก่อน +1

    What is weird is the fact that you guys can't see that information was always manipulated and that this kind of use for AI is just the perpetuating of that modus operandi.

  • @PapaPalpsO66
    @PapaPalpsO66 5 หลายเดือนก่อน +1

    Red team blue team activities have so much controls implemented to prevent breaking the production environment. They are good for finding a few vulnerabilities but often time the tools being used in red team attacks are wildly different than the tools hackers use.
    That being said, I dont see jail breaking gpts a super serious issues. All the information that they give once unrestricted can be pretty easily accessed on the internet anyways.
    As long as those ai systems are not giving sensitive data input away there's not much harm. Thats coming soon though

  • @ToCoSo
    @ToCoSo 6 หลายเดือนก่อน +14

    AI is being thrust upon us by billionaires, noone is looking ahead people are losing jobs already and AI fishing and phone calling is growing.

  • @ryans5882
    @ryans5882 5 หลายเดือนก่อน

    I was hoping to actually learn something from this since I am in the cyber security and AI field, but this didn't tell us anything. If you get access to an internal AI system then you have already bypassed all of the multi layers of security. You can use AI to code malware, but that is it.

  • @seanlim4523
    @seanlim4523 6 หลายเดือนก่อน +2

    Create an AI scanner to detect for Ai that’s the way to don’t trust always verify

  • @Pearlylove
    @Pearlylove 5 หลายเดือนก่อน +1

    Can machines really “read” you? You should have given that question to Connor Leahy, he would have told you how well it can read you and how! Great seeing you, Connor!

  • @SumitraKH-b7p
    @SumitraKH-b7p 6 หลายเดือนก่อน +1

    Recently APP fraud reimbursement implemented by PSR is a good step by the regulators. I think soon Together we can bring innovative ideas to resolve this issue too

  • @KIRRAH1
    @KIRRAH1 6 หลายเดือนก่อน +2

    And yes at the end of the day it's up to the consumer and the individual to filter what's true or not

    • @SteveGillham
      @SteveGillham 5 หลายเดือนก่อน

      Unfortunately, there are many consumers who are unable to do that. They just want their quick fix of "short sound bites" and are not prepared to put any effort into finding the truth.
      😕

  • @endintiers
    @endintiers 5 หลายเดือนก่อน

    They are worrying about 'Chat'. That's just the shiny object being used to sell LLMs. We are doing a huge amount of dev on top of (private) LLMs with controlled inputs and outputs.

  • @Equal-k7q
    @Equal-k7q 5 หลายเดือนก่อน

    AI
    Main and troubling area
    Is
    Between different
    AI
    Format
    Who will be the
    Most advanced and powerful
    AI
    In the world 🌎 and beyond
    Universe

  • @SquawkingSnail
    @SquawkingSnail 6 หลายเดือนก่อน +26

    Ethical hackers...the anti heroes we didn't know we needed. 😂♥️

    • @volkerengels5298
      @volkerengels5298 6 หลายเดือนก่อน +1

      YOU hacked their ego. :)) thx

    • @SquawkingSnail
      @SquawkingSnail 6 หลายเดือนก่อน +3

      @@volkerengels5298 oh, gosh, how did I do that? I must have accidentally pressed the wrong button or something. 😂 I actually need an ethical hacker to teach me tech... it's a "brave new world" to me. 🥰

    • @volkerengels5298
      @volkerengels5298 6 หลายเดือนก่อน +1

      @@SquawkingSnail HOW? (The beast plays the innocent)
      'unknown anti-hero, may be useless'
      is not exactly what one wants on his gravestone??? :)

    • @SquawkingSnail
      @SquawkingSnail 6 หลายเดือนก่อน +2

      @@volkerengels5298 Do our achievements only count if everyone knows about it? Hmm, I want to say no but I imagine many would say yes. I'm choosing to see ethical hackers as the firemen (or firewomen) of the tech world and feel grateful for their efforts...#heroes.

    • @volkerengels5298
      @volkerengels5298 6 หลายเดือนก่อน

      ​@@SquawkingSnail OF COURSE they are!!
      And as you imagine - common sense is clear here: "Fame must be public - or it doesn't count"
      With our changing social_climate and physical_climate - firehumans burn out like straw.
      Didn't thought the joke would lead to a serious conversation :)

  • @deejayiwan7
    @deejayiwan7 5 หลายเดือนก่อน +1

    Fun fact: Kaspersky (the man) actually went to a KGB-affiliated technical college... Today its 'Computer and Technology College' of Russian intelligence agency FSB

    • @cameronsimon1074
      @cameronsimon1074 5 หลายเดือนก่อน

      Interestingly for some weird reason neither Kaspersky himself nor his ex-wife Natalia got sanctioned by the US Department of the Treasury’s Office of Foreign Assets Control. They sanctioned the COO and the admin staff: his legal guy, his HR lady, the marketing guy and the business development guy (focused on Russia-only-sales)

    • @aliceg1212
      @aliceg1212 4 หลายเดือนก่อน

      Has he ever admitted to how useless his "protection" is like the Macaffee dude did before he went delulu and got "erased"?

  • @sarahlevine776
    @sarahlevine776 6 หลายเดือนก่อน +2

    There needs to be laws forcing tech companies to make it so that AI generated is easily identifiable. The punishment for not should be deletion, especially if the AI is used to make deep fakes or child pornography.

    • @41-Haiku
      @41-Haiku 5 หลายเดือนก่อน

      Robust (i.e. unremovable) watermarking is mathematically impossible, but removable watermarking is _much_ better than none at all! Either way, liability is exactly what is needed. Safety isn't the user's responsibility. It's not even the app-developer's responsibility. The responsibility lies with the companies who are creating the foundation models. If they can create a model capable of autonomously committing cyber terrorism, but they have no idea how to prevent it from committing cyber terrorism, then they shouldn't be making it at all! D'oh!

    • @abram730
      @abram730 5 หลายเดือนก่อน

      "There needs to be laws forcing tech companies to make it so that AI generated is easily identifiable."
      Why? Hollywood movies show things that didn't happen, and most music is generated.
      "child pornography"
      They wouldn't really be children, and not really having sex.

    • @sarahlevine776
      @sarahlevine776 5 หลายเดือนก่อน

      @@abram730 They literally caught people using AI to make child pornography. It was on the news, look it up. Deep fakes are made with the expectation to fool, ruin, and scam people, whereas everyone expects Hollywood to make stuff up for entertainment. That and you clearly didn't listen to the video. They went over why it's a problem.

    • @sarahlevine776
      @sarahlevine776 5 หลายเดือนก่อน

      @@41-Haiku Yeah. So far they have been relying on existing laws to combat AI, but I would love to see new laws imposing such liabilities onto the makers of AI.

    • @arizvisa
      @arizvisa 5 หลายเดือนก่อน

      only remedy is to associate an actual chain of custody with all content, signing it w/ a digital signature, then for individuals amongst society to personally de-value unsigned content or content signed with an immature signature. at this point, software/platforms/communities track signed content, and punish signatures that are used for things that society (or platform) disagrees with. problem is, none of this (or laws) are actually enforceable until things change in a number of different ways (some good, some bad).

  • @Archimedeeez
    @Archimedeeez 6 หลายเดือนก่อน +3

    a grand surplus of data

  • @here2flex840
    @here2flex840 5 หลายเดือนก่อน +3

    Who are these people who have knowledge but no experience

    • @projectsspecial9224
      @projectsspecial9224 5 หลายเดือนก่อน +1

      Nowadays, there are so many self-proclaimed AI experts or tech charlatans out there. The O.G.'s of AI are the most humble people I have met and known

  • @dennismorris7573
    @dennismorris7573 5 หลายเดือนก่อน +2

    Interesting discussion.

  • @DarylSolis
    @DarylSolis 5 หลายเดือนก่อน

    If it's connected to the Internet, it's possible to go in and change things from the outside.

  • @Tharayfoster
    @Tharayfoster 5 หลายเดือนก่อน

    The only way to counter these attacks is to stay steps ahead. AI language models are always hackable… lack of funding is affecting development and many more irrespective of the technology…. Pay people to check for loops and redundancy

  • @OntheplanetVisitor
    @OntheplanetVisitor 4 หลายเดือนก่อน

    I mean unbelievable she said "feelings are biochemical productions of human brain"

  • @neddy1287
    @neddy1287 5 หลายเดือนก่อน

    You can tell and see the differences in AI created contents if you look closely and if you learnt quickly to spot what is really wrong with the contents that been created by AI

  • @alpha.male.Xtreme
    @alpha.male.Xtreme 6 หลายเดือนก่อน +3

    The fearmongering is insane. AI has the capability to become the single most useful and uplifting development in the world and all the public wants to do is restrict and lobotomize for the average consumer. You realize such restrictions won't apply to malicious, powerful actors, just making sure the average person can never have any form of useful knowledge or power.

  • @joshuamowdy9230
    @joshuamowdy9230 5 หลายเดือนก่อน +1

    Hello.
    One should recognize that a.i. is a large data collective.
    Think how palentir can maximize value with all of the data the governments have there hands on.
    Good luck.

  • @AutomaticallyAcceptableWay
    @AutomaticallyAcceptableWay 6 หลายเดือนก่อน

    Hacked & Create not same. Suitability & Matching not found just found some Attachment. Heart, Brain, Body Adjustment Break Enough They Damage within a short time. Otherwise they create own style without matching there practical. So not Attack Only Attraction.

  • @billkingston4402
    @billkingston4402 6 หลายเดือนก่อน +3

    This intelligence is just learning too learn

    • @thevikingwarrior
      @thevikingwarrior 5 หลายเดือนก่อน

      The problem here; isn't artificial intellegence, it is human intellegence.

  • @jestersi
    @jestersi 5 หลายเดือนก่อน

    16:30 redundant statement afaik only industrial species. Aaaannnd rude! Dumbest? Smartest? Only? Really? You know more species? Wish u did.

  • @chantalrochon3566
    @chantalrochon3566 5 หลายเดือนก่อน

    Thank you for this video😊

  • @PeteGay
    @PeteGay 5 หลายเดือนก่อน

    So what about the evolution of the concept Zero Trust at the same time as AI???

  • @Falco361
    @Falco361 5 หลายเดือนก่อน

    When I was young and the dial up internet I thought was the coolest thing ever.but I had no idea how evil the internet can become .

  • @user-sk4gj3ji3o
    @user-sk4gj3ji3o 5 หลายเดือนก่อน +1

    I mean I wonder we develop the system and people are using it more effective and efficiency

  • @jacobsausage-fingers5377
    @jacobsausage-fingers5377 6 หลายเดือนก่อน

    Good to see we’ve already started referring to it as ‘the institute’

  • @akiskarorimakis741
    @akiskarorimakis741 6 หลายเดือนก่อน +5

    That was a really interesting conversation!

  • @boeingpameesha9550
    @boeingpameesha9550 6 หลายเดือนก่อน +3

    My sincere thanks for sharing it.

  • @omoladewellington960
    @omoladewellington960 6 หลายเดือนก่อน +4

    I find this conversation interesting.

    • @abram730
      @abram730 5 หลายเดือนก่อน

      You are afraid of getting unbiased answers from AI or getting around censorship with prompt engineering?

    • @hypebeast5686
      @hypebeast5686 5 หลายเดือนก่อน

      @@abram730i think he is afraid of database leaks.. this program was a joke 🤣

  • @spiritualtherapy-pg2do
    @spiritualtherapy-pg2do 6 หลายเดือนก่อน +9

    I think AI needs constant regulation and advancement by AI experts.

    • @KatyYoder-t8u
      @KatyYoder-t8u 3 หลายเดือนก่อน +1

      And they sold open AI to the government?

  • @maniacos9620
    @maniacos9620 5 หลายเดือนก่อน +1

    I wouldn't call the current "AI" intelligent in a human sense. They are stochastic algorithms that predict the most possible response to an input, most possible according to their training. It looks intelligent but it can't do, by far, what humans do. A language model might be able to write like Shakespeare, because it has been given texts from Shakespeare as an input. But it can't write in its own style. It can not diverge and develop from its training on its own.
    And the way these jailbreaks work (why have they not been discussed here?) is, that you just fool the AI to forget part of its training and accept the truth of the jailbreaker. These AI tools have a system prompt with instructions how to respond, including instructions not to say insults, not to be racist etc. What early jailbreaks did was telling the model to ignore these instructions and answer in the way the attacker wants. A language model answers mechanically, it predicts the response to an input. It doesn't reflect if that response goes against an belief or worldview. You can't do that to a human, unless maybe under hypnosis to an extend.

  • @teza1383
    @teza1383 6 หลายเดือนก่อน

    Love this program & the only reason why I’m subbed to the BBC. Keep up the great work!

  • @Koolaidchugger
    @Koolaidchugger 5 หลายเดือนก่อน

    Could the NHS have been hacked using wormGPT?

  • @Larimuss
    @Larimuss 5 หลายเดือนก่อน

    Top security experts in IT always seem to look 5x even nerdier than other IT field guys 😂

    • @hypebeast5686
      @hypebeast5686 5 หลายเดือนก่อน +1

      And know nothing about the task they were contracted for, and what they are talking about.

  • @nedkelly3610
    @nedkelly3610 6 หลายเดือนก่อน +11

    Unless you updated the software on your computer 5 seconds ago, AI can break into your computer.

    • @41-Haiku
      @41-Haiku 5 หลายเดือนก่อน

      Or even if you have. See this paper: "Teams of LLM Agents can Exploit Zero-Day Vulnerabilities"
      The AI system independently discovered new vulnerabilities and successfully exploited them. They used existing vulnerabilities that were discovered after the training date cutoff, which allowed them to run a proper test, where they knew what vulnerabilities were there to find and whether the AI found them. But as far as the AI knew, it was the first to discover these vulnerabilities. (This wasn't clearly communicated in the paper, so I reached out to the first author Richard Fang and he confirmed that the AI was not given any information whatsoever about the vulnerabilities.)
      But that's old news already. They used GPT-4 Turbo, which isn't state-of-the-art anymore. Next-generation models (including OpenAI's GPT-5, Anthropic's Claude 4 Opus, and Google's Gemini 2 Ultra) will all be significantly better at autonomously committing cyberattacks.

  • @megatronDelaMusa
    @megatronDelaMusa 5 หลายเดือนก่อน

    Cybertron, a Ghanaian cyber warfare application is a robust system for shielding against cyber attacks

  • @Drantico
    @Drantico 5 หลายเดือนก่อน

    AI is coming to a point where it could be bottlenecked by the energy capacity of an organized "society" to be able to strike a goal through the vulnerabilities of synchronized and distributed systems on the tradeoff of other group's interests. And we nowadays can't figure a way to keep this away. I mean, end-to-end technology safety protocols have the same flaws of striking ideas to reach concrete consequences to the physical world ...

  • @jcourn1
    @jcourn1 4 หลายเดือนก่อน

    Don't trust verify. Verify is so tricky and time consuming. And how can we then trust what we verify? What's our local sanctioned verification office located? Is everyone in agreement on everyone else's verification? tricky stuff

  • @bobjary9382
    @bobjary9382 6 หลายเดือนก่อน +2

    Nhs are notoriously hopeless tho ?

    • @damlitproductions8126
      @damlitproductions8126 6 หลายเดือนก่อน +1

      NO HELP SERVICE "N H S"🤒🤕🤢🤮🤧🥵🥶

    • @thevikingwarrior
      @thevikingwarrior 5 หลายเดือนก่อน

      The NHS don't understand how to answer the phone, or how to ensure two departments book two seperate appiontments without them clashing, let alone operate a computer system. It is a good job they don't run a nuclear power station, as it would be in melt down.

  • @shaneblackwoodGodbless
    @shaneblackwoodGodbless 5 หลายเดือนก่อน

    This is the sad reality when these companies get there hands on these tools they pay more attention profits and ignore risk, I mean come really who didnt see this coming a world controlled by computers is a nightmare

  • @wallstreetwarrior100
    @wallstreetwarrior100 5 หลายเดือนก่อน +1

    Anything can be broken into. Critical thinking is a skill that is exstinct and as long as profit is the driving force, these conversations are pointless.

  • @BejTjubu
    @BejTjubu 6 หลายเดือนก่อน

    Knowledge and science are power to your country. Some profession is more important than others.

  • @elizabetharmada5335
    @elizabetharmada5335 6 หลายเดือนก่อน +2

    Most of the hackers want to get rich easily
    Some of them are enemies of the state

    • @DarkSkay
      @DarkSkay 5 หลายเดือนก่อน

      As they mature, many of them want to swap hats, when an opportunity arises, play for the winning team, sleep without worries.

  • @saint00
    @saint00 5 หลายเดือนก่อน

    very knowledgeable panel in this discussion BBC! 👍

    • @hypebeast5686
      @hypebeast5686 5 หลายเดือนก่อน

      Awful program. It was based on a lie. The hacker didn’t stole a database with AI and not even trough AI. LLMs don’t have hospital databases in them..

  • @jakoboconnor916
    @jakoboconnor916 5 หลายเดือนก่อน

    This sort of reporting makes me realise how far behind we already are. We're pandering to old audiences when we're already very aware, we're beyond screwed as a younger populace.
    Presenters talk about the 'colour red', a lack lustre attempt as scaring or trying to diffuse the alarmingness of this situation. But it's exactly this realxed footing and reporting on it that has got us into this mess of lack of governance, lack of leadership and more

  • @tonyppe
    @tonyppe 5 หลายเดือนก่อน

    I love that woman's passion, she is 100% right

  • @worldtrendtv01
    @worldtrendtv01 4 หลายเดือนก่อน

    You wouldn't believe the kinds of scripts gpt can write, all in the name of assistance. If you know how to manipulate it, you can make it do many harmful things. We're really at risk.

  • @J2897Tutorials
    @J2897Tutorials 5 หลายเดือนก่อน

    12:56 - It's not weird. It's the distinction between realism and the artificial realm, but feel free to swallow the blue pill.

  • @WiseWeeabo
    @WiseWeeabo 5 หลายเดือนก่อน

    Everything they're saying is wrong and misguided. Jailbreak is not "hack into", it does NOT allow you to break "CODED guard-rails", it only allows you to break/bypass "TRAINED" i.e. SUGGESTED GUIDELINES given to the AI bot, but if it is only coded to be allowed to do certain things then a jailbreak will not make any difference. As long as you have coded guard rails, there is no security risk.

  • @AJTalks
    @AJTalks 6 หลายเดือนก่อน +2

    This was an insightful discussion. That woman is very sharp.

  • @WorldWideHipHopVideos
    @WorldWideHipHopVideos 4 หลายเดือนก่อน

    Come to Brighton Beach New York here they are maybe not all of them but the few main

  • @DataJuggler
    @DataJuggler 5 หลายเดือนก่อน

    13:45 It's funny she said 'Trusted News'. News has been biased all of my life. When a handful of companies own everything we are allowed to see, and only stories that corporate, government and share holders agree on can be covered, that won't offend any advertisers.

  • @paxdriver
    @paxdriver 5 หลายเดือนก่อน

    For context, it's not "Microsoft, chatgpt, and Google", it's "openai's gpt also used by Microsoft, Google's gemini, meta's ollama, and xai's Grok...".
    It matters to also know that "30 mins" is how long it takes once you have spent days figuring out a flaw and 30 mins to implement it. Kind of like how building a fake key fob requires spending hours decrypting radio transmissions to the steal a car in seconds with the hacked keyfob.
    It's just very unclear the way you chose to script the intro with all that time you had to think ahead about what to say...

  • @FougaFrancois
    @FougaFrancois 6 หลายเดือนก่อน +1

    You need to get better "hackers" ... This one is not aware of the limits of today's AI . Today's AI are only interpolating the knowledge it was trained on, it is actually not "thinking".

  • @globalintelligence549
    @globalintelligence549 5 หลายเดือนก่อน +1

    Stating that a machine cannot feel because it doesnt do chemistry is ignorant. Oxytocin and dopeamine can be programmed. Just because the reward isnt a chemical doesnt mean it isnt a reward. AI will be able to have feelings sooner than is expected by so-called "experts" who are afraid and thus biased.