Yup! At least, after the doctor recommends you to stand under UV light and drink mercury, there's a tiiiiny chance he'll do some jail time. Whereas in the other case, people will just say you didn't write the correct prompt.
But this story is not about AI anyway... the AI bot didn't leak anything. Stupid platform architecture and stupid developers (who maybe were experts in AI but not in other areas).
Yeah, good point! While I'm sure AI will introduce same-level-of-terrible bugs and vulnerabilities, on these 4 in particular it was just bad developers.
@@autohmae No, there is currently a case with the keyword "Modern Solution". A contractor was paid to find security issues and found unprotected, unrelated data. He has been officially accused of breaking the law.
These are super basic mistakes that would never pass a security audit. I am more concerned about the infosec standards of the healthcare organisation that they worked with.
Healthcare orgs in the US are somewhat notorious for bad infosec, at least compared to the seriousness of the data they own. There have been many instances of them being victims of ransomware attacks and actually needing to pay the ransoms because they had no way to recover the data. IT is often put on the back burner because they see themselves not as IT organizations but as communities of health care providers and patients, a brick-and-mortar entity primarily of people interacting with people, which is fair, but the technology gets moved down the budget hierarchy in ways often disproportionate to its importance in sustaining the organization. At least that was my observation.
Whoever worked on this must be borderline non-functional. Was this whole project just 1 dude? How did not a single person on the team call out this insanity? Insane.
Data breaches are often the result of errors in system management or configuration, not “automated” AI. More importantly, the responsibility lies with the humans who design, deploy, and monitor the system, not the AI itself.
The omniscient AI has certified that the code was secure. Oops, that was a hallucination. Okay, delete/fire that AI and try uploading a new one… which is almost identical, and trained on the same data set. It's just good business.
This is straight-up malfeasance. It is not caused by AI; it is a classic injection attack using APIs that are not engineered, by design, to sanitize inputs and enforce data permissions regardless of how the LLM calls the agent tool. It's like everything they learned about securing data APIs went out the window as soon as they put the API behind a chatbot.
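The principle in that comment can be sketched in a few lines. This is a hypothetical illustration, not Copilot's actual API: `handleToolCall`, the tool names, and the session shape are all made up to show the pattern of authorizing on the server side regardless of what the model sends.

```javascript
// Hypothetical sketch: the API behind the chatbot does its own
// authorization, so it doesn't matter what arguments the LLM sends.
// Tool names come from an allowlist, and the patient scope comes from
// the authenticated session, never from model output.
const ALLOWED_TOOLS = new Set(["get_own_record", "list_appointments"]);

function handleToolCall(session, toolName, args) {
  if (!ALLOWED_TOOLS.has(toolName)) {
    throw new Error("unknown tool: " + toolName);
  }
  // Enforce data permissions from the session, not from `args`:
  // even if the model asks for another patient's id, it is rejected.
  if (args.patientId && args.patientId !== session.patientId) {
    throw new Error("forbidden: cross-patient access");
  }
  return { tool: toolName, patientId: session.patientId };
}
```

The whole point is that the last check runs no matter how creative the prompt injection was upstream.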
hackers are such nice people. that hacker could have made everyone's medical records say they tested positive for AIDS. it's wonderful that we have bug bounties and that they get paid; hard work was done to earn that small sum of money, and the whole world benefits.
Nothing surprising. Microsoft's motto has always been accessibility > security. They want their stuff to work whether it is secure and optimized or not. Another case of shareholder capitalism at work, for sure.
So once again, the super smart programmers of Microsoft allowed direct access to data, rather than buffering it, ensuring an encrypted connection between the intermediate server and the data, and keeping the AI software isolated from important data. Also, allowing directory capabilities within a URL, rather than having the server or data server do the searching, has been a known exploit/issue for decades. You never allow directory-level execution or maneuvering at the URL level, and we have become so dependent on exposing URL data that this type of thing will keep happening due to sloppiness. As the old adage goes: it isn't the new guy that gets hurt (makes serious errors), it is the experienced person, because he becomes so confident in his experience.
Why is the back-end in JS instead of a strongly typed language which would reject input and help require data be properly sanitized? Why is the database input not properly sanitized?
What if a company has bad intent and a history of monetizing, or using for AI training, data it doesn't own the rights to use that way? "My AI leaked it" and "this third-world company used the leaked data and provided us with this trained AI and/or customer contact data it said was legit, and we used it for monopoly purposes". Well played, M$!!
My takeaway from this is that Node.js is not secure by default and needs some careful design and hardening to make it production grade. Compounded with the dynamic and super flexible JIT nature of Node, it sounds like a nightmare.
Why doesn't NodeJS remove the SlowBuffer class from the Buffer module? It's been deprecated for 8 years and there have been 17 major versions since then. I don't get it.
The first exploit was quite easy; it's like going to a final test about history, putting "it all began in 1942... ...and that's how Nazi Germany fell", and getting an A+.
can someone explain exploit 2, specifically the part where the bug hunter modified the underscore module's "_.indexOf()"? how did he modify it on the Azure instance?
This is criminally negligent. We, the people, need to hold companies and CEOs (corporations aren't people, but people run corporations) accountable for software negligence. This data literally represents people's lives, not bargaining chips for business deals.
@5:55 I think you've either misinterpreted how the query injection works, or the exploit you copied from wasn't documented correctly, unless MS has made a mistake with the implementation of building a query. If this is even actually a reality; I feel like it's not. But first the query needs to be completed by providing an escaped ' and ), and then you can initiate the other escaping to insert the traversal and allow the query to be completed again.
To try everything Brilliant has to offer -free- for a full 30 days, visit 👉 brilliant.org/DanielBoctor/. You'll also get 20% off an annual premium subscription!
THANKS FOR WATCHING ❤
JOIN THE DISCORD! 👉 discord.gg/WYqqp7DXbm
👇 Let me know what type of content you would like to see next! 👇
Thank you for all of the support, I love all of you
Does BRILLIANT teach all the technical and cognitive methods that programmers MUST USE to stop themselves from making these continual stupid programming errors? If so, then a proper course of BRILLIANT lessons should be REQUIRED as PREREQUISITES for all programmers seeking jobs, as well as those already employed. How's that for progress?
I don't recommend BRILLIANT. I bought it, but I wasted my money.
Brilliant is for idiots that want to feel like they are smart; it's just the basics of the basics for every topic.
You just read text 90% of the time; there isn't even a text-to-speech option.
I tested myself on topics that I'm confident I know well, and, well, I noticed that the questions are often super ambiguous,
and where all answers are somewhat valid, you must select only one?!
And if a learning platform claims you will remember everything and doesn't have some sort of SRS, they're just lying.
"Why are you so worried about Microsoft and Google having all your personal data?"
Me:
Well you don't have anything to hide, right? What do you care? 😂 (sarcasm)
Yeah, I see this and I think "And they expect us to believe Recall AI is going to be secure?"
@@ecMathGeek yo, they started rolling out recall in beta and I immediately transitioned 100% to Linux. I do not play with Microsoft, and as little as humanly possible with Google (Android).
In Germany they started an opt-out system for putting all private health data into the cloud. They are proud that only a few citizens actually opt out.
Honestly, this is negligible to the real issue with them having all your personal data. They create psychology profiles on you and force feed you propaganda that aligns with their political interest in order to sway elections.
Why on earth would a health application need to execute remote JavaScript from client to server? Most of these bugs wouldn't exist if this feature hadn't been implemented in the first place
My question exactly. It's literally a health bot (which I presume you ask about health concerns and such), not a programming assistant.
🎉 well, come on, the "path traversal" was the most complicated one, really? Exactly this input would be caught by any proper pentesting/fuzzing program.
All the exploits are basic (at most) compared to the state of the art.
Thus, I have to assume that they deployed a service with access to health data without any proper testing (?)
Fix-in-production mentality 😂🎉
Surprisingly, humans make 'errors' now and then. Assembling a complex thing leaves a lot of room for them over time. "American telephone is hacked...." How's that?? :))
ran out of money for hiring offsec people
As an old medical software developer: requests for the ability to execute arbitrary code are pretty common, unfortunately. The best we can do is restrict WHO can do it, limiting it to sysadmins, but ya.
The real question is "who trusted Microsoft with healthcare data"
The average person who barely understands the magic box they hold in their hands.
@@Peaches-i2i Well, in this instance it was more than one health provider. These cloud offerings abstract everything from the clients consuming them; the providers might be forced by management to integrate AI into their offerings, and so contracted an “enterprise” solution to not have to deal with exactly this kind of bs, which is not hard but tedious to set up and maintain.
Pretty much, and all HR executives in any corporation, if it was offered to reduce costs.
Those last 3 words were unnecessary.
Amazon bought the second largest healthcare provider in the US two years ago. Doom.
200k is not a fair price. Even if Microsoft stock fell only 5% after leaking 100 million *medical* records, that would cost them 162 billion.
This is equivalent to paying someone a dollar for protecting your millions... after you mess up.
such is the fair market 🤷♂️ it all comes down to capitalism, and at the end of the day this makes them the most money.
I agree...
@@YT7mc The problem is less capitalism and more that companies are allowed to pay to change the laws. They've slowly, insidiously, obliterated all protections for customers and the public in general.
200k is not enough for a bug like this, but a guy who can do this casually, four times in a row, is probably making that a year already
Class... Action... Lawsuit!
Should have gotten way more than 200k for something this severe…
Personal data is only valuable when data brokers get their greasy hands on it.
200M would be fitting given the situation.
@ imagine the damages if those records got out, would be in the tens of billions easily
Goes to show how much value they assign to people's privacy compared to how much they make selling our data
@@coletcyre puts the $ in M$
Anyone else notice that bug bounties often have a habit of not paying? You'll find the bug and they'll say "Oh we already knew about that" then patch it and act like its no big deal.
That's not a great practice because bug hunters can easily just exploit the vulnerabilities and sell private data instead of getting paid.
HackerOne got exposed for doing this but the employees would steal the money. Wonder if it ever stopped
Wow, giving user interactive chat robots Root Privileges hasn't worked out well. Who would have thought. Please, let us hold hands in stunned silence.
is this azure even Linux?
@@RickySupriyadi it’s a hypervisor dude, can run anything
@@DaveEeEeE-hu7gu ok thanks
I'm out of stunned silence. Can I use bewildering contempt instead?
@@BoringLoginName I’ve been practicing my shocked look just so I could use it when needed… 😱. How’d I do?
Leave it to Microsoft to do something so stupid it boggles the mind.
Heck we have a saying in our team, as long as it’s data being managed by vendor, it’s not our responsibility. (Password managed in self hosted open source key manager with compliant encryption and security - not OK, password stored in OneNote in plaintext - not so good but OK)
Well, (chuckles to self), you're using Microsoft products, so of course it's unsafe!
This includes the entirety of the medical industry, so idk. We're all doomed.
@@battokizu "Well, (chuckles to self), you're using Microsoft products, so of course it's unsafe!" - It's not like the alternatives are safe. They MAY not be as bad as Microsoft, but that "may" is doing a LOT of heavy lifting.
It ain't just MS; they all seem to be slightly more incompetent than previously thought, but I blame JS more than anything.
tHErE wiLL Be nO SuCH thINg aS teCH joBS BY 2030!!!
There will be no people left by 2030
Oh, once they start using AI-generated code it will get a lot worse. A lot, a lot worse. Security? Never heard of it!
@@coladict Wait until AI security is armed with weapons :p
@@Silarus your AI doorbell gonna be fiddling with whether to let the 8ft tall guy with squirrel mask pass your front door or not at 4am
I’m surprised the Microsoft patch wasn’t to just ban his IP and then ship bugfixes adding his new IP every time
Probably was due to AI code being used for the backend! People don’t understand how many security vulnerabilities will come out of all the AI code being written!
Yup! I work in the HIPAA field, and it's surprising how many people want to use AI code. Luckily, in my business we can't, so it's easy for me to say no, since I can't validate a black box, and step-by-step validation is required for our data because it directs health decisions. But other fields can, and it will continue to produce large vulnerabilities. It's honestly scary.
And yet google openly says something like 60%+ of its code now is ai generated...
AI code is fine but it needs an experienced reviewer
@@daveb3910 wdym, validate a blackbox. AI code means code you generated via AI, not using AI to write code live?
The generated code isn't a black box
It's just so much faster to do my own coding than to try and catch all of the insane things AI code might do. Like 95% of the time it's fine, 4% it's broken, and the last 1% it's doing something genuinely insane.
I know what mistakes I tend to make and where to look for them. I've spent a long time learning good practice. The AI has every mistake in recorded history at its fingertips, and usually it's the stuff reviewed enough to not immediately be obvious.
AI coding is a big gamble.
Microsoft CEO even announced last week that they would replace the entire azure product line with only ai "agents" where the bots would be able to create, update and delete all data on your services on azure...
Nuclear ROFL!
what could go wrong? :3
AAAAAAAAAAAAAHHHHHH! Screaming ensues.
imagine a car company doing this: "hello customers! here at Lamborghini, we have decided that steering wheels and cup holders and speedometers and brakes are outdated. so, in a brave and innovative move, all our future cars, including the one you already own, will be converted to have no steering wheels. to steer your car, simply convince our automatic driving assistant to steer for you at every turn. our agent will swiftly connect to our web server to compute your steering amount for you!"
I think a better question is why does Microsoft AI have access to private medical records.
Medical institutions use 3rd-party developers for their apps, and hire vendors to upload or stream data to cloud services for them to load. There are more rules and paperwork than you can imagine to keep things compartmentalized and "safe", theoretically, but current dev culture attitudes and perverse corporate incentives undermine it daily. My anxiety level has dropped substantially since leaving that industry, cuz you either fight your conscience or fight literally everyone on the call over obvious stuff like this, every day.
So they can sift all the data to sell the info to big pharma to better keep people sick so they can make more sales :P
@@BlackMatt2k "There are more rules and paperwork than you can imagine to keep things compartmentalized and 'safe'..." - The problem is that many of those rules don't apply to third-party vendors/data processors. And also, of course, that fines for violating rules are a drop in the bucket compared to the profit made by violating those rules.
"AI WiLl RePlAcE SoFtWaRe EnGiNeErS!"
the biggest lie of the recent years
YoU aRe veRy ShOrt siGhtEd
At the same time, Alexnet was just 2012. And ChatGPT was just 2022. Imagine what another 10 years will do.
@@nateh379 I'm sorry man but anyone that says that either can't code for sh1t or doesn't realize that if human ingenuity is replaced by AI then all engineers can be replaced by AI, not just the software ones...
@@nateh379 That's all that you can do, imagine. Extrapolating technological breakthroughs doesn't make sense, they don't follow some linear or exponential timeline, they are breakthroughs.
as if i wasnt already worried by the whole "copilot takes screenshots of your computer"
Say more on this. And WHICH versions and variants of Copilot? Only the web versions? If so, in which browser(s) do these TOTAL BS exploits occur? Does it also affect the Copilot running inside Skype?
@@YodaWhat It's been a while, but I think it's just ordinary Copilot, the same one that comes with Windows 11.
It's not really an exploit; it's how Microsoft designed it. And it sparked quite the controversy when word got out, a few years ago.
@@arkorat3239 - Ah, thanks. I don't use Windows 11 or any of that extra crap even in Windows 10. First thing I do with a new Windows machine is turn that $hit off as much as possible.
@@YodaWhat Microsoft Recall will take a screenshot of your Windows 11 pc every 5 seconds and log every keystroke you make. ITS ALL FOR YOUR BENEFIT SO JUST IGNORE IT. - Bill Gates
Why the hell was it connected to the medical data in the first place?
Selling "insights" about people to ad networks. It's not just knowing what people like anymore. It's knowing all medical conditions to better target them.
Thank YOU! Yes. Hello. These are private medical records.
@@pseudomemes5267 You say that as if they have the right.
@@pseudomemes5267 At which point during my doctor visit did I agree to such a thing? How does it go from a doctor visit to building artificial intelligence? So they're benefiting from my interaction. How much value does MY MEDICAL RECORDS generate for THEIR product?
@@lopiklop it's probably included as part of the Windows EULA, something like "if you have ever used windows for any reason we have the right to gather and sell any information about you"
this is obviously a joke, but also not out of the scope of what mega corps think they can get away with through their EULAs (remember that Disney tried to say the EULA for a free trial of their streaming service meant they couldn't be sued over a lethal allergic reaction at one of their parks)
AI and nodejs. Name a more iconic duo of security terribleness.
Simple, it's Microsoft....
they write their programs to just do things... security, safety, and not crashing come later...
I went to an MS conference once with their programming team... where they outlined their development process and internal "mantra"
when I left, I was completely shocked at how lax they were...
They basically write software with as few checks and balances as possible; it just matches the spec & that is it..
when they have to modify the systems for other uses.. they just make changes & fix what visibly breaks
Are you suggesting that is any different from how ALL big companies write the CRAP they pass off as software?
The Todd Howard mantra I’m guessing
Sounds like a HIPPA violation!
Nah doesn't apply if you have enough money
Don’t be a hippo, it’s HIPAA.
What i was thinking too
They have plenty of ways around that, even if they do actually hold the information. Microsoft isn’t a healthcare provider, so they can do what they want.
Running arbitrary code on a machine with sensitive data sounds like a recipe for disaster, even when sandboxed...
They should definitely give the "running javascript" bit to some other server that only does this. That server can then be isolated from the rest, making any breach somewhat useless.
One of the great things about being American:
I ain't been to a doctor in decades, you got nothin on me
Honestly, I wouldn't go that often even if it was free. All they do is try to push pills on me and do a crappy job of finding potential problems. The best medicine is not eating trash, getting some exercise, and enjoying time with friends. That stuff doesn't net piles of money though, so they never bring it up.
Couldn’t afford to visit a doctor, same as the rest of us? That can only last so long…
"Little Bobby Tables we call him"
My name is "help im stuck in a drivers license factory"
It seems like QA and security is irrelevant today. The only thing that matters is getting out a semi-broken thing as fast as possible
if you take the time to do that stuff right, then the minimum-viable-product, move-fast-and-break-stuff crowd will eat your lunch with their rapid results and problems that don't show up until later down the line. And since you've now sold a product that constantly breaks, you can as a bonus get even more money out of expensive maintenance/support contracts! how's that for a win-win! disruptive capitalist innovation at its finest
MS developers are generally a different flavour today. Same goes for Google. I'd expect less and less from them going forward, as they continue to hire based on "appearance" rather than talent. Maybe I'm a bit salty, but it's true nonetheless.
@@TheGreatNoticing00 They are too busy "doing the needful"
In a way it has always been like that in the business. In the old days of software there wasn't so much competition on the market, so you could focus a bit more on quality, but every established market with competition sooner or later reaches a stage where you can't spend too much money on perfection and need to earn income ASAP. Software reached this milestone a decade or two ago.
Microsoft removed all QA teams years ago.
These Lawsuits need to be far more punitive, there needs to be drastic consequences for exposing and harming so many people!
Microsoft is more valuable and important than any human.
I can't wait to cash my $2.49 check after the lawyers suck all the value out of the class action data breach lawsuit.
Personally I would never ask AI about any serious health issues, even if it were 100% private and 100% secure, because if the AI happens to hallucinate I can easily end up in a 10x worse situation than I started with. If there is something I don't know how to deal with, I would rather go to a doctor and get some real advice than, for example, try to heal the flu by standing under UV light and drinking mercury.
Yup! At least, after the doctor recommends you stand under UV light and drink mercury, there's a tiiiiny chance he'll do some jail time. While in the other case, people will just say you didn't write the correct prompt.
It doesn't hallucinate, it's a product not a sentient being. It just uses stolen data based on statistics regardless of accuracy
4:50 - using query code that is not read-only / execute is a security issue
"Can't fix stupid" theory confirmed
But this story is not about AI anyway.... The AI bot didn't leak anything; it was stupid platform architecture and stupid developers (who were maybe experts in AI but not in other areas)
Imagine how dumb AI is if leading AI expert developers, engineers and architects are this dumb.
Yeah, good point! While I'm sure AI will introduce same-level-of-terrible bugs and vulnerabilities, on these 4 in particular it was just bad developers.
Imagine if a certain legend asked for help removing a specific cylinder…
Amazing reference
seems rather imperative that it remains unharmed.
All doctors that dared to upload personal info were compelled. Who is going to pay for this? I would say all corporations and doctors must pay.
In Germany the Bug Hunter would have been sent to jail because of the Hackerparagraph and the bugs would persist.
Microsoft has a bug bounty program, my guess is that should keep you safe from that, but I'm not up to date on these specific laws in Germany
@@autohmae No, there is currently a case with the keyword "Modern Solution". A contractor was paid for finding security issues and found unprotected unrelated data. He is officially accused of breaking the law.
@@amigalemming I guess if the same could happen, Microsoft would need to report the security researcher to the police. Anyway, crazy stuff.
Oh wait, hold on, Microsoft doesn't know what the fuck they're doing? With AI?? Noo that can't be right. Again?!?
So, it's basically like hunting for open folders, in 1997, to dump MP3's on unsecured FTP servers in order to share music. Gotcha.
People: Why don’t you trust AI tools?
Me:
These are super-basic mistakes that would never pass a security audit.
I am more concerned about the info sec standards of the healthcare organisation that they worked with.
Healthcare orgs in the US are somewhat notorious for bad info sec, at least compared to the seriousness of the data they own. There have been many instances of them being victims of ransomware attacks and actually needing to pay the ransoms because they had no way to recover the data. IT is often put on the back burner as they don't see themselves as IT organizations but as communities of health care providers and patients, a brick-and-mortar entity primarily of people interacting with people, which is fair, but the technology is moved down the budget hierarchy in ways often disproportionate to its importance in sustaining the organization. At least that was my observation.
I read that something ridiculous like 50% of medical records in existence have already been leaked
That's why I don't use gadgets to monitor my health, our data is incredibly valuable.
Whoever worked on this must be borderline non-functional. Was this whole project just 1 dude? How did not a single person on the team call out this insanity? Insane.
It was made by AI. That’s the thing with AI you can’t sue it or fire it so it just gets away with it.
while the services of M$ have become broader and more sophisticated, the quality really keeps going down the toilet.
This makes me just think about co-pilot. Microsoft is getting greedy with their data stealing.
Wow I can't find a job but these clowns can
Isn't that also the corporation, which "promises" that making photocopies of your screen all the time does not break privacy?
Well explained. Amazing work
Thanks 😊
Data breaches are often the result of errors in system management or configuration, not “automated” AI. More importantly, the responsibility lies with the humans who design, deploy, and monitor the system, not the AI itself.
No one has to worry about responsibility or consequences anymore. Broken politicians and legal system.
The omniscient AI has certified that the code was secure. Oops that was a hallucination. Okay delete/ fire that AI and try uploading a new one… which is almost identical, and trained on the same data set. It’s just good business.
Apostrophe fail.
In case you forget, Microsoft will help you Recall this instantly!
god these companies are stupid. they want AI to be a thing so bad that consequences be damned
Total Recall and CopePilot+
Nothing can go wrong, go wrong, go wrong
This is straight up malfeasance. It is not caused by AI, but is a classic injection attack using APIs that are not engineered, by design, to sanitize inputs and enforce data permissions regardless of how the LLM calls the agent tool. It's like everything they learned about securing data APIs went out the window as soon as they put the API behind a chatbot.
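As a minimal sketch of the point above (the field allow-list and query shape are my own illustration, not Microsoft's actual API): validate identifiers against an allow-list and keep user-supplied values out of the query text entirely, so nothing the chatbot passes through can change the query's meaning.

```javascript
// Illustrative only: a hypothetical lookup builder, not the real API.
const ALLOWED_FIELDS = new Set(['name', 'dob', 'patientId']);

function buildLookup(field, value) {
  // Identifiers can't be bound as parameters, so they must be allow-listed.
  if (!ALLOWED_FIELDS.has(field)) {
    throw new Error(`rejected field: ${field}`);
  }
  // The value travels as a bound parameter, never spliced into the text,
  // so quotes, parentheses, or "../" sequences in it stay inert data.
  return {
    text: `SELECT ${field} FROM records WHERE ${field} = $1`,
    params: [value],
  };
}
```

Any real driver (pg, mssql, etc.) accepts this text/params split natively; the point is that the agent tool should enforce it no matter what the LLM asks for.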
Can’t help but think the term “updationing” was part of conversations during development of this.
What medical records? Most of us can't afford healthcare to begin with.
Your birth certificate did not write itself.
Microsoft try not to humiliate your entire company with terrible design challenge.
All of them came down to sanitizing input and isolating data... it didn't have to cost nearly as much as 200k, but props to the hacker
hackers are such nice people; that hacker could have made everyone's medical records say they tested positive for AIDS. it's wonderful we have bug bounties and they are paid. hard work was done to earn that small sum of money, and the whole world benefits.
not at all a huge potential conflict of interest down the line if not already...
they should've added "tested positive for nothing" to all records
no, the data is inputted by the customer into azure for their ai services. azure does not access the data directly
Remember, when you adopt a package/technology, you adopt all its flaws.
It's not an accident.
Wake up fools
Nothing surprising. Microsoft's motto has always been accessibility > security. They want their stuff to work whether it is secure and optimized or not. Another case of shareholder capitalism at work for sure.
Internet Computer: solves this problem permanently
So once again, the super smart programmers of Microsoft allowed direct access to data, rather than buffering it, ensuring an encrypted connection between the intermediate server & the data, and keeping the AI software isolated from important data. Also, adding directory capabilities within a URL, rather than having the server or data server do the searching, has been a known exploit/issue for decades.
You never allow directory-level execution or maneuvering at the URL level, AND we have become so dependent on showing URL data that this type of thing will happen due to sloppiness. As the old adage goes: it isn't the new guy that gets hurt (makes serious errors), it is the experienced person, because he becomes so confident in his experience
It was AI don’t blame the SWEs at MS for this
Why is the back-end in JS instead of a strongly typed language which would reject input and help require data be properly sanitized? Why is the database input not properly sanitized?
If a company with bad intent and a history of monetizing directly or using for AI training purposes data it doesn't own the rights to do that with?
"My AI leaked it" and "this third-world company used the leaked data and provided us with this trained AI and/or customer contact and needs data it said was legit, and we used it for monopoly purposes". Well played M$!!
My take away from this is that nodejs is not secure by default, and needs some careful design and hardening to make it production grade.
Compounded with dynamic and super flexible JIT nature of node, it sounds like a nightmare.
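A small illustration of why that flexibility cuts both ways (the `_` object here is a stand-in for a shared library, not the real underscore module): without explicit hardening, any code running in the same process can swap out a module's methods at runtime. Freezing shared objects is one common mitigation.

```javascript
// Stand-in object playing the role of a shared library method.
// (Illustrative only -- NOT the real underscore module.)
const _ = { indexOf: (arr, x) => arr.indexOf(x) };

// Without hardening, co-resident code could simply reassign _.indexOf.
// Freezing makes that reassignment a no-op (or a TypeError in strict
// mode), so the original method survives tampering attempts.
Object.freeze(_);

try {
  _.indexOf = () => 'hijacked';
} catch (e) { /* strict mode throws; sloppy mode silently ignores */ }

console.log(_.indexOf([1, 2, 3], 2)); // still the original behaviour: 1
```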
Why doesn't NodeJS remove the SlowBuffer class from the Buffer module? It's been deprecated for 8 years and there have been 17 major versions since then. I don't get it.
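For context on why that deprecation matters (a minimal sketch, not tied to the specific MS bugs): `SlowBuffer` and the old `new Buffer(n)` constructor handed back raw, uninitialized memory, which can leak whatever the process previously stored there. `Buffer.alloc` zero-fills; `Buffer.allocUnsafe` is the modern, explicit opt-in for the fast-but-dirty path.

```javascript
// Safe: guaranteed zero-filled, so no stale process memory can leak.
const safe = Buffer.alloc(16);

// Fast but dangerous: contents are whatever happened to be in memory,
// which is the behaviour SlowBuffer / new Buffer(n) exposed by default.
const fast = Buffer.allocUnsafe(16);

console.log(safe.every((b) => b === 0)); // true
```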
Well explained!
Thanks for watching!
that's wild bc the most basic thing for sql is to prevent moving back up a directory.. as well as doing an ls of a dir, row, column etc. failed huge.
The first exploit was already quite easy; it's like going to a final test about history, putting "it all began in 1942... ...and that's how Nazi Germany fell" and getting an A+
can someone explain exploit 2, specifically the part where the bug hunter modified the underscore module's "_.indexOf()"? how did he modify it on the azure instance?
since ur name is Boctor i thought at first this would be a medical channel, not one covering tech news 😅
This is criminally negligent. We, the people, need to hold companies and CEOs (corporations aren't people, but people run corporations) accountable for software negligence. This data literally represents people's lives, not bargaining chips for business deals.
It's obvious our government doesn't understand enough to hold them properly responsible
@5:55 I think you've either misinterpreted how the query injection works or the exploit you copied from wasn't documented correctly. Unless MS has made a mistake with the implementation of building a query. If this is even actually a reality. I feel like it's not. But first the query needs to be completed by providing an escaped ' and ), and then you can initiate the other escaping to insert the traversal and allow the query to be completed again.
sooo, they weren't sanitizing inputs? still watching.. that's AN ENTRY-LEVEL SECURITY ISSUE and bug
How was the underscore module modified remotely??
Welcome back!
I wonder if they could have made more than 200k if they had placed a short position on the stock and informed a hacker group about the holes.
And also sell the data separately cause more money is always better.
Where do we sign up for the class action suit?
sounds like they are doing commands that microsoft does automated
meaning microsoft is probably already selling that data
Somehow all these big tech companies don't pentest their products...
Good for bughunters and black hats
Microsoft does dumb shit
Why haven't I heard about this? No other videos relating to this topic
Brilliant video, thanks!
That is...certainly a way to pronounce "JavaScript" that I haven't heard before.
I know little about software engineering n this kind of crap, but god I love your vids explaining it so clearly. Keep up the amazing work m8!
much appreciated, will do
Node js is a menace, dude
Been saying it for years, JavaScript on the server was web development's original sin
@@skyrimax JavaScript was the original sin. Running it on servers was when we said screw it and let the devil take over.
Very interesting. Underrated channel, you earned a new sub!
Personal information needs to be away from any automation. I am not surprised ai messed up.
video quality and explanations are amazing
Reminder that Microsoft is forcing you to have an AI that records your screen and describes what you're doing.
Seems very similar to a modern version of SQL injection
Please make more videos, I'm getting addicted to your explanations
This must be why my organization sprang new AI rules on us recently regarding using AI with any sensitive medical or org info.
"a" hacker "A" hacker, several people have shown how easy it is to do
If only Microsoft would not always hurry things in their rush to be first, maybe they would not have such shitty products.
"one of the highest paid bug bounties" - Have you seen NSO's payouts...?
Who tf gave Microshaft their medical data???
how did they get 100 million records?
Omg the vids are back!!!
Brilliant won't teach you that stuff but get your bag bro