So, a company that builds software to protect computers against cyberattacks, has deployed a software update that behaves exactly like a cyberattack. Got it. Aptly named company, by the way.
This brings back memories. When I worked for a retail shop in Dallas, we would say this line when we were making fun of the vendors for not performing proper testing on the code they delivered.
Paper and pencil were the saviors of the night last night at our local 911 dispatch center. Medical calls, law enforcement calls, an oil spill, normally not a problem for a couple of overnight dispatchers to handle with Computer Aided Dispatch software, but it became instant hair-pulling frustration when suddenly everything had to be tracked on paper.
At least you folks had a disaster readiness plan in place! Good job! Weirdest and ugliest disaster I ever dealt with was a flood - in the middle of a parched Middle Eastern desert. A large water pipe ruptured, the water running on a calcium carbonate layer under the sand, to fill manholes with telecom cabling inside, saturating the phone network into oblivion. Unbeknown to us for a week, one primary emergency generator's underground fuel supply was also flooded, the tank now full of water and eagerly awaiting the next week's power transformer failure to drop that same telephone network, server rooms and all war theater communications. Who has flood emergency plans in the desert?! I did after that! Boy, but my face was red. And the most fetching shade of purple usually reserved for the pointy haired boss... When asked to come up with an emergency response for the future, I quietly handed over the new plan.
The first commandment you learn in IT is that you never make a change on a Friday!! This is what happens when you hire too many consultants underpay your employees and have teams that are burned out. The level of accountability has cratered over the years.
If you have a mature deployment model you can deploy at anytime with confidence. They obviously don't have a mature deployment model. That's the tenant behind CI/CD. You should be able to integrate and deploy at any time, e.g. continuously. The C stands for continuous in CI/CD. Small changes with continuous testing, usually via automated testing methods, can help mature your model. I'll bet their branches have thousands of commits and they keep branches alive forever. They probably don't use trunk based or anything else that would indicate a mature pipeline.
@@BitwiseMobile Not pushing something to production on a Friday would be a part of a mature production model. There is always, always, always a chance of something going wrong. You can never completely prevent that from happening. Because of that, it is unwise to have something go live on a Friday.
I think they're kidding us. I've been an IT manager for 40 years and I decide when our computers reboot. Restarting all computers worldwide at a certain time to trigger the problem is a complete rip-off.
Tbh this is worse than a windows update because it is done automatically, and since it was a driver it wouldn't even let windows fully load before it crashed
@@x2desmit the people it is affecting are saying it was a Microsoft update but the news is all blaming crowdstrike. I think its possible microsoft is bullying another smaller company again.
My father just got diagnosed with colon cancer and needs surgery ASAP, now this happens and the hospital we want to transfer him to has an outage. Thanks CrowdStrike!
Cashless is not a problem, no backup / failproof rerouting is a problem. If there's no electricity your new electronic cash register won't let you use cash inside - unless it has manual override. Same with using digital resources.
I work at a FAANG company. Was having a peaceful night and 30 minutes from clocking out until I got a barrage of tickets for servers down and multiple laptops crashing at the same time. Was happy to hear that it wasn't just a me problem and it's an everyone problem lol
I'm a coles employee in Brisbane, went to buy some groceries, but could only pay in cash. According to an old coworker, all the back end systems (there are a lot!) have gone haywire. Thank god I have today off, but praying its back on by tomorrow.
Boss: Team, I told Crowdstrike we would be ready to roll out the update before the weekend. Dev 1: No way, we need to test it. Boss: Don't you want the weekend off? Roll it out immediately. Dev 1: But it isn't... Boss: Dev2, roll out the update, Dev 1said it was ready. Dev 2: Okay boss. Chaos ensues. Boss: I told you guys we needed to test before rolling out the patch!
@@ImperiumLibertasproblem is, paper takes significantly more time and effort. That's like saying everyone should be driving around with paper maps. A lot of people do, but after years of depending on smart phones and GPS navigation, even those people with paper maps couldn't easily and quickly find their way.
The problem is that thanks to Obamacare hospitals were forced to use electronic medical records or they would get penalized on their medicare reimbursement by 3% and since that's the the entire profit margin they all switched a decade ago. So how do you retrain a 1k different roles from aids to physicians on process in less than thirty minutes with no access to charts or orders? Blame congress and Obama
*fist bump* I'm a phone jockey as well WFH, been enjoying the quiet, my customers call when their car breaks down or they wreck and need a tow or jump start or whatever, but I don't get new callers, I get the escalated ones that have been waiting 4-10hours and longer and I'm the guy that takes the cussing saying that Im a sup and blah blah blah, easy money tonight. I so feel for day shift, they just may all walk out and be done lol.
Many people already use thix fix for over an hour. also it doesn't work on machines that have bitlocker - which majority of corp/large companies would be on - won't.
I manage a Starbucks and live in a city of 100,000 people. I walked in at 5am saw the blue screen on our POS said "Huh.." unplugged it then plugged it back in and it worked. Our corporate called my boss and asked how our Starbucks is the only one up.. How was I the only Starbucks in a district of a major city that figured out. "Did you try turning it off and back on again?" Corporate should be paying me a major salary...
The bonus stupidity of that company releasing the update to everyone at the same time, brilliant. Great to know we have such security experts being trusted by corporate idiots everywhere :D
@@Ungood-jl5ep And of course, the same applies to the fact these corporations exist in the first place, and affect a disturbing portion of our world. But even if "only" 1% of companies used this... the impact would still be absolutely ridiculous. _Especially_ considering some of those completely eliminate the competition around them. And all this is built on the solid foundations of "trust me, bro".
Not really. The issue is ppl. Making soft good and secure requires way more knowledge and time. Would that be my "perfect" setup, it would be back up in few minutes. Not to mention in perfect scenario it would simple not happen. Keep doing "good" job companies. 😅😅
I can't believe they pushed a poorly tested update to the entire freaking world at once. You deploy to 100 systems first and wait 24 hours and if everything works you then deploy more widely.
Any company which is IT dependent and running more than a couple of systems in their network should be vetting updates on an isolated test system. I never understood the trust people have in these software developers.
@@x2desmit I saw in a reddit thread someone claim they pushed for their company to turn off auto-update for CrowdStrike when they adopted it and they had no BSODs. Anecdotal, but I don't see a reason to disbelieve it.
Crowndstrike have remote access for drive updates on corporate windows machine all around the world, so a employee usually can’t control what gets update or not on their windows machine. The problems is even deeper, its related to how windows OS is designed, how corporations manage their OS and more
Heh, had a ruptured pipe flood manholes with all of our base telephone lines in them once. Boy, it was a peaceful, quiet day, then someone had to come in and wreck it by reporting the outage and flooding underground... Boy, but I was tempted to shut down the VOSIP server, as network services remained operational...
This was the company involved with the "hacked DNC server"... for some strange reason this private firm did the investigation and not cybersecurity experts within the FBI/NSA.
@@raymondqiu8202 I think the company - CrowdStrike - by now is just done, they just don't know yet... Imagine the whole world filing a lawsuit against your company for damages...
Also.. I do system development for 25 years... and I never saw someone stupid enough to make updates on a FRIDAY. Exactly because if it causes an outage you are fucked for 2 extra days! When I worked in health tech the golden rule was ZERO UPDATES allowed on fridays
zero updates for zero-days... i'm not against holding feature updates back for testing but for security critical systems like this you'd want the program detecting and blocking new threats asap
No, no! Friday morning gets the low level driver updates. Friday afternoon, at COB, DNS gets a major update. All one needs then is a bit of gasoline... The bear of it is, Crowdstrike defaults to automagic updates through their own channel. All of this angst over a damned logger.
I've been in IT for 30+ years. You'd be surprised. Special shout out to the Online Game Devs. We used to joke about how Blizzard moved to Friday game launches, after Korea took the week off to play the new Starcraft expansion.
They were used by the DNC to find out who "hacked" their computers and their CEO, Shawn Henry, admitted under oath that there was no evidence that Russia (or anyone) hacked the DNC computers.
Down Under (Southern Hemisphere) all airports in chaos. Handwritten boarding passes, cancelled flights, handing out bottled water to stranded passengers. Broadcast TV networks running with reduced facility. Emergency services still reportedly fine. Bit of a mess. So....this high-tech "cashless society" we keep hearing about...maybe not such a good idea....
And I remember the bad old days, when mainframe computers ruled the roost. If a crap Windows update or driver for Windows update pooped the bed, worst case, they'd dust off the old terminal to the mainframe.
I work for the largest financial firm in the country, this has had MAJOR financial impact on us. I speculate two things will happen 1) this software will be removed from every server in our firm over the next few days, 2) that CEO will be unemployed and Crowdstrike as a company is DONE
1) maybe. 2) doubt it. Crowdstrike is one of the best antivirus and security companies on the planet. For sure, this was a big f up, but I doubt this will be more than a blip in our memory, in retrospect. Solarwinds recovered. Equifax is still around. ConnectWise got through there mess up. Heck, McAfee and Norton are still around.
Remove the software, you'd terminated compliance for logging. Your company would be done. But, you do you, I'll pay pennies on the thousand for the old office equipment. Crowdstrike is a logging software and monitoring solution. If it's done on a system, it's logged and stored, fully searchable in their console. It's actually very good software that I've personally used in the SOC. Even chased down a Chinese government incursion that had been ongoing until Crowdstrike was deployed and we could gain visibility in realtime ongoing attacks. To get the company to spring for it, well a couple of SOX audits tend to be convincting. Long and short, with that software the attack was traced to a literally forgotten DMZ test machine and the incursion into the global network was halted. That all said, they're quite likely liable for this debacle - to each and every disserved client organization. Rightfully so, this isn't some Act of God, it's an Act of Moron. One never, ever, *ever* pushes a patch out on a Friday! For this very reason.
@@hojo70 Per crowdstrike's TOS (I posted a link but my comment disappeared), they are not liable for any lost revenue, business opportunities, damages, etc. beyond the remedy of cancelling the license agreement and refunding any prepaid subscription fees. They also say they do not provide warranty for the software that it is defect free, etc. There's no case that can be brought against them unless a customer operates in a country that forbids those kinds of disclaimers and limitations of liability.
I agree with the guy on hackernews it's the companies fault to enable automatic update for everything(using windows for everything is pretty stupid too but that's other problem)
Imagine making your entire company dependent on a single point of failure caused by a random software running in kernel level of Windows which can push an update any time. What could go wrong lmao
@@UnhingedSystems they'll tell you that's why they pay for IT, right before they realize it's outsourced and 100% remote. This is the day freelancers have an opportunity to make BANK
Despite warnings that The Cloud might be Pie in the Sky, people were still CONvinced that putting all their data to someone else's computer was a great idea. After this event, perhaps some will start waking up to the clear and present danger of totally trusting technology?
@@antispindr8613 they laughed at us when we 1) bought our own CDs and records instead of relying only on streaming 2) bought our own Blu-rays and books instead of "digital downloads" 3) run an in-home NAS instead of paying "only a few dollars a month" for cloud storage Today, I'm gonna throw on a movie, read a book, and thank myself for having a shred of foresight. Might check Kasperky's tweets later to see how smug they get
@@antispindr8613 We're already starting to see more conversations about relying more on on-prem solutions and to use cloud technology sparingly. It isn't the magic bullet it sells itself to be. In my opinion the cloud has been a bait and switch game. We were promised it would be cheap and affordable and would continue to become more affordable as time went on. Instead we see companies with cloud products increasing prices aggressively and turning cloud services into a predatory system where they make companies reliant on them and then raise prices arbitrarily to appease shareholders. The cloud should be aptly named "Digital Landlords" because they now control the rent prices and when maintenance is needed they just pass the problems and the costs onto their customers. Shareholders want another big house or boat? Just raise prices 8-10% per year until your customers can't afford it anymore. I utilize cloud services, but sparingly. Never put your eggs in one basket.
What are you smoking? This situation is a great ad for cloud services. The cloud services all fixed this issue on their end already and all their customers are back up and running without having to do anything. Everyone who ISN'T running on "the cloud" is having a very bad day right now.
After 30 years in development, I've removed social apps, and Google (as much as I can) to minimize problems. I'd like to go back to pen and paper rotary phones, and talking face to face.
I've also done 30 years, what seriously worries is the fact that MS pander to the American government... the guys that operate a business model based on g3n0c1de and Microsoft products are vulnerable by design and always have been. Hmmm 😑
FYI: for thoes that aren't from Australia, the big thing about that "supermarkets go cash only" headline is that the banks and supermarkets have been trying to push everyone to digital economy for a while now. They even went so far as to cause issues with one of the big armored car companies. They want to force transaction fees on every one and to snoop on private exchanges etc.
@@aidancoutts2341 I'm old enough to remember that if the power went out or a register went down that you'd break out a ledger paper and write it all down.
People really don’t know what goes on in the background until something like this happens. So many things work behind the scenes and when some of them go wrong, this is what happens.
Many ppl dont appreciate IT at all. Budget for visible stuff (painting, digging) is always higher than budget for crucial operating systems. Doesn't matter company size. The ones that got best tech for what they need is the ones that smallest that would die in a day without it. All the rest that can still get away with it, do just that. And then it happens. I have 0 empathy. 😂
Ironically, it's a logger that blew up. Forwards all system events to a log server for examination and analysis for if there's an intrusion or other mass event going on. Unfortunately, to gain such granular access, one needs a low level driver and apparently, this one got a modification and was thoroughly tested by Helen Keller. I've had patches sail through testing in small test groups, only to blow up in production. This one though, it's winning a booby prize.
@@pawelhyzopski6456 not even a budget issue. A borked patch for a logger's low level driver is it, all of the IT budget in the universe wouldn't prevent or mitigate that. I am glad of one thing. That I'm nowhere near Crowdstrike's corporate office or boardroom. I'm fairly certain that they're gonna need a shit ton of new paint for all of the now peeling paint...
@@spvillano That probably means the deployment model isn't mature. If you have a mature pipeline with healthy CI/CD things like that don't happen. You should be deploying to a staging environment that is identical to production. That would have found the issue on the first smoke test. A unit test should have been written already for the logger, and that would have been caught in development. When you code like a cowboy you have to expect some issues. It sounds like Crowdstrike is a bunch of cowboys.
you know what's funny... Years ago I wrote a short story for a writing class where we had to come up with new ways for the world to end...mine started exactly like this.
I work at a hospital, the fix is simple, just deleting a driver file. The fix requires that you enter the bitlocker key and have local admin rights though, which has a few steps. Typically the fix is about 5 or 6 minutes a machine, but we have thousands of machines.
Same here, working at another shipping company that everyone had heard. Half of our work laptop went bsod, but our web services are miraculously still up. We felt like we’re less affected, but I’m sure our aviation sector probably went dead.
@@andrewortiz8044 Who tf doesn't at least deploy on a test machine to make sure it boots, THEN send the update out, though? Like, I do tests for my frekking Minecraft datapack, HOW is this slipping past an actual company that pays people???
I work for Meijer. Which is a huge shopping center franchise in several states. A lot of our systems are being affected by it. We started noticing strangeness around 330pm EDT yesterday.
This is what experts at the end of the 90's expected the chaos from the Y2K bug to look like. Thankfully back then the IT and corporate people agreed it would be a bad thing and made a massive effort to fix the bug before it happened.
Oh noes! The nukes flew, the world blew up, we all died and the lions and lambs started interbreeding! Why, according to our intrepid host, even Amazon is offline - ignore that their site is up and merrily running.
we had a Power cut at the local supermarket about 15 years ago, and the only person who could sell anything was near retirement. no body else could do the mental arithmatic.
No it's not. Trust me, it's not. Having experienced work in retail, hospital, Amazon warehouse, a hospital, web development, no one ever said just write it down. And the only places that had safety in place for this were those that were in IT AND did their IT themselves. With tech as a tool, for most businesses it's safer to wait out until it's fixed than rely on workers writing down important info. In retail, due to regulations and laws. Unless we go back to cash, which we won't, for consumer safety reasons you are not allowed to write down information needed to charge someone's credit card. The period of those mechanical credit card machines you use to slide the inked paper over is long gone. Does anyone even make those anymore? In major hospitals not giving medicine to patient is legally safer than giving wrong medicine or a wrong dose. At the hospital I've been only white boards and clipboards in patient rooms, and front desks with their posted notes had stuff written down by hand. Again due to regulations patient information can not be written down. Because they would get sued if it came back that due to their negligence their patience had their identity stolen in such a stupid way. Amazone warehouse, would be no different than if conveyor belt died. Eventually it will be fixed. There is only so much you can do without equipment working. Especially computers. Web development, most applications and websites are not as important as their owners would like to think they are. That has to do with western business mindset. The whole idea of - if it's not making money, it's losing money - is ridiculous. Commercial websites are not losing money if they experience a brief outage. So, you spent few hours or a day with your website down, do you really think IT company will fix your problem first. Most online businesses don't spend that much on their websites as they think. So, that paper is staying in a printing machine.
This is utterly insane. I work in a medical laboratory and we had the BSOD at about 10 pm and we can’t process any of our specimens. I really hope this gets fixed
Do you have an IT guy? Reboot in Safe Mode, then type: del "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys" Then reboot the PC normally and it should be good
@@Ausnapify Well, IF they are affected, they're on hold by themselves too. This issue just locks them in place, like the KRAGLE from the Lego movie. Most people, that work at a bank have no idea how to fix this issue and even if they did, they do not have the access needed to fix it. This is one hell of a ride for the IT-Department, especially if the device is encrypted, because only they have the key to open said device up. Now imagine a whole IT-Landscape being locked up and you have 3 people, that are able to unlock it, because the IT is notoriously understaffed and underpayed.
At my doctors office all the computers went down and they couldn't do my prescriptions. It happened in the middle of my doctors appointment, if it had happened earlier I would not have been seen at all it would have been another week before I could get seen again and I wouldn't have gotten the antibiotics I needed.
Where I work I'm suppose to tell the customers how system issues are just updates so I can't help but feel this is a bunch of BS here and they have leaked a bunch of data. Given they are a security company it ain't gonna help them to be honest, if they can hide it will be best for that money they love....
@@niceone2731 Availability is a major part of security - So the machine can't be hacked, great, but only if you are protecting data on it, or that it has access to. However, what if that machine is responsible for something critical? For example, patients can't get medication because the electronic prescription and patient record system is down.
@@TigCMWell I am aware about that and dealing with scalability with proper availability is a part of my job day in day out. I was just trying to make a joke, guess it didn’t land just like the flights. Oh wait, they did land finally.
I work at a large semiconductor company. Started work 1 hour later than most people. First thing I see when I open my computer is an unusually large number of Teams messages talking about all the things that don't work.
@@SahilP2648 Nvidia and AMD don't make any chips themselves. They're purely design firms and outsource production to TSMC. Intel has their own fabs though.
So what have we learned? That consolidation in cybersecurity is bad. Everyone, including critical infrastructure, using the same closed source proprietary products and services is bad.
I am a huge fan of open source software, yet I would still remove the "closed source" from that sentence. Diversity is necessary even in an open source ecosystem. Systems should be interoperable with open protocols, but the implementation should never reach dominant market share for any critical component.
I work third shift at a hospital and was at a code blue during the outage. We literally couldn't access the patient's CT images to see if they had a perforated bowel that had incited the cardiac arrest. Not to mention the hospital's critical systems were crippled most of the night further delaying patient care.
Living in north mexico really close to the border, my aunt came to visit (she lives in the US) and when she tried to cross back into the US, all of their CBP systems were messed up, so no one would be crossing anytime soon, she had to stay with us for the night, but it was obviously related to this. This thing is highly disruptive for a lot of folks around here, the queue to cross has been massive today since it's all of the people that were meant to be back already, it's pretty insane.
@@Bristolcentaurus I think they are missing out right now, they should probably be running to the airports... Lots of money waiting there to be made :D Imagine all those angry people heading for holidays... Class action material...
Friday. What a prefect day to deploy to the production (and customers)... Or was it is a case of "it is not yet Friday, it is 11:50pm on Thursday. Lets deploy!" ... 25+ years on IT and software development has teached one thing, Friday is like Sunday, you go out and pray that systems will survive till the Monday morning and you do not touch anything. Except if there is a full fire on the environment... :P
Now this is gonna have a new lesson for IT personnel. Endpoint protection segmentation. Not sticking to one endpoint solution, although this will be harder to monitor.
And this is a great thing as I hope it will be a wake up call to stop using Windows servers for backend and critical stuff. What I have read here traumatizes me: emergency servers are on Windows? WTF is that??
@@something86783yes it is a windows issue. the whole reason the malware market exists is because microsoft ran security by obscurity for decades, resulting in the windows malware ecosystem. add to that the need for pretty much every program to need root access to every file on the machine to do the install, and you need antivirus and other windows security software to get around the insecurity model of windows. add to that that it sounds like the channel file in question sounds like a config file, and anyone doing tdd or ci writes a config file tester which is run before any config file is committed to version control, precisely so that this sort of thing is not likely to happen. a lot of this type of software does not even have a mechanism where you can defer the update, with it happening immediately it is available. if you could designate one machine as a canary, and require a successful update and reboot to publish a file that all your other machines can access to gatekeep the update, it wouldn't be so bad.
You nailed everything, I’ve been dealing with this all afternoon first hand for internal users. The mainstream media on the other hand have no idea what they are talking about.
in their defense, they're an anti-virus company, so if a new vulnerability is discovered they probably have to deploy a patch as fast as possible. Even on a Friday
@@ahdog8there are ways to test this stuff before it hits all of the public. netflix do rolling updates, where they pick the least active region, do gradual migration, check if it does not scale, and then roll back if it has problems. this allows them to do multiple updates per day, and if they can do it, so can others.
look into resilience engineering (lots of lessons from 9/11 available online) and chaos engineering (netflix and the simian army). that covers what happens when stuff breaks and how to make it less of a problem, and breaking stuff on purpose to test if you got the other stuff right.
This is why updates should be controlled by a businesses IT team, and not done automatically. At the company I work at, software isn't allowed to auto-update, updates are managed as part of a change process, and the update gets rolled out slowly to a small number of users first!
I can see why that’s good practice, but what if the software is a critical piece of your security infrastructure? I don’t have the CrowdStrike release notes but what if it was an urgent update that needs to be sent out ASAP?
@@lukeh990update notifications are vital, mandatory updates are a sign to move to a different provider. it is basically saying that it is fine for someone else to decide your machine is so unimportant that it is fine for them to immediately stop what you are doing, and change something so you have to have a complete reinstall, and hope you have everything backed up and that everything else can wait. if you cannot see why that might be a problem, go back to pencil and paper.
We are in damage control this morning for a couple of servers. Not going to say which company it is but the fact it’s happening just as payday processing is happening could be grim
I was shocked I got my check this morning, I pray for everyone else does too... I can't help but if you mean ADP...... I work nights and from home and still on the clock, while this has been fun tonight won't be and well day shift just may quit after they have to deal with the cussin about how the night went. My callers are plp calling their insurance needing a tow or roadside, I deal with escalated customers in the manager compacity so I hope they don't fix it before my shift ends I don't want to have to get on the phone this morning it will be a nightmare.
@@PatrickBaptist hahaha oh boy. Not going to reply to your comment but while “we” got everything back online this afternoon the next few weeks are going to be great fun replying to the millions of clients asking for an update
This happened all the time at a previous company. The IT folks would force an update without fully testing it, then it would blow up, and then IT gets overtime hours and busy-looking tickets from people screaming for their help. Testing and scenario planning is tedious and boring, but it has to be done. It also takes people who have sharp imaginations...as in, what could go wrong, and let's test for that.
Yeah, went through that. Had 10k users, was only allowed a test group of 20. Obviously my fault. So, I left them to swim in their own swill and amazingly, the entire organization failed. Guess that was my fault too. Remotely. Without access. Without even being in the country. For I am all powerful with that mighty can of beans.
Here at UPS we had almost most of our buildings (~60%) halt to issue out fixes. We were able to get our technicians to start deleting the files off our servers to get the buildings up and running but 8 hours of downtime nationwide. The solution gives us a fix for a root cause we're still addressing the symptoms of the crashes such as lost data/corrupted files we had employees lose complete access to their C: drive removed the file type complete to RAW fs. Crazy to see the world wide impact.
Yeah, we got BSOD one after another around 3:40 PM, then my friends said their companies experienced the same issue about the same time. At the time, I know something crazy is going on……
It actually didn't, which is why it was seen as a rolling outage starting in Australia (first place with significant modern infrastructure where 0:00 occurs earliest)
Well for that reason you normally have Clients run n-1 Agent Versions behind the latest Update. Sorry to say, but all companys affected did not follow proper Patch Management Guidelines when it comes to Deployment of XDR Agents
I work in IT at a medical company. We weren't directly affected but many things we use to communicate with other centers went down and surrounding hospitals were having lots of problems
I just had to tell a guy whose flight was canceled and I know you booked a room at this hotel but our systems are down. Oh also every hotel in the area. Oh you’re having an anxiety attack and need an ambulance their system is down too.
The problem is a corrupted channel file. I don't know what that is, but I suspect it contains malware lists or something similar. If that is the case, it makes sense that it is pushed immediately. Still should have tested it though. This should not have passed tests, but it does make sense that it was released on a Friday, and quickly at that.
@@entcraft44 nope, no malware lists at all. Crowdstrike is a fancy logger, with an analysis panel at the security operations center that utilizes a Splunk styled software interface. Overall, I quite like their software and implementation - until now. They push the updates to their software. It's inexcusable to ever consider pushing a low level driver out on a Friday. If anything does sideways, it's equivalent to pouring gasoline on a foundry floor while it's in use.
@@entcraft44it can effect any product with automatic updates, especially if it needs write permissions everywhere to install. this is why every tech lead worth the name won't install mandatory automatic update software on any machine they are responsible for.
Thanks for talking about this as a Crowdstrike issue and not a Microsoft update fault that took everything down. So many news outlets dont understand an thing tech related and are all reporting a bad Microsoft update broke everything 🤦♂
As an IT professional for 48 years I never place products on systems that can cause a system outage. We test and vet all updates and patches before going wide spread My company is up and running and so are the 5000 servers and workstations we service and support. Things do happen...
The thing is, apparently no one that uses crowdstrike can vet anything and the whole point of it is to do zero touch patch exploits for emerging threats. So it’s a lose/lose situation, you remain vulnerable until you manually vet and patch and hope you don’t get exploited in the mean time every time there’s a new vulnerability, or you get a patch like on occasion this that ‘fails safe’ and locks even your users out.
@@peter65zzfdfh If you run a multi-billion dollar company, they should employ a couple in-house IT people. It is insane to rely 100% of your security on a 3rd party. Especially when it is software that has direct access to the registry. This is computer usage 101. Just the fact that so many companies are using this single point of failure without any oversight should be a massive wake-up call to the world to realize how fragile these things are - for both the companies and the public.
You are totally correct in your company's approach to system updates. They are used internationally by vital infrastructure companies, so it was totally irresponsible for them to have not tested and retested the update BEFORE sending it to the world!
@@peter65zzfdfhif you can't vet it, and are stuck with mandatory automatic updates, start looking for alternatives. you cannot outsource your resilience planning and still expect things to work, ever.
It’s nerve-wracking to give a $250,000 immunotherapy drug if we can’t do our safety checks on our oncology unit computer. The patient lived. I think it took a couple years off my life.
My nightowl-ism has finally paid off. I caught this video brand new on the East Coast of the U.S. at 4:28 AM. Fascinating stuff. My sympathies to those having to contend with this nightmare.
Australian Woolies worker here, over half of our self checkout registers were gone at around 7pm AEST, and all of the office computers were experiencing the BSOD as well.
I am in Perth western Australia. Yes happened to me. I work in an office and the whole thing went down inculding the phones etc. The computers were also making really creepy sounds 😮Its actually so funny to see how everything and everyone just falls apart as soon as anything like this happens 😂 ridiculous. Our work finally sent us home after waiting 2 hours and realising it wasn't going to be fixed by their own IT department 😂
I'd like to congrat you on pointing out how doing the rename fix would disable the protection. You would probably NOT be shocked at how many wouldn't think of that. Good job.
So, a company that builds software to protect computers against cyberattacks, has deployed a software update that behaves exactly like a cyberattack. Got it. Aptly named company, by the way.
Oooouuuch
They were the cyberattack all along
You can’t perform a cyberattack on a device that’s offline now ;)
Nothing to see here, chop chop, move!!
if your product phones home and installs updates all behind the user's back, it's just part of a botnet really
Crowdstrike is the best anti malware company on the planet. Can't get malware if you can't boot. Giga-brain
Can’t get a BSOD from Malware if CrowdStrike beats you to it!
How one company, did more for Linux than Penguins on ice.
Genius!
Oops. “The best” 😮
AI level thinking there
"I don't always test my code but when I do it's in production"
Is that a joke
@@lhutton1yes ?
This brings back memories. When I worked for a retail shop in Dallas, we would say this line when we were making fun of the vendors for not performing proper testing on the code they delivered.
@@lhutton1 That is THE joke. Because pushing test code into production is an asinine thing to do, but it's probably what actually happened.
high productivity, bwhahahaha
Paper and pencil were the saviors of the night last night at our local 911 dispatch center. Medical calls, law enforcement calls, an oil spill, normally not a problem for a couple of overnight dispatchers to handle with Computer Aided Dispatch software, but it became instant hair-pulling frustration when suddenly everything had to be tracked on paper.
At least you folks had a disaster readiness plan in place! Good job!
Weirdest and ugliest disaster I ever dealt with was a flood - in the middle of a parched Middle Eastern desert. A large water pipe ruptured, the water running on a calcium carbonate layer under the sand, to fill manholes with telecom cabling inside, saturating the phone network into oblivion. Unbeknown to us for a week, one primary emergency generator's underground fuel supply was also flooded, the tank now full of water and eagerly awaiting the next week's power transformer failure to drop that same telephone network, server rooms and all war theater communications.
Who has flood emergency plans in the desert?! I did after that! Boy, but my face was red. And the most fetching shade of purple usually reserved for the pointy haired boss... When asked to come up with an emergency response for the future, I quietly handed over the new plan.
"We don't need to test this patch to the falcon driver, just deploy the update to all our customers immediately. Nothing can go wrong"
Seeing how much infrastructure relies on this makes how irresponsible a practice that is that much worse. They seriously couldn't test it?
As software test automation engineer, "Fuck it, lets deploy" - I did once when I was new at my job lol. Learned my lesson.
Never auto accept patches. Let the others take the hit first.
That's not really best practice for cyber security stuff@@TheBooban
Well... needs to be signed by M$ and C$...
hmmm
"Can't be hacked if your computer won't turn on - Crowdstrike
Super safe mode 😂
@@hugohabicht9957bruhhh hahahah😅
😂
Stole this and memed it
Comment of the day
The first commandment you learn in IT is that you never make a change on a Friday!! This is what happens when you hire too many consultants underpay your employees and have teams that are burned out. The level of accountability has cratered over the years.
Add changes on at COB Friday’ fired on Monday
nah, you can change all you want, just don't push to prod on Friday.
Think maybe was from X Flare that happened at same time, not sure why they don't want to mention that
If you have a mature deployment model you can deploy at anytime with confidence. They obviously don't have a mature deployment model. That's the tenant behind CI/CD. You should be able to integrate and deploy at any time, e.g. continuously. The C stands for continuous in CI/CD. Small changes with continuous testing, usually via automated testing methods, can help mature your model. I'll bet their branches have thousands of commits and they keep branches alive forever. They probably don't use trunk based or anything else that would indicate a mature pipeline.
@@BitwiseMobile Not pushing something to production on a Friday would be a part of a mature production model.
There is always, always, always a chance of something going wrong. You can never completely prevent that from happening. Because of that, it is unwise to have something go live on a Friday.
I think they're kidding us. I've been an IT manager for 40 years and I decide when our computers reboot. Restarting all computers worldwide at a certain time to trigger the problem is a complete rip-off.
The idea that the world is one bad Windows update from total collapse is absolutely hilarious.
😇😎😇
Tbh this is worse than a windows update because it is done automatically, and since it was a driver it wouldn't even let windows fully load before it crashed
It wasn't a Windows Update! Did anyone watch the video?????
@@x2desmit the people it is affecting are saying it was a Microsoft update but the news is all blaming crowdstrike. I think its possible microsoft is bullying another smaller company again.
Scotty, from Star Trek III: "The More they Overthink the Plumbing, the easier it is to stop up the drain.". Accidentally in this case.
Every CrowdStrike employee's Todo list for this weekend:
1) clean LinkedIn job history
2) apply for new job
3) profit
Literally on the profit side...
2) apply for a job at MediSecure... oh wait.
I work in helpdesk, today was a rough day 😂
4.) Sell Company Stock
change name by deed poll, someone wants your ass
My father just got diagnosed with colon cancer and needs surgery ASAP, now this happens and the hospital we want to transfer him to has an outage. Thanks CrowdStrike!
Sorry to hear. Best wishes to your father.
Sorry to hear, Hope he get well soon! ❤
So sorry. May God protect him
Really, critical services like that should not rely completely on their computer systems.
Sorry to hear that. My dad just received yesterday his chemo for his stomache cancer. We are lucky that he didn't want the appointment on a fridays.
90% computers at the court house had blue screen of death
Flights canceled, hospiltals closed for the day, supermarkets cant process card payments... i cant wait to also live in a cashless society
CASH IS KING.
@@oojimmyflip crypto could have been our lord and savior. but the hype&scam bros fucked it up so badly…
Whoever goes full on cashless as payment method are stupid and idiots.
Cashless? How about money free? Even if that point of sale terminal can accept cash, it won't be able to process the transaction anyway.
Cashless is not a problem, no backup / failproof rerouting is a problem. If there's no electricity your new electronic cash register won't let you use cash inside - unless it has manual override. Same with using digital resources.
I work at a FAANG company. Was having a peaceful night and 30 minutes from clocking out until I got a barrage of tickets for servers down and multiple laptops crashing at the same time. Was happy to hear that it wasn't just a me problem and it's an everyone problem lol
Haha same here
show me what is behind your lol
@PriCap what do you mean?
Refer me
@@Motinha-l7c It affects mostly companys and infrastructure world wide, partuially in china and russia too...
I'm a court worker in Sydney. Our entire judicial system shut down proceedings which could no longer be transcribed.
@@marcelogonzalezdanke I work for a law firm in Brisbane. Our system was unaffected but Monday morning might be interesting!
A tech company in Melbourne, most of windows laptop experience BSOD. We have to start our weekend early, while MacBook users still need to work, lol.
I'm a coles employee in Brisbane, went to buy some groceries, but could only pay in cash. According to an old coworker, all the back end systems (there are a lot!) have gone haywire. Thank god I have today off, but praying its back on by tomorrow.
Not good, justice will get delayed. Courts should not depend on the system.
Apples anyone?
Boss: Team, I told Crowdstrike we would be ready to roll out the update before the weekend.
Dev 1: No way, we need to test it.
Boss: Don't you want the weekend off? Roll it out immediately.
Dev 1: But it isn't...
Boss: Dev2, roll out the update, Dev 1said it was ready.
Dev 2: Okay boss.
Chaos ensues.
Boss: I told you guys we needed to test before rolling out the patch!
Sounds about right!!! Every boss does this 😂😂😅
I’m a physician who worked overnight last night. And our entire computer system, radiology system was down for almost 5 hours. It was terrifying.
Terrifying lol Hope you people never have to go to war
The fact the hospitals dont have paper fallbacks is disgusting. Our healthcare shouldn't be bound to a computer so tightly.
@@ImperiumLibertasthere is paper charting fallbacks but it’s tedious and a lot of processes take longer and coordination of care is less efficient.
@@ImperiumLibertasproblem is, paper takes significantly more time and effort.
That's like saying everyone should be driving around with paper maps. A lot of people do, but after years of depending on smart phones and GPS navigation, even those people with paper maps couldn't easily and quickly find their way.
The problem is that thanks to Obamacare hospitals were forced to use electronic medical records or they would get penalized on their medicare reimbursement by 3% and since that's the the entire profit margin they all switched a decade ago. So how do you retrain a 1k different roles from aids to physicians on process in less than thirty minutes with no access to charts or orders? Blame congress and Obama
Working from home, sitting on the couch and telling all my callers "systems down, check the news" . Pretty chill
*fist bump* I'm a phone jockey as well WFH, been enjoying the quiet, my customers call when their car breaks down or they wreck and need a tow or jump start or whatever, but I don't get new callers, I get the escalated ones that have been waiting 4-10hours and longer and I'm the guy that takes the cussing saying that Im a sup and blah blah blah, easy money tonight. I so feel for day shift, they just may all walk out and be done lol.
My work is also affected. And we also use crowdrstrike. But lucky enough my bank is working fine :).
Running Linux & didn't have a single clue till my mate mentioned it. Lmao
the things i need still work
@@DutchPyro2011 lol same here went from Windows to Linux Mint after Windows 7 expired. Never regreted it.
John releasing the video quicker than folks being able to resolve the issue 😅
True , companies are still looking into the issue
lolololol
You should see his tweets lol this is nothing 😂
Many people already use thix fix for over an hour. also it doesn't work on machines that have bitlocker - which majority of corp/large companies would be on - won't.
A official workaround was posted earlier on reddit already though,
I manage a Starbucks and live in a city of 100,000 people. I walked in at 5am saw the blue screen on our POS said "Huh.." unplugged it then plugged it back in and it worked. Our corporate called my boss and asked how our Starbucks is the only one up.. How was I the only Starbucks in a district of a major city that figured out. "Did you try turning it off and back on again?" Corporate should be paying me a major salary...
How furtuitous 🤣🤣accident by chance or just coincidence.
I might try this 😂
Not the Fix for the company I work for where we’ve been fighting with 20k down Systems. After 4 days we’re down to 1k
This is why you don’t put all your eggs in one basket. If every company is using the same system, if it goes down, everything goes down
The bonus stupidity of that company releasing the update to everyone at the same time, brilliant. Great to know we have such security experts being trusted by corporate idiots everywhere :D
Tell that to Cloudflare being at least half of the world provider
@@Vysair The same rules apply. Centralization is bad.
@@Ungood-jl5ep And of course, the same applies to the fact these corporations exist in the first place, and affect a disturbing portion of our world.
But even if "only" 1% of companies used this... the impact would still be absolutely ridiculous. _Especially_ considering some of those completely eliminate the competition around them.
And all this is built on the solid foundations of "trust me, bro".
Not really. The issue is ppl. Making soft good and secure requires way more knowledge and time.
Would that be my "perfect" setup, it would be back up in few minutes. Not to mention in perfect scenario it would simple not happen.
Keep doing "good" job companies. 😅😅
Putting outage info behind a login portal is some xfinity type bullshit
@RyanClone As a fellow veteran, I approve this analogy
Dumbest shit ever had to contact a whole different team for a response they never gave , to be fair I work for a shit company
The RyanAir of CyberSecurity
I can't believe they pushed a poorly tested update to the entire freaking world at once. You deploy to 100 systems first and wait 24 hours and if everything works you then deploy more widely.
Seems like patch distribution 101. The "I'm sure it will be fine" mentality is behind a lot of major screw ups.
Any company which is IT dependent and running more than a couple of systems in their network should be vetting updates on an isolated test system. I never understood the trust people have in these software developers.
@@Bob-cx4ze I used to have this mentality when I started in IT, now I test 100 times before sending something to QA :P
@@rmcq1999 Microsoft didn't give them a choice. They back end patched it themselves
We need system diversify ... They are focusing on DEi for people yet there are no diversity to services they are using ...
This should be a wake-up call to set all your updates to manual
Assuming that Crowdstrike has the capability to do that. They control a lot of things to a certain extent.
@@x2desmit I saw in a reddit thread someone claim they pushed for their company to turn off auto-update for CrowdStrike when they adopted it and they had no BSODs. Anecdotal, but I don't see a reason to disbelieve it.
Crowndstrike have remote access for drive updates on corporate windows machine all around the world, so a employee usually can’t control what gets update or not on their windows machine. The problems is even deeper, its related to how windows OS is designed, how corporations manage their OS and more
Thanks CrowdStrike. You took down Teams yesterday and I didn't have to talk to my coworkers.
Heh, had a ruptured pipe flood manholes with all of our base telephone lines in them once. Boy, it was a peaceful, quiet day, then someone had to come in and wreck it by reporting the outage and flooding underground...
Boy, but I was tempted to shut down the VOSIP server, as network services remained operational...
Not being able to use Teams is always a blessing in disguise. Teams is the worst team chat platform.
An Australian news channel described Crowdstrike as a malware company. Pretty sure it wasn't intentional but kind of accurate given the situation 😂
Freudian slip
Accurate
This was the company involved with the "hacked DNC server"... for some strange reason this private firm did the investigation and not cybersecurity experts within the FBI/NSA.
classic australian media being ignorant as fuck
It's lemmingware. Everyone uses it because everyone else uses it.
Wow, this could go down in history as one of the most expensive bugs ever.
Disrupt is gonna love this
I mean doesnt take much to see that it definitely seems like BY FAR the most expensive already. It's been affecting everything
@@raymondqiu8202 I think the company - CrowdStrike - by now is just done, they just don't know yet... Imagine the whole world filing a lawsuit against your company for damages...
This probably is. Cloudstrike stock is going down like the planes that are going down and getting grounded
What about a talking bug that lays golden eggs?
Linux user:
"*sips from the beer* I'll watch the world burn now"
Indeed! I only use Linux so I had no idea of anything that had happened until I saw the news.
Lucky that ssh vulnerability was found otherwise it would’ve been worse with all the servers running Linux lol
There's linux version too (though it wasn't affected this time).
@@overlordmae9090 who uses antivirus with linux😂😂
@@khelben1979 my colleague told me and both were like "thats why linux"
Also.. I do system development for 25 years... and I never saw someone stupid enough to make updates on a FRIDAY. Exactly because if it causes an outage you are fucked for 2 extra days! When I worked in health tech the golden rule was ZERO UPDATES allowed on fridays
Amen Brother ‼️Amen‼️‼️
They released the update on Thrusday btw.
zero updates for zero-days...
i'm not against holding feature updates back for testing but for security critical systems like this you'd want the program detecting and blocking new threats asap
No, no! Friday morning gets the low level driver updates. Friday afternoon, at COB, DNS gets a major update.
All one needs then is a bit of gasoline...
The bear of it is, Crowdstrike defaults to automagic updates through their own channel.
All of this angst over a damned logger.
I've been in IT for 30+ years. You'd be surprised. Special shout out to the Online Game Devs. We used to joke about how Blizzard moved to Friday game launches, after Korea took the week off to play the new Starcraft expansion.
Crowdstrike is an ominous name
Agreed.
Hidden in plain sight
Stormbreaker
@@AParkedCar1 Tinfoil hat is an hilarious game.
They were used by the DNC to find out who "hacked" their computers and their CEO, Shawn Henry, admitted under oath that there was no evidence that Russia (or anyone) hacked the DNC computers.
Down Under (Southern Hemisphere) all airports in chaos. Handwritten boarding passes, cancelled flights, handing out bottled water to stranded passengers. Broadcast TV networks running with reduced facility. Emergency services still reportedly fine. Bit of a mess. So....this high-tech "cashless society" we keep hearing about...maybe not such a good idea....
Yep no cash no buy
People forget about that part
Handwriting and cash, exactly what ALWAYS worked. Go figure 😅
And I remember the bad old days, when mainframe computers ruled the roost. If a crap Windows update or driver for Windows update pooped the bed, worst case, they'd dust off the old terminal to the mainframe.
@@spvillano
"Finally! My COBOL studies will pay off!"
I work for the largest financial firm in the country, this has had MAJOR financial impact on us. I speculate two things will happen 1) this software will be removed from every server in our firm over the next few days, 2) that CEO will be unemployed and Crowdstrike as a company is DONE
1) maybe. 2) doubt it. Crowdstrike is one of the best antivirus and security companies on the planet. For sure, this was a big f up, but I doubt this will be more than a blip in our memory, in retrospect. Solarwinds recovered. Equifax is still around. ConnectWise got through there mess up. Heck, McAfee and Norton are still around.
Remove the software, you'd terminated compliance for logging. Your company would be done.
But, you do you, I'll pay pennies on the thousand for the old office equipment.
Crowdstrike is a logging software and monitoring solution. If it's done on a system, it's logged and stored, fully searchable in their console. It's actually very good software that I've personally used in the SOC. Even chased down a Chinese government incursion that had been ongoing until Crowdstrike was deployed and we could gain visibility in realtime ongoing attacks. To get the company to spring for it, well a couple of SOX audits tend to be convincting.
Long and short, with that software the attack was traced to a literally forgotten DMZ test machine and the incursion into the global network was halted.
That all said, they're quite likely liable for this debacle - to each and every disserved client organization. Rightfully so, this isn't some Act of God, it's an Act of Moron. One never, ever, *ever* pushes a patch out on a Friday! For this very reason.
@@WarrenGarabrandt this. so depressing. so unfortunately, true...
@@WarrenGarabrandtFirms taking financial loss will likely litigate Crowdstrike to recover damages
@@hojo70 Per crowdstrike's TOS (I posted a link but my comment disappeared), they are not liable for any lost revenue, business opportunities, damages, etc. beyond the remedy of cancelling the license agreement and refunding any prepaid subscription fees. They also say they do not provide warranty for the software that it is defect free, etc. There's no case that can be brought against them unless a customer operates in a country that forbids those kinds of disclaimers and limitations of liability.
i never heard of crowdstrike, but i am pretty happy right now that i'm administrating Linux-Servers.
Rofl
I was about to be like "wait isn't that the thing on my OPNsense" but nah that's crowdsec. Nevermind.
Crowdstrike runs on linux as well...
Laughs in Linux and MacOS
@@ragayclark Linuix for the WIN!!!!
With over 10,000 users....40,000 devices...our entire company was nuked by this.
What time to be alive
I feel you.
I agree with the guy on hackernews it's the companies fault to enable automatic update for everything(using windows for everything is pretty stupid too but that's other problem)
Imagine making your entire company dependent on a single point of failure caused by a random software running in kernel level of Windows which can push an update any time.
What could go wrong lmao
@@zocker1600 exactly lol
"The cloud is just someone else's PC"
It's scary when you have no control over your own infrastructure and there are outages caused by a third party.
@@UnhingedSystems they'll tell you that's why they pay for IT, right before they realize it's outsourced and 100% remote. This is the day freelancers have an opportunity to make BANK
Despite warnings that The Cloud might be Pie in the Sky, people were still CONvinced that putting all their data to someone else's computer was a great idea. After this event, perhaps some will start waking up to the clear and present danger of totally trusting technology?
@@antispindr8613 they laughed at us when we
1) bought our own CDs and records instead of relying only on streaming
2) bought our own Blu-rays and books instead of "digital downloads"
3) run an in-home NAS instead of paying "only a few dollars a month" for cloud storage
Today, I'm gonna throw on a movie, read a book, and thank myself for having a shred of foresight. Might check Kasperky's tweets later to see how smug they get
@@antispindr8613 We're already starting to see more conversations about relying more on on-prem solutions and to use cloud technology sparingly. It isn't the magic bullet it sells itself to be. In my opinion the cloud has been a bait and switch game. We were promised it would be cheap and affordable and would continue to become more affordable as time went on. Instead we see companies with cloud products increasing prices aggressively and turning cloud services into a predatory system where they make companies reliant on them and then raise prices arbitrarily to appease shareholders. The cloud should be aptly named "Digital Landlords" because they now control the rent prices and when maintenance is needed they just pass the problems and the costs onto their customers. Shareholders want another big house or boat? Just raise prices 8-10% per year until your customers can't afford it anymore.
I utilize cloud services, but sparingly. Never put your eggs in one basket.
What are you smoking? This situation is a great ad for cloud services. The cloud services all fixed this issue on their end already and all their customers are back up and running without having to do anything. Everyone who ISN'T running on "the cloud" is having a very bad day right now.
The longer I work as a software developer, the less I trust computer software.
After 30 years in development, I've removed social apps, and Google (as much as I can) to minimize problems. I'd like to go back to pen and paper rotary phones, and talking face to face.
Linux is here
Use Linux and you will say that software development is finally a science
I've also done 30 years, what seriously worries is the fact that MS pander to the American government... the guys that operate a business model based on g3n0c1de and Microsoft products are vulnerable by design and always have been. Hmmm 😑
The main reason I strip as much of google out of my systems@m12652
Y2k bug was just 24 and a half years late.
Truly befitting for a calendar bug.
Why are you making me feel old though...?
Right! This was what everyone thought might happen on Y2K. I even wrote an essay on it in school 😅
@@Tarodev Because we are old. Lol
@@nozeonfroze1 Just seasoned, 55 is old
FYI: for thoes that aren't from Australia, the big thing about that "supermarkets go cash only" headline is that the banks and supermarkets have been trying to push everyone to digital economy for a while now. They even went so far as to cause issues with one of the big armored car companies. They want to force transaction fees on every one and to snoop on private exchanges etc.
Digital money no work? You no eat. Brave new world.
All of the cash registers in big business run on windows whether they accept card or not.
I heard that CBA today going fully cashless by Xmas 2024
@@aidancoutts2341 I'm old enough to remember that if the power went out or a register went down that you'd break out a ledger paper and write it all down.
Banks use unix systems and other legacy systems that are okay
People really don’t know what goes on in the background until something like this happens. So many things work behind the scenes and when some of them go wrong, this is what happens.
Many ppl dont appreciate IT at all. Budget for visible stuff (painting, digging) is always higher than budget for crucial operating systems. Doesn't matter company size. The ones that got best tech for what they need is the ones that smallest that would die in a day without it. All the rest that can still get away with it, do just that. And then it happens. I have 0 empathy. 😂
Do not try to normalize pushing untested software during production.
Ironically, it's a logger that blew up. Forwards all system events to a log server for examination and analysis for if there's an intrusion or other mass event going on.
Unfortunately, to gain such granular access, one needs a low level driver and apparently, this one got a modification and was thoroughly tested by Helen Keller.
I've had patches sail through testing in small test groups, only to blow up in production. This one though, it's winning a booby prize.
@@pawelhyzopski6456 not even a budget issue. A borked patch for a logger's low level driver is it, all of the IT budget in the universe wouldn't prevent or mitigate that.
I am glad of one thing. That I'm nowhere near Crowdstrike's corporate office or boardroom. I'm fairly certain that they're gonna need a shit ton of new paint for all of the now peeling paint...
@@spvillano That probably means the deployment model isn't mature. If you have a mature pipeline with healthy CI/CD things like that don't happen. You should be deploying to a staging environment that is identical to production. That would have found the issue on the first smoke test. A unit test should have been written already for the logger, and that would have been caught in development. When you code like a cowboy you have to expect some issues. It sounds like Crowdstrike is a bunch of cowboys.
The fact that they release their code without controlled test environments is insane! Someone's definitely getting fired.
Big sack coming from both microsoft and crowdstrike
you know what's funny... Years ago I wrote a short story for a writing class where we had to come up with new ways for the world to end...mine started exactly like this.
You are a wizard, Harry
Let me guess it had a gay black woman as the hero...
bruh
The same thing happened to my great grandfather out at sea.
Me too... I was just thinking about that.
How do these companies have such a big single point of failure, it is beyond me. Given choice of so many FOSS options..
Reminded me of Evil Corp from Mr.Robot.
@@nmisoo exactly, maybe we are experiencing something alike :D
Yupp. Just another reason why I'm glad I'm running Linux instead. And having a live booting Linux USB helps in case needs be
It's even worse that it affects all versions. You didn't have a choice on whether or not you got the update.
@@jholden0 woah, didn't know that, well brilliant move unfortunately if it was intentional..
Too many real eggs in a digital basket.
Obviously 🙄
I work at a hospital, the fix is simple, just deleting a driver file. The fix requires that you enter the bitlocker key and have local admin rights though, which has a few steps. Typically the fix is about 5 or 6 minutes a machine, but we have thousands of machines.
I work for a huge shipping company we all use everyday. Our entire system is down. Only the local network works but nothing else.
Same I work at dixie we're down.
No disaster recovery
Same here, working at another shipping company that everyone had heard. Half of our work laptop went bsod, but our web services are miraculously still up.
We felt like we’re less affected, but I’m sure our aviation sector probably went dead.
Good
Ouch.
Who updates on a Friday!?!?!
Someone is gonna get fired....
A company that needs to be up to date on threat detection
auto updates
Major updates are normally done on a Friday evening to avoid disruptions to business operations.
Barclays Bank does so does Tescos and Asda.
@@andrewortiz8044 Who tf doesn't at least deploy on a test machine to make sure it boots, THEN send the update out, though?
Like, I do tests for my frekking Minecraft datapack, HOW is this slipping past an actual company that pays people???
only work for 3 hrs, the rest movie watching and playing... keep it up crowdstrike we need it every friday
😂
Literally!! 😂
Same here brother😂
Crowdstrike - we brought you the 4 day work week!
What if you can't work because of this? Wont be funny then
I work for Meijer. Which is a huge shopping center franchise in several states. A lot of our systems are being affected by it. We started noticing strangeness around 330pm EDT yesterday.
You know outage is BIG when it has its own global Wikipedia page..
and you know it didnt "blow up the internet" when you can already see that site
@koneofsilence5896 phones aren't affected, mate.
@@nopers369 Phones visiting websites that rely on servers w/ crowdstrike software running would be affected
Stop installing software from BG. Install the virus instead.
Not how soypedia works but ok
This is what experts at the end of the 90's expected the chaos from the Y2K bug to look like. Thankfully back then the IT and corporate people agreed it would be a bad thing and made a massive effort to fix the bug before it happened.
Spent 98-99 testing payroll software to make sure it handled 4 digit yrs. Lol
Oh noes! The nukes flew, the world blew up, we all died and the lions and lambs started interbreeding!
Why, according to our intrepid host, even Amazon is offline - ignore that their site is up and merrily running.
Yep, everyone thinks Y2K was overblown. It wasn't overblown at all. It was a monumental success.
Paper, typewriter and pencils.. your comeback is near
All thanks to AI
we had a Power cut at the local supermarket about 15 years ago, and the only person who could sell anything was near retirement. no body else could do the mental arithmatic.
Let’s go to the Old School way Folks! 🤣
@@grokitall😂❤
No it's not. Trust me, it's not. Having experienced work in retail, hospital, Amazon warehouse, a hospital, web development, no one ever said just write it down. And the only places that had safety in place for this were those that were in IT AND did their IT themselves.
With tech as a tool, for most businesses it's safer to wait out until it's fixed than rely on workers writing down important info.
In retail, due to regulations and laws. Unless we go back to cash, which we won't, for consumer safety reasons you are not allowed to write down information needed to charge someone's credit card. The period of those mechanical credit card machines you use to slide the inked paper over is long gone. Does anyone even make those anymore?
In major hospitals not giving medicine to patient is legally safer than giving wrong medicine or a wrong dose. At the hospital I've been only white boards and clipboards in patient rooms, and front desks with their posted notes had stuff written down by hand. Again due to regulations patient information can not be written down. Because they would get sued if it came back that due to their negligence their patience had their identity stolen in such a stupid way.
Amazone warehouse, would be no different than if conveyor belt died. Eventually it will be fixed. There is only so much you can do without equipment working. Especially computers.
Web development, most applications and websites are not as important as their owners would like to think they are. That has to do with western business mindset. The whole idea of - if it's not making money, it's losing money - is ridiculous. Commercial websites are not losing money if they experience a brief outage. So, you spent few hours or a day with your website down, do you really think IT company will fix your problem first. Most online businesses don't spend that much on their websites as they think.
So, that paper is staying in a printing machine.
Did anyone see the Reddit post as John was scrolling? "Maybe the real crowdstrike was the friends we made along the way" 🤣
dude I just saw that lmfaoooooooo
@@vilelive Yes, it references "Maybe the real treasure is the friends we made along the way"
Most sane Reddit user.
This is utterly insane. I work in a medical laboratory and we had the BSOD at about 10 pm and we can’t process any of our specimens. I really hope this gets fixed
How many of those will have to be resampled?
i also work as a med lab and all our instruments are down. Thousands of patients on hold rn.
I work on lab equipment…should be an interesting day. Best of luck, all!
Do you have an IT guy?
Reboot in Safe Mode, then type: del "C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"
Then reboot the PC normally and it should be good
@@aisle_of_viewabhorrent troll attempt
Working in the financial sector...to say, that everything is on fire atm. would be an understatement.
Please explain. I've been on hold with my bank for almost an hour.
I was supposed to get paid this morning. I’m guessing this means all direct deposits are on hold or something??
@@Ausnapify My bank is working fine, its not affected with the issue.
@@Ausnapify Well, IF they are affected, they're on hold by themselves too.
This issue just locks them in place, like the KRAGLE from the Lego movie. Most people, that work at a bank have no idea how to fix this issue and even if they did, they do not have the access needed to fix it.
This is one hell of a ride for the IT-Department, especially if the device is encrypted, because only they have the key to open said device up.
Now imagine a whole IT-Landscape being locked up and you have 3 people, that are able to unlock it, because the IT is notoriously understaffed and underpayed.
At my doctors office all the computers went down and they couldn't do my prescriptions. It happened in the middle of my doctors appointment, if it had happened earlier I would not have been seen at all it would have been another week before I could get seen again and I wouldn't have gotten the antibiotics I needed.
CrowdStrike saying this "Isn't a security incdent" shows they don't understand the A in the CIA Triad!
Where I work I'm suppose to tell the customers how system issues are just updates so I can't help but feel this is a bunch of BS here and they have leaked a bunch of data. Given they are a security company it ain't gonna help them to be honest, if they can hide it will be best for that money they love....
Well it isn’t a security icident. If your device can’t boot, it isn’t vulnerable.
Reboot .... all good 👍
@@niceone2731 Availability is a major part of security - So the machine can't be hacked, great, but only if you are protecting data on it, or that it has access to. However, what if that machine is responsible for something critical? For example, patients can't get medication because the electronic prescription and patient record system is down.
@@TigCMWell I am aware about that and dealing with scalability with proper availability is a part of my job day in day out. I was just trying to make a joke, guess it didn’t land just like the flights. Oh wait, they did land finally.
I work at a large semiconductor company. Started work 1 hour later than most people. First thing I see when I open my computer is an unusually large number of Teams messages talking about all the things that don't work.
Same here.
Teams is working? Damn.
I also work in semiconductor. Luckily we separate networks, so only one of my 3 computers had a minor malfunction with database to email not working.
Nvidia? AMD? Intel? As if there are more than these in the entire world lmao. Nice job trying to obfuscate it.
@@SahilP2648 Nvidia and AMD don't make any chips themselves. They're purely design firms and outsource production to TSMC. Intel has their own fabs though.
So what have we learned? That consolidation in cybersecurity is bad. Everyone, including critical infrastructure, using the same closed source proprietary products and services is bad.
There’s gonna be some congressional hearings lol. I heard that using a company like crowd strike is required by many corporate insurance companies
🎯🎯👍
I am a huge fan of open source software, yet I would still remove the "closed source" from that sentence. Diversity is necessary even in an open source ecosystem. Systems should be interoperable with open protocols, but the implementation should never reach dominant market share for any critical component.
In Finland, the network of probably the largest fiber optic provider (lounea) was almost entirely down for over 6 hours.
Thanks for clearing this up. I didn't even know what Crowdstrike was outside of what Ford Mustangs do when you press the accelerator.
Lollllllll... perfect 5/7.
LMFAO
I work third shift at a hospital and was at a code blue during the outage. We literally couldn't access the patient's CT images to see if they had a perforated bowel that had incited the cardiac arrest. Not to mention the hospital's critical systems were crippled most of the night further delaying patient care.
Ah, my nightmare.
Nasty. That's a bit worrying knowing my brother is in for heart surgery atm!
Holy crap!
That is crap , why are your computers connected to internet anyway ?
@@jovictor3007so we can chart. Epic runs on internet.
Happened at Friday 3pm here in Australia. Boss told me to just go home. Thanks for the early start to the weekend crowdstrike.
And us, IT still fixing up the mess
Living in north mexico really close to the border, my aunt came to visit (she lives in the US) and when she tried to cross back into the US, all of their CBP systems were messed up, so no one would be crossing anytime soon, she had to stay with us for the night, but it was obviously related to this. This thing is highly disruptive for a lot of folks around here, the queue to cross has been massive today since it's all of the people that were meant to be back already, it's pretty insane.
Can you even think about the amount of claims that will be going to the CrowdStrike company, the total money amount must be crazy.
lawyers are going to be busy for a looooooong time
@@Bristolcentaurus I think they are missing out right now, they should probably be running to the airports... Lots of money waiting there to be made :D Imagine all those angry people heading for holidays... Class action material...
@@Bristolcentauruswhy are they gonna be busy?
CrowdStrike struck the world.
It's a loaded name
Share price is also going to get struck down tonight
They really lived up to their name, didn't they? lol
- cutting costs to the extreme
- pushing AI hard, and not relying on experienced humans
two main factors
Crowdstrike evolved to Worldstrike
Friday. What a prefect day to deploy to the production (and customers)... Or was it is a case of "it is not yet Friday, it is 11:50pm on Thursday. Lets deploy!" ... 25+ years on IT and software development has teached one thing, Friday is like Sunday, you go out and pray that systems will survive till the Monday morning and you do not touch anything. Except if there is a full fire on the environment... :P
Now this is gonna have a new lesson for IT personnel. Endpoint protection segmentation. Not sticking to one endpoint solution, although this will be harder to monitor.
The entire world just experienced the "Windows Experience" at the same time
And this is a great thing as I hope it will be a wake up call to stop using Windows servers for backend and critical stuff. What I have read here traumatizes me: emergency servers are on Windows? WTF is that??
@@TraumatreeIt's not a windows issue, it's a crowdstrike problem.
That's a "Crowdstrike" experience. Know the difference.
@@something86783yes it is a windows issue. the whole reason the malware market exists is because microsoft ran security by obscurity for decades, resulting in the windows malware ecosystem.
add to that the need for pretty much every program to need root access to every file on the machine to do the install, and you need antivirus and other windows security software to get around the insecurity model of windows.
add to that that it sounds like the channel file in question sounds like a config file, and anyone doing tdd or ci writes a config file tester which is run before any config file is committed to version control, precisely so that this sort of thing is not likely to happen.
a lot of this type of software does not even have a mechanism where you can defer the update, with it happening immediately it is available. if you could designate one machine as a canary, and require a successful update and reboot to publish a file that all your other machines can access to gatekeep the update, it wouldn't be so bad.
Tine to switch to Linux or Mac folks
You nailed everything, I’ve been dealing with this all afternoon first hand for internal users. The mainstream media on the other hand have no idea what they are talking about.
Honorable mention: DO NOT DEPLOY ON FRIDAY
Word
Meetings Mondays, patch Tuesdays, pray Wednesday, rollback Thursday, chill Fridays 😅
in their defense, they're an anti-virus company, so if a new vulnerability is discovered they probably have to deploy a patch as fast as possible. Even on a Friday
@@ahdog8there are ways to test this stuff before it hits all of the public.
netflix do rolling updates, where they pick the least active region, do gradual migration, check if it does not scale, and then roll back if it has problems. this allows them to do multiple updates per day, and if they can do it, so can others.
I'm going to school in this fall for IT Systems and security, so it is fascinating to see your expertise, and breakdown of all of this. Great job! :D
look into resilience engineering (lots of lessons from 9/11 available online) and chaos engineering (netflix and the simian army).
that covers what happens when stuff breaks and how to make it less of a problem, and breaking stuff on purpose to test if you got the other stuff right.
This is why updates should be controlled by a businesses IT team, and not done automatically. At the company I work at, software isn't allowed to auto-update, updates are managed as part of a change process, and the update gets rolled out slowly to a small number of users first!
I can see why that’s good practice, but what if the software is a critical piece of your security infrastructure? I don’t have the CrowdStrike release notes but what if it was an urgent update that needs to be sent out ASAP?
@@lukeh990 priority 1 it and get on it asap. It still needs to follow change procedure though
@@lukeh990update notifications are vital, mandatory updates are a sign to move to a different provider.
it is basically saying that it is fine for someone else to decide your machine is so unimportant that it is fine for them to immediately stop what you are doing, and change something so you have to have a complete reinstall, and hope you have everything backed up and that everything else can wait.
if you cannot see why that might be a problem, go back to pencil and paper.
We are in damage control this morning for a couple of servers. Not going to say which company it is but the fact it’s happening just as payday processing is happening could be grim
I got my money. 🤷🏿♀️
Our company sent us home early and boi am glad for that.
I was shocked I got my check this morning, I pray for everyone else does too... I can't help but if you mean ADP......
I work nights and from home and still on the clock, while this has been fun tonight won't be and well day shift just may quit after they have to deal with the cussin about how the night went. My callers are plp calling their insurance needing a tow or roadside, I deal with escalated customers in the manager compacity so I hope they don't fix it before my shift ends I don't want to have to get on the phone this morning it will be a nightmare.
@@TethoSama one benefit for doing updates/changes on a Friday I guess!
@@PatrickBaptist hahaha oh boy. Not going to reply to your comment but while “we” got everything back online this afternoon the next few weeks are going to be great fun replying to the millions of clients asking for an update
They should change their name to WorldStrike.
Even though it was 20 sec ago that’s underrated
This is awesome, I love it
Thank you Crowdstrike to promote real OS like Linux
Here in the netherlands hospitals are cancelling non-critical surgeries
... and crowdstrike said my qualifications weren't good enough for them
Well, you couldn'thave made this much of impact.
Imagine how overqualified the engineering team at CS is 🤦🏿
This happened all the time at a previous company. The IT folks would force an update without fully testing it, then it would blow up, and then IT gets overtime hours and busy-looking tickets from people screaming for their help. Testing and scenario planning is tedious and boring, but it has to be done. It also takes people who have sharp imaginations...as in, what could go wrong, and let's test for that.
Worst part is since this is happening on every pc, if they had tested just one it would have been fine
Yeah, went through that. Had 10k users, was only allowed a test group of 20. Obviously my fault.
So, I left them to swim in their own swill and amazingly, the entire organization failed.
Guess that was my fault too. Remotely. Without access. Without even being in the country. For I am all powerful with that mighty can of beans.
FedEx? 🤣🤣
It also takes a company that invests in resources and people to be able to test. Most run so lean we're all just hoping nothing breaks.
Here at UPS we had almost most of our buildings (~60%) halt to issue out fixes. We were able to get our technicians to start deleting the files off our servers to get the buildings up and running but 8 hours of downtime nationwide. The solution gives us a fix for a root cause we're still addressing the symptoms of the crashes such as lost data/corrupted files we had employees lose complete access to their C: drive removed the file type complete to RAW fs. Crazy to see the world wide impact.
The real problem is that the update got installed everywhere at the same time.
hmmm yeah... i mean deployment should be localized first at least 😂
Yeah, we got BSOD one after another around 3:40 PM, then my friends said their companies experienced the same issue about the same time. At the time, I know something crazy is going on……
It actually didn't, which is why it was seen as a rolling outage starting in Australia (first place with significant modern infrastructure where 0:00 occurs earliest)
@earlnoli or make someone else try it first 😂
Well for that reason you normally have Clients run n-1 Agent Versions behind the latest Update. Sorry to say, but all companys affected did not follow proper Patch Management Guidelines when it comes to Deployment of XDR Agents
It's surprising how so many news outlets are blaming Microsoft for the issues instead of CrowdStrike.
They aren't blaming Microsoft, rather that it's only affecting Microsoft computers.
I had news on my radio say a sorts of "it's a problem only on devices with windows but it's CrowdStrike's fault"
They both are shite and not worth a good word.
Doesnt Microsoft have responsibility since they work with them?
@@fire.smok3 It is windows that refuses to load...... Dunno why it can't just fix itself...
lol I work in IT for a government branch, had a day off today woke up and saw the entire world taken offline. Thank god it’s not my problem 😂😂
Betcha they're going to call you in. lol
Testing Testing Testing 😊
Oops 😬 🤭 😊
That's your Friday evening F'd 😂
@@aisle_of_viewjust don't pick up
I work in IT at a medical company. We weren't directly affected but many things we use to communicate with other centers went down and surrounding hospitals were having lots of problems
I just had to tell a guy whose flight was canceled and I know you booked a room at this hotel but our systems are down. Oh also every hotel in the area. Oh you’re having an anxiety attack and need an ambulance their system is down too.
You need an ambulance ride to the hospital for an anxiety attack???? Seems a bit overkill.
@@LoganChristiansonsome people seriously can’t handle stress or plans changing
Sounds like a pretty normal night on reception back when I did it.
@@LoganChristianson When you're running a business you don't take any cha- oh wait, I just remembered what this video is about lol
Does that mean I don’t needa go to work today at my hotel:)?
Rule 1: Do not push to production on Friday afternoon. Rule 2: Follow rule 1.
i’ve said exactly this so many times in my career 😂
The problem is a corrupted channel file. I don't know what that is, but I suspect it contains malware lists or something similar. If that is the case, it makes sense that it is pushed immediately. Still should have tested it though. This should not have passed tests, but it does make sense that it was released on a Friday, and quickly at that.
@@entcraft44 nope, no malware lists at all. Crowdstrike is a fancy logger, with an analysis panel at the security operations center that utilizes a Splunk styled software interface.
Overall, I quite like their software and implementation - until now.
They push the updates to their software. It's inexcusable to ever consider pushing a low level driver out on a Friday. If anything does sideways, it's equivalent to pouring gasoline on a foundry floor while it's in use.
@@spvillano If so then I retract my statement. But it could still affect other products in the future.
@@entcraft44it can effect any product with automatic updates, especially if it needs write permissions everywhere to install. this is why every tech lead worth the name won't install mandatory automatic update software on any machine they are responsible for.
Thanks for talking about this as a Crowdstrike issue and not a Microsoft update fault that took everything down. So many news outlets dont understand an thing tech related and are all reporting a bad Microsoft update broke everything 🤦♂
Thanks for the heads up. Shoutouts to IT folks caught up in this; you're all doing awesome work
"Let's test on Production!"
"Do it. What could possibly go wrong?"
"We can do this Agile, release it... analyse potential issues, mtp update"
"Yeah great idea, let's go.. TGIF!:
Push to prod, pray to god
"Dew it"
"Don't worry, it's a pre-alpha release. Nothing will go wrong!"
As an IT professional for 48 years I never place products on systems that can cause a system outage. We test and vet all updates and patches before going wide spread My company is up and running and so are the 5000 servers and workstations we service and support. Things do happen...
The thing is, apparently no one that uses crowdstrike can vet anything and the whole point of it is to do zero touch patch exploits for emerging threats. So it’s a lose/lose situation, you remain vulnerable until you manually vet and patch and hope you don’t get exploited in the mean time every time there’s a new vulnerability, or you get a patch like on occasion this that ‘fails safe’ and locks even your users out.
@@peter65zzfdfh If you run a multi-billion dollar company, they should employ a couple in-house IT people. It is insane to rely 100% of your security on a 3rd party. Especially when it is software that has direct access to the registry. This is computer usage 101. Just the fact that so many companies are using this single point of failure without any oversight should be a massive wake-up call to the world to realize how fragile these things are - for both the companies and the public.
You are totally correct in your company's approach to system updates. They are used internationally by vital infrastructure companies, so it was totally irresponsible for them to have not tested and retested the update BEFORE sending it to the world!
@@peter65zzfdfhif you can't vet it, and are stuck with mandatory automatic updates, start looking for alternatives.
you cannot outsource your resilience planning and still expect things to work, ever.
Cyberdyne Systems releases Skynet
Skynet is useful. This is stupidity 😂
Speaking of which I just recently repaired a robot. 😂😂😂
@@someoneout-there2165😶
@@someoneout-there2165 Did it call you a meat bag or ask for a plasma rifle in the 40-watt range?
@@blshouseI understood that reference 😂
Strong argument now to bring back Windoze XP
In the movie Independence day, they took out the mothership this way.
Thanks for the spoilers
@@ZakuroBananabrother that came out decades ago
🤔 you may be onto something
It’s nerve-wracking to give a $250,000 immunotherapy drug if we can’t do our safety checks on our oncology unit computer. The patient lived. I think it took a couple years off my life.
I doubt my insurance would cover 1/4 million.
It shouldn't cost 250,000
😢
More nerve wracking that a drug costs $250,000
@@solido888facts
been crowdstruck😥
No CrowdStroke
Yeah yeah yeah yeah crowdstruck!!
They're definitely using this as a cover for something else.
My nightowl-ism has finally paid off. I caught this video brand new on the East Coast of the U.S. at 4:28 AM.
Fascinating stuff. My sympathies to those having to contend with this nightmare.
Australian Woolies worker here, over half of our self checkout registers were gone at around 7pm AEST, and all of the office computers were experiencing the BSOD as well.
I am in Perth western Australia. Yes happened to me. I work in an office and the whole thing went down inculding the phones etc. The computers were also making really creepy sounds 😮Its actually so funny to see how everything and everyone just falls apart as soon as anything like this happens 😂 ridiculous. Our work finally sent us home after waiting 2 hours and realising it wasn't going to be fixed by their own IT department 😂
I'd like to congrat you on pointing out how doing the rename fix would disable the protection. You would probably NOT be shocked at how many wouldn't think of that. Good job.
I work for a call center that takes call for several companies and we started getting calls about this around 10pm. It's huge!!