Back to posting here on the channel, subscribe for random videos going forward.
He found the password!
I was working and everything was kind of silent and boring, but my supervisor still gave us stuff to do, which was alright I guess.
6 MONTHS BRO REALLY 6 MONTHS 😮
Well, it reminds me of when Avast started failing on Win XP and causing boot loops.
I didn't even realize this was on your second channel, I just assumed I was watching this on the main
he remembered his password
😂😂😂
ON GODDDD 😂
No joke, 6 months since the last video. Welcome back, plus wave gang
Y2K actually happening 24 years later is crazy
There's another Y2K coming in 2038 too, when 32-bit apps will stop being functional. Get a head start.
Why 2k?? What's wrong with 2k??🏀🗑️
@@YokiBrewster that’s how we know u young lil bro 😹
@@suntannedduck2388 The Unix time counting bug?
@@YokiBrewster Old operating systems stored years with only two digits, so the year 2000 supposedly reset the calendar back to 1900.
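For anyone curious about the 2038 reference: systems that store Unix time as a signed 32-bit integer run out of room on January 19, 2038 at 03:14:07 UTC. A minimal Python demo of the overflow:

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit Unix timestamp tops out at 2^31 - 1 seconds past the epoch.
MAX_32BIT = 2**31 - 1
print(datetime.fromtimestamp(MAX_32BIT, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00

# One second later no longer fits in a signed 32-bit field:
try:
    struct.pack("i", MAX_32BIT + 1)
except struct.error as err:
    print("overflow:", err)  # this is the Year 2038 problem
```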
I work in IT and spent 10 hours working with users fixing their computers and fixing our servers. Most of the users were remote, so trying to walk them through deleting the file was a nightmare.
"OK, now type cd space forward-slash d, yes forward slash is on the right side, then put a space, then a d, then colon, yes that's the one with the two dots"
I don’t work in IT, I’m an Analytical Instrument Engineer, and I still had to do the same fucking thing. That’s how bad it was.
Thankfully our organization uses Cortex so we didn't have that issue, but we got to help our supreme court figure out our courthouse users. Since we didn't have admin privileges, and the SC IT don't enforce BIOS passwords, we were able to turn off Secure Boot and then boot to HBC via USB. We only had 20 workstations to fix, and two of us got them done in about 2 hours. It sucks all around though. I feel for you all. My department uses MDT and SmartDeploy to reimage, so if needed we would have just redeployed Windows, but we decided to configure restore points just in case it happens again. Good times.
@@Woodztaomg users are so dumb!
Me: Use the slash next to the enter key!
User: I did but it's not working.
Me: take a picture on your phone.
User: sends photo with the wrong slash.
😒
Damn that's really hard! 😩😔😥
The crazier thing is that he remembered this account existed.
I am a database administrator and I can't stress enough that you always, always need to test any change and also have a rollback plan.
Can't roll back when your PC's bricked.
Yup, that is why you update only a few PCs as a test and THEN push the update to everyone after a few days. At our job we work with Patch Manager from Zoho; it's quite good.
Yeah usually big application changes should go through a test environment before being rolled out to production
@@RMX7777 Yes you can, if you've done a failsafe correctly.
lmao, I was just telling my mom that they should've tested the update before a rollout.
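The test-first approach this thread describes is usually formalized as ring or canary deployment: push to a small group, let it soak, check health telemetry, then widen. A toy sketch of that gate; the ring names, sizes, and health check are all hypothetical:

```python
import time

# Hypothetical ring deployment: canary machines first, then wider rings.
RINGS = [("canary", 10), ("early", 500), ("everyone", 15_000)]
SOAK_SECONDS = 48 * 3600  # let each ring run the update before widening

def ring_is_healthy(ring: str) -> bool:
    """Placeholder: in practice, check crash telemetry and heartbeats."""
    return True

def rollout(update_id: str) -> None:
    for ring, size in RINGS:
        print(f"pushing {update_id} to {ring} ({size} machines)")
        time.sleep(SOAK_SECONDS)
        if not ring_is_healthy(ring):
            print(f"halting {update_id}: {ring} is unhealthy")
            return  # stop before the blast radius grows
    print(f"{update_id} fully deployed")
```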
I think the crazier thing is you're back on this channel. Welcome back!
I work as a nurse in a hospital, and this affected almost every computer in the building. It made passing meds to patients difficult as we couldn't use the computer to pull meds from the med room, and we couldn't access patients' electronic medical records. IT managed to get some of the computers running by the end of my shift, but even then, none of the computers in patient rooms were working, and there were still a lot of delays when trying to work with other departments in the hospital because of it.
This is why push updates and auto-update are so dangerous. Updates should be a choice EVERY TIME to stagger the rollout and ensure it’s safe by updating a few machines first. This is also why redundancy, diverse services, and on-site solutions are better than everything relying on a single, remote service
It wiped out all of our most important systems at work: basic payment processing, our websites, our timekeeping, our order systems, and even the servers we need just to get our computers turned on! It was chaos for a whole day, and some issues lingered into the following day, but everything seems to be back to normal now. I work for a major business and heard about the flights having to land because of the outage, but didn’t know that’s the system our company computers, landlines, and cell phones all use. It’s insane how much we depend on those systems. Years ago we would need double the people to do the same jobs. I’m just glad it was only one day at work. Now I’m on vacation.
This event crippled our town. The hospital is basically shut down (which is very scary), banks are not operating, and most businesses are closed because all they can accept is cash. Thankfully we still have water, electricity, and internet for now, but no real timeline of when everything will open back up considering most places here don't have their own local IT departments.
OMG So scary
Your town needs to get off of crowd strike
@@Matanumi Agreed!
People don't realize how connected the world is on a server. Imagine a hack, or someone taking control, like a rogue AI
That crimson AI
Summer Wars
That scarlet ai
This just proves how easy it is to take down the whole country, scary thought with how connected we are
Damn, I'm about to rewatch that anime 😂 nice callback @@zero9112
"crowdstrike is the person out of final fantasy" had me laughing so hard
My eye is twitching
Crowd Strike is on the box of a Cloud Strife figure you found on Wish/Temu/AliExpress.
🙄
😂
Bro finally remembered the password for the 2nd channel after 6 months
What's wild to me...the fact that companies don't have bootable image backups for their computers. We normal tech folk are always told to have bootable backups and system images of our computers....why can't companies do the same?
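For what it's worth, Windows can script exactly that with its built-in wbadmin tool; a minimal sketch, assuming an elevated prompt and E: as the backup target:

```python
import subprocess

# Sketch: create a bare-metal recovery image using Windows' built-in
# wbadmin. The E: target volume is an assumption; run from an
# elevated prompt.
subprocess.run(
    [
        "wbadmin", "start", "backup",
        "-backupTarget:E:",  # destination volume (assumption)
        "-allCritical",      # include everything needed to boot
        "-quiet",            # skip the interactive confirmation
    ],
    check=True,
)
```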
I work in IT for the TSA/DHS and support all the airports and subsidiaries in Nevada - Our systems were not affected, however our airports are chaotic to say the least.
I work as a manager of an IT department and we got destroyed on Friday.
I work overnight at a hospital and it happened between 1am-2am. Almost all the computers were blue screened. My job was affected for about 9 hours.
Shout out to my fellow IT comrades. It was a brutal 12 hour shift today.
holy cow the computer crash zapped this channel back to life
My whole company got hit and I work in IT. Going to sleep it has been a crazy 10 hours
You sticking with CrowdStrike?
I work in IT. We were lucky enough to dodge this bullet. I feel for my peers who were not so fortunate.
How can Crowdstrike even survive this? Just imagine all the lawsuits...
I hope they have good insurance and better lawyers.
most likely a government bailout, which means the taxpayer
@@ramonosuke beautiful
I work for a local city government and we had about 1500 workstations and servers that we had to physically go to to fix this problem. We’re still working on it today. Luckily it’s not just me. We have a team of IT people, but this was a major issue that cost the city hundreds of thousands of dollars.
I'm a LEO with the Port Authority of NY/NJ, and this affected me big time handling traffic at the Port Newark Marine Terminal. We have 3 shipping terminals where trucks go in and out with containers, and 2 of the 3 were affected. Truck drivers were stuck for hours, grinding the entire Port to a halt, with our traffic eventually affecting Newark Airport's traffic as well as the town of Elizabeth. It was a horrendous day pushing traffic all day in 90+ degree heat, dealing with the frustrated truckers, who were just wasting fuel going nowhere while we pushed them north and south just to keep the traffic flow moving. Worst day at work in my 12 yrs here with the Port Authority
Man, the amount of reliance our society places on the internet/network is insane... too much for what's going on. I saw this with bank tellers as well (yet the ATM was fine).
I think Microsoft's approach is not to make computers boot despite this specific update, but to make them boot regardless of any update, so this doesn't happen again
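That is roughly how crash-loop resilience tends to be designed: count consecutive failed boots and, past a threshold, quarantine the most recent change instead of retrying forever. A toy sketch of the idea; every name here is hypothetical, not how Windows actually implements it:

```python
# Toy sketch of crash-loop protection: after N consecutive failed
# boots, load without the most recently installed third-party driver.
# All names are hypothetical; this is not Windows' actual mechanism.
MAX_FAILED_BOOTS = 3

def quarantine(driver: str) -> None:
    print(f"boot loop detected; starting without {driver}")

def on_boot(failed_boot_count: int, last_installed_driver: str) -> int:
    if failed_boot_count >= MAX_FAILED_BOOTS:
        quarantine(last_installed_driver)  # boot degraded but alive
        return 0  # reset the counter once the machine comes up
    return failed_boot_count
```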
Working emergency services, we didn't get blue screens, but pretty much all our core systems were down: we couldn't access servers, and web services weren't functioning. My VM was completely down, the system that handles my radios for Motorola was down, Computer Aided Dispatch was down, my online portal and tools were down, even our paging system wasn't functioning, which meant calling individuals one by one manually instead of being able to alert groups statewide about incidents which require a response. All of that was down for 13 and a half hours.
My company moved off Crowdstrike 2 years ago; sighing in relief today. Good luck all.
This is scary how bad it got and how 1 update did this
And who’s surprised that a windows update of all things caused this
Our world is built with toothpicks painted to look like steel beams
CrowdStrike had an epic battle with his arch rival, Sephisoft.
Not just the end user PCs were affected. We also had servers stuck in a BSOD loop. This came right after a big Azure DevOps outage.
This is the exact reason some people said not to rely heavily on technology. These are the consequences of us relying too much on tech.😮
My roommate got sent home for the day because of the shutdown, I got called off because it’s gonna be slow lol
I genuinely wonder how many people who were in critical condition in hospitals lost their lives because of this.
I'm working in public service in Germany, we're an IT team of four in charge of around 500 coworkers, and I'm SO glad I never heard about crowdstrike before yesterday 😮
Yeah I'm stuck in Las Vegas.... I was supposed to leave this morning.
F
F
I imagine there are worse places to be stuck
Half my company was down due to this.
I was affected by this and our IT team was stressed lol.
This happened last night Thursday, btw
All depends on your time zone. I was at work. It was 1:30am eastern.
My site started feeling the effects around 6pm EDT on Thursday. My guess is that MS Azure was one of the first to get hit with the update.
This may sound wrong and crazy but didn't this Crowd Strike incident happen before like in the last 4 years also around July? My memory is doing a Deja Vu where it seems there was a big catastrophe with airport computers and first the media narrative was some bad Microsoft Windows update as the cause but then later it turned out to be some company's software that people who are not techies have never heard of? Don't get me wrong I can dislike Microsoft as much as anyone. I especially can't stand Windows 11. Mostly, I am wondering, is my Deja Vu feeling correct or am I losing it?🤔
This video is how I realized my banking app is working again. Thanks lol
Cloud struck. What a shameful embarrassing failure 😳
And this is why you don't force updates.
I knew I was in for it when our org sent an automated alert at 4 in the morning. At least 1/2 of our 15k devices were affected.
The amount of calls for on-site repair I had to do for all these damn PCs was ridiculous. Had to do each one by one and verify they were calling back to servers. Got quick enough to get done with 5 or more PCs every 15 minutes. Feels like I must've touched upwards of 1000 today.
My mum is a call centre manager for a phone company in Australia and their PCs are down
We have to remotely fix our home-based users. It's crazy that one update can bring the country to its knees.
The memes though! I can already see another Family Guy episode on the second Y2K.
I work online as a Video Relay Service Interpreter. The general email was that people who left their computers on overnight couldn't log into work. Lucky I decided to shut down the night before. Back-to-back calls all day.
That was the worst day of work I’ve ever had.
I work at a Hotel, and my managers are unable to tell us what rooms need to be serviced or have been checked out
Jon remembered his password finally!
This after Intel server and desktop CPUs crashing at a high rate, and gaming servers offline on Halo and other games. It's bonkers. Tech is falling apart
Yep, spent all day fixing this issue at the very large company I work for, one computer at a time. Not the Friday I had planned lol.
Another reason this is taking so long to fix is that if your drive is encrypted (and best practice is to do so), when you boot into safe mode you need the bitlocker key, which the average user doesn't track (relying on IT to have properly stored it somewhere).
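This is why managed fleets escrow BitLocker recovery keys centrally (Active Directory or Entra ID can do this automatically). For a one-off machine, the built-in manage-bde tool will show the recovery password; a small sketch, assuming an elevated prompt and the C: volume:

```python
import subprocess

# Sketch: read the BitLocker recovery password for C: using the
# built-in manage-bde tool (elevated prompt required). Managed
# environments normally escrow this key to AD / Entra ID instead.
result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # includes the 48-digit recovery password
```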
The IT Crowd Strike, sounds like the sequel we never had
Didn't expect to see this channel again
I got 3 automated work calls about outages this morning. Sounded like spam, didn’t realize it was legit
Several gas stations and stores in my area were affected. Only taking cash. A big university cancelled Friday classes because of it.
At my place of work, 90% of the staff (several hundred people) got to take the day off with pay. Of course, my team was the only full team that was forced to stay, as we were still able to do our jobs with the 7 computers that were functional. If we wanted to leave, we had to use PTO or receive a hit on our attendance (which can lead to a suspension at 6 and termination at 7) if we didn't have time to cover it. So while the rest of the building went home, we got to stay and deal with the additional headaches of processing the entire team's work through 7 computers. Thanks Crowdstrike.
Crowdstrike made a big ole oopsie
The meme is that they thought cutting testing/QA was a cheap way to increase profits. It is hard to ignore.
First problem: Forced updates
If there was a way to delay updates even by a few hours, this wouldn't have been so bad or as widespread.
Second problem: Backup systems getting the same issues as active systems. There needs to be a disconnected emergency backup situation that can kick in and take on the workload while limiting certain processes.
My day has been, and continues to be, very unpleasant because of this. It hit our systems around 10 PST last night, and I'm running on very little sleep.
My work computer was affected, and it was widespread throughout my company. My IT team did a good job fixing everything
I remember a couple years ago there was a problem with a Windows 10 update that had a conflict with Avast anti-virus software, and it ended up completely screwing up the update. The fix was literally having to use a boot disk to boot up Windows properly.
Everyone’s computer on my team was down until the fix was sent out at around 11.
Except mine. Mine worked just fine. All day. God damn it
I wanna thank crowdstrike for almost half a day off lol
I work Security at a Hospital and we had a huge Issue overnight that was a crazy thing to witness for sure. There was a time where we all thought some insane cyber hack was happening too!
Yeah, today was bad. I work for an archiving company and dear god, walking into that this morning was rough
The crazy thing is that this will be forgotten in around…. 12 more hours
Got me at work as well. My IT guy was an idiot and couldn't restore my laptop on his own even with step-by-step instructions in front of him. A coworker had to walk him through it like he was talking to a child.
I’m in tech support but I’m on vacation, thank God
Oh my day of work was very affected by it. I work in a hospital in the operating room & we couldn’t take care of patients or perform surgical procedures due to this outage.
I work at a newspaper and some people were complaining they didn't get their direct deposit this morning.
Direct Deposit's your friend.
@@Aries73 That's what I meant, sorry, some people were saying they didn't get it
Someone just presses the wrong button and the world just pauses
John! When you think you're getting hacked, the first reflex should be to pull the internet down.
By making those computers inoperable, they're IMPOSSIBLE to compromise!
This is why you push to test servers months before pushing to live. Wild unprofessionalism.
Finally, the channel is alive
This hit Australia 🌏 🇦🇺 at 3PM Friday afternoon yesterday. News channels couldn't show the text news banner at the bottom of the screen.
Working Saturday and Sunday on this. Around 10K endpoints we have to address
1200+ calls on Friday. And Saturday and Sunday it’s all hands on deck with all IT (including web devs with no phone experience) calling out
Been working it for 2 days. My Friday started at 12:30am and ended at 9:00pm…. For my Saturday to start at 6:30am and it’s still going. 😢
The nature of it being a manual fix puts it at about 15-20min / pc * number of machines impacted and for some that estimate is in the 10,000 + devices.
Will be interesting to see if CrowdStrike is held liable for the losses. Stay tuned….
I’m sure many video game development teams lost several days of productivity.
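The estimate above works out to thousands of tech-hours; a quick check of the math:

```python
# Back-of-the-envelope cost of a hands-on fix, per the estimate above.
machines = 10_000
for minutes_per_pc in (15, 20):
    hours = machines * minutes_per_pc / 60
    print(f"{minutes_per_pc} min/PC x {machines:,} machines = {hours:,.0f} tech-hours")
# 15 min/PC x 10,000 machines = 2,500 tech-hours
# 20 min/PC x 10,000 machines = 3,333 tech-hours
```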
From someone sitting in an airport during all this, not a good time 😂
And they called me crazy when I built this Y2K survival bunker.
Who’s laughing now?
Edit: Nonperishable foods apparently do eventually expire. My tummy hurts.
I work overnight at a hotel and we were confused because everything just went down.
It’s always a weird thing with security. It can work 99.99% of the time, and no one bats an eye. It messes up once then the whole operation is under a microscope. Multiple car dealerships were down as well as a bunch of local urgent cares in my area.
If nothing else this has proven just how over reliant we have become on technology
A few years back my PC did a Windows update overnight and I had to take it to Best Buy to fix whatever the update messed up. I was just so taken aback, because I'd only ever heard of a Windows update bricking a PC, until it actually happened to me. Ever since, I've known how fragile it really all is.
I love and am also not surprised you covered this!! You are a great person.
This kind of reminds me of that one update Nintendo did for Animal Crossing: Wild World where a malicious kind of item was sent out that literally ate the floor wherever you put it, and it could never be recovered
We got lucky, we had our shop ready at 11am, maybe ten minutes after the store opened. It was crazy. Had no idea how many other places were affected.
More surprising that he finally brought back Spawn Wave Plus
This highlights that we are overdependent on technology. Now apply this to the video game consoles we love: they are also too reliant on online connections and updates.
Anything now can be broken with a simple faulty update.
Depending on your remote access tools, you can boot a system into safe mode even if it's in a boot loop. If it's a partial boot loop, even a basic remote access system can get you into safe mode remotely.
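Concretely, if you can still reach any shell on the machine, the built-in bcdedit can flag the next boot for Safe Mode (with networking, so remote tools keep working); a sketch, assuming an elevated prompt:

```python
import subprocess

# Sketch: force the next boot into Safe Mode with networking using
# the built-in bcdedit (elevated prompt required).
subprocess.run(
    ["bcdedit", "/set", "{default}", "safeboot", "network"],
    check=True,
)
# After rebooting and applying the fix, remove the flag so the
# machine boots normally again:
# subprocess.run(["bcdedit", "/deletevalue", "{default}", "safeboot"], check=True)
```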
I was there at 1 am CT deleting that stupid update off tons of PCs. Crazy times.
We closed the restaurant I work at last night because of the outage; it was affecting the entire area
good thing all my naughty files are safe.
Why don't they do a USB tool update? Considering also the BitLocker key issue/support
Always test before rolling out an update in the production environment.
We are still dealing with the outage.
Our computers weren't affected, but some of the insurance companies we contract with were, so some payments weren't processing until they got their servers back online
Skynet came online