Bankrupt In 45 Minutes From DevOps | Prime Reacts
- Published Sep 9, 2023
- Recorded live on twitch, GET IN
/ theprimeagen
Reviewed article: dougseven.com/2014/04/17/knig...
Author: Doug Seven | / dseven
MY MAIN YT CHANNEL: Has well edited engineering videos
/ theprimeagen
Discord
/ discord
Have something for me to read or react to?: / theprimeagenreact
Hey I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other)
turso.tech/deeznuts
This is why you need a physical kill switch, something that blows up your server room.
OR you could just make sure your web app runs on Flask's development server without a WSGI server. That way the server will immediately crash as soon as you reach thousands of users in a short span of time. Usually one doesn't want this to happen, but if you are scared of high volume and don't want to deal with it, then that solution is perfect lol.
@@morganjonasson2947 haha 😅
They laughed at me and said I was crazy! Who's laughing now?!
😂
just put a small incendiary device on your main fiber line
Imagine getting an email saying something is wrong and seeing this much happen and not immediately killing the system
Imagine relying on someone outside of your company emailing you to tell you your shit is broken and now your company is bankrupt.
Email is not an adequate alerting mechanism for shit like this.
The people building this software are not themselves traders, nor are they regulators, and there are budgetary limits on writing protections against every possible screw-up into even the most heavily budgeted trading software. This install/uninstall problem clearly shows the flaw in the "parent order / child orders" design: the setup provided no network-wide way to deactivate this capability without removing the Power Peg software. A better design would have allowed turning off Power Peg entirely without uninstalling it.
@@vikramkrishnan6414 IMAP Push is just as immediate as any ping.
@@vikramkrishnan6414 I'm amazed they didn't have an app and push notifications for this lol.
I myself work as a System Engineer in a HFT company. Whenever there are any discussions regarding Risk Management, almost every time we get to hear about Knight Capital's incident
risk management. why do they call it ovaltine
@@harleyspeedthrust4013 why not roundtine
@@boredphilosopher4254 the mug is round
Man, the amount of things that went wrong here
- Not removing dead code
- Re-using old feature flag (wtf were they thinking)
- No deployment review and validation that proper code was deployed on all servers
- No post-deployment monitoring
- No observability/traceability metrics? (They couldn't immediately pinpoint that one server making way more trades than it was supposed to?)
- No Kill switch
Any one of these in place would have prevented the whole thing or minimised the damage
No rollbacks?
@@nnnik3595 when the stated deploy process is copy-pasting the new binaries onto the server... I guess a rollback isn't even a possibility
they did rollback, only the deployments that were working correctly though @@nnnik3595
@@nnnik3595 no staging ? no tests ? no automated anything ?
Don't blame them on reusing old feature flag. Do you know how much the new one would cost? Their price is so high that it's so obvious they reused an old one. Look how much they saved on it! /s
Imagine hundreds of millions of dollars are being dumped every 10 minutes while you try to debug your deployment in prod. jesus christ
the devs need a lot of therapy after that xD
I worked at MS and I saw a team showing us a demo of a stock trading API. The dev forgot to switch the account to the test one and used the real company account to execute a trade worth $30m of IBM stock. He started getting calls, the trade couldn't be reversed, but the trading floor closed the position at a small profit, so it's now just a funny story; nobody got in trouble. Couldn't imagine if it went the other way.
This is why robust testing is very important....and a staging environment as identical as you can get without actually making trades is crucial
I'd just run out of the door tbh can't deal with stress like that.
Article: "DevOps is broken!" Also Article: "Nothing about this is DevOps!"
It actually is a perfect example of why DevOps is needed.
it's like people who say the Windows firewall doesn't work (it's turned off most of the time, or someone put too broad a rule in there)
@@composerkris2935 Right! Like engineering sent that shit over the wall and they deployed it. No one seemed responsible for it in prod; it just was allowed to be. No one monitoring their new feature, no one designing automated deployments, backout plans, rollouts, etc. Just one big bureaucratic process shoving new code in one end and getting these results on the other.
It's actually "lack of devops" when they say "because of devops." Yes it's a confusing title.
I'm still not convinced that an automated deployment is what would have prevented this disaster. Surely it would, if it was built perfectly; perhaps written in Haskell.
What's the worst thing that could happen...
*Siphons the entire AWS account
I thought that’s where this was going. “Our load balancer was configured incorrectly and we allocated 500,000 instances which logged 5,000,000,000 errors and crashed CloudWatch and our S3”
@@awillingham I thought they did something wrong and fired up 1 billion instances and that cost $500M USD in stupid AWS charges. But it was even more amusing.
At least if that was AWS you could immediately cancel your entire bank account, claim hack, make a public PR storm and never pay it back.
@@monad_tcp Believe it or not, AWS has built in triggers to prevent anything like that from happening.
@@FourOneNineOneFourOne That means it already happened at least once, someone did exactly that and they implemented stoppers to prevent it.
U never ever ever reuse the same opcode for a new functionality ever.. never ever ever ever ever!!!!!! If you remove and deprecate you throw an error of sorts... But never ever ever repurpose an API endpoint!!!!!!
their code was probably cursed and adding a new flag was too difficult
How cursed of a codebase do you think they had, to reuse the same flag code? I bet they only removed the dead code for the express purpose of reusing its flag, but they didn't even separate the deployment into two. Remove first, make sure everything is working, then reuse it (don't ever do it), but if you're doing it, don't do it in the same step.
As a bounty hunter I love when there's some obsolete API leftovers accessible in a codebase. Makes life much more exciting 😉
Unfortunately, it's not considered bad practice to use single character command line flags.
Most likely it was something like reusing -p. I can point to tons of unix programs that are using pretty much any single letter command line flag they could possibly use. I think it's a lesson that in critical applications, --power-peg should probably be used instead of just -p
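A minimal sketch of that idea in Python's `argparse`, with hypothetical flag names: give the dangerous toggle a long, descriptive name only, so it can't collide with or be mistyped as a common single-letter flag, while a harmless option can keep its short form.

```python
import argparse

# Hypothetical trading-app CLI. The dangerous toggle gets a long-only
# name; a benign option like the port keeps a short alias.
parser = argparse.ArgumentParser(prog="trader")
parser.add_argument("--power-peg", action="store_true",
                    help="enable the (hypothetical) legacy Power Peg test routine")
parser.add_argument("-p", "--port", type=int, default=9000,
                    help="listening port (safe to abbreviate)")

args = parser.parse_args(["--power-peg", "-p", "9001"])
print(args.power_peg, args.port)  # argparse exposes --power-peg as args.power_peg
```

Nothing about this makes repurposing a flag safe, of course; it just makes accidental activation by a stray `-p` much less likely.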
At that stage, I would just walk into the server room with a flame thrower and burn it down after around 10 minutes of this
remove the plug from the wall😂😂😂
@@ea_naseer I want to be extra sure. Also, are there some copper ingots you would like to sell me?
i was thinking sledgehammer. or a stick explosive or flooding it for the extreme measures.
@@ardnys35 Just nuke the entire site from orbit.
First 5 minutes I would go to the server room and press the red button and kill the power
I worked at a bank a few years ago, and the team I was on had a completely manual deployment process.
We had a round robin table of who would be in charge that week and they would have to go around through all the sub teams and collect a list of manual instructions. And this was never prepared ahead, you really had to fight tooth and nail to get those instructions.
We'd then wrap all up and send it off to the 1 and only 1 prod support guy that had been on the team longer than I was in the industry.
Eventually that guy turned 56 and it scared the hell out of management, and they blocked the entire team from deploying anything.
I was leaving the bank at that time so I don't know how it worked out, but every now and then I wake up in a cold sweat thinking about it.
Somewhat surprised that banks don't have better automation for this stuff. But then again, banks still rely on ancient code doing batch processing on csv files...
If it ain't broke, don't fix it. The problem is, when it does break, it can be catastrophic.
That guy must have won a very sweet severance package when he retired
@@Nil-js4bf "If it ain't broke, don't fix it. The problem is, when it does break, it can be catastrophic." If you think like that you deserve everything that happens because of complacency.
Man, it sounds almost like the bank I'm working at (one of the biggest). I get a deployment to do every other month without even a clear understanding of how all components of the platform work (because of 'need to know' and such). Without me calling this ONE GUY when something seems odd, everything would have gone up in flames on Monday twice already.
@@Nil-js4bf "oh shit it broke, call ted"
"uhhh ted died 6 years ago."
I remember reading a story about it few years ago, the real lesson here is: never "repurpose a flag".
I think it's ok as long as you space out the deployment between dead code removal and feature flag on
But yes, lack of separate feature flag, lack of kill switch, lack of knowledge to even kill the server
Why is this not in the lessons learned? It would not pass my review. NEVER repurpose a flag. There is zero cost to a new flag, and if there isn't, that is a problem by itself.
I think it's foolish to only take a single lesson.
There's also killswitch, automated deployments, accurate rollbacks (how did the rollback not stop the power peg system from running?), etc.
I have a close family member who works high up in data architecture at a major bank. You would not believe how batshit their dev processes are.
surely they heard and learned from this story
I have been there.
Can confirm.
Blame it on the CIO. The stakeholders are the pushers and the CIO needs to make it known to them how costly some mistakes are.
The only way to improve a bank system is: create a completely new bank, make it grow until it's bigger than the original, absorb the original. You can't really touch that COBOL in a meaningful way.
I used to work at an insurance company. However bad you think it is, it's worse.
Infinite loop + high speed trading, what could go wrong? I think the problem was they didn’t have any anomaly detection and mitigation.
A bad deployment caused this, but the rollback made it worse; sometimes you can’t test for every scenario, hence why you need anomaly detection and kill switches.
HFT is so fast, I question what good an anomaly detector would actually be.
@@andrewyork3869 number of trades per minute? number of each type of trade per minute?
Better than nothing
@@andrewyork3869 About $400 million worth at this point. The system would have halted a lot sooner.
As a DevOps engineer, this story shows exactly why you need good DevOps, or at the very least good engineers that can do good DevOps.
if not, you auto deploy PegOps
DevOooooops
I never worked on something this big, but when I work on something, the guy just wants the thing to do a thing, so I am pretty much doing all the front and back ends, devops, testing etc. Do you have dedicated people for each thing in the stuff you worked on?
@@wormius51 yes that's how things usually work when a company gets big enough.
for sure, this story seems like the title should be Bankrupt in 45 minutes from Lack of DevOps
Normally you do have to pay good money for a power pegging but $400 million is probably a bit steep
Entering findom territory there
Very low pegging on investment. Especially as it only ran on 1/8 of the capacity.
Steep?! it's effing vertical
"The code that that was updated repurposed an old flag that was used to activate the Power Peg functionality"
The article covers the deployment part, but this is its craziest thing. For such impactful functionality, they should have just deleted everything and reused nothing.
The last place I worked I made deleting unused code my religion.
I deleted millions of lines of code.
When I left there were plenty of still unused code that needed deleting.
This one really stuck out to me too. Do not EVER reuse flags. If, for whatever reason, you absolutely have to reuse a flag, do not repurpose that flag in the same release that removes the old code. That is a disaster waiting to happen. The old code should be completely removed from the system long before you even think of reusing a flag.
@@catgirl_works Exactly, I'm surprised Prime didn't mention this.
@@khatdubell you shite a lot, sir 😊
Yeah... what could possibly go wrong if you try to repurpose code that was 8 years dead.
Being able to roll back on a moment's notice seems to be pretty important, huh.
The rollback fucked them harder.
Let alone roll back, just shut the thing down would be impressive.
But in this case, the roll-back made things worse.
Rollback made this worse
@@katrinabryce Yes, a rollback as badly botched up as the roll out was. It sounds like their fundamental problem was having both poorly understood legacy code and a legacy server in the mix.
I remember hearing about this when I was working at a finance company back in the day and I couldn't believe it. Every time I see this article I still read it, despite knowing the history already because it's just so damn funny. Who doesn't love a story with a protagonist called power peg?
DevOps is shorthand for "job security"
Isn't that SecOps? As in DevSecOps?
@@theangelofspace155 it's all just words in the ether
I knew it was a scam when I was on a team where they had hired a DevOps specialist who didn't know how to code so nothing was automated, deploying just meant copying individual files to the server and restarting.
Devops -> devs do operations. Companies: "so I will hire a team of non-devs, put them in operations, and call them devops."
I would never hire a devops engineer who hasn't been a dev before switching. I saw a lot of ops guys jumping on the devops train with no clue about what they are doing.
@@randomdude5430 You're not using your brain. You hire a rando off the street who vaguely knows how to turn on a computer, pay him accordingly, then you sell him to your clients at full [meme role du jour] rates and then you laugh all the way to the bank.
@@salvatoreshiggerino6810 It's called having ethics.
@@salvatoreshiggerino6810 But in that case you aren't hiring, your clients are.
Best outro yet. "The name is.... The PowerPegeagen"
Having worked with software where mistakes could potentially cause similar sized losses, I was always a bit amazed at how small the team was (3 people) and how little management cared to take extra precautions. At least I had pushed to get some good automated tests, and we did end up putting some other procedures in place over time, but it really felt like we were just lucky that nothing too bad ended up happening before we got a more safe setup in place.
It is also part of the developer's job to inform management of risk and what can be done to address it.
Any manager who refuses to invest in risk countermeasures within reason does not have the company's best interest in mind.
With that said, it is important to note that risk management is a balance, hence the "within reason".
Just because a potential problem exists doesn't necessarily mean it's justifiable to spend 6 months of development time fixing it.
And it is the team leads' and managers' job to weigh the cost and risk and determine the best course of action: devs explain the risk, and managers decide if it's worth the cost to fix.
If you keep a paper trail at least you can cover your own behind.
Automating deployment can also automatically deploy errors, introduce new errors, or be done in an environment whose state no longer represents the tested state in a critical way.
Mistakes anywhere along the process can always happen, and human supervision is always required to make sure things are going right and, if not, to react to the unforeseen/unhandled situation promptly.
Yeah, automated deployment of the wrong thing is DEFINITELY a huge problem, but part of the idea of DevOps, especially GitOps, is that you can make it a pull/merge request and have it reviewed.
"Bankrupt In 45 minutes from every single solitary individual in our company being a monumental idiot"
I bet a lot of people involved were saying openly to management that it was a bad idea. But management wasn't having any of those complaints.
"Terraform deez nuts"
After dealing with this M$ piece of .... every day, I cannot agree more
Terraform is ass.
"repurposed a flag" ... WTF would you do that!? lol
The irony here is that the issue was caused precisely because of a lack of DevOps procedures…
even if you automate, the automation can also create its own problems.
it's like using triggers in a database, which you eventually forget about.
Imagine being the poor dude who forgot to copy paste the new files to the 8th server.
Worse yet, copying it 8 times, and twice on one server
There is always a kill switch. It is called forcefully unplugging the 8th computer running Power Peg from the power grid.
Cash equivalents = LIBOR bonds and short term US bonds (typically < 1yr), i.e. bonds of AAA rated countries with near to no interest rate risk
Imagine blaming “DevOps” when you're still copying those files manually, which goes against DevOps principles itself
but... the article is about why you need good devops practices... lol
The "term of art" is change control.
Change management? Version control? Never heard of change control
Realistically, their most inexcusable failing was not having a post-deployment review to make sure everything was good (all servers in the expected state), etc.
There are always gonna be suboptimal processes, and things that are manual that shouldn't be, and sometimes not enough staff on the team, or management won't pay for X tool, or whatever, but the one thing you can ALWAYS do is a proper checklist of what was supposed to be done, and making sure it got done.
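That post-deploy check can be as simple as hashing the deployed artifact on every host and flagging any copy that differs from the majority. A toy local sketch (a real fleet would collect digests over SSH or an agent; the function names here are made up):

```python
import hashlib
from collections import Counter

def file_digest(path: str) -> str:
    """SHA-256 of a file, read in chunks so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def odd_ones_out(artifact_paths: list[str]) -> list[str]:
    """Return paths whose digest differs from the majority digest.

    An empty result means every copy of the deployed artifact matches;
    anything else is the "8th server" someone forgot to update.
    """
    digests = {p: file_digest(p) for p in artifact_paths}
    majority, _ = Counter(digests.values()).most_common(1)[0]
    return [p for p, d in digests.items() if d != majority]
```

A check like this at the end of the deployment checklist would have caught Knight's mismatched server before market open.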
Can't recall when and who, but there was some broker whose software developers didn't realize that bid and offer mean the reverse in stonk trading.
Buy high, sell low
I love this new "Humorous Energetic Sports Commentator, But For Obscure Coding Topics" genre
I fail to see how the title of the article matches what was discussed. DevOps has always pushed for automated deployment processes (or at least as automated as humanly possible) to limit human error. In other words, the idea has been to apply some Dev processes to the Operations process, NOT to replace operations with developers, NOR to make the operations team into a development team.
Like Agile, the original ideas behind DevOps have been hijacked by managers and companies to get what they want from them rather than actually apply the benefits within those ideas. Nor was either of those ideas ever meant to carry a "this is the only way to do this" kind of attitude.
Yeah, everything they did wrong goes against everything I have ever been taught about DevOps. Just one giant oopsie after the next. If anything, this tale demonstrates why good DevOps practices are needed.
This is a new record for how hard i've laughed with prime. I can't even type...I may die of laughter whilst typing this on my inadequate keyboard.
slow clapping 👏 on this one for sure 😂
this flash crash story is well known in the finance/trading circles, I think there was also a book written about it and similar cases of flash crashes
At my workplace we have a replication of our production environment (sandpit) which we devs deploy to and test on before DevOps deploys the same changes to production. Last year the person who did the deploys to sandpit left and I took over. The process was a list of different steps that all needed to be done correctly, and as someone with ADHD I can't get that right all the time/often. As it was a sandpit environment, the only harm it caused was the ridiculous amount of delays in getting it all working, but it drove me up the wall. I was able to convince my boss to give me the time to completely overhaul the process so that it is now just a simple one-line command. We haven't had a single deploy issue since, and also the DevOps team loves me now because it made their lives easier.
Modulo on an incremental user id is such a genius way to select a deterministic subset of experiment subjects. My grug brain would have just picked a random value and stored it on every hit of a common endpoint if the user hadn't already been either selected or not selected.
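That deterministic bucketing is tiny to implement. A sketch, assuming a numeric incrementing user id (the function name is made up):

```python
def in_rollout(user_id: int, rollout_percent: int) -> bool:
    """Deterministically decide whether a user is in the canary bucket.

    user_id % 100 gives each user a stable bucket in 0-99; raising
    rollout_percent only ever ADDS users to the experiment, so nobody
    flips in and out between requests, and no per-user state is stored.
    """
    return user_id % 100 < rollout_percent

# Ramping 1% -> 10% -> 100%: user 7 joins at 10% and stays in.
assert in_rollout(7, 1) is False
assert in_rollout(7, 10) is True
assert in_rollout(7, 100) is True
```

One caveat: if ids correlate with signup date, modulo buckets aren't a random sample; hashing the id before taking the modulo mitigates that.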
This is low-key your best video yet. 😂
Hey ThePrimeagen,
I don't know if you read the comments...
But we would definitely love a talk about how they implemented the kill switch, what it means, and how it would work in case of a real code-red situation.
Love your content !
Like the kill switch. I bought a plunger for my toilet 3 years ago. Haven't used it yet but I'm ever so thankful that it's there as an emergency option.
What's mind-boggling to me is how Knight was still able to be acquired at $1.4 billion despite this fiasco.
Well, their configuration of assets in the value chain was such that someone was still willing to pay for those assets. Future value matters. Temporary insolvency can be remediated. Also, $400 million isn't that much money on Wall St.
I enjoy the informative content and comedy !
Extremely high level - market makers are the people that exchange stocks for cash. They give the market liquidity.
“Market makers connect sellers with buyers” is probably a better description.
@@khatdubell It's both. There has to be liquidity in order to sustain the market.
Love those kinds of videos keep it up
i don't get it, the article has nothing to do with devops
Even at Pixar they were able to call the IT department and ask them to unplug the servers right now! to stop the continuous deletion of the assets. It didn't help much, but they were able to do that
I worked on a high speed ad bidder around ten years ago. The kill switch was literally the first thing we built.
"pay somebody to automate it!" -- you mean like a devops engineer? 😂😂😂😂
NDC conferences has a good talk on this I believe
th-cam.com/video/qC_ioJQpv4E/w-d-xo.htmlsi=gVqxyI8naR8g-AOr
Found it
can you link it? please 🙏
@@robertluong3024 unfortunately no
@@robertluong3024 +1
The tism is really firing in this video .. I love it
How do you implement a kill feature? Is it like a hot load of a property, or a deployment rollback?
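There's no single answer, but one common shape is exactly the "hot load of a property" idea: the hot path re-checks an externally controlled flag on every iteration, so flipping the flag stops the damage without a deploy or a rollback. A toy sketch (the flag source here is just an in-memory object; real systems typically back it with a config service, a flag file, or an admin endpoint):

```python
import threading

class KillSwitch:
    """Thread-safe flag an operator can flip from outside the hot loop."""
    def __init__(self) -> None:
        self._engaged = threading.Event()

    def engage(self) -> None:
        # Would be called by an operator tool or admin endpoint.
        self._engaged.set()

    @property
    def engaged(self) -> bool:
        return self._engaged.is_set()

def send_orders(switch: KillSwitch, orders: list[str]) -> list[str]:
    """Submit orders, re-checking the kill switch before every single one."""
    sent = []
    for order in orders:
        if switch.engaged:       # per-order check, not per-deploy
            break
        sent.append(order)       # stand-in for the real submit call
    return sent
```

The crucial property is that the check sits inside the loop that does the damage; Knight's problem wasn't only the missing switch but that nobody could stop the loop at all.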
I wish I could have a video of the look on my face when Prime read that.
your best video title, had to see this
The real heroes of this story are the previous engineers who named the replaced software Power Peg, thereby setting up the perfect cherry on this masterpiece of ridiculous f-ups.
That was probably the best bed time story I've ever heard.
That is the greatest Story of all time!!!!
"PowerPegAgen" might be the best one yet. 😂
I like how every other day Prime seems to wake up and choose violence against me specifically
This seems related to a lack of controls like alerts/logging. Automated deployment will fix copy-paste errors but won't necessarily mitigate the damage. Code will go wrong; what matters is how quickly you can fix it.
This was one of the funniest things I have come across in 2023. Hilarious and scary.
Its rare for a financial brokerage system not to have a Halt (or kill switch). ....really rare (and not to have a cluster backup).
P.S. Don't forget that no matter how much you plan, even a robot can be told to do deployment wrong. You need a kill switch and a backup cluster (for rolling back).
This development process was an obvious ticking time bomb. It went off in 2012, but if it didn't then, it would have sometime between then and now
I've listened to Power Peg explanation twice and still couldn't concentrate on the meaning.
I'm naming my next project "Power Peg", it also has an excellent abbreviation.
I am subscribed to the man who introduced canary deploys before they were even mainstream 👏👏👏👏
That's one of the best opening salvoes I've seen aimed at the amalgam known as YT comments
The devastation when it went from a one-server pegging to an all-hands-on-deck 8-server pegging
This story is taught in IBM's devops certification btw
a cash equivalent is something that is very liquid and in high demand, hence you can sell it instantly for cash; usually it is a short-term asset.
The fact people roll things out without a kill switch or gate keeper is insane
DevOps Engineer reporting in. love you :P
5:30 canary is just the coolest k8s setting
market making is done by HFT (high-frequency trading) firms, which thereby provide liquidity.
These systems trade billions of times a day; they buy low and sell high, earning maybe 1 cent per trade, and that's how they make money.
Love the Slow Clap, totally stealing that terminology for A/B percentage rollouts
it was doomed from the beginning.. the name alone makes the "horror" of potential issues just hilarious.. it is sad and horrible this happened, but the humor in your delivery is hilarious!
this is why you shadow test
THIS IS WHY YOU SHADOW TEST
It’s crazy how fast an employee can lose a company money from a bad decision or process.
I'd add a second issue: don't repurpose flags. Then the old code wouldn't have been triggered, or the unknown flag would at least have caused a crash
This is a perfect example of why DevOps and following strict release procedures is crucial.
Love your energy :D
Prime is the best news Anchor there ever was, is, and ever will be
we had a big red server-stop button; someone leant on it one day.... now we have a big red button with a shield round it
"TerraForm deez nutz" right off the bat lmao
These stories are the best
Did Tom write the *Power Peg* functionality?
"No written procedures that required such a review."
I'm sorry, but having no procedures for somebody replacing code on servers is just asking for an Office Space 2.0. The amount of power those "technicians" had....
Any company without a disaster recovery plan, should go out of existence.
Backup backup backup.
I love this article.
I became an SRE last year. Never heard of the position before. It didn't take me a month to hate it. It took me 9 months to finally get moved back to SDET.
0:43 associating Continuous Delivery with Dave Farley was the best joke i’ve heard so far. 😂
but be careful, you are becoming associated with regex licensing and some Rust things 😂
wow, that's a speedrun for the history books.
Imagine having your company destroyed by something called the 'power peg'
Automation is the core principle of DevOps, and the statement "copying the code to the 8th server" (seems manual to me) itself kills the concept of DevOps principles. The DevOps "infinity loop" symbol itself shouts "Automation!". Guess the "tech"nician failed to understand that. The article seems to have been written in 2014; I won't be surprised if that's what people understood by DevOps at the time.
And if the observability part is not taken care of, it doesn't matter whether the deployments are manual or automated; it is just a ticking time bomb.
100%, the only mention of DevOps in this article was in the title; anyway, we call it Platform Engineering now.
I watch these now only and only for the name section at the end 😂😂😂
Market making is the process of being the middle-man in a financial market for a particular commodity or security. In exchange for clipping the ticket in the middle, the market maker is responsible for ensuring sufficient liquidity at all times (so that the buy-sell spread doesn't blow out to ridiculous levels). This means that they need to step in from time to time and either buy or sell to market participants. At least, that's the theory. It's not hard to imagine how a simple mistake can send a MM - which is engaged in thousands of trades every minute, all processed algorithmically - to the wall.
They changed the code on all the servers, and nobody said something like: “why don't we just stop every server and diagnose the issue offline?”
0:19 A wild arch user appears!
Wild arch user used "I USE ARCH BTW!"...
It was completely expected!