Thanks for watching!
I thought I would try something slightly different with this video, focusing a bit more on telling a story. I had a lot of fun with it. Open to feedback from y'all, as well as suggestions for future videos (vulnerabilities, breaches, exploits, anything really). I'm doing a bit of travelling next week, so it might be a bit longer until my next upload.
JOIN THE COMMUNITY ➤ discord.gg/WYqqp7DXbm
Also, I think I finally fixed my intonations LOL
Thank you for all of the support, I love all of you ♥
*EDIT* - At 2:23 those timestamps were meant to be 9:35am, my apologies for the mistake. I thought I fixed it, but I must have ended up uploading the wrong render.
*EDIT #2* - 2:00 should have been "two and a half minutes", rather than seconds. Thanks to those who pointed this out!
I love pegging
Fascinating, love it. :)
@@BillAnt love you more
I enjoyed it. Good pace and explanations.
Great work dude. Glad this video fell into my recommended.
At first I thought, how could anyone be this stupid?.. Then I got to the point in the video where they were given a month to design and deploy a whole new piece of software, and everything made sense.
yep, that will do it
@@DanielBoctor😂😂😂
Yes, getting the software wrong is understandable. Not knowing how to turn your software off, however, is not. The new routines worked within their existing framework. Deploying and decommissioning should be literally one of the first things they learned. Turn it off, and don't let it out of test mode again until you are sure it works.
@@paraax I am still confused. Why couldn't they shut it down? Just pull the plug in worst case no?
@@exponentialcomplexity3051 distributed system
Imagine the stress of the engineers trying to identify the problem, knowing their company is losing 2.5 MILLION dollars per second
At least they ordered pizza LOL
@@DanielBoctor - Gives new meaning to "When the sh*t hits the server fan". Ha-Ha
I'm a little surprised NYSE doesn't have a mechanism to block trades when something is obviously going wrong
They actually do; however, they were not that helpful for Knight, as they were designed for price swings, not trading volume. Mary Schapiro (the SEC chairman at the time) did end up reversing 6 of Knight's transactions, as they reached the cancellation thresholds outlined below:
The SEC required more specific conditions governing the cancellation of trades. For events involving between five and 20 stocks, trades could be cancelled if they were at least 10 percent away from the “reference price,” the last sale before pricing was disrupted; for events involving more than 20 stocks, trades could be cancelled if they deviated more than 30 percent from the reference price.
You can read more about this at Henrico Dolfing's report linked in my description.
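For anyone who wants those thresholds as code, here is a minimal sketch of the cancellation test described above (the function and parameter names are mine, purely illustrative):

```python
def trade_cancellable(stocks_in_event: int, trade_price: float, reference_price: float) -> bool:
    """Illustrative check of the cancellation thresholds quoted above.

    reference_price is the last sale before pricing was disrupted.
    """
    deviation = abs(trade_price - reference_price) / reference_price
    if 5 <= stocks_in_event <= 20:
        return deviation >= 0.10   # at least 10% away from the reference price
    if stocks_in_event > 20:
        return deviation >= 0.30   # at least 30% away from the reference price
    return False                   # smaller events aren't covered by this rule

# Example: in an event touching 140 stocks, a fill 35% above the reference price qualifies.
print(trade_cancellable(140, 13.50, 10.00))  # True
```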
It's actually about $262,000 per second, if the numbers in the title are correct (440M / (28*60)). Still absurd.
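Quick sanity check of that figure, assuming the 28-minute window from the comment above:

```python
loss_usd = 440_000_000
duration_s = 28 * 60                 # the 28 minutes assumed in the comment above
print(round(loss_usd / duration_s))  # ~261905 dollars per second
```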
Blaming the devs for losing the money when the company pushed for the release in a month, through procedures that involved manual unverified deployments, classic.
You know the devs complained and weren't listened to. Wouldn't be surprised if the company had a history of doing this. This time the company paid the price.
I mean, it was a failure on all fronts. 1 Month to implement this, no kill switch, broken deploy scripts, at that point 7-year-old dead and dangerous legacy code in the codebase, being able to "reuse" a flag that causes "dead" code to revive, no plans in case of emergencies... There's a lot at fault here.
Similar to my company. Each rush job builds on the tech debt of the previous rush job, with the whole system getting worse each time. Then management demands to know why everything doesn't work optimally. Every objection is met with "If we don't get this out by next week, we'll miss the market!"
I'm a dev ops engineer and my employer made me delete our dev environment because he didn't see why it was needed and it was costing money. So I could see a company literally just having prod and the devs having no say.
@@tr7zw there was a kill switch. There always is. They just wanted to keep operations going. The handling of the situation is more of a management and risk management failure.
But in America management is rarely blamed.
Based on the story they wanted to recover while staying online so they don't lose face as a MM.
They always had a hardware kill switch. A server kill switch would have been a nicer option.
I say it's a crisis management issue, because they were doing live debugging and troubleshooting while losing so much money and apparently nobody in the chain of command said "Stop, take it offline". Besides the fact that they had to be contacted from outside... like.. nobody was supervising that???
The part where the engineers were engaged in live debugging on a production system made me cringe into the next dimension. That's like trying to perform open heart surgery on a marathon runner as they're running the race. What an absolute disaster.
Great video.
well said. glad you liked it
it must have been torture, considering that there was no bug in their new code, it was a deployment issue 🤣
@@Entropy67 The worst situation. Everything looks like it’s right and the problem turns out to be somewhere you never looked.
not gonna lie, that's a new fear unlocked for me as a future software engineer
I'm surprised they couldn't figure out how to kill the servers. Once, our CTO fixed a production issue by driving to the colo and unplugging a network connection.
If you're losing ~$150M a minute, there is a kill-switch. It's called the server room breaker
Exactly. Don't know why they didn't shut down all operations until they figured it out. They wanted to continue business as usual and ended up losing the entire company. What a bunch of dim wits.
What does that do, never heard of it ?
It’s the power circuit for the room. A light switch essentially. You just literally turn off the power for everything in the server room so the servers immediately stop melting the company down.
The most basic and sure fire way to stop the problem. You just hit the big power button.
These days it would be in the cloud and no one would have the credentials to nuke the account 😂
@@eyeofthepyramid2596 Same as the breaker in your home. Turns off power to any given room or circuit (like laundry machines or stove)
As a coder who recently broke out with shingles due to the stress of being given far to short a deadline for something that ships to 9 million people I feel for these devs, that’s insane
I know, I find that most of these production nightmares are due to time constraints being pushed on the devs
too*
Isn't shingles deadly?
@@JorgetePanete You found the misspelling, yet missed the comma splice.
@@everythingpony I don't think so, but you're not really supposed to get it until 50+. I'm early 30s
"There is no kill switch," my man, unplug the server, throw a bucket of water or a cup of coffee on the buildings circuit breaker, litterally anything will cost a lot less than 2.3 million to fix.
2.3 BILLION*
You're assuming you have physical access to the servers involved. In the stock exchange world, most likely in 2012 you don't. Those servers were likely co-located in NYSE's datacenter.
@@baktru log in with a console and shut down the server or the app instance.
@@baktru very likely. I'm sure there are SEC rules regulating where these systems can be held, and likely a secure facility access level requirement to run it. By the time they made it to the servers on site, they would have been bankrupt. I'm also sure it's probably not a server you can shut off remotely either, even if you intentionally wanted to. They want those servers up 24/7 and isolated.
@@baktru you don’t run quant funds on the cloud. They definitely had access to the servers
"I don't always test my code, but when I do.. it's in Production" - 😅
This made me lol
Crowdstrike? Is that you?
Everyone has a test server, it's just sometimes people are lucky enough to have a separate Production server as well.
"You either die a bug, or live long enough to become a feature".
Seems like the "buy high sell low" code is not something you want on your production machines, but then I'm not a financial expert like these folks
Well, they did have it behind a feature flag. But reusing a feature flag was a huge mistake.
@@JacobSantosDev Again seems like just simply not pushing the test code to production machines is the safer option, but I'm not a financial expert/CS wizard like they are
@@thenayancat8802 oh sorry. The entire purpose of a feature flag is to be able to turn on and off features that you are testing in production. Just because you tested something in other environments does not mean the feature will work as expected in the production environment. The point of a feature flag is to facilitate the feature being used in a live environment where you will want to turn it off. Technically, it is the "kill switch", and based on the limited information, turning that switch off would have saved them. Except it doesn't sound like anyone had training on or access to the feature flag. Different teams are going to have different procedures for how feature toggles are switched. Better if it is a page where product can manage it, but it might be entirely engineer owned.
"Not deploy test code" is a non sequitur as, depending on how you define it, all code is test code. The correct terminology would be "dead code", as the code should never run, but because there existed a condition where it could, once it is revived, fuckery happens. You never want dead code to revive. I have never heard of good things happening when dead code suddenly runs.
It depends. Borrowing a stock at a high price, selling it at that high price, then buying it back when the stock price falls allows you to make some money, although the potential losses are infinite.
The problem here was that there was no distinction made between user and development software. This PowerPeg development software should have never been on a live production server at any point in time. It belongs on a dev server or stored on a HD somewhere.
Interesting. The way I've heard it, Power Peg was not "intentionally lose money for testing", but a production option to try and "peg" stocks to a given price (even if that would lose money). After being deprecated, it was judged too difficult to remove without affecting other production code. It was still being tested in builds until the 2005 changes caused those tests to fail, and they were removed. There was a script to automate deployment to the servers, but one was down for maintenance so the connection timed out and it was skipped without logging an error (or nobody bothered to read the log). During the event it was obvious that something was wrong, but not obvious that it was causing huge losses until after the rollback accelerated things.
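A hedged sketch of why a silently skipped server is so dangerous, and what a post-deploy verification step would catch (the hostnames and helper functions here are made up for illustration):

```python
import hashlib

# Hypothetical stand-ins for the real copy/inspect steps; "server-8" simulates
# the one machine the script couldn't reach.
def copy_build(host, build):
    if host == "server-8":
        raise TimeoutError(host)

def remote_build_hash(host, deployed):
    return deployed.get(host, "old-build-hash")

def deploy(servers, build):
    deployed = {}
    for host in servers:
        try:
            copy_build(host, build)
            deployed[host] = hashlib.sha256(build).hexdigest()
        except TimeoutError:
            # The dangerous version just skipped the host silently;
            # at minimum, the failure should be recorded and surfaced.
            print(f"WARNING: {host} was not updated")
    return deployed

def verify(servers, build, deployed):
    expected = hashlib.sha256(build).hexdigest()
    # Refuse to go live unless every server reports the new build.
    return [h for h in servers if remote_build_hash(h, deployed) != expected]

servers = [f"server-{i}" for i in range(1, 9)]
deployed = deploy(servers, b"new SMARS build")
print("servers still on the old code:", verify(servers, b"new SMARS build", deployed))
```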
And that's why investing into code quality and tests is important.
I just wrote about 120 tests over the past week or two for what I'm currently working on. Good to know that someone will just delete them when they fail in the future instead of actually figuring out why they are failing... eh, if the tests failed when the code broke, my job is done.
As a QA, I would like to comment on this: hehe
@@Smaylik03 Google, Facebook, Twitter, Microsoft, Apple, and Adobe have entered the chat...
"Testing is what users are for."
@@Entropy67 Meh. A test that never fails tells you nothing. A test that always fails tells you nothing. The only good tests are basically coin tosses, but then why write code that only works sometimes? =D
I fuckin' hate tests. lol
As a software engineer as soon as I heard one month, I was like yep. Been there, done that.
As a developer I just want to say to all the companies and their executives: don't push too much for a little gain, give enough time as per the requirements, or the company may not even exist after launching that product.
Doesn't matter, they will never listen.
Also, don't try to pay the devs as little as possible. Because then this shit happens, and a good salary is a rounding error in comparison.
@@GamesFromSpace I think they are probably well-paid, but there should have been 5x as many…
@@enginerdy Or just one more guy: a deployment/build master responsible for making sure dead test code, or any other sus code or files don't make it into production builds, and production environments are properly deployed.
Executives aren't devs, they will never think or act like a dev. Time is money, we can't waste the opportunity to make money here, especially when it's not ours.
Something similar nearly happened to my father.
He works at an investment bank and was called in to solve an issue. Their network was being clogged by packets and it seemed a cascading effect had been caused between every server as they were stuck in a loop. 10 minutes before the stock market opened he went into the server room and just started pulling out ethernet cables and disconnecting servers, and 5 minutes before the stock market opened he pulled out the right ethernet cable and disconnected the server causing the issues. They could’ve lost billions of dollars that day.
Lol I was wondering why these guys weren't doing the same, literally fixed in 5 minutes
@@Xalgucennia cloud computing exists... They may have been unlucky enough to have gone cloud computing only
@@camiscooked Does cloud computing not have an off button?
@@ggsap well yeah. No off button, someone else is running the server and you have to access it remotely. Most companies will only have a few people who truly know the steps to stop it.
@@camiscooked It's not that hard to stop the server. It's literally a giant red button in most cases, and much easier than diagnosing the issue
Oh no, won't someone PLEASE think of the poor high frequency traders. Lol.
Any donation link?
@natmarelnam4871 Stock markets are literally leeches on society and bring literally no value. Prove me wrong.
Say that while watching your 401k go to zero.
Deploying code from dev to prod without a QA staging environment or subsequent smoke testing is a recipe for disaster!
Yup. No DR env either
I've always wondered: does the NYSE (and other exchanges) have dummy/test environments for these HFTs to test their algorithms against?
If not: these guys are always "testing in production" -- hopefully with a very small account/budget limit in case the new code goes haywire.
It is nuts that the NYSE allowed them to place orders that couldn't be filled -- no circuit breakers in any of it. Try to do any of this on Robinhood or other trading app, and that app will prevent you from making trades that your account can't cover.
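Something like the pre-trade check this comment describes could be as small as the following toy sketch (made-up numbers and names, not how any exchange or broker actually implements it):

```python
class ExposureGate:
    """Toy pre-trade risk check: refuse new orders once open exposure hits a cap."""

    def __init__(self, max_notional: float):
        self.max_notional = max_notional
        self.open_notional = 0.0

    def try_submit(self, qty: int, price: float) -> bool:
        notional = qty * price
        if self.open_notional + notional > self.max_notional:
            return False               # halt instead of firing yet another child order
        self.open_notional += notional
        return True

gate = ExposureGate(max_notional=1_000_000)
print(gate.try_submit(10_000, 50.0))   # True: $500k, within the cap
print(gate.try_submit(20_000, 50.0))   # False: would breach the $1M cap
```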
Really appreciate the amount of detail here! Most other coverage was surface level but you went into a lot of great detail here
I'm glad you liked it! Thank you for the support I appreciate it
Other rollback: "finally sh*t has stopped hitting the fan"
SMARS rollback: "holy sh*t! There's now 8 times more sh*t hitting the fan!"
But seriously, for software that runs at a scale of thousands of requests per second and works with millions of dollars, there should definitely be some sort of kill switch or feature toggle built in from the start. Although the "rollback causing even more problems" part is definitely a first for me
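For what it's worth, an internal circuit breaker of the kind this comment asks for can be as simple as a rate check on outgoing orders. A rough sketch with hypothetical thresholds:

```python
import time
from collections import deque

class OrderRateBreaker:
    """Toy internal circuit breaker: trips if orders are sent faster than expected."""

    def __init__(self, max_orders: int, window_seconds: float):
        self.max_orders = max_orders
        self.window = window_seconds
        self.timestamps = deque()
        self.tripped = False

    def allow(self) -> bool:
        if self.tripped:
            return False
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_orders:
            self.tripped = True        # stop routing and page a human
            return False
        return True

breaker = OrderRateBreaker(max_orders=1000, window_seconds=1.0)
blocked = sum(not breaker.allow() for _ in range(5000))
print(f"orders blocked after the breaker tripped: {blocked}")
```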
Probably someone saw every problem and said they needed more time to work on solutions, and the company probably said: just send it anyway, it won't happen, and we'll put it in our next update. Which never came.
there should've been a metric showing volume of orders from each server.
but yeah, probably timing issues first
This reminds me of a time I accidentally uploaded an older version of a report I'd been working on in college, overwriting the new one and setting me back days.
Laughed so hard. Just subbed. Can't believe for this quality you only have 11k subs.
thank you for the support ❤
This makes all of the times I screwed up prod feel so much better. Thanks for the indepth analysis on this.
thanks for watching!
You can't screw up like them..
**loses 100k per second **
2:23 that's a bit of a fallacy IMO. There is a kill switch almost always. If affected services are on-prem - kill their Internet connection. Pull a plug on whole office/building if you have to. And if it's a datacenter, do basically the same - DC support can disconnect your servers/racks from the Internet.
Market-making (Knight's entire business) means being the middleman for every trade possible... with their infrastructure and resources, they were #1 and were making a killing. IIRC Knight was responsible for nearly half of the volume across all exchanges in the stock market. If they ceased operations here, they would lose everything. Their job is to remove friction between buyers and sellers by being a middleman, and as their reputation grew (along with their systems), it was paramount to always be online. It is not as easy as unplugging a box, and in many cases doing this would only make matters worse, logistics and business-wise.
At the end of the day, yes there should've been a killswitch, and yes it should've been engaged. This was one of the first (if not the first) blowups in electronic markets that the industry had ever seen. And from the sounds of it, Knight was understaffed in their engineer/ops departments.
I remember an old video where they switched a telephone network over and had to take the old one offline first.
It involved 30 or so people with bolt cutters :D
I remember that video too
Went looking for it a while back
Couldn't find it anymore
Is this what you are referring to? This is fascinating
th-cam.com/video/saRir95iIWk/w-d-xo.htmlsi=uBbpgRjyGvrHR1_S
Yes it is
And now I'm confused as to why I couldn't find it, apparently it was already in my likes?
I must have been pretty tired when I went looking for it last time (or maybe it was set to private for a while 🤷♀️)
South Park: "Annnd it's gone."
That hit hard... "Hit the Kill switch (for the love of god !!) ...... There is no Kill switch....."
God! Imagine rolling back to stable software and losing even more money. That would have sent people nuts!!!!
Losing close to 20m every second.
They didn't roll back the flags, though. That was their mistake. The problem started when they flipped the flag, and then they left the flag and replaced the code.
@@darrennew8211 thank you captain obvious
I still don't understand why they didn't just stop all their servers and cancel all the orders that weren't filled. That would probably take a couple minutes, instead of half an hour
They could certainly have cut the power, which would have been fastest, but I don't think it would have been possible to cancel the orders that had already gone through
@@chunkyMunky329 better to cut their losses than continue to lose $2.5 M per second
@@ME0WMERE That's what I'm saying. Except I was saying that they should cut the mains power to the building instead of manually switching off each server
Do you know where the master breaker is for your whole building? Well apparently they didn't either.
@@SianaGearz Yep I do, and if we were losing 2.5 million a second I'd probably hear someone yelling 'kill the power' and i would flip it
9:52 The engineers did a good job though. Rarely are problems solved in that first 30-minute window.
No kill switch? Like there was no circuit breaker to flip, no power cord or network cable to unplug? If I were the acting executive I would have walked into the computer room with cable cutters or an ax or something and just started chopping. I understand there could be large penalties for failing to complete market orders left pending, but it can’t be worse than $2.5 million per second.
What music would have been playing as you did it?
@@eadweard. th-cam.com/video/I8EOAEYgsE0/w-d-xo.html
I blame usa education system
@@eadweard. Bat Out of Hell by Meatloaf.
Server probably is not even on the premises and they might not even have access to it.
2:26 I can only imagine how that phone call to customer support went!??
“Thank you for calling customer support, Knight in shining armor! How may I help you?”
“OMG, we’re literally losing millions every second and can’t figure out WTF is going on…HELP!!”
“Um oh dear, Okay sir…um.. Have you unplugged your router, plugging it back in after 10 seconds?”
*uncomments the code
*casually loses $440 Mil.
Your content is excellent
Was invested by the end of the video for sure
Subbed, keep kickin ass man
thank you for the support! glad you enjoyed ❤️
@@DanielBoctor Yep. As a fellow IT Tech, I've seen a lot of these debacles during my career (even caused one myself once - shhhh) SUBBED
4:34 you got the wrong stock footage, that’s the Gold Coast Australia not New York USA
Wow, no way this only has 3k views. Keep it up!
I'm glad you liked it! Thanks for the support 😊
*If you have $440M and you can lose even 10% of it in a single day, you've done a terrible job. This right here is abominable*
If you don't have a software kill switch, you always have a hardware kill switch. Disconnect the malfunctioning algorithm from the web and run tests until the reason is found and fixed.
Sorry if I don't get the details, but why couldn't they literally send a shutdown command out or just pull the plug? It was not the days of the cloud yet, so wouldn't they be running their own servers via some sort of enterprise setup?
Two lessons to get from this :
1- Software development is research, you CANNOT rush it, if you want to build faster, get more builders, don't pressure the ones you already have with tighter deadlines.
2- Being a good software engineer doesn't mean not making mistakes or knowing every function and library in existence, being a good software engineer means you clarify your code, document it, ask for reviews and testing and push back when management tries to give you unsustainable goals. it's 25% programming skill, 25% planning and 50% politics.
@@Bobrystoteles
This 💯 percent
@@Bobrystoteles Accurate. The Mythical Man-Month is such a good read!
2 women don't birth a baby in 4.5 months...
@@Bobrystoteles You're assuming it's a unified development process like back when everything was in-house. Each layer of devs is hired to build new systems on top of the old systems. So it's more like a mother giving birth to a mother giving birth to a...
Yes but also. If you think you don't need QA, you do.
Pair programming? You need QA.
Dev testing? You need QA.
QA is your friend.
Production tip: the endings of these episodes feel so abrupt, it’s kinda jarring. I think it would be lovely to have more of an intentional outro - maybe summarize the topics discussed, or talk about some takeaways and how things might be improved in the future or something. Also a pause between the end of the script and the start of the “if you’ve made it this far” to give an indication that we’ve reached the end. Love how well you talk about these topics!
Doing rollback and making the problem 8 times worse made me lol so hard.
Wait, hold on..... Major deployment left to one person? And then when trading started not a single engineer was monitoring trades just to check if everything was working as expected? Then the CEO takes a break on launch date. We once deployed a new process service for company payroll, and on day one we had all leads and seniors monitor the system, with several layers of safety introduced (limits to transaction amounts, limits to the number of transactions on the first run). That data got checked to within an inch of its life, then the next set and the next, with reports filed by the dev managers that had to be signed by the CIO before the remainder of the transactions could go through. But even then, as it ran at a staggered rate, we had someone ready to pull the cord if anything seemed off. There were redundancies for redundancies, as this system could empty 3 bank accounts in no time.
Yeah. Makes no sense. 😂 What a clown-show it was.
"Software will handle it!"
Software:
This is why you review your builds BEFORE launching!
Enjoying your content big time. Appreciate the work that you put in!
thank you for the kind words - glad you like it ❤️
@2:00 - did you mean a 2 and a half minute period, or am I misunderstanding the numbers here?
nope, you're right, it was meant to be a 2 and a half minute period. a few other people mentioned this, and I updated the pinned comment. thanks for pointing it out 😀
Fantastic vid! A complex topic made simple, great job
Much appreciated!
Crowdstrike: "HOLD MY BEER!"
😅😅
🤣😂
2:00 "two and a half seconds" did you mean minutes?
oops, good catch! should have been minutes. updated the pinned comment.
why didn't they shut down the servers? At least it would prevent any further trades from going through?
Why did they not cut the power to the servers, the floor or the building, or cut the fiber or something? This could have been stopped in minutes if someone had just taken the decision to take the measures needed to get the servers offline
Software development vs software engineering : the latter starts with system requirements that you have to be able to verify before production, the former can start with "you have one month..."
The editing and effects are amazing. Reminds me of Lemmino
really well done
wow, I never thought that my content itself would be compared to the legend himself. thank you for the support ❤️
That's why chaos engineering and DR testing is important... They will surely build a kill switch now 😂
Proper version control...
I said out loud just now that I WOULD like to watch your videos about cybersecurity. It was thrilling to know about this story and I’d love to hear more about the aftermath. Great video!
It's pretty cool to actually see what quant firms do behind the scenes, great video 🔥
God, the worst part is thinking the new code was the problem and then reverting all the servers back to the old code, only to lose cash 8x faster. From 2.5 mil a second to 20 mil a second!
Great video. Would love to see some follow-up stories relating to the HFT industry. Read Flash Boys years ago and absolutely loved it. I hope to see more of this from you in the future 😎
Thank you! It's definitely an area that I want to dive into. Thanks for sharing - I actually never heard of Flash Boys before ❤️
@@DanielBoctor Flash Boys is a great book. If you’ve never read anything from Michael Lewis that would be a great one to start with. The last couple of years I’ve mostly been listening to Audible. Hopefully you’re able to check it out.
VERY interesting content! The only thing I'd suggest is to slow your narration speed down a bit, as the story kinda comes at the listener like a fire hose. I look forward to your next video...thanks!
That's the issue with arbitrage bots, when they fail they lose years of profits in just minutes.
Title should be "Manual Deployment ends up costing $440 Million. Maybe we need to hire some devops? "
Hire DevOps. QA. More than 1 dev. Good working practices. Estimation sessions from devs (no crunch time allowed outside of a P1). Proper pipelines and proper version control.
Bro, in this case the problem which caused all of this was literally naming. Looks like I'm not the only guy who struggles with naming things 💀
This video was very well produced and executed, great content. Easy sub.
Fuck a kill switch, I'd trigger the fire alarm in the server room
Why didn't they stop their software, stopping all trading, when they noticed something was wrong? I mean, this is not Star Trek where a sentient program can refuse to be terminated.
No kill switch or procedure to resolve this? Sounds like the devs got 2 days to brainstorm and the company said "yup! autobots roll out" 😂😂😂😂😂😂
Excellent video! What a great summary dude well done.
Much appreciated!
Excellent video - very informative! I enjoyed the blend of finance and software. Given how intertwined they are these days, there's likely many more topics to explore!
THANK YOU MIGUEL! I completely agree as well ❤
As someone who's worked in IT for 40 years, from machine code programmer to head of engineering, this is definitely the CEO's fault. Testing takes time, and yes men are too afraid to speak truth to power, instead ignoring all the advice from engineering. No one died here; it didn't work out so well for the people on the space shuttle.
That was awesome. Thank you.
Thanks for watching!
Great video, I can’t understand the quotes though, the audio is too cooked through my speakers.
why didn't they turn off the computers? just pull the cables.
I'm not in anyway connected to this sort of stuff, but this was fascinating to watch/listen. You are an excellent communicator.
thank you! I'm glad you thought so. I appreciate the comment :)
Can I have the buy high sell low program, I wanna invert it.
1:58 wait so the stock exchange had no check to see if they had the reserves to actually buy the stock before authorizing the trade?
No kill switch? Pull the network cable or the power cable?
They could have literally disconnected the network faster. It does not have to be automated. Even if in the cloud, a black-hole route is easy to create. As for process, this is classic "technology is a COST center" thinking: cut budgets, reduce time to deliver, and reduce talent in the technology pool, as the best technology people have the easiest time replacing their employer.
One of my former professors once told us that computers are just hard working idiots. They will readily wipe you and your assets off the face of the earth if just one line of code tells them to.
Why did they not just turn the servers off? I do not understand.
Losing over $100 million in a minute? Even Cathie Wood couldn't do that, truly impressive.
amazing, sober, deeply technical analysis. Brilliant
Please please please give me some kind of mediator design pattern on this entire system!! So much pain.
why didn't they just cut the servers off?
Very Good ! Enjoyed this and will watch more of your content.
I get like 30 useless emails daily about some system error for something I don't support, interspersed with random PTO notifications from coworkers and company/organization wide announcements that aren't relevant to me. I can totally understand just ignoring those emails.
How in the heck are you gonna make software to buy and sell stocks and not have a kill switch? That's like driving a car with no brakes
Had no kill switch, didn't take the time to review the code, rushed the developers and didn't even test it in a controlled setting before pushing. They really stripped all safety features and weight out of a car, put in a formula engine and didn't consider what would happen if it crashed.
The problem was management all along. But they got golden parachutes as punishment ... The sooner these Wall Street types become personally liable for these kinds of screw-ups, the better.
I really like this content. This seems to a good channel. Insta-subbed.
Thank you so much! I'm glad you like it. Thanks for subbing
Oversight, or bro was on a mission to take down that firm.
“Well clearly it’s the new code that’s the problem! Let’s just roll it back to stop the bleeding.”
“… oh no!”
Should bot trading be banned?
Yes keep trying , it will stop them 😂
Just wait patiently while the bot programmers are replaced by AI.
No. It provides liquidity to the market (i.e. makes it easier for investors to buy or sell at a fair price).
Hi Dan, I really enjoy the format of video you make, I think you may even be the person who pioneered this genre. Please keep them coming.
Thank you so much. I can't say I pioneered the genre, but I appreciate the words
Imagine getting Power Pegged for -440 000 000 dollars💀
I feel like your channel is going to blow up soon. Great video and editing. Can I know what editing software you use?
Thanks! For sure, I use DaVinci Resolve 😊
Surely if you can access the server to rollback this software, you can get to the server to shut it down
What a crazy story... so insane to me to run a company moving that much money and not have integration testing. On first glance you'd want to blame the engineers here, but the majority of the blame would have to be on engineering management/upper management to allow prod code on financial systems to be deployed sans integration testing. This story is a great anecdote as to why infrastructure as code/virtualization is so critical.
It's very hard to simulate the load that exists in prod. Add to that a code base that had grown and that no one really understands anymore.
@@Micke12312 sure; not $440 million hard
"You think it's a bad idea to let a few big firms manipulate the entire market?"
"Nah, it will go great."
Old server number 8 is the hero we need ;)
Very well done - digging into devops lapses?
At some point they should have just put in a firewall rule to block connections with the trading server so no more trades could be made while they figured it out. Better to do this at $40m than $400m…..
I don't have any knowledge regarding trading, so can someone enlighten me on how there could possibly be no killswitches? Why can't they just kill the servers or literally just stop the running algorithm code?
Well, a programmed kill switch would be a part of the program that, when called, shuts the program off.
So if no such code was implemented, one doesn't exist.
Why they didn't just pull the plug on the machines themselves, however, I can't tell you.
My assumption is that the people who manage the server didn't have much training; even without a killswitch, stopping a service on a single server takes one command (two if they need to list all running services to identify what to shut down).
Even back in the 2010s a skilled server operator would have been able to shut down the problematic service within ten minutes (at worst twenty minutes if they had needed to log in to each server individually)
@@Nivlache Exactly... Can't they just literally ssh to the server that's running the algorithm or something, and then terminate the algorithm's process? That doesn't need a built in killswitch to be deliberately programmed in, does it? I've been developing projects for over 4 years now, and I'm out here sometimes struggling to get my app running properly. Meanwhile they're out here not being able to stop a running process?
@@Kreze202 Exactly, or if not SSH they could just log in through remote-KVM to disconnect the network controllers till the problem is solved.
Also if someone told me to hit the killswitch on something, I'd assume they want me to stop it as soon as possible, even if it doesn't have a killswitch and even if it means shutting down the servers completely.
I think anyone who has worked as a programmer or server administrator is confused by their actions. I only have three years' experience as a server administrator, but I doubt more experience would help me understand them.
In most places I've worked at, we have an option to take the website offline and put up a maintenance page that will let us fix the problem without customers being impacted.
Working for an SBA lending company, we have to be SOC 2 compliant. I cannot fathom why Knight Capital wasn't audited on any of their procedures, especially considering they are connecting to the New York Stock Exchange!
I got confused in the first 10 Seconds