Thanks for watching!
I thought I would try something slightly different with this video, focusing a bit more on telling a story. I had a lot of fun with it. Open to feedback from y'all, as well as suggestions for future videos (vulnerabilities, breaches, exploits, anything really). I'm doing a bit of travelling next week, so it might be a bit longer until my next upload.
JOIN THE COMMUNITY ➤ discord.gg/WYqqp7DXbm
Also, I think I finally fixed my intonations LOL
Thank you for all of the support, I love all of you ♥
*EDIT* - At 2:23 those timestamps were meant to be 9:35am, my apologies for the mistake. I thought I had fixed it; however, I must have ended up uploading the wrong render.
*EDIT #2* - 2:00 should have been "two and a half minutes", rather than seconds. Thanks to those who pointed this out!
I love pegging
Fascinating, love it. :)
@@BillAnt love you more
I enjoyed it. Good pace and explanations.
Great work dude. Glad this video fell into my recommended.
Imagine the stress of the engineers trying to identify the problem, knowing their company is losing 2.5 MILLION dollars per second
At least they ordered pizza LOL
@@DanielBoctor Gives new meaning to "when the sh*t hits the server fan". Ha-Ha
I'm a little surprised NYSE doesn't have a mechanism to block trades when something is obviously going wrong
They actually do; however, they weren't that helpful for Knight, as they were designed for price swings, not trading volume. Mary Schapiro (the SEC chairman at the time) did end up reversing 6 of Knight's transactions, as they reached the cancellation thresholds outlined below:
The SEC required more specific conditions governing the cancellation of trades. For events involving between five and 20 stocks, trades could be cancelled if they were at least 10 percent away from the “reference price,” the last sale before pricing was disrupted; for events involving more than 20 stocks, trades could be cancelled if they deviated more than 30 percent from the reference price.
You can read more about this at Henrico Dolfing's report linked in my description.
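To make those thresholds concrete, here's a rough sketch of the rule as quoted above. The function and example numbers are my own illustration, not the SEC's actual procedure:

```python
def is_cancellable(trade_price: float, reference_price: float,
                   stocks_in_event: int) -> bool:
    """Sketch of the multi-stock cancellation thresholds quoted above.

    reference_price is the last sale before pricing was disrupted.
    5-20 stocks: 10% band; more than 20 stocks: 30% band.
    """
    deviation = abs(trade_price - reference_price) / reference_price
    if 5 <= stocks_in_event <= 20:
        return deviation >= 0.10
    if stocks_in_event > 20:
        return deviation >= 0.30
    return False  # smaller events fall under different, per-stock rules

# Knight's event touched well over 20 stocks, so only fills at least 30%
# off the reference price qualified; most of its bad fills sat inside that band.
print(is_cancellable(13.00, 10.00, 150))  # exactly 30% away -> True
print(is_cancellable(11.00, 10.00, 150))  # only 10% away -> False for a big event
```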
It's actually about $262,000 per second, if the title is correct (440M / (28*60)). Still absurd.
Blaming the devs for losing the money when the company pushed for the release in a month, through procedures that involved manual unverified deployments, classic.
You know that the devs complained and weren't listened to. Wouldn't be surprised if the company had a history of doing this. This time the company paid the price.
I mean, it was a failure on all fronts. One month to implement this, no kill switch, broken deploy scripts, 7-year-old dead-but-dangerous legacy code still in the codebase, the ability to "reuse" a flag that revives "dead" code, no plans in case of emergencies... There's a lot at fault here.
Similar to my company. Each rush job builds on the tech debt of the previous rush job, with the whole system getting worse each time. Then management demands to know why everything doesn't work optimally. Every objection is met with "If we don't get this out by next week, we'll miss the market!"
I'm a DevOps engineer, and my employer made me delete our dev environment because he didn't see why it was needed and it was costing money. So I could see a company literally just having prod, with the devs having no say.
@@tr7zw There was a kill switch. There always is. They just wanted to keep operations going. The handling of the situation is more of a management and risk-management failure.
But in America, management is rarely blamed.
Based on the story, they wanted to recover while staying online so they wouldn't lose face as a market maker.
They always had a hardware kill switch. A server kill switch would have been a nicer option.
I say it's a crisis-management issue, because they were doing live debugging and troubleshooting while losing that much money, and apparently nobody in the chain of command said "Stop, take it offline." Besides the fact that they had to be contacted from outside... like... nobody was supervising that???
The part where the engineers were engaged in live debugging on a production system made me cringe into the next dimension. That's like trying to perform open heart surgery on a marathon runner as they're running the race. What an absolute disaster.
Great video.
well said. glad you liked it
it must have been torture, considering that there was no bug in their new code; it was a deployment issue 🤣
@@Entropy67The worst situation. Everything looks like it’s right and the problem turns out to be somewhere you never looked.
not gonna lie, that's a new fear unlocked for me as a future software engineer
I'm surprised they couldn't figure out how to kill the servers. Once, our CTO fixed a production issue by driving to the colo and unplugging a network connection.
At first I thought, how could anyone be this stupid?.. Then I got to the point in the video where they were given a month to design and deploy a whole new piece of software, and everything made sense.
yep, that will do it
@@DanielBoctor😂😂😂
Yes, getting the software wrong is understandable. Not knowing how to turn your software off, however, is not. The new routines worked within their existing framework. Deploying and decommissioning should be literally one of the first things they learned. Turn it off, and don't let it out of test mode again until you are sure it works.
@@paraax I am still confused. Why couldn't they shut it down? Just pull the plug in worst case no?
@@exponentialcomplexity3051 distributed system
If you're losing ~$150M a minute, there is a kill-switch. It's called the server room breaker
Exactly. Don't know why they didn't shut down all operations until they figured it out. They wanted to continue business as usual and ended up losing the entire company. What a bunch of dimwits.
What does that do? Never heard of it.
It’s the power circuit for the room. A light switch essentially. You just literally turn off the power for everything in the server room so the servers immediately stop melting the company down.
The most basic and sure fire way to stop the problem. You just hit the big power button.
These days it would be in the cloud and no one would have the credentials to nuke the account 😂
@@eyeofthepyramid2596 Same as the breaker in your home. Turns off power to any given room or circuit (like laundry machines or the stove).
"There is no kill switch," my man, unplug the server, throw a bucket of water or a cup of coffee on the buildings circuit breaker, litterally anything will cost a lot less than 2.3 million to fix.
2.3 BILLION*
You're assuming you have physical access to the servers involved. In the stock exchange world, most likely in 2012 you don't. Those servers were likely co-located in NYSE's datacenter.
@@baktru Log in with a console and shut down the server or the app instance.
@@baktru Very likely. I'm sure there are SEC rules regulating where these systems can be held, likely with a secure-facility access requirement to run them. By the time they made it to the servers on site, they would have been bankrupt. I'm also sure it's probably not a server you can shut off remotely, even if you intentionally wanted to. They want those servers up 24/7 and isolated.
@@baktru You don't run quant funds on the cloud. They definitely had access to the servers.
Seems like the "buy high sell low" code is not something you want on your production machines, but then I'm not a financial expert like these folks
Well, they did have it behind a feature flag. But reusing a feature flag was a huge mistake.
@@JacobSantosDev Again seems like just simply not pushing the test code to production machines is the safer option, but I'm not a financial expert/CS wizard like they are
@@thenayancat8802 oh sorry. The entire purpose of a feature flag is to be able to turn features on and off while you are testing them in production. Just because you tested something in other environments does not mean the feature will work as expected in the production environment. The point of a feature flag is to facilitate using the feature in a live environment where you will want to be able to turn it off. Technically, it is the "kill switch", and based on the limited information, turning that switch off would have saved them. Except it doesn't sound like anyone had training on, or access to, the feature flag. Different teams are going to have different procedures for how feature toggles are switched. Better if it's a page that product can manage, but it might be entirely engineer-owned.
"Not deploy test code" is a non sequitur, as depending on how you define it, all code is test code. The correct terminology would be "dead code", since that code should never run; but because there existed a condition where it could, once it was revived, fuckery happened. You never want dead code to revive. I have never heard of good things happening when dead code suddenly runs.
It depends. Borrowing a stock, selling it at a high price, then buying it back when the stock price falls allows you to make some money, although the potential losses are infinite.
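A quick worked example of that asymmetry, with invented numbers:

```python
# Short-sale P&L with made-up numbers: sell borrowed shares first, buy back later.
borrow_price = 100.0   # price at which the borrowed shares are sold
shares = 1_000

for buyback in (80.0, 100.0, 250.0):
    pnl = (borrow_price - buyback) * shares
    print(f"buy back at ${buyback:>6.2f}: P&L = ${pnl:+,.0f}")

# $ 80.00 -> +$20,000 (price fell, you profit)
# $100.00 -> +$0
# $250.00 -> -$150,000 (no ceiling on the price, so no floor on the loss)
```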
The problem here was that there was no distinction made between user and development software. This Power Peg development software should never have been on a live production server at any point in time. It belongs on a dev server or stored on an HD somewhere.
"I don't always test my code, but when I do.. it's in Production" - 😅
This made me lol
Crowdstrike? Is that you?
Everyone has a test server, it's just sometimes people are lucky enough to have a separate Production server as well.
"You either die a bug, or live long enough to become a feature".
As a coder who recently broke out with shingles due to the stress of being given far to short a deadline for something that ships to 9 million people I feel for these devs, that’s insane
I know, I find that most of these production nightmares are due to time constraints being pushed on the devs
too*
Isn't shingles deadly?
@@JorgetePaneteYou found the misspelling, yet missed the comma splice.
@@everythingpony I don't think so, but you're not really supposed to get it until 50+. I'm early 30s
Interesting. The way I've heard it, Power Peg was not "intentionally lose money for testing", but a production option to try to "peg" stocks to a given price (even if that would lose money). After being deprecated, it was judged too difficult to remove without affecting other production code. It was still being tested in builds until the 2005 changes caused those tests to fail, and they were removed. There was a script to automate deployment to the servers, but one was down for maintenance, so the connection timed out and it was skipped without logging an error (or nobody bothered to read the log). During the event it was obvious that something was wrong, but not obvious that it was causing huge losses until after the rollback accelerated things.
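The deployment detail here (one host silently skipped) is the scariest part. A toy rollout script that fails loudly instead; everything in it (hostnames, paths, rsync as the copy step) is invented for illustration:

```python
import subprocess
import sys

SERVERS = [f"trade{n:02d}" for n in range(1, 9)]  # 8 hypothetical hosts

def deploy(host: str) -> None:
    # Stand-in for the real copy-and-restart step. check=True turns any
    # nonzero exit into an exception instead of a silent skip, and the
    # timeout catches the "host down for maintenance" hang described above.
    subprocess.run(["rsync", "-a", "build/", f"{host}:/opt/app/"],
                   check=True, timeout=60)

failed = []
for host in SERVERS:
    try:
        deploy(host)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
        failed.append((host, exc))

if failed:
    for host, exc in failed:
        print(f"DEPLOY FAILED on {host}: {exc}", file=sys.stderr)
    sys.exit(1)  # abort loudly: a half-deployed fleet is worse than none
print("all servers updated")
```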
And that's why investing into code quality and tests is important.
I just wrote about 120 tests over the past week or two for what I'm currently working on. Good to know that someone will just delete them when they fail in the future instead of actually figuring out why they are failing... eh, if the tests failed when the code broke, my job is done.
As a QA, i would like to comment on this: hehe
@@Smaylik03 Google, Facebook, Twitter, Microsoft, Apple, and Adobe have entered the chat...
"Testing is what users are for."
@@Entropy67 Meh. A test that never fails tells you nothing. A test that always fails tells you nothing. The only good tests are basically coin tosses, but then why write code that only works sometimes? =D
I fuckin' hate tests. lol
As a software engineer, as soon as I heard one month, I was like: yep. Been there, done that.
Oh no, won't someone PLEASE think of the poor high-frequency traders. Lol.
Any donation link?
@natmarelnam4871 Stock markets are literally leeches on society and bring literally no value. Prove me wrong.
Say that while watching your 401k go to zero.
Something similar nearly happened to my father.
He works at an investment bank and was called in to solve an issue. Their network was being clogged by packets, and it seemed a cascading effect had been caused between every server, as they were stuck in a loop. 10 minutes before the stock market opened he went into the server room and just started pulling out ethernet cables and disconnecting servers, and 5 minutes before the market opened he pulled the right ethernet cable and disconnected the server causing the issues. They could've lost billions of dollars that day.
Lol, I was wondering why these guys weren't doing the same, literally fixed in 5 minutes
@@Xalgucennia Cloud computing exists... They may have been unlucky enough to have gone cloud-computing only
@@camiscooked Does cloud computing not have an off button?
@@ggsap well yeah. No off button, someone else is running the server and you have to access remotely. Most companies will only have a few people who truly know the steps to stop function.
@@camiscooked It's not that hard to stop the server. It's literally a giant red button in most cases; certainly easier than diagnosing the issue.
Deploying code from dev to prod without a QA staging environment or subsequent smoke testing is a recipe for disaster!
Yup. No DR env either
I've always wondered: does the NYSE (and other exchanges) have dummy/test environments for these HFTs to test their algorithms against?
If not: these guys are always "testing in production" -- hopefully with a very small account/budget limit in case the new code goes haywire.
It is nuts that the NYSE allowed them to place orders that couldn't be filled -- no circuit breakers in any of it. Try to do any of this on Robinhood or other trading app, and that app will prevent you from making trades that your account can't cover.
As a developer, I just want to say to all the companies and their executives: don't push too hard for a little gain. Give enough time for the requirements, or the company may not even exist after launching that product.
Doesn't matter, they will never listen.
Also, don't try to pay the devs as little as possible. Because then this shit happens, and a good salary is a rounding error in comparison.
@@GamesFromSpace I think they are probably well-paid, but there should have been 5x as many…
@@enginerdy Or just one more guy: a deployment/build master responsible for making sure dead test code, or any other sus code or files don't make it into production builds, and production environments are properly deployed.
Executives aren't devs; they will never think or act like a dev. Time is money, we can't waste the opportunity to make money here, especially when it's not ours.
Other rollback: "finally sh*t has stopped hitting the fan"
SMARS rollback: "holy sh*t! There's now 8 times more sh*t hitting the fan!"
But seriously, for software that runs at a scale of thousands of requests per second and works with millions of dollars, there should definitely be some sort of kill switch or feature toggle built in from the start. Although "rollback causes even more problems" is definitely a first for me.
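For what it's worth, even a crude kill switch is only a few lines. A sketch with invented names and limits, not a claim about how Knight's (or anyone's) real system worked:

```python
import os
import time

MAX_DAILY_LOSS = 5_000_000  # hypothetical hard risk limit, in dollars

def kill_switch_engaged() -> bool:
    # Crude but effective: ops creates this file (or flips a flag in a
    # central store) and every trading loop on the box halts within one tick.
    return os.path.exists("/var/run/trading/HALT")

def trading_loop(broker) -> None:
    # broker is a hypothetical adapter with next_order/send/cancel_open_orders/pnl.
    while True:
        if kill_switch_engaged() or broker.pnl() < -MAX_DAILY_LOSS:
            broker.cancel_open_orders()  # pull everything still resting
            break                        # stop trading; humans take over
        order = broker.next_order()
        if order is not None:
            broker.send(order)
        time.sleep(0.001)  # sketch pacing; real systems are event-driven
```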
Probably someone saw every problem and said they needed more time to work on solutions, and the company probably said: "Just ship it anyway, it won't happen, and we'll put it in our next update." The update never came.
There should've been a metric showing the volume of orders from each server.
But yeah, probably timing issues first.
Really appreciate the amount of detail here! Most other coverage was surface level but you went into a lot of great detail here
I'm glad you liked it! Thank you for the support I appreciate it
This reminds me of a time I accidentally uploaded an older version of a report I'd been working on in college, overwriting the new one and setting me back days.
Laughed so hard. Just subbed. Can't believe for this quality you only have 11k subs.
thank you for the support ❤
This makes all of the times I screwed up prod feel so much better. Thanks for the indepth analysis on this.
thanks for watching!
You can't screw up like them..
**loses 100k per second**
I still don't understand why they didn't just stop all their servers and cancel all the orders that weren't filled. That would probably take a couple of minutes, instead of half an hour.
They could certainly have cut the power, which would have been fastest, but I don't think it would have been possible to cancel the orders that had already gone through.
@@chunkyMunky329 better to cut their losses than continue to lose $2.5 M per second
@@ME0WMERE That's what I'm saying. Except I was saying that they should cut the mains power to the building instead of manually switching off each server.
Do you know where the master breaker is for your whole building? Well apparently they didn't either.
@@SianaGearz Yep I do, and if we were losing 2.5 million a second I'd probably hear someone yelling 'kill the power' and I would flip it.
God! Imagine rolling back to stable software and losing even more money. That would have sent people nuts!!!!
Losing close to 20m every second.
They didn't roll back the flags, though. That was their mistake. The problem started when they flipped the flag, and then they left the flag and replaced the code.
@@darrennew8211 thank you captain obvious
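That sequencing is worth spelling out, because it's the crux of the whole incident. A stripped-down model (all names invented) of how reverting the code while leaving the repurposed flag on revived the dead path:

```python
# Stripped-down model of the reported failure; all names invented.

flag_on = True  # the repurposed flag, flipped on for the new feature

def new_code(order: str) -> str:
    # What the 7 correctly updated servers ran: flag -> new RLP logic. Fine.
    return "RLP logic" if flag_on else "default logic"

def old_code(order: str) -> str:
    # What the 8th, never-updated server ran: same flag -> dead Power Peg path.
    return "Power Peg logic" if flag_on else "default logic"

# The "rollback" reinstalled old_code everywhere but left flag_on as-is,
# so suddenly ALL servers were taking the Power Peg path, not just one:
print(old_code("BUY 100 XYZ"))  # -> "Power Peg logic"
```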
2:23 that's a bit of a fallacy IMO. There is a kill switch almost always. If affected services are on-prem - kill their Internet connection. Pull a plug on whole office/building if you have to. And if it's a datacenter, do basically the same - DC support can disconnect your servers/racks from the Internet.
Market-making (Knight's entire business) means being the middleman for every trade possible... with their infrastructure and resources, they were #1 and were making a killing. IIRC Knight was responsible for nearly half of the volume across all exchanges in the stock market. If they ceased operations here, they would lose everything. Their job is to remove friction between buyers and sellers by being a middleman, and as their reputation grew (along with their systems), it was paramount to always be online. It is not as easy as unplugging a box, and in many cases doing this would only make matters worse, logistics- and business-wise.
At the end of the day, yes there should've been a killswitch, and yes it should've been engaged. This was one of the first (if not the first) blowups in electronic markets that the industry had ever seen. And from the sounds of it, Knight was understaffed in their engineer/ops departments.
I remember an old video where they switched a telephone network over and had to take the old one offline first.
It involved 30 or so people with bolt cutters :D
I remember that video too
Went looking for it a while back
Couldn't find it anymore
Is this what you are referring to? This is fascinating
youtu.be/saRir95iIWk?si=uBbpgRjyGvrHR1_S
Yes it is
And now I'm confused as to why I couldn't find it, apparently it was already in my likes?
I must have been pretty tired when I went looking for it last time (or maybe it was set to private for a while 🤷♀️)
That hit hard... "Hit the Kill switch (for the love of god !!) ...... There is no Kill switch....."
South Park: "Annnd it's gone."
9:52 The engineers did a good job though. Rarely are problems solved in that first 30-minute window.
*If you have $440M and you can lose even 10% of it in a single day, you've done a terrible job. This right here is abominable*
This is why you review your builds BEFORE launching!
*uncomments the code
*casually loses $440 Mil.
Two lessons to get from this :
1- Software development is research, you CANNOT rush it, if you want to build faster, get more builders, don't pressure the ones you already have with tighter deadlines.
2- Being a good software engineer doesn't mean not making mistakes or knowing every function and library in existence. Being a good software engineer means you clarify your code, document it, ask for reviews and testing, and push back when management tries to give you unsustainable goals. It's 25% programming skill, 25% planning, and 50% politics.
@@Bobrystoteles
This 💯 percent
@@Bobrystoteles Accurate. The Mythical Man-Month is such a good read!
Two women don't birth a baby in 4.5 months...
@@Bobrystoteles You're assuming it's a unified development process like back when everything was in-house. Each layer of devs is hired to build new systems on top of the old systems. So it's more like a mother giving birth to a mother giving birth to a...
Yes but also. If you think you don't need QA, you do.
Pair programming? You need QA.
Dev testing? You need QA.
QA is your friend.
If you don't have a software kill switch, you always have a hardware kill switch. Disconnect the malfunctioning algorithm from the web and run tests until the cause is found and fixed.
Production tip: the endings of these episodes feel so abrupt, it’s kinda jarring. I think it would be lovely to have more of an intentional outro - maybe summarize the topics discussed, or talk about some takeaways and how things might be improved in the future or something. Also a pause between the end of the script and the start of the “if you’ve made it this far” to give an indication that we’ve reached the end. Love how well you talk about these topics!
Wow, no way this only has 3k views. Keep it up!
I'm glad you liked it! Thanks for the support 😊
Great video. Would love to see some follow-up stories relating to the HFT industry. Read Flash Boys years ago and absolutely loved it. I hope to see more of this from you in the future 😎
Thank you! It's definitely an area that I want to dive into. Thanks for sharing - I had actually never heard of Flash Boys before ❤️
@@DanielBoctor Flash Boys is a great book. If you’ve never read anything from Michael Lewis that would be a great one to start with. The last couple of years I’ve mostly been listening to Audible. Hopefully you’re able to check it out.
Your content is excellent
Was invested by the end of the video for sure
Subbed, keep kickin ass man
thank you for the support! glad you enjoyed ❤️
@@DanielBoctor Yep. As a fellow IT tech, I've seen a lot of these debacles during my career (even caused one myself once - shhhh) SUBBED
No kill switch? Like there was no circuit breaker to flip, no power cord or network cable to unplug? If I were the acting executive I would have walked into the computer room with cable cutters or an ax or something and just started chopping. I understand there could be large penalties for failing to complete market orders left pending, but it can’t be worse than $2.5 million per second.
What music would have been playing as you did it?
@@eadweard. youtu.be/I8EOAEYgsE0
I blame the USA education system
@@eadweard. Bat Out of Hell by Meatloaf.
The server probably isn't even on the premises, and they might not even have access to it.
2:26 I can only imagine how that phone call to customer support went!
"Thank you for calling customer support, Knight in shining armor! How may I help you?"
"OMG, we're literally losing millions every second and can't figure out WTF is going on… HELP!!"
"Um, oh dear. Okay sir… um… Have you tried unplugging your router and plugging it back in after 10 seconds?"
That's why chaos engineering and DR testing are important... They will surely build a kill switch now 😂
Proper version control...
God, the worst part is thinking the new code was the problem and then reverting all the servers back to the old code, only to lose cash 8x faster. From 2.5 mil a second to 20 mil a second!
The editing and effects are amazing. Reminds me of Lemmino
really well done
wow, I never thought that my content itself would be compared to the legend himself. thank you for the support ❤️
It's pretty cool to actually see what quant firms do behind the scenes. Great video 🔥
Why didn't they stop their software, stopping all trading, when they noticed something was wrong? I mean, this is not Star Trek where a sentient program can refuse to be terminated.
I said out loud just now that I WOULD like to watch your videos about cybersecurity. It was thrilling to know about this story and I’d love to hear more about the aftermath. Great video!
Software development vs software engineering : the latter starts with system requirements that you have to be able to verify before production, the former can start with "you have one month..."
Wait, hold on..... A major deployment left to one person? And then, when trading started, not a single engineer was monitoring trades just to check that everything was working as expected? And the CEO takes a break on launch date. We once deployed a new process service for company payroll, and on day one we had all leads and seniors monitor the system, with several layers of safety introduced (limits on transaction amounts, limits on the number of transactions in the first run). That data got checked to within an inch of its life, then the next set, and the next, with reports filed by the dev managers that had to be signed by the CIO before the remainder of the transactions could go through. But even then, as it ran at a staggered rate, we had someone ready to pull the cord if anything seemed off. There were redundancies for redundancies, as this system could empty 3 bank accounts in no time.
Yeah. Makes no sense. 😂 What a clown-show it was.
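That staged pattern (hard caps, a small verified first batch, sign-off before the rest) is simple to encode. A sketch with invented limits, names, and sign-off hook:

```python
# Sketch of a staged, capped rollout like the payroll run described above.
# All limits, names, and the sign-off hook are invented.

MAX_TXN_AMOUNT = 10_000   # reject any single transaction above this
FIRST_BATCH = 50          # small first run that humans verify end-to-end

def run_batch(transactions, process_one):
    for amount, payee in transactions:
        if amount > MAX_TXN_AMOUNT:
            raise ValueError(f"{payee}: {amount} over hard cap, halting batch")
        process_one(amount, payee)

def staged_rollout(transactions, process_one, signed_off):
    run_batch(transactions[:FIRST_BATCH], process_one)
    # ...humans check the first batch to within an inch of its life...
    if not signed_off():
        raise RuntimeError("no sign-off; the remainder never runs")
    run_batch(transactions[FIRST_BATCH:], process_one)
```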
Fantastic vid! A complex topic made simple, great job
Much appreciated!
"Software will handle it!"
Software:
Excellent video - very informative! I enjoyed the blend of finance and software. Given how intertwined they are these days, there's likely many more topics to explore!
THANK YOU MIGUEL! I completely agree as well ❤
Enjoying your content big time. Appreciate the work that you put in!
thank you for the kind words - glad you like it ❤️
Bro, in this case the problem which caused all of this was literally naming. Looks like I'm not the only guy who struggles with naming things 💀
Doing a rollback and making the problem 8 times worse made me lol so hard.
Fuck a kill switch, I'd trigger the fire alarm in the server room
VERY interesting content! The only thing I'd suggest is to slow your narration speed down a bit, as the story kinda comes at the listener like a fire hose. I look forward to your next video...thanks!
This video was very well produced and executed, great content. Easy sub.
No kill switch or procedure to resolve this? Sounds like the devs got 2 days to brainstorm and the company said "yup! autobots roll out" 😂😂😂😂😂😂
I'm not in anyway connected to this sort of stuff, but this was fascinating to watch/listen. You are an excellent communicator.
thank you! I'm glad you thought so. I appreciate the comment :)
There is no kill switch? No dual human control that checks the deployment? The CEO is out during a major brand new trading deployment? This is all lies from the CEO trying to save his job. Anything that goes right or wrong is the CEO's responsibility.
I feel like your channel is going to blow up soon. Great video and editing. Can I ask what editing software you use?
Thanks! For sure, I use DaVinci Resolve 😊
Excellent video! What a great summary dude well done.
Much appreciated!
Crowdstrike: "HOLD MY BEER!"
😅😅
🤣😂
Why didn't they turn off the computers? Just pull the cables.
They could have literally disconnected the network faster. It does not have to be automated. Even in the cloud, a black-hole route is easy to create. As for process, this is classic "technology is a COST center" thinking: cut budgets, reduce time to deliver, and you reduce talent in the technology pool, since the best technology people have the easiest time replacing their employer.
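To make the black-hole idea concrete: on a Linux gateway it's a single iproute2 command, which a script can apply in seconds. A sketch assuming root on a box in the order path; the address is a documentation placeholder, not a real exchange IP:

```python
import subprocess

EXCHANGE_GATEWAY = "203.0.113.7"  # placeholder (TEST-NET) address

def black_hole(ip: str) -> None:
    # Every packet to this destination is dropped from now on, so no new
    # orders can leave the building while you debug.
    subprocess.run(["ip", "route", "add", "blackhole", f"{ip}/32"], check=True)

def restore(ip: str) -> None:
    subprocess.run(["ip", "route", "del", "blackhole", f"{ip}/32"], check=True)

if __name__ == "__main__":
    black_hole(EXCHANGE_GATEWAY)  # seconds to apply, vs. half an hour of losses
```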
@2:00 - did you mean a 2-and-a-half-minute period, or am I misunderstanding the numbers here?
nope, you're right, it was meant to be 2 and a half minute period. a few other people mentioned this, and I updated the pinned comment. thanks for pointing it out 😀
amazing, sober, deeply technical analysis. Brilliant
Sorry if I don’t get the details but why couldn’t they literally send a so command out little pull the plug. It was not days of the cloud yet so wouldn’t they be running their own servers via some sort of enterprise setup?
As someone who's worked in IT for 40 years, from machine-code programmer to head of engineering, this is definitely the CEO's fault. Testing takes time, and yes-men are too afraid to speak truth to power, instead ignoring all the advice from engineering. No one died here; it didn't work out so well for the people on the space shuttle.
What a crazy story... so insane to me to run a company moving that much money and not have integration testing. On first glance you'd want to blame the engineers here, but the majority of the blame would have to be on engineering management/upper management to allow prod code on financial systems to be deployed sans integration testing. This story is a great anecdote as to why infrastructure as code/virtualization is so critical.
It's very hard to simulate the load that exists in prod. Add to that a code base that had grown until no one really understood it anymore.
@@Micke12312 sure; not $440 million hard
That's the issue with arbitrage bots: when they fail, they lose years of profits in just minutes.
Title should be "Manual Deployment Ends Up Costing $440 Million. Maybe we need to hire some DevOps?"
Hire DevOps. QA. More than 1 dev. Good working practices. Estimation sessions from devs (no crunch time allowed outside of a P1). Proper pipelines and proper version control.
I really like this content. This seems to a good channel. Insta-subbed.
Thank you so much! I'm glad you like it. Thanks for subbing
Very Good ! Enjoyed this and will watch more of your content.
Old server number 8 is the hero we need ;)
Losing over $100 million in a minute? Even Cathie Wood couldn't do that, truly impressive.
The problem was management all along. But they got golden parachutes as punishment... The sooner these Wall Street types become personally liable for these kinds of screw-ups, the better.
It's the fault of the regulator allowing firms to buy shares that aren't actually available at that time. And it's the fault of the firm doing such a thing. Incompetence of the firm involved not to have taken proper precautions. Such profiteering shouldn't be allowed. As it was, they lost out. They'd not have complained if they'd accidentally made that sum rather than lost it.
So you're saying the regulators shouldn't have allowed them to build a house of cards out of a house of cards?
Why did they not cut the power to the servers, the floor, or the building, or cut the fiber or something? This could have been stopped in minutes if someone had just taken the decision to take the measures needed to get the servers offline.
You took something technical & dry, and made it entertaining.
This is an awesome comment, glad you thought so! Thanks for the support ❤️
My jaw just crashed through the floor. You're telling me a company that writes code managing BILLIONS of dollars automatically not only doesn't use proof systems to prove the code is correct before deploying it, not only doesn't take their time to carefully vet any code changes through multiple levels of review, but actually pushes developers to rush out code and deploy it as fast as possible with no review, and even has no way to turn it off?! This level of incompetence is beyond what I would have thought humanly possible.
I got confused in the first 10 seconds
I get like 30 useless emails daily about some system error for something I don't support, interspersed with random PTO notifications from coworkers and company/organization wide announcements that aren't relevant to me. I can totally understand just ignoring those emails.
One of my former professors once told us that computers are just hard working idiots. They will readily wipe you and your assets off the face of the earth if just one line of code tells them to.
As a software engineer, I'm sometimes horrified at the practices other companies have. It's SO easy to keep the Power Peg algorithm but not have it in production. Things like that are just astonishing to me.
Hi Dan, I really enjoy the format of video you make, I think you may even be the person who pioneered this genre. Please keep them coming.
Thank you so much. I can't say I pioneered the genre, but I appreciate the words
That was awesome. Thank you.
Thanks for watching!
I don't feel stupid anymore after watching this
You go to the backside of the offending computer... You then unplug its connection to the network.
Why didn't they shut down the servers? At least that would prevent any further trades from going through.
4:34 You got the wrong stock footage; that's the Gold Coast, Australia, not New York, USA.
Happy anniversary
In the end, the blame is on the CEO. Mistakes happen. The problem was greed, preventing proper procedure in a rush to grab more money.
The entire stock exchange is there so a bucket of leeches can suck the life blood out of people who actually provide labor and goods, in favor of those too lazy, stupid, or entitled to work for a living.
Betting on other's fortunes should be outlawed.
Great video. I can't understand the quotes though, the audio is too cooked through my speakers.
This is like Skynet becoming self-aware. Engineers in a panic try to pull the plug, but are unable to.
A SMARS a day helps Knight Capital work, rest and play 🎵 🎶
At some point they should have just put in a firewall rule to block connections with the trading server so no more trades could be made while they’d figure it out. Better to do this at $40m than $400m…..
Imagine getting Power Pegged for -440 000 000 dollars💀
Oversight, or bro was on a mission to take down that firm.
I am going to share this with my office as an example of why we shouldn't reuse flags.
How in the heck are you gonna make software to buy and sell stocks and not have a kill switch? That's like driving a car with no brakes
Had no kill switch, didn't take the time to review the code, rushed the developers and didn't even test it in a controlled setting before pushing. They really stripped all safety features and weight out of a car, put in a formula engine and didn't consider what would happen if it crashed.
“Well clearly it’s the new code that’s the problem! Let’s just roll it back to stop the bleeding.”
“… oh no!”