I don't think they have to do very complex calculations, at least nothing that's a problem on current hardware. Computers were flying rockets 50 years ago. And I don't think reaction time is a big issue either, since rocket engines and reaction-control jets are mechanical devices acting on the scale of tens of milliseconds. A computer game has to simulate an entire world in just 16.67 milliseconds (60 fps), and some games can run even 4x faster. I'm not saying their job is easy, just that I don't think performance is a big concern.
I guess for most things a millisecond isn't that long; after all, back in the 70s a lot of stuff during spaceflights was done by hand, by humans :P (like docking, engine cut-off, ...). So if a human can react fast enough for those things, a triple-redundant machine will perform alright :)
"Controlling a laser with Linux is crazy, but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT." -- Linus Torvalds
The use of multiple, redundant flight computers is anything but new in spaceflight. Most rockets or spacecraft have at least two of them; the Shuttle, for instance, had 5. However, most of them used more exotic languages like Ada, or real-time operating systems like VxWorks.
This is actually something that can happen! What they do is designate one of the 3 computers as the master, and if all three disagree they just use the master's result.
Maybe they compare each calculation one at a time, and the likelihood of radiation hitting the same tiny spot on every CPU is just insanely low.
It would probably be very rare (that's why they chose 3 instead of 2). Even if they all returned different values, the memory is separate, so they could possibly rerun the calculations to see which computers were affected. If that didn't work, I can already think of other clever ways to fix it. Even if all three computers were hit, they could probably still figure it out. I'm not an expert, but it seems 3 computers is enough to be reliable if that's what they settled on, and they probably have a solution for every scenario they can think of. I wish I had an exact answer, but if you really are that curious, I would learn the skills needed to get a job there and become part of the team 🙃 That, or maybe see if there are videos on hardware redundancy in space 😂
Whoa, just found you. Can't believe I never saw any of your videos! I was actually trying to make a video like this, but I guess now I don't have to because, clearly, you've done a very good job :p
Small correction: there are select MCUs and MPUs that come in rad-hardened models. They work just like any other processor would software-wise. There are some small quirks and handicaps, but they don't require any sort of special software engineering, and they don't use special programming languages.
The computer actually crashed as Armstrong was descending. They crashed a lot. But, they never had to render fancy graphics or the like, just crunch some calculations.
This triple computer setup is actually fairly common. Most modern railway signalling interlockings use an almost identical system, as do most nuclear power stations.
When you said SpaceX uses Linux as the primary software on their computers, it gave me the BIGGEST of smiles, because as a Linux user I've felt like an outcast quite a few times, since it's not very non-programmer-friendly. But after a few years of struggle, I am a proud Linux user.
Ironically, those old clunky computers NASA used for Apollo were far more radiation-immune, due to their lack of nanometer-scale parts; the higher wattage of those computers made bit flips even more difficult, because whatever radiation is trying to cause one needs to overcome the higher voltage/amperage of the parts. Modern methods are more ideal though, for sure; no one wants to fly anywhere with less computing power than a digital watch... *again*, if it can be avoided.
Thank you very much, Xavier, for your work. Also thanks to the translators; in my case I congratulate Jordi Ferris. The video is great!! I have come to learn about a subject I was ignorant of, and which is of fundamental importance in spacecraft safety. Thank you very much!! Greetings from Santiago de CHILE.
Astronauts: We're nearing the space station Windows 10: Please wait while we install a system update Astronauts: Asshhhhhhoooooollllllllle! Ground: Excuse me, did I say something wrong? ISS: You're coming in a little fast.
To be honest, the software testing described in this video is standard practice in a large portion of embedded software projects. Most public European satellite projects require significantly more for on-board software, and do not allow dangerous technologies like C++.
Still makes you wonder how they sent rockets to the Moon almost 50 years ago, and why the task sounds so complicated even with the computing power we have today.
It doesn't sound all that hard, tbh... Plus, a lot of the systems on the lunar missions were human-guided with electronic tools. Not all that automatic.
We have all this stuff nowadays because we can. Back in the 1960s they couldn't fit many flight computers in the rocket, so they did what they could. Now we have tiny, really fast computers, so we use a lot of them to make rockets as safe as possible.
Being in the Test & Evaluation industry for the DOD I really like the telemetry systems SpaceX uses. I know quite a bit of it is still on the NASA/DOD side but damn some of their feeds are amazing and most don’t even know just how good it is. It’s definitely taken for granted.
Awesome vid. Makes so much more sense to have cheap redundancy than rad-hardened components. Never knew that they scaled the redundancy to each engine! SpaceX has also started to use the Bazel build tool rather than their own custom build tools, furthering their interest in open source projects.
Triple redundant computers plus a comparator is used in Vancouver Canada's unmanned public transit system called SkyTrain. It came into service in 1986 and I am fairly sure it was not the first in the world to use such a system architecture. So it is nice to know that this rather ancient (in computer terms) technology is still considered cutting edge.
"Game developers are usually a good fit for SpaceX, because they're used to writing code that runs in environments where memory and processing power are constrained." I didn't know SpaceX was hiring game developers from the 90s, because this is probably just false, or greatly exaggerated, for most game developers nowadays. No one would hire a game developer for this kind of work if there's an EQUALLY GOOD embedded systems developer available as well.
I imagine that he meant game *engine* developers like Unreal, Unity, etc. (also including people who write custom physics libraries, graphics libraries, networking libraries, etc.) because they tend to care more about performance and optimization as compared to game *play* developers. Also, because of the nature of video games, engine developers typically have a solid understanding of real-time data processing/simulation. This is crucial for computer system on a vehicle producing tons of data every second which has to be analyzed in order to make decisions. So I agree, most gameplay developers wouldn't be good, but there certainly are folks in the game industry who have the right programming mindset to do this type of work.
XoddamCXVII Of course, if he was talking about game engine developers who deal with the low-level stuff, then I agree with you. But at the same time, there aren't many game engine developers out there; that's also why I think he didn't mean that. He meant high-level game developers, and that's why I commented.
The same logic (an odd number of computers, generally 3, comparing the results of the calculations) has been used on driverless subways for quite some time. And it is probably also used in many other fields that need calculation safety. But nice to learn that this is a robust enough concept to be used in space too.
Cpu 2: can i copy your homework? Cpu 1: sure Cpu 1: got caught copying, his homework got confiscated. Cpu 2 and 3: here ya bud. Take these paper and redo your homework.
"No need to create custom software, when u can just use GCC and gdb " :D wow dude.... really? I thought everyone writes their own compiler and debugger.. holy shit :D
Yeah... that may be true for a debugger, but a C++ compiler?? And you use this as a reason why they chose C++, that it has tooling... well, every compiled language has a compiler, and I think a debugger too. So I still think it's a bad idea to present these things as the REASON they chose C++. You could've just said they chose it because it's closer to the hardware, with all the bit-flipping and memory stuff you mentioned before... That makes more sense.
You "think" every language has a debugger? Also: tooling is much more than just a compiler! Not every language has as many tools as C++. Take COBOL, for instance. Yes, it has a compiler, but other tooling is very limited. But don't take my word for it! Take a look at the sources of this video. There are a few gems in there!
My point was that listing GCC and gdb as examples of good tooling, and as the reason they chose C++... is stupid. Because every compiled language has a compiler and a debugger.
@@Enemji Wait, I don't think SpaceX uses IBFT, or any kind of PoW/PoS/etc. at all. It's just simple redundancy, not blockchain. Correct me if I'm wrong :D
This off-the-shelf-plus-redundancy approach is how it should have been done all along. But not only for CPU/RAM. What about the cameras? Instead of laughably outdated, low resolution JPGs coming from a multi-billion-dollar probe (Huygens, anyone?), send two _modern_ cameras up, _or_ at least send up both a space-hardened, outdated POS _and_ a modern camera.
Guessing, but I imagine it's a different ball game. The length of time probes and satellites are exposed to radiation is utterly different from rockets. I don't know how long a Dragon capsule would be left in space, but I guess it could be deorbited and refitted far more easily than a satellite. So I suspect if you fit your satellite with cheaper hardware and the camera breaks, you'd need to repair it in space. If the design is flawed, it's completely dead.
It's actually just part of space tradition, one which hasn't been discarded wholesale the way SpaceX did with space-hardened software/hardware. SpaceX is equally guilty of it. They use 480i (yes, circa 1990) cameras for all of their space footage, including the recent, supremely historical Falcon Heavy / Tesla Roadster launch. Those cameras aren't used for longer than a few hours, so there is literally zero excuse.
Brian Reilly The ground footage is 1080p or similar. The onboard footage (main booster, side boosters, and every angle on the Roadster payload) is 480i, full stop. The digital resolution of YouTube's video has no bearing on that. If you're not familiar enough with broadcast NTSC to recognize it at a glance, do this: capture an image from one of the Roadster feeds (like one showing the "Don't Panic" screen), paste it into Photoshop, scale it down to 480p, then scale it back up to its original size. Directly compare this result to the image before you did any scaling. It should be bluntly obvious then.
Ground footage, you are sure to be correct. But why would they use ancient analog video technology when higher digital resolution options are available at a lower cost with less mass and less complexity on their vehicles? This is why your claim makes no sense. Why would they choose the inferior video option?
My guess is something that's been modified a bit, with the kernel built without any drivers they don't need. I wouldn't be surprised if they just put the program in the initrd, so it executes as soon as the kernel is ready. It's fairly common practice in other Linux-based embedded systems; routers are a good example of that.
Simply Explained - Savjee On consoles, yes, I agree with you, but on PC it's a different story... Because PC gaming is a much smaller segment, very few game studios care to optimise their games to run well on that platform, leaving game engines behind the capabilities of graphics cards. It's only in the past year that optimising games to use multiple threads has become popular.
Yeah, you do have a point about PC gaming. But in general, game developers have to carefully manage things like memory, and they can only run a limited amount of code to render each frame (heavier code = more time to render a frame = fewer frames per second).
Simply Explained - Savjee a lot of the lower level work is handled by the game engine, which aren't updated often enough to take full advantage of PC hardware. But yes, I understand the point you were making for the purpose of this video.
Falcon itself runs on PPC-based controllers, which are also used for low-level functions on Dragon. The x86 computers don't directly control the vehicle; it's kind of like a horse-and-rider arrangement. The electronics are not consumer-grade COTS but more like industrial stuff, what you'd see running an assembly line or heavy equipment. I think the lowest-end hardware they use is automotive-quality stuff, which is still very ruggedized compared to, say, a laptop or phone.
Going to the moon doesn’t require crazy technology, to put it simple you just need a lot of money and do a lot of math. Also they just used bigger computers and with more analog signals that aren’t affected by radiation but analog signals take longer to process.
Space travel has unique problems, almost requiring out-of-date software and hardware. The Space Shuttle's last flight, in 2011, had hardware and software that was state of the art in 1992! The simple reason is that one has to test for all variables, and hardened older systems have that. New parameters have probably changed that, but 6 years sounds about right. Military stuff is even worse. I used to work for Lockheed.
Blockchain just uses an agreement protocol in one particular step, but this type of fault tolerance has been known for a long time and has some formal theory behind it. It's called Byzantine fault tolerance.
It runs windows 10 with updates disabled :D
Imagine your about to land on mars and windows decides to update
@Luke skywalker The Starfleet commander humour.dll not found
That's not physically possible.
lol
@Luke skywalker The Starfleet commander telemetry monitors in the control center are on windows
import rocket
rocket.launch()
seasong launch function not defined
rocket undefined
cin >> software;
cout
if rocket.goingtocrash
don't
rocket.self_destruct('5minutes')
Low Earth orbit doesn't require many rad-hard chips. We need them more for deep-space missions (like the Deep Space Gateway in the next decade). Source: I work at NASA on rad-hard camera systems.
accept my resume sir?
Yeah, I figured as much. LEO still benefits from the dense part of Earth's magnetic field deflecting charged particles. That's why CubeSats are popular... your Atmel ATmega-something-something will usually not run into a lot of problems, except colliding with another retrograde cube because one smartass student picked Kessler syndrome as the topic for his finals.
Didn't the upper stage of the Falcon Heavy maiden flight go through the Van Allen belt before its final burn, to test whether it could withstand the radiation and the 6-hour coast?
jakejakeboom Well, parts of the Van Allen belts dip down toward LEO, so you might need it there.
Three words: South Atlantic Anomaly.
The Apollo program was a driving force in computer development; now computers help rocket science.
The driving force was primarily geared towards nuclear and thermo-nuclear weapons.
A great positive feedback loop which benefits us all.
Proud to be a computer science student
How the tables have turned…
Not only a Developer, you are a good Designer too (nice visuals).
Thanks a lot!
The tool I'm using has nothing to do with the visuals...
Btw. What did you use for the visuals?
Most of it is done in Keynote and some stuff in Sketch. Mac only I'm afraid.
Sketch is supercool, and yeah, sad part is both are only for MacOS.
I'm guessing SpaceX prefers "spaces" over tabs.
And they take software launch and crash very seriously.
You got the Ariane 5 error wrong. It tried to convert a 64-bit float to a 16-bit integer, and the result was not truncated: a conversion error was raised, and the error message was interpreted as valid data, which caused the failure.
Hehe, I was about to comment on this too. To be exact, the Ariane 501 report states: "As a result of its failure, the active inertial reference system transmitted essentially diagnostic information to the launcher's main computer, where it was interpreted as flight data and used for flight control calculations." Thus I would argue it wasn't the error itself being interpreted as guidance data but some other "diagnostic information".
And it was due to the fact that Ariane 5 shared components, and therefore source code, with Ariane 4, and since the components and software were known to function, this condition was not tested.
Thanks for the video; I came here to make a similar comment. AFAIK (and our lecturer brought in the secondary computer to demonstrate), the exception caused a fail-over, only to rapidly reoccur on the secondary computer as well, triggering the self-destruct. I was not aware of any faulty interpretation of data.
Wow, so just converting a variable caused the rocket to go in the wrong direction... dang.
obviously spacex uses mechjeb
XD
lol
He's saying MechJeb, a mod in KSP.
thx sherlock
xD
Clearly these people spent a long time thinking about everything they could think of, and they clearly have the brainpower to know what they should and shouldn't do!
It must be pretty exciting to work over there :)
Thanks for sharing, and keep up the great work, Xavier!
More like they are able to start the entire space program from zero, so they don't have to deal with bureaucracy and "years of protocols", and are able to implement the latest computer, electrical, and materials science technology swiftly without needing layers of organizational approval.
You have to realize that our military-industrial complex has become such an elephant that it is better off privatized and restarted from zero.
champan250 you'd still have to acknowledge that starting a space business from zero is very far from easy. I guess you could say it's lightyears away from easy :D (sorry 'bout that, I had to)
champan250 You're absolutely right. I worked for a couple of companies when I was living in Paris, and it's just annoying how slow the processes in big companies are. I clearly remember one company that wanted to move from Windows XP to Windows 7, and it took them basically 5 years to make it happen: two years of reflection on the impact on computer infrastructure and software engineering, and 3 more years to fully change and migrate everything. Big elephant indeed.
Are they prone to hacking, since they're using generic software?
Maybe, but nobody cares about hacking SpaceX software... especially the onboard computers, because they don't use the internet.
Thanks for sharing. I am a professional embedded SW developer and SpaceX enthusiast; so, I had been curious about their code.
I was glad to hear they use Unix and that they don't use Python (the horrible preferred choice of a lot of Mechanical types).
I am a seasoned C++ programmer and do like that choice; but I think a managed programming language like C# or Java might have some advantages.
C++ is a loosely constrained language and is also unmanaged, so it can be more difficult to find subtle bugs (like unassigned pointers and memory leaks), and much more dangerous when you miss them. There are techniques for limiting this, but programmers are still people; they can make and miss mistakes, and testing every possible code path is often not possible.
I am sure the developers on the Ariane 5 did lots of testing and still somehow missed the code bug that caused the failure.
Anyone that claims their code is 100% bug free is either deluding themselves, lying, or not looking. I am not really sure which one of those is worse.
The advantage of C++; though, can be better real-time determinism. With UNIX and C++ on an X86 core, one project I was on achieved sub-microsecond determinism.
At the speeds they are flying, that may be the bigger concern. There are ways to build more safety into C++ code, especially if they don't use other people's libraries.
Thanks for the cool info.
Glad to see videos like this! I was a software developer for the shuttle program for 25 years designing and writing various launch safety systems, worked for USA (United Space Alliance)
And what software and hardware was used for the shuttle back then?
I was amazed to find a ticket machine some weeks ago that had a failure and was rebooting all the time. Behind the "fancy" mask with a color touch display, I saw the system is a 286 processor running MS-DOS, in 2018. But normally it works well, failures are seldom, and you can buy tickets there.
Cool, are you able to comment on how the SpaceX software approach differs from how you worked? E.g., did you use C/C++, continuous integration, store logs with source code, test on actual hardware, etc., as mentioned here? I believe I read somewhere that SpaceX built a board with all the various hardware components, actuators, etc. controlled by the computers, so that they could test the software against actual hardware.
However, that sounded like a no-brainer to me. Wouldn't all space companies do this? Or was your software only really tested upon launch of a rocket?
my dream...
Hi Mojo. Did you guys happen to use FPGA's while there? Just curious.
Such an elegant solution to a difficult problem. I have come across this many times in the IT industry: build in redundancy throughout, so that no single failure compromises the system.
Arduino uno, obvious!
Henrique Lemos hehe
Great Scott could launch a Dragon spacecraft, bring it to Mars, and make it land, using an Arduino Nano.
I just finished my arduino toilet when I saw this. I call it the toiletduino. I'm not even joking
Wow, what metrics do you collect about your toilet?
@@simplyexplained lol I just hooked it up to a solenoid valve and a push button, nothing fancy
I remember a news story about off-the-shelf hardware in the 90s. Iran's or Iraq's spies bought a lot of PS1 consoles and turned them into Scud missile guidance processors. The US immediately put an embargo on PS1 sales. Good times. Nowadays it's not weird to find milspec hardware using your average laptop/desktop CPU. Good times we're living in now. Good and cheap.
Mu Effe fun fact: the USAF used several hundred PS3s, set up with a customized version of Linux, to build a supercomputer cluster. The GPUs handled the heavy computational load, with the CPUs handling normal system functions and acting as co-processors to the GPUs.
The New Horizons processor came from the PS1, because it was used by millions of people and it works very well.
They haven't done anything like that. The IBM/Sony Cell processor was very interesting, but turned out to be too difficult to take full advantage of. The Iranians had indeed smuggled some PS1s for a "meteorological research" cluster, but I doubt they would waste that horsepower on disposable rockets, which don't even need it. The Soviets had image-guided Scud modifications in the late '80s, as did the Americans with the Pershing II (radar images). The problem with this approach is that you need to build up an image library first... PS: I wonder, have some of the Cell's creators been hired by, say... AMD ;)?
This was a very interesting video!
Their usage of multiple CPUs for redundancy is very clever, but I wonder how they manage to keep performance at optimum levels, given that split-second decisions can have a huge impact on the results. Your comment about game developers might contribute to this: they similarly have to write code with consistent performance characteristics; you don't want something that normally takes 5ms taking 100ms because it encountered a worst-case scenario.
The choice of OS and language also contributes to this - a garbage collector or JIT compiler wouldn’t provide nearly the consistency they would need.
I find it amazing that they can even spare the time to compare the results between CPUs, especially given that they likely have the CPUs spread out to provide extra redundancy, further increasing latency. That said, maybe the latency is so large that it outweighs any gains that could be made from increased performance.
In summary - they face very interesting challenges, and it’s cool to hear about it.
Hardware control is usually done with real-time software, certainly in such a demanding application, because you get a guaranteed reaction time. Forget about garbage collection; in fact, forget about dynamic memory allocation. Real-time software is written in such a way that these concerns are completely removed. The limitation is how fast you can run your communication cycle. Here on my table I have a system with a guaranteed 250-microsecond cycle time, nothing special, off-the-shelf stuff and a consumer PC, running Windows no less. I'm sure SpaceX can do better, even with redundant CPUs keeping an eye on each other added on top.
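The fixed-cycle idea described above can be sketched as a toy loop (Python purely for illustration; a real system would use an RTOS and a compiled language, and `time.sleep` has nowhere near 250-microsecond precision): do the work, then sleep out the remainder of the cycle, and never allocate inside the loop body.

```python
import time

def control_loop(step, cycle_s=0.001, cycles=50):
    """Toy fixed-rate control loop.

    Runs `step` once per cycle, then sleeps out the remainder of the
    cycle so each iteration starts on a predictable boundary. State is
    preallocated and mutated in place; the loop body never allocates.
    """
    state = [0.0]  # preallocated state, mutated in place by `step`
    for _ in range(cycles):
        t0 = time.perf_counter()
        step(state)
        elapsed = time.perf_counter() - t0
        if elapsed < cycle_s:
            time.sleep(cycle_s - elapsed)
    return state[0]
```

The function names and cycle numbers here are made up for the sketch; the point is only the pattern of work-then-sleep with no allocation in the hot path.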
I don't think they have to do too complex calculations. At least not anything that's a problem on current hardware. Computers were flying rockets 50 years ago.
And I don't think that reaction time is a big issue either. Since rocket engines and reaction control jets are mechanical devices, acting on the scale of tens of milliseconds. A computer game has to simulate an entire world in just 16.67 milliseconds (60fps), and some games can even run 4x faster.
I'm not saying that their job is easy, just that I don't think that performance is a big concern.
I guess for most things, a millisecond isn't that long. After all, back in the 70s a lot of stuff during spaceflights was done by hand, by humans :P (like docking, engine cut-off, ...), so if a human could react fast enough for those things, a triple-redundant machine will perform alright :)
"Controlling a laser with Linux is crazy, but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT." -- Linus Torvalds
The use of multiple, redundant flight computers is anything but new in spaceflight. Most rockets or spacecraft have at least two of them; the Shuttle, for instance, had 5.
However, most of them used more exotic languages like Ada, or real-time operating systems like VxWorks.
Three computations comparing to each other. Simply brilliant. 3:25
Curious how SpaceX handles the case when all 3 computers calculate something incorrectly.
This is actually something that can happen! What they do is designate one of the 3 computers as the master, and if everything disagrees they just use that.
@@nathanheidt1047 That seems sketchy??? Curious where you found this information.
Maybe they compare each calculation at a time and the likelihood of radiation hitting the same tiny spot on every CPU is just insanely unlikely perhaps
And then they recalculate or sum I dunno
It would probably be very rare (that is why they chose 3 instead of 2). Even if they all returned different values, the memory is separate, so they could rerun the calculations to see which computers were affected. If that didn't work, I can already think of other clever ways to fix it. Even if all three computers were hit, they could probably still figure it out. I'm not an expert, but it seems 3 computers is reliable enough if that is what they settled on, and they probably have a solution for every scenario they can think of. I wish I had an exact answer, but if you really are that curious, I would learn the skills needed to get a job there and become part of the team 🙃 That, or see if there are videos on hardware redundancy in space 😂
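The voting scheme discussed in this thread can be sketched as a simple majority vote. This is a toy model, not SpaceX's actual code; the fall-back-to-a-master behavior is only what a reply above claims happens when all three disagree.

```python
def vote(a, b, c, master=0):
    """Return the majority value of three redundant results.

    If all three disagree, fall back to the designated master
    (index 0, 1, or 2): reportedly the behavior in the rare case
    where radiation corrupts more than one computer's result.
    """
    results = (a, b, c)
    if a == b or a == c:
        return a
    if b == c:
        return b
    return results[master]  # total disagreement: trust the master

# One computer took a bit flip; the other two outvote it.
assert vote(42, 42, 43) == 42
assert vote(7, 9, 9) == 9
```

With three units, any single corrupted result is masked; the master fallback only matters in the far rarer two-fault case.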
I was a little tired of watching the same SpaceX facts over and over again but you always find something new! ^^
Keep Going! This was a very interesting video:) I want know more about SpaceX
Exactly the right amount of explanation I want as an engineer and a developer. Thank you. You certainly got a sub.
woah, just found you. Cannot believe i never saw any of your videos! I was actually trying to make a video like this but i guess now i don't have to because, clearly, you have done a very good job :p
Thanks a lot! Glad you find my videos interesting.
First french speaking guy that I hear on youtube that actually makes an effort for his english accent, ears say thank you Xavier
I'm not French, but thanks for the compliment! I will keep practicing ☺️
Perfect video, very instructive. Thank you.
Small correction: there are select MCUs and MPUs that have rad-hardened models. They work just like any other processor would, software-wise. There are some small quirks and handicaps, but they don't require a special kind of software engineer, and they don't use special programming languages.
great stuff, best vid yet. glad I am subscribed
I was impressed with these topics you mentioned CI, Linux, Monitoring, Code sharing. Thanks for clearing my curiosity
I just finished my html course on udemy
Can i work for spacex now?
Yeah, sure--you'll be tasked with updating their Website... :)
@@ZeHoSmusician HAHAHA
HTML is for web frontends, not control hardware.
Sorry, he said c++, please take a new course 😂😂
It's really that easy tho
@@ZeHoSmusician I cracked at that HAHAH
Wow you explained it so easily! Great job!
The "Byzantine generals problem": voting was already used in Space Shuttle avionics.
This is so sick. Studying computer science and physics rn. Working at space x is a dream. In 2-3 years time it'll be my life
I still wonder how we went to the Moon with old hardware 😂
competence and a bit of luck
Make the transistors so big that radiation doesn't matter.
make everything analog so bits doesnt matter
Watch the "Apollo AGC" series by CuriousMarc. They have most of the old hardware and got it kinda working again. Cool stuff.
The computer actually raised overload alarms and restarted tasks as Armstrong was descending. But it never had to render fancy graphics or the like, just crunch some calculations.
What a cool video, I love the detail on what makes space x different not just in a general sense but in a specific sense like this.
Thanks for sharing. I love C++ too :)
This triple computer setup is actually fairly common.
Most modern railway signalling interlockings use an almost identical system
As do most nuclear powers stations
7:06 when you tap on the screen and see the penguin crying! 😂😂😂😂😂😭😭😭😭😭
When you said SpaceX uses Linux as the primary software on their computers, it put the BIGGEST of smiles on my face. As a Linux user, I've felt like an outcast quite a few times because it's not very friendly to non-programmers, but after a few years of struggle, I am a proud Linux user.
More stuff like this please! :)
Guys, are there any other similar channels on TH-cam?
Thanks, I learned a lot from this and will consider some of these strategies in my development.
Amazing how NASA landed on the Moon with 60s tech.
Ironically, those old clunky computers NASA used for Apollo were far more radiation-immune due to their lack of nanometer-scale parts; the higher operating voltages and currents of those parts made bit flips even harder, because the radiation trying to cause one had to overcome them.
Modern methods are more ideal though, for sure, no one wants to try to fly anywhere either less computing power than a digital watch... *again* if it can be avoided
Or did they.... 😜
@@thedownunderverse dont start with this shit man plz
Thank you very much Xavier for your work. Also thanks to the translators; in my case I congratulate Jordi Ferris. The video is great !! .. I have come to know a subject that I was ignorant of and which is of fundamental importance in aircraft safety. Thank you very much!!..
Greetings from Santiago de CHILE.
Thanks for the kind words! Nice to hear you found it interesting.
Astronauts: We're nearing the space station
Windows 10: Please wait while we install a system update
Astronauts: Asshhhhhhoooooollllllllle!
Ground: Excuse me, did I say something wrong?
ISS: You're coming in a little fast.
To be honest, the software testing described in this video is standard practice in a large portion of embedded software projects. Most public European satellite projects require significantly more for on-board software, and do not allow dangerous technologies like C++.
Still makes you wonder how did they send rockets to the moon almost 50 years ago. Why does the task sound so complicated even with the computing power that we have today.
It doesn't sound all that hard, tbh... Plus, a lot of the systems on the lunar missions were human-guided with electronic tools. Not all that automatic.
We have all this stuff nowadays because we can. Back in 1960s they couldn't fit much flight computers in the rocket so they did what they could. Now we have tiny really fast computers, so we use a lot them to make rockets as safe as possible.
Being in the Test & Evaluation industry for the DOD I really like the telemetry systems SpaceX uses. I know quite a bit of it is still on the NASA/DOD side but damn some of their feeds are amazing and most don’t even know just how good it is. It’s definitely taken for granted.
If they used Windows, the whole sky would become a blue screen of death (BSOD).
Awesome Vid. Makes so much more sense to have cheaper redundancy than rad hardened components. Never knew that they scaled the redundancy to each engine! SpaceX has also started to use bazel integration tools rather than their own custom build tools, furthering their interest in open source projects
Video title says “software”, yet the video speaks mostly about hardware
Awesome video--thanks for making and posting it!
Title: Software powering Falcon stuff. Actually: Hardware powering Falcon stuff. I am disappoint.
Quality video. Thanks, very interesting.
great stuff ! seems you got a new sub! doe zo voort !
Awesome stuff, never thought about this before, thanks for making this video
Me neither! Was fun to research this topic and make the video. Glad you liked it!
it's awesome :)
linux, linux everywhere!
C++, linux!? Regular chipsets in space! Outstanding! The bitflip issue troubleshooting is very interesting.
Betaflight
Butterflight
Betterflight
wonder what pids that thing runs
No, its multiwii
@@J5Jonny5 pristine pids
Triple redundant computers plus a comparator is used in Vancouver Canada's unmanned public transit system called SkyTrain. It came into service in 1986 and I am fairly sure it was not the first in the world to use such a system architecture. So it is nice to know that this rather ancient (in computer terms) technology is still considered cutting edge.
"Game developers are usually a good fit for SpaceX, because they're used to writing code that runs in environments where memory and processing power are constrained." I didn't know SpaceX was hiring game developers from the 90s, because this is probably just false, or greatly exaggerated, for most game developers nowadays. No one would hire a game developer for this kind of work if an EQUALLY GOOD embedded systems developer were available.
game developers today dont care about memory or optimisation at all compared to the 90s
DasEtwas that's exactly my point.
Unemployed embedded programmer spotted
I imagine that he meant game *engine* developers like Unreal, Unity, etc. (also including people who write custom physics libraries, graphics libraries, networking libraries, etc.) because they tend to care more about performance and optimization as compared to game *play* developers. Also, because of the nature of video games, engine developers typically have a solid understanding of real-time data processing/simulation. This is crucial for computer system on a vehicle producing tons of data every second which has to be analyzed in order to make decisions. So I agree, most gameplay developers wouldn't be good, but there certainly are folks in the game industry who have the right programming mindset to do this type of work.
XoddamCXVII of course, if he was talking about game engine developers who deal with the low-level stuff, then I agree with you. But at the same time, there aren't many game engine developers out there; that's why I think he didn't mean that, he meant high-level game developers, and that's why I commented.
The same logic (an odd number of computers, generally 3, comparing the results of their calculations) has been used on driverless subways for quite some time. It is probably also used in many other fields that need calculation safety. But it's nice to learn that this concept is robust enough to be used in space as well.
Cpu 2: can i copy your homework?
Cpu 1: sure
Cpu 1: got caught copying, his homework got confiscated.
Cpu 2 and 3: here ya bud. Take these paper and redo your homework.
I used your video for a school presentation. Thanks for the useful information!
Haha awesome. Hope you got a good score ;)
@@simplyexplained I'll be sure to let you know.
Can you make a video about those flight computers that can survive bit flips?
up
Excellent explanation for the lay person.
"No need to create custom software, when u can just use GCC and gdb " :D wow dude.... really? I thought everyone writes their own compiler and debugger.. holy shit :D
Those are two examples. However it's not uncommon to see homegrown tooling for special hardware parts!
Yeah... that may be true for a debugger... but a C++ compiler?? And you use this as a reason why they chose C++, because it has tooling... well, every compiled language has a compiler, and I think a debugger too.
So I still think it's a bad idea to cite these things as the REASON why they chose C++.
You could've just said they chose it because it's closer to the hardware, with all the bit flipping and memory stuff you mentioned before... That makes more sense...
You "think" every language has a debugger?
Also: tooling is much more than just a compiler! Not every language has as many tools as C++. Take COBOL, for instance: yes, it has a compiler, but other tooling is very limited.
But don't take my word for it! Take a look at the sources of this video. There are a few gems in there!
My point was that listing GCC and gdb as examples of good tooling, and as the reason they chose C++... is stupid.
Because every compiled language has a compiler and a debugger.
tblb1, it seems your point is to be overly anal. The example was perfectly clear and perfectly fine.
Gotta love SpaceX. Looking forward to the launch this week
They are essentially using the Blockchain concept to counter the bit flip.
what? No.
its just redundancy, blockchain uses this yeah.
A Nut - ah. Yes. It is. Look up IBFT algorithm.
Hahaha
@@Enemji Wait, I don't think SpaceX uses IBFT, or any kind of PoW/PoS/etc. at all. It's just simple redundancy, not blockchain. Correct me if I'm wrong :D
What a brilliant video. Thank bro!
This off-the-shelf-plus-redundancy approach is how it should have been done all along. But not only for CPU/RAM. What about the cameras? Instead of laughably outdated, low resolution JPGs coming from a multi-billion-dollar probe (Huygens, anyone?), send two _modern_ cameras up, _or_ at least send up both a space-hardened, outdated POS _and_ a modern camera.
Guessing, but I imagine it's a different ball game. The length of time probes & satellites are exposed to radiation is utterly different to rockets. I don't know how long a dragon capsule would be left in space, but I guess it could be deorbitted and refitted far more easily than satellites.
So I suspect if you fit your satellite with cheaper hardware and the camera breaks you need to repair it in space. If the design is flawed it's completely dead.
It's actually just part of space tradition, one which hasn't been discarded wholesale the way SpaceX did with space-hardened software/hardware. SpaceX is equally guilty of it. They use 480i (yes, circa 1990) cameras for all of their space footage, including the recent, supremely historical Falcon Heavy / Tesla Roadster launch. Those cameras aren't used for longer than a few hours, so there is literally zero excuse.
Asterra2....those images from Falcon Heavy do not look like 480i. They look more like 1080p. Do you have any reliable sources to back that claim?
Brian Reilly The ground footage is 1080p or similar. The onboard footage -- main booster, side boosters, and every angle on the Roadster payload -- is 480i, full stop. The digital resolution of TH-cam's video has no bearing on that. If you're not familiar enough with broadcast NTSC to recognize it at a glance, do this: Capture an image from one of the Roadster feeds (like one showing the "Don't Panic" screen). Paste it into Photoshop. Scale it down to 480p. Scale it back up to its original size. Directly compare this result to the image before you did any scaling. It should be bluntly obvious then.
Ground footage, you are sure to be correct.
But why would they use ancient analog video technology when higher digital resolution options are available at a lower cost with less mass and less complexity on their vehicles? This is why your claim makes no sense. Why would they choose the inferior video option?
Watched this vid twice already... fascinating stuff!!
Can you tell us any more about the Linux they use? Is it a forked distro? A custom kernel? Unix-like?
My guess is something that's been modified a bit, with the kernel built without any drivers they don't need. I wouldn't be surprised if they just put the program in the initrd, so it executes as soon as the kernel is ready. It's fairly common practice in other Linux-based embedded systems; routers are a good example of that.
They don't use Linux, they use Windows ME.
Really cool video. This is a perspective I haven't seen from the other space channels.
Most PC game developers don't care to optimise their code.
Sources? Games on consoles are usually very optimized because of the constraints....
Simply Explained - Savjee On consoles yes, I agree with you, but on PC, its a different story...
Because PC gaming is a much smaller segment, very few game studios care to optimise their games to run well on that platform, leaving game engines behind the capabilities of graphics cards. It is only in the past year that optimising games to use multiple threads has become popular.
Yeah you do have a point about PC gaming. But in general game developers have to carefully manage things like memory and they can only run a limited amount of code to render each frame (heavier code = more time to render a frame = less frames per second).
Simply Explained - Savjee a lot of the lower level work is handled by the game engine, which aren't updated often enough to take full advantage of PC hardware. But yes, I understand the point you were making for the purpose of this video.
Yes, there are a lot of games without good graphics that require high-end PCs.
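The frame-budget arithmetic mentioned earlier in this thread (heavier code means more time per frame, means fewer frames per second) is simple enough to sketch directly; the numbers below are just the standard 60fps and 240fps cases:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to simulate and render a single frame."""
    return 1000.0 / fps

# At 60 fps, the entire world must be simulated and drawn in ~16.67 ms.
assert round(frame_budget_ms(60), 2) == 16.67
# A game running 4x faster (240 fps) gets barely 4 ms per frame.
assert frame_budget_ms(240) < 4.2
```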
incredibly well done, very informative and easy to understand, even for someone who doesnt know anything about programming
It's obvious.
hardware: Falcon 9
software: Elon Musk (AI edition)
Falcon itself runs on PPC-based controllers, which are also used for low-level functions on Dragon. The x86 computers don't directly control the vehicle; it's kind of like a horse-and-rider arrangement.
The electronics are not consumer-grade COTS but more industrial stuff, like what you'd see running an assembly line or heavy equipment.
I think the lowest-end hardware they use is automotive-grade, which is still very ruggedized compared to, say, a laptop or phone.
You sound Flemish
That might be correct 👌
I was expecting more details about the software, but still, a good effort by you.
7:09 *Linux is a kernel*
Great presentation!
So did we land humans on the Moon back in 1969?
Yes, with bigger computers and more risk.
Going to the Moon doesn't require crazy technology; to put it simply, you just need a lot of money and a lot of math. Also, they used bigger computers and more analog signals, which aren't as easily affected by radiation, though analog signals take longer to process.
For anyone curious, the memory spoken about earlier that uses parity bits is called ECC memory (Error-Correcting Code memory).
Spoiler alert..... he is mostly referring to what SpaceX used back in 2012. So if 6-year-outdated information is your thing, watch it all.
Paul Z Sources for updated information?
Nine: Updated information is restricted by ITAR....but yes, this video has mostly 6 year old information.
Space travel has unique problems that almost require out-of-date software and hardware. The Space Shuttle's last flight in 2011 used hardware and software that was state of the art in 1992! The simple reason is that you have to test for all variables, and hardened older systems have that history. New parameters have probably changed things, but 6 years sounds about right. Military stuff is even worse. I used to work for Lockheed.
They probably use newer quad-core processors by now, but Linux and C++ remain.
Great video, very informative! Have a nice day.
Who else was reminded of blockchain technology by this?
Blockchain just uses an agreement protocol in one particular step, but this type of fault tolerance has been known for a long time and has some formal theory behind it. It's called Byzantine fault tolerance.
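For the curious, the classic result behind that theory says you need at least 3f + 1 nodes to tolerate f arbitrary ("Byzantine") faults; plain triple redundancy only masks a single benign fault, not a maliciously wrong node:

```python
def max_byzantine_faults(n: int) -> int:
    """Max arbitrary faults tolerable by n nodes: f = floor((n - 1) / 3)."""
    return (n - 1) // 3

# Three voting computers can't survive even one truly Byzantine node...
assert max_byzantine_faults(3) == 0
# ...you need four for that, which is why BFT protocols start at n = 4.
assert max_byzantine_faults(4) == 1
```

Rocket flight computers get away with n = 3 because a radiation-induced bit flip is a benign fault, a wrong answer rather than a coordinated lie.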
Honestly blockchain tech looks very pale compared to this.
The Blockchain is overhyped crap.
I have been thinking a lot about this, and your video was really great at explaining everything. Much appreciated! 🙏🏾
Cool thanks for the detailed info!
You are awesome. This video has more in-depth information than other videos. I wonder where you found this info. Great work.
All my sources are documented. Check the description!
Not only does space x buy consumer grade hardware, they also love buying used equipment
This is fascinating. I never thought I'd know any of this info about SpaceX - I love it!
Nice video Savjee!
Amazing work man. Keep it up
Great video ! Very interesting.
Very well made, thanks
Excellent!
Thanks
Wow, I learned a lot of new things in this video, thank you.
Great video, loved how you broke it all down!
Absolutely awesome video.
loved it. Great simplicity and worth a watch.
Thank you for doing this video🙏🙏
Great job with this video !! You should make more !!
That was very enlightening, thanks.