Linux Was Obsolete 30 Years Ago
- Published 27 Nov 2024
- Nowadays it's well established that Linux is the king of the free software world, but that doesn't mean it was following "optimal" operating system design principles when it was first created. In fact, the creator of Minix would have failed Linus Torvalds out of his OS class.
==========Support The Channel==========
► $100 Linode Credit: brodierobertso...
► Patreon: brodierobertso...
► Paypal: brodierobertso...
► Liberapay: brodierobertso...
► Amazon USA: brodierobertso...
==========Resources==========
Minix Thread: groups.google....
Linux Thread: groups.google....
=========Video Platforms==========
🎥 Odysee: brodierobertso...
🎥 Podcast: techovertea.xy...
🎮 Gaming: brodierobertso...
==========Social Media==========
🎤 Discord: brodierobertso...
🎤 Matrix Space: brodierobertso...
🐦 Twitter: brodierobertso...
🌐 Mastodon: brodierobertso...
🖥️ GitHub: brodierobertso...
==========Credits==========
🎨 Channel Art:
Profile Picture:
/ supercozman_draws
🎵 Ending music
Track: Debris & Jonth - Game Time [NCS Release]
Music provided by NoCopyrightSounds.
Watch: • Debris & Jonth - Game ...
Free Download / Stream: ncs.io/GameTime
#Linux #OpenSource #FOSS #LinuxDesktop
DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase I may receive a small commission or other compensation.
I was unaware of the use of MINIX in Intel ME, congrats to Andy for his project being embedded in such an obscure company's CPU
It's also known to be an intentional backdoor. It has access to your TCP/IP stack and can send and receive network packets even if the OS is protected by a firewall. It is signed with an RSA 2048 key that cannot be brute-forced and cannot be disabled on newer Intel Core 2 CPUs. It can also run on the 5-volt standby rail from the power supply while the computer is powered off, as long as it's plugged in... There are companies like Pine64 or Framework that try to limit Intel ME by modifying the BIOS.
I was gonna make a joke about it in my comment, but now there is no need
Well, take a step back though... What if Intel had 'screwed up', those other companies had taken right off, and we had ended up reaching micro-kernels & hybrids by design as the standard now?
I believe that a large portion of Linux's success in just existing belongs to Intel ending up the king of the space, with an arguably interchangeable competitor in ATI/AMD in the end.
Seriously though, Intel could have folded in the early 90's... It wasn't the likely outcome, but by the same token, neither was them becoming the CPU leader for what... 3 decades. And yes, even with AMD now being as good as it is, concepts that stood strong in ST performance & straight-out reliability cause the design to favor certain avenues. We can argue about Intel's methods or ethics till the cows come home, but the fact of the matter is they were among the earliest & they stuck around, and in the earlier days made the better products, so more people 'cloned' their CPUs, which only empowered their CPUs in the long run. (Because as we know... all that does is give your competitors customers later for some money today, because it's their infrastructure... not yours... and by that I mean all the SW would still be running on THEIR design, which means momentum for them.)
I know it wasn't entirely set in stone then, but a good enough alternative could have de-throned Intel completely and boosted an entirely new set of main CPUs
@@uniqueprogressive9908 The official reason for all of that is for companies to remotely administer fleets of computers (erase data off of them, reinstall the OS, etc.). Even if bioluminescent three-letter agencies do not have backdoors there, it's a significant security risk e.g. because bugs and exploits exist.
Still, PINE64 doesn't have any hardware with Intel CPUs in it (it's all ARM and RISC-V, primarily SoCs by Rockchip and Allwinner). I haven't heard about Framework getting into the business of ME neutralization either.
Companies that do make Intel laptops with coreboot and are trying to disable the Management Engine are Purism, System76 and probably a few others as well.
the lesson: permissive licenses are a bad idea
"Where is Minix today?", possibly running in Intel ME which runs on every modern Intel CPU, though nobody can confirm or deny this.
No, it is confirmed that Minix drives Intel ME, it's not really a secret.
Yeah, plus it adopted NetBSD's userland (_re_ 6:16)
"If it works, it's obsolete". A well loved quote from the electronics world of way back when that I regularly apply to the worlds of software designs.
Another favorite quote from way back when: "Well, anything can be made to work one time on the test bench."
"if it ain't broke don't fix it"
Aah, the memories. Being a young teenager in Finland (a country of only 5 million people) and constantly hearing rumors about some Finnish dude who had made his own operating system. Then finally playing with Linux at the Assembly 1994 demoparty in Helsinki where the Linux people had their own booth. I would so want to know if Linus was there, but I remember playing with the OS and laughing (it was the time of the Amiga vs. PC war and I was in the Amiga camp), and then some dude who looked a lot like Linus got angry and threw us out of the booth.
Safe to say I was a little bit wrong about the future prospects of that project. Though in my defense, the demoscene was cool while the Linux people were the stereotypical 80s nerds. Those were the times when the future seemed great and not a technocratic dystopia.
The hel(l) sink.
@@Reichstaubenminister Yep, also HellSinging as we have the most metal bands per capita.
@@priyapepsi As opposed to the luddite gehenna of modern age?
@@priyapepsi The people responsible are mostly fatherless young men. They are disillusioned and have no anchor for their world view, so they just indulge in consumption, in this case technology.
I was an Atari ST guy, but I knew Linux was a huge deal. The demo scene did some stuff with the ST, but I'm guessing the Amiga's specialized hardware permitted some much more amazing stuff.
But I had used Unix at the University and set up UUCP with email and Usenet on my ST at home. I knew Unix was the future.
It's unfortunate that people confuse completed software with obsoleted software. We're forever doomed to the experience of abandonware, bloatware, feature breaking deprecation, and increasing user exploitation.
Exactly! Stable software is good software. Once features are stabilized, any change would be bug fixes. Once that's also stabilized, it just means that the code is really good.
I see the mindset of software being 'stale' if it doesn't do new gimmicks. It's one thing I dislike about the Linux world. The constant change that breaks things, change for the sake of change. Also the assumption that everyone needs only the latest version of a piece of code.
@@anonymouseniller6688 I like the suckless way of doing things. You get a barebones minimalist piece of software and if you want certain features, you apply patches to it. My only gripe is that there isn't a good tool that I know of for managing these things. It's fairly easy to screw up your package by installing two patches that conflict with each other.
"the braindead nature of the x86 CPUs makes that difficult to do otherwise" THANK YOU. I have had endless debates with people who claim that including drivers in the kernel is the only way to go. In fact, Intel designed the x86 memory model to support three rings, from 0 to 3, with the drivers living outside the kernel. The problem is (as usual for Intel) they botched it. They tried to build tasking into the hardware, and the result was a bug ridden low performance mess that nobody uses. In fact, if you go back and read how Multics was designed, you will realize that Intel copied Multics design into their 80286 processors (yes, that Multics, the overcomplex system that Unix was created as a rebellion against). I explain the origin of 80286 protection model as being like the episode of Startrek where they go to a planet that is structured on the gangs of old Chicago because some previous visiting ship messed up and left a book on Chicago history behind, and they based their whole civilization on it. If you read up on the x86 protection/memory model, which all stems from the 80286, you realize that the designers read a book on Multics and that was it. They imitated the design. As one German Zeppelin engineer stated when getting a look at the R-101, which crashed and burned in France killing almost all abord: "you have imitated our design completely, including our mistakes".
PS. before you mark me as an Intel basher, I am an ex-Intel employee.
Whoa.
Interesting…
In 1991 when Linus made his initial announcement, there was a common phrase (that thankfully fell into disuse) called vapourware that referred to operating systems and new office suites announced many years before they could actually be compiled, let alone run, and even more years before serious bugs were finally eliminated. A large part of the success of Linux was that right from the start it existed and has never suffered from the vapourware blight. This was critical for the emergent Internet in the early 1990's, where one could simply install Slackware and get on the Internet and world wide web, albeit with Lynx, Pan, Veronica, Gopher and Archie. To provide some perspective, in 1991 we were hardly using Windows 3 yet, Windows 95 was four years away, while Windows NT, which was eagerly expected by 1991, was released in July 1993. When Windows 95 was launched in 1995, Linux had version 1.0 of the Apache web server, which was arguably the most critical part in the success of Linux.
Vapourware very much still exists, just look at Tesla's so-called "full self driving"
It's always interesting to see how people of the past looked into the future
Microkernels are very slow even on modern machines, because Meltdown mitigations make their frequent context switches expensive.
I still have Tanenbaum's 90s textbook from Comp Sci. I used to sing 'O Tannenbaum' when taking it out of my bag :-)
The only textbooks that are worse than Tanenbaum's are Patterson's architecture books.
microkernels aren't a meme, they're still seen as the kernels of the future even today, they're just *much* harder to program well
Precisely why they're a meme.
the main reason: making something more complicated... is always a disaster in the long term...
Why are JS and Python used so much when they are messy? Because they're way simpler.
Future where Meltdown and SPECTRE never existed
Obsolete is simply not an argument for choice of software. I run a sendmail server that first went online in 1997. It (sendmail) is rock solid and fabulous, and I never worry about who might be reading my mail.
Since 97!? That is rock solid!
I have had many apps I really liked that are no longer made, and I'm sure many of us have, and we would still be using them if they still worked with our newer gear and OS's. The craziest thing I had happen was with a simple game (a scrolling shooter with alien ships) I used in Windows 3.1 on a 486. When I upgraded to an AMD K6 and Windows 95, I installed the game and it ran super fast!😲 There was no way the aliens couldn't win, and they blasted the crap out of me! I got lucky in killing just one of them before my ship got vaporized! 🤯Of course I had to give up that somewhat addictive game.😢
/me laughs in `bind`
I hope you upgraded.
The problem with this old software like BIND or sendmail isn't necessarily functionality-wise. I operated sendmail myself a long time ago but then switched to Postfix because of security reasons. Much of the older software has not been designed with security in mind and is problematic in today's world where everything is always under attack. The same goes for monolithic kernel vs. microkernel. Performance-wise the monolithic kernel was the better choice. However, regarding security it would have always been a better choice to keep the parts more separated. Even today we can see that security is sacrificed for performance at the system level, while on the application level people use resource-wasting languages like Python.
In the second half of the seventies, I designed a mini-kernel OS that was used in our Air Traffic Control systems till 1990. It was a mini-kernel part with a minimal set of routines written in assembler: a pre-emptive task scheduler, basic IO, semaphores, interrupt handling, etc. (~20 assembler routines of max 2 pages each :) On top of that, written in RTL/2 (like C): device handlers; messaging; name translation for semaphores; processor/task names. It had to run on 16-bit minis, so the majority of the inter-task communication was based on messaging. The message address had 2 bytes: processor-id, task-id.
The OS had tasks written in RTL/2 that took care of: networking (during that time HDLC); file management; booting PCs; crash dumps; operator communication; debugging; hardware error logging and software error logging. All used messaging and could thus be used over the network. Minis at the radar head were booted over telephone line and so were the radar display minis. The RTL/2 part of the OS kernel had its own page table and each OS task could run in its own page table. The applications were organized in task-groups (processes) with N tasks (threads) sharing a page table. All IO was done by a small task that would e.g. read from the radar and send it to the tracking application, or a large task to receive the tracks from the multi-radar tracker and display them on the 25" circular graphical display, all green, or the very advanced ones in 4 colors: red, orange, greenish yellow and yellowish green :)
A typical system of ours would have say 1 to 5 radars with 2 minicomputers (master/slave) per radar, with say 5 to 40 graphical radar displays controlled by a minicomputer. The central part would be a master/slave configuration for multi-radar tracking and another set for flight-plan processing. In more advanced systems we had 2 processors for Status Control and Monitoring. Often a third set was available, but powered off or used for testing.
The OS design was based on the theories of;
1. semaphores and layering an OS from Dijkstra and
2. information hiding from Parnas (the first essential step to OO).
All program code was in read-only pages and r/w data was initialized dynamically in the r/w pages. All job control was done during the system generation (offline): stuff related to name translation; page tables; task groups (their tasks, app data-areas, message areas, stack sizes and init parameters). The end result was one image per type of processor. The system was completely memory based. No virtual memory, no program overlays; there was no time for it, we had to process say up to ~250 aircraft per 5 seconds and each radar return required a lot of sorting out and mathematics.
Now with the new snap-based immutable Ubuntu Core 22.04, you see those theories being used more and more, since some of those OS functions will use messaging and can be easily moved to snaps. On the first boot I see ~10 OS snaps being initialized. They're also trying to minimize the size of the Ubuntu kernel, which I think runs in an LXC container.
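To make the addressing scheme described above concrete, here is a minimal sketch in C of what location-transparent (processor-id, task-id) message passing could look like. Every name and size here is invented for illustration; the original system was written in assembler and RTL/2, not C.

```c
/* Sketch: two-byte (processor-id, task-id) message addressing. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { LOCAL_PROCESSOR = 1 };   /* which mini in the network we are */

typedef struct {
    uint8_t processor_id;       /* which mini in the network */
    uint8_t task_id;            /* which task on that mini */
} msg_addr;

typedef struct {
    msg_addr dest, src;
    uint16_t length;            /* payload bytes actually used */
    uint8_t  payload[120];      /* small fixed buffer: no virtual memory */
} message;

/* Stand-ins for the local scheduler queue and the HDLC network task. */
static void enqueue_local(const message *m)
{
    printf("local delivery to task %u\n", m->dest.task_id);
}

static void enqueue_network(const message *m)
{
    printf("forwarded to mini %u over HDLC\n", m->dest.processor_id);
}

/* Location transparency: the sender never knows or cares whether the
 * destination task runs on this mini or on one at the radar head. */
static void msg_send(message *m)
{
    if (m->dest.processor_id == LOCAL_PROCESSOR)
        enqueue_local(m);
    else
        enqueue_network(m);
}

int main(void)
{
    message m = { .dest = { 2, 7 }, .src = { LOCAL_PROCESSOR, 3 } };
    const char txt[] = "radar plot";
    memcpy(m.payload, txt, sizeof txt);
    m.length = sizeof txt;
    msg_send(&m);               /* goes out via the network task */
    return 0;
}
```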
Would this system also connect to an IBM mainframe?
@@jirehla-ab1671 No, but later we connected it with VAX computers from DEC. We used the VAX computers for the flight-plan processing. We wrote a translator from our API to VMS, so most of the flight-plan processing would run on the VAX without change after recompilation. One of those systems was used in Riyadh before, during and after the first Gulf war.
17:25 Today minix is in your firmware, controlling your computer from the shadows.
INTEL MANAGEMENT ENGINE, the backdoor embedded in every Intel CPU.
It's ok, Intel is really small and obscure.
MossadOS
@@happygofishing
KosherOS
Being "the best" in some theoretical technical sense has always been nothing compared to having a product out there that actually solves people's problems. Case in point: the IBM PC/x86 platform. It was never the most elegant, the fastest, nor the most sophisticated computer out there, but it was good enough, it had widespread support, and it was easy to clone for much cheaper than any name-brand 16/32-bit computer. People do not want "the best computer", people want the computer that meets their needs, and the clone makers did it better than Commodore, Apple, Acorn, and even IBM themselves. Minix and Hurd might in some technical sense be "more advanced" but Linux actually showed up with a working system and showed up first, while Hurd has been five years away from prime-time since I was in diapers, and Minix has never even seriously tried to be a viable general-purpose OS. If there was ever an OS that threatened to eclipse Linux as "the" FOSS operating system, it was 386BSD/FreeBSD, which was even more archaic, with a history going straight back into AT&T Unix in the '70s! And it might have indeed been better than Linux since it was much more tightly integrated, but AT&T's ratfucking ensured it was not there at the right time to solve people's problem (how do we free the PC clone platform from dependency on Microsoft), and Linux was. Hence Linux is everywhere and BSD is an also-ran.
Ninety percent of life, as they say, is just showing up. Linux showed up, and the pie-in-the-sky dreams of hyper-advanced operating systems from Plan 9 to OS/2 to Minix to Hurd to BeOS--didn't. Hell, you could say that in many ways the Apple ARM platform is technically superior to the x86 PC, but since it's enslaved to Apple's strategy of selling computers as a luxury brand, it will never threaten the PC's dominance. Luxury computers don't solve people's problems, workhorse commodity computers do, even if they're slower and less energy-efficient.
See also: Motorola 680x0. It was in many ways better than the x86 (at least until the Pentium II turned the x86 into a workstation-grade RISC beast that could be had for consumer-grade prices), but it didn't matter because the computers the x86 shipped in solved more problems.
True. BSD could have beaten Linux if it had not split into OpenBSD, FreeBSD and NetBSD. I myself started with NetBSD 0.9x. Got a system up and running after a lot of fiddling around. Only US keyboard layout support, so that was to be the first real project to fix... However, the floating-point emulation was buggy. (I had no 386 board with 387 support.) I saw that the FP-emulation code was taken from something called Linux. Maybe they had fixed the bug in a newer version... but the new Linux FP-emulation was completely rewritten. So I tried installing Linux then and found out that national keyboard support was already implemented :-)
Regarding the 680x0, it was much nicer to program than x86. What I understood was that one of the reasons for IBM to go with the 8088 was that they could use already existing inexpensive peripheral components from the 8080 family. The same was not available for the 68000 at the time.
Also, as soon as Linux got some traction, IBM and Intel poured resources into it. Intel to have a *nix on x86 that removed the oxygen from the RISC competition, IBM to hurt SCO and later to sell Linux clustering in mainframes as an alternative to big clients considering Beowulf clusters
Hey, I have a friend who runs Plan 9. I met him through a Latin reading group, so he evidently likes dead languages. But does OS/2 really belong in that group? I thought it was going quite nicely until killed by typical Microsoft perfidy.
The enemy of the great is the good enough. Otherwise we'd all be running Plan 9 and BeOS right now.
Haiku Os > Everything else.
@@gregandark8571 Temple OS
In my opinion, code is only obsolete if it's horribly insecure, such that a normal user wouldn't be safe to use it.
Or if I receive a ticket everyday to restart it.
OMG, all services my company sells are based on obsolete code! (Or is it just shit code written by the intern?)
Anyway... I need a new job.
@@joaomaria2398 It's code designed to sell a service contract, probably. Code that "just works" is undesirable if the goal is to make top dollar.
You mean like the horribly insecure Intel Management Engine WHICH RUNS MINIX? 😉
AKA all software ever...
Hybrid-kernel Windows NT would quickly demonstrate what not to do, and at times it seems like the NT devs have been fighting their own design, moving bits in and out of kernel space in the stability vs. performance battle.
NT 3.51 was the best os Microsoft ever managed to produce. They pissed it all away with NT 4.
The company I work at (thousands of developers) is just starting to finally have customers migrate the core user-facing client from a Visual Basic app to something web-based. Microsoft apparently said (probably now a decade ago) they never expected anyone would extend VB this far. Obsolete only matters if you can't get what you need done with the tools you have.
There are companies still relying on MS-DOS in production.
I still support an application running partly in COBOL on a mainframe and partly in Visual Basic 6. We are planning to recreate the application as a web app.
@@brianschuetz2614 best of luck with that cutover
@@brianschuetz2614 Good luck!
Great video as always Brodie. Learning a lot in this one honestly. Thanks for posting (obvs still watching but wanted to add a comment for the all holy algorithm ;) ).
YES! I really enjoy the "let me teach you something you probably take for granted" videos from Brodie and DT.
I want a separate video for every Linus flame war
I love these kinds of videos where you share the history & lore of Linux and the software around it. It's my favourite story time!
I've been using Linux since the late 90's, sometimes going ten years without touching Windows. Watching it mature and helping with bugs and creating fixes for what I could along the way was an amazing experience. 10/10 would do it again.
crazy to think that the x86 line was relatively "obscure" if you were an academic in the early 90s - but you'd be hard pressed to find one in a unix workstation environment. probably what he meant by that comment!
I think the VAX was still big then. And, at least in business schools, IBM.
Although hands down my favorite OS is linux, I am concerned about what happens to it after Linus is unable to be involved due to being mortal like the rest of us. I'm afraid things will sell out to the highest bidder and linux will no longer be "free." These multi billion/trillion dollar corporations destroy everything they touch. There's the BSD derivatives, but the software support is not there.
You know, if you don't like how it's going you can fork it
If we can't come up with something better before Linus kicks the bucket we, as a species, deserve to go extinct.
The GPL guarantees that Linux will never get closed up.
Blows my mind how these wizards were communicating through the internet in fucking '92
My local BBS had usenet access since the mid 80's.
It’s not the best system that usually wins out. It’s the good enough system that gains enough traction and is cost-effective to implement. The World Wide Web is another example. But also famously VHS from the videotape format war. And bicycles (almost all bikes are based on the Rover Safety Bicycle from 1885).
As more of a hardware and silicon kinda guy, the "386 sux we'll all run SPARC" (13:55 onward) part is my favorite.
To the people who don't understand the x86/286/386/486: the best bit of that joke is that the next one is the 586... also known as the original Pentium. You know, the CPU that cemented Intel's dominance for so long, before they ran into the Pentium 4's power wall, as well as failing on side projects like Itanium and the failed 64-bit extensions.
I wonder if that OS professor shat his pants when he saw the Pentium's release. That and maybe Windows 95.
Still waiting for the Hurd kernel
Minix, still a toy/research operating system today. Tho it found strange usage in embedded systems (including... the Intel Management Engine).
Funnily enough, MINIX is usable today with the whole *userspace* from NetBSD. So, the people wanting to turn Minix into BSD... did not really succeed, as it is a microkernel OS, but from the *user* point of view of running *actual programs*, it's kinda BSD-ish (well, it's all UNIX-like anyway soooo yeah.)
Future predictions are scary. If Dartmouth had used copyleft with BASIC, Microsoft would not have made its debut with a BASIC interpreter. If Gary Kildall had open-sourced CP/M (what I learned in high school), then Microsoft would not have acquired QDOS. IBM might have been the world's leader in open-source software (maybe). Phil Katz stole the code from Arc to make PKArc, and was somehow the good guy. Everyone thought the future looked like GNU Hurd, BSD, or Amiga OS or whatever back then. Academics deal only infrequently with accountants, who are happy when obsolete software running on boring hardware lasts a long time and drives costs down.
Obsolete at every kernel release because new hardware isn't included, but that's a manageable problem thanks to Linus' dedication. Linux made buying decent hardware worthwhile because you didn't have to spend money on the software as well. So many talented software writers, writing just for fun, made it better than the latest buggy release of something you have to pay to troubleshoot. 'As long as it works' is shiny enough for me.
The quality of Linux software is really not good though. It's much worse compared to Mac and Windows; sometimes I would rather pay to get high-quality software.
@@scvg114 I painted my profile image with open source software on Debian. I have lots of examples of work done in this environment on my channel under different playlists. Nice software is relative to your use of it.
As a Slackware user, I fucking love obsolete shit. You can take sysvinit from my cold, dead hands (I use a systemd-based distro for my server though, it's great for set-and-forget even if it's worse for tinkering).
9:33 To be fair micro-kernel does still sound VERY dope.
Could be, but as I've heard somewhere: Imagine if your memory allocation or file system process crashes. The whole system is going down anyway. Putting them in the kernel is no less safe and it improves the performance of everything.
It's great seeing you revisit the past, like this epic thread or the X11 history. I started w/Linux in the late 90s and read & lived a lot of things like including graphics code in the kernel, loadable kernel modules, the birth of GTK, KDE, Gnome...Minix was always too academic AFAIK (Intel ME!?), but there are several successful micro-kernels like QNX. Not sure if the L4+Linux is in use. As some people mentioned it, it's not a settled debate. I'd like to see how newer kernels like Zircon from Google Fuchsia or Rust-based RedoxOS will do in the future. Also, I think unikernels could be big, maybe replacing containers.
Minix isn't really an OS (before Minix 3 at least), it was a learning OS, like an example of a Unix clone
Oh no did someone discover the Linus Tannenbaum debate
Interestingly, I see a rebirth of message passing with io_uring. Also, Minix runs the Intel Management Engine. That's in every Intel CPU for almost a decade. I think the arguments are still alive and well; this is not as settled as it might appear.
That was a funny blast from the past! In 199x it was normal to write an operating system. Hardware was weak, software fairly simple and disk space small. So people tried to write their own OSes here and there. The IT landscape was changing very fast and nobody could reliably predict what was right or wrong. I remember reading back then that windowed UIs were a dead end and would be abandoned. Good times!
It would have been dead and gone replaced by the IBM OS/2 look and feel. Windows look and feel changed a lot since the 3.x days and somehow still is shit lol
I've never installed a version of Linux in my life and I have to say I really enjoyed your narration of this drama even though I barely understand what is going on 😂
History is littered with objectively better technology that doesn't get adopted for one reason or another.
As a user, and I mean a professional user of graphics software: while I absolutely love solutions like Blender (my main 3D tool currently, having been trained in and having job experience with 3DS Max) and Krita, both absolutely capable currently, the work of a graphics professional involves many other apps as well. In some cases better Windows/Mac commercial software, in other cases apps of lower quality (than Blender, Krita, etc.) but tied to a large work pipeline and industry standards that cannot be avoided, as 85% to 90% of clients do require sooner or later (or constantly) the handling of such software, or 100% feature and file format compatibility (which depends heavily on internal features and how they are developed, so it is an impossible mission).
That is indeed how most software monopolies in the graphic industries "defend" their business, and it works even against apps on the same OS (full compatibility with the PSD format is impossible even between Windows apps, even less with the 3DS Max native format).
Wine does not cut it, as for the same reason these apps take good care (with brilliant exceptions) not to facilitate any detail or implementation that would make it doable for the Wine team to support them. The list of essential software in this situation is too long to make an OS switch for most of us, as much as we'd like to; mostly among those of us with long previous experience in Linux, even from the times when there was not yet even one graphical desktop (I have installed Slackware from floppy disks... I believe even 1.0). Then there's the raw performance lost in translation, and the complex path of using the GPU trick with Wine seems to not cut it in too many situations.
I don't know if that day will ever come, or what would need to be done, but the day I can run ANY Windows native app on Linux or an emulator, without serious performance penalty, and with similar features/capabilities, that day me and a big number of others will make the switch. But I guess in the global, world stats, this portion of the population (graphics experts) is extremely small despite being a large absolute number, and a number that probably will be severely decreasing with the job destruction in our fields (courtesy of our beloved AI). So... I don't know.
Another gem from the past, thank you for bringing it to light.
I agree completely with the microkernel point. This hasn't aged terribly. Most OSes have. Microkernels are just better.
Even if they're better, they're not what's primarily being used
@@BrodieRobertson No I agree. It's just I think it's a huge detriment to the OS space. Microkernels are really cool.
Heh... I _remember_ that post. It actually got me worried that I shouldn't be so interested in this new OS idea
I'm an OG Linux fan and power user. Alas I'm useless at modern programming so I've never been able to contribute any code (I used to program utterly unstructured, very spaghetticoded, but highly efficient programs in Fortran for the feds, but I was never able to bust out of the mindset of preserving resources at all cost).
And the original Slackware lol. It took me two weeks solid, working morning till night, sleeping, then waking up and going at it again, to get that beast running on my system.
makes me think of debate between functional and imperative languages lol
To give some perspective for the point in time when these comments were made: id software had not even released Wolfenstein 3D at that point. That should give some idea what kind of PC hardware is being referenced in the conversation.
My memory fails me. I was sure I was playing doom in 1992, but I guess it was wolfenstein. Still amazing Linux was born when I was 11.
1) Yes, Brodie, aesthetical is a word.
2) “Remember, English, second language” (of Linus). Nope. Remember, he's a Finland-Swede, so bilingual from, at the latest, elementary school. In Finland’s two official native languages: Finnish and Swedish (for him, in the opposite order). So remember: English, _third_ language.
1. Fair enough
2. Sure the point is that it's not his first language
The proper term is Finland-Swedish. :)
@@monksuu : As an adjective, yes. But here I used the noun. (Like... you probably know Finnish better than I do, but I don't believe you know English better. :-)
@@ChristianConrad "Aesthetical" is, however, not the word a native English speaker would use (or, at any rate, would have used in the 1990s). On the other hand, Torvalds' mastery of English as a third language puts the efforts of native English speakers to shame.
Hey, I resemble that remark! I've been using XFCE ever since I switched off of Fluxbox. Speaking of obsolete, I ran Slackware 1 on a brand spanking new 66MHz 486. Remember SuperProbe and its warning about maybe exploding your monitor? Good times.
CRT monitors did make alarming noises changing modes back then :-)
Side note: Based on the numbers I looked up and some estimations, Linux should now run on 4Bn devices, most of them ARM phones.
My take, not that anyone will read it, but I still wish a new OS would crop up that combined the best aspects of both Windows and Linux. If someone were to read this I'm sure they'd disagree, but I think a lot of the original API design choices that Microsoft made were actually closer to good than Linux is even now. Even with source code in hand, I'd prefer that everything wasn't fully exposed to running programs. And I think the idea of having the GUI components implemented as part of the base OS API is a way better design, for so many reasons. However, some of the design choices of Linux were definitely better than Windows. Chief amongst those being the kernel separated from the majority of running components which allows the GUI to crash and not kill the whole OS. For that matter, I'm sure someone would want to remind me that Linux isn't an OS, but a kernel, and I'd say that's an irrelevant point because I want more of a base than just a kernel provides. And finally, if I ever get free time I may have to write it myself because surely no one else would.
The GUI is not part of the core under any Windows NT successor. Not even MS is that stupid.
@@DJDocsVideos The GUI definitely is part of the core of Windows, and even though it's separate from the kernel it still takes down the whole system if it crashes. And I wouldn't say Microsoft is stupid, evil yes, bad at implementing things, sure, but not stupid.
Boooooy Andy was Pissed, Linux was like an Ultimate FU* in his eyes
It just goes to show that just because you're educated, doesn't mean you're smart.
Hindsight is 20/20. That said, Ken's comments about things being obsolete are, and have always been, correct
Before desktop portals were a thing, all it could've taken for screenshare to work in the Discord desktop client on Linux was them updating the version of Electron used, as it had native support.
Plus, unless you use daily builds for everything, you're always on "obsolete" versions (that work well)
this micro vs. monolithic kernel debate does remind me of a similar situation with (micro)services.
Well, the Mach mentioned at 3:21 is also part of Mac OS X and iOS. I believe it's a hybrid kernel and the Mach part is the microkernel part. So it's pretty widely used.
[removed comment about Intel ME engine]
Yes OS X is a fork of Mach, it uses a lot of Linux, BSD, and other Linux software as well.
@@MS-ig7ku Usually people say Mac OS X is: (a fork of) Mach + BSD userland. I wonder what Linux software you are referring to? What I do know Linux took from Mac OS X: the CUPS printing system.
@@autohmae Apple takes code from nearly every form of Unix and Linux, Apple used a lot of KDE code and everyone has used BSD code. Apple's browser is a fork of KDE's Konqueror, as is Chrome.
@@autohmae No, Apple did not develop CUPS, the Common Unix Printing System; they took it over later
@@MS-ig7ku great point, the Safari engine came from KDE's KHTML.
Many of the most critical systems our world depends on are outrageously obsolete.
Everything from Boeing using 1.44MB floppies for plane software updates, to ATMs (on those rare occasions you see them reboot) running Windows Embedded (Win7), to ancient DOS machines still running factory lines and machinery.
But what's scarier than using an old OS is how many systems depend on some Excel spreadsheet written by some guy who left 20 years ago.
I swear if a virus took out Excel in a meaningful way, we'd see widespread blackouts globally in under 3 days as plants failed - most scarily the nukes - and a whole host of other systems would fail too.
I thought OS/2 still survived in ATMs. And I once sold a whole (obsolete) external storage system to a guy who needed the cable for some business-critical piece of gear he was running.
Minix was live and active until recently-ish. Sadly I believe Minix 3 is dead now. It had a NetBSD userland. FWIW, from an OS design standpoint there is much to be said for microkernel design. I think Ken Thompson is spot on. The main reasons we've not moved from monolithic to micro are likely: legacy, performance, and ease of development. Is it technically a better model? Yes. Is it the end all be all? No, absolutely not. If you've never built RedoxOS, try building it locally. It's incomplete but it's beyond trivial at this point. You'll need ethernet to do networking and such; it's a bit of a chore, but very possible. Intel ME runs Minix, but from what I've heard it's been modified heavily. So it's certainly widespread. In fact, from its inclusion in other similar things, it's not at all impossible that Minix is technically one of the most widespread operating systems... if only that counted (it definitely doesn't). From a user-facing OS standpoint it basically doesn't exist. We were shown it in our OS class for like a single hour and it was nothing to write home about.
"Slackware, started by Patrick Volkerding in late 1992"
from: "Chapter 1 An Introduction to Slackware Linux"
"I don't usually get into flames, but I'm touchy when it comes to Linux."
Well, when he does, it usually is related to Linux.
taking this video as 1st part, then the 2nd part the Ken Thompson backdoor, let me suggest a 3rd part: Andy Tanenbaum's revenge or how Intel shipped Minix in your CPU
Thanks for pointing out the sarcasm. Couldn't tell. 😂
Really appreciate the attention you give to the ancient history of FOSS
GNU Hurd is still (kinda) alive!
Besides tales about Debian GNU/Hurd that people mention every now and then, GNU _censored_ has Hurd packaged and you can pretty easily use that instead of Linux in your /etc/config.scm. Not sure what hardware, (besides QEMU) is actually supported tho…
Apart from that, one very interesting (and active) microkernel project is _censored_ de Vault's Helios. Cool stuff
ah yes, the yt comment moderation bots, perfect with no false positives or extreme hatred to tech terms at all :)
@@liquidmagma0 wait, this censored text is done by youtube?
@@fulconandroadcone9488 Nope. But for some reason YT was removing my comment until I got rid of mentions of GNU Guix and Drew de Vault
@@fulconandroadcone9488 no, but yt likes to nuke comments for basically anything, so you have to heavily censor or cleverly word some things just so your comment doesn't get nuked in the literal first second. and even then, it might get deleted in the first minute or so.
TH-cam censored "Guix"?!
I must respectfully disagree. Andy's post aged rather well given the advancements in hardware technology. At the time, microkernel architectures didn't perform well on computers that didn't have multiple cores, making message passing terribly slow. Now, modern CPUs are like supercomputers on a chip. The Linux kernel has evolved into a hybrid kernel.
How is it a hybrid kernel?
I still remember the day when the emotionally unstable got their claws into the kernel. It was a dark day indeed.
The correct quote at the white board should be:
"I'm die, thank you forever"
Compared to Theo de Raadt, Linus is a very mild-mannered, soft-spoken and polite person 🤣
Andy was right. Developing a monolithic kernel in 1992 was a mistake. However, Unix was monolithic, and Linux was a Unix work-alike. And people don't recall just how badly monolithic Unix was. The drivers were compiled in. If you wanted to support a new printer, you had to recompile the entire system. Linux fixed that. I don't agree with HOW they fixed it, but it did get fixed. The fact that kernel drivers emulate symbolic linking is what creates 90% of the problems with Linux kernel drivers.
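For contrast, here is a minimal sketch of the loadable-module mechanism Linux added: a driver skeleton that can be inserted and removed at runtime with insmod/rmmod instead of recompiling the whole system. The module name and messages are invented; it assumes an out-of-tree build against your kernel headers.

```c
/* hello_mod.c - skeleton of a loadable kernel module. Build against
 * the kernel headers with a standard out-of-tree Makefile, then:
 *   insmod hello_mod.ko   (load at runtime)
 *   rmmod  hello_mod      (unload again)                           */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello-world module: loaded/unloaded at runtime");

static int __init hello_init(void)
{
    /* Runs when the module is inserted; no kernel rebuild needed. */
    pr_info("hello_mod: loaded without recompiling the kernel\n");
    return 0;   /* 0 = success; nonzero would abort the insmod */
}

static void __exit hello_exit(void)
{
    pr_info("hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```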
Well, Dr. Tanenbaum retired in 2014 and the last activity on Minix was in 2018 (if I remember correctly), so I can say Minix is already dead today (except it's hiding in the Intel platform as the Intel Management Engine).
Memories, in the corner of my mind. Flaming water colored memories, of the concepts left behind.
AST missed the point that theory only works in theory. Every engineering student gets hit with that in practice.
I mean, he was right about his first point. Microkernel architectures are better in almost every way, ignoring problems with x86 anyway. But as Ken Thompson pointed out: for the vast majority of uses no one gives a shit, so first mover wins.
Xfce not only does what i need it to do, it does it consuming far less system resources than most of the less obsolete alternatives, and is therefore better for my purposes.
I started using Linux like 20 years ago, in the Windows XP era. I had started to be more security conscious, so in Windows as well I started to use an account without admin rights, and the administrator credentials just to install programs. It had so many issues... something so simple and basic. Meanwhile Linux worked like a charm on my 128MB PC while XP struggled a little bit. I left XP for good and only came back with Windows 7, and only because of games.
One lesson is to not underestimate software complexity, which I think killed both the microkernels and the Intel Itanium.
Both are wrong. Itanium got killed by price. Microkernels are still up and running, but hybrids like Apple's XNU work better with current hardware.
@@DJDocsVideos But why was it so expensive? Because it was damn hard to get decent performance out of it. You had to hand-code in assembler to really use it. I have one of the HP workstations with 2x 900MHz Itaniums; it draws about 300W and with gcc the performance is on par with the Raspberry Pi. Or if you think that is an unfair comparison, a two-year-older AMD K7 outperforms it on both floating point and gzip. For microkernels, OK, they live on, but I was more thinking of Linux/Android/*BSD.
Mach is also still around, the microkernel Tanenbaum mentioned. Apple uses it in their kernel for macOS, iOS, watchOS, iPadOS etc etc.
It’s just called XNU and Apple turned it into, you guessed it, a hybrid kernel 😂.
OK, honest answer to "where is Linux today?".
I say this as a 100% Linux power user who left Windows for good 3 years ago. There never will be a year of the Linux desktop. Why? Because Linux already reached "critical mass" in late 2009, a desktop OS market share then of close to 5 percent. Did that critical mass result in commercial companies porting across their software? No. The bottom line is that people are de facto exposed to other operating systems.
If only Linus had had a counterpart in the graphics department, right from the very beginning, who ensured that the GUI on top of GNU/Linux was not X11 but a new system with a good toolkit. This would have made Linux and its userspace much more tightly integrated with the desktop early on, and not fragmented. This would have given Linux 10-15% of the desktop OS market today, instead of the current 2.5%. In fact, that is the one gripe that Linus has today with Linux: that the desktop got fragmented, and users were not getting a consistent experience.
Linus does lament the fragmentation of the GUI environment. However, it was never the role of the kernel developers to create/fix problems in that space.
That said, I fully agree with your sentiment. I learned Linux as a server like the back of my hand 2 decades ago. However, I still feel like I know less about the GUI system than humanity knows about quantum physics.
☮️❤️🌈
@@RichardBronosky That's why I mentioned a counterpart. Linus needed an equal person in the gfx department. Someone as sure of himself, and in the project as early, as Linus was. The tragedy was X11 being ported to Linux, and later on, various toolkits created for it (from tcl/tk to wxWidgets, to OpenStep, and of course Qt and GTK later on). This should have been prevented. If there had been a visionary person for the userspace, just as Linus was for the kernel, to create a compositor + toolkit + DE as early as 1991 or '92 (just like Romain Guy was for Android's GUI implementation in 2015, which is currently considered among engineers the most advanced in the world), the whole brouhaha with X11 and the fragmentation of toolkits/DEs could have been prevented. Even if more of these were to be created later on, the original could have prevailed -- just like the kernel was never forked.
@@EugeniaLoli oh, "an equal person in graphics". Copy that.
X11 is shared across *NIX; as far as I know it also runs on Mac.
I do not see how one could increase market share by avoiding existing code. Fragmentation is a result of openness; compare with browsers.
Early Linux had X and "MGR", which I know nothing about except that it lost out to X.
I am absolutely hyped for GNU Hurd video.
As someone who used his textbooks and was required to write code for Linux in the same operating systems course, imho A. S. Tanenbaum didn't really say anything wrong. Popularity doesn't mean better. Is Windows the best operating system out there? To this day Linux bites itself in the @$$. Even Linus said you don't write software for Linux; you write for RHEL or Ubuntu or... flavour of the day. The greatest thing to happen to Linux was FreeBSD getting thrown in court.
Was fun to see the OS history
Though I don't understand why microkernels lose
@@09f9 >I think in the long run microkernels will probably be the future
people 30 years ago be like
@@09f9 > it is very hard to design a performant microkernel because context switching is very hard
true, but it probably can be good enough on modern-ish systems
> the security and reliability benefits are only theoretical until someone actually builds the thing and people have a chance to try and break it.
Obviously everything more complex than a rock has bugs, but being more reliable in theory seems like a good boost for real life to me
@@ОлегСаперенко year3billion. microkernels are coming
@@ОлегСаперенко Frankly, I believe safe-language operating systems would be the superior choice. This way you don't need to sacrifice any performance for context switching or hardware-enforced privilege separation. All that is guaranteed at compile time.
But I don't see any OSes like that even trying to become general-purpose, so there's that.
On the other hand, when it comes to microkernel OSes there's Ares OS (with its Helios microkernel), which is in active development
Code should be considered "obsolete" when it no longer conforms to modern requirements and it is not possible or feasible to upgrade that code to do just that. For example, embedded code, which transfers sensitive data in plain text could be considered "obsolete".
Linux is an old and obsolete kernel design, but there are challenges for any disruptive innovation. Hardware and software evolved together, so now hardware is very good at making the inferior design shine (security bugs aside, although they are a big caveat).
And now Linux is part of industry and de-facto standards, so it is getting more and more cemented in a dominant position.
It sucks all air out of the room in terms of researcher and developer attention, leaving little to no space for new more innovative designs.
Like Windows for desktops, it will be harder and harder for alternatives to show up and really contend with Linux where it is dominant. It will take a major environment change to alter the status-quo, a bit like the GPL was for Linux.
I wonder how Andy feels about being so wrong
weirdly enough he thinks he's won because MINIX is the basis for... the Intel Management Engine. So MINIX runs on more PCs than Linux
Andy is not wrong.
Linus did not invent anything; he just implemented a tried and tested solution and gave it away for free.
@@gauravshah89 He was absolutely wrong. Microkernels are bloated garbage outside of niche, embedded uses.
Ken Thompson was right on point: we users couldn't care less about what's under the hood :)
Does it do what I want to do? Is it fast? Is it easy to use? And that's it. All the rest is nerdy brain wankery, I'm afraid.
Well, a lot of drivers these days run in userland (FUSE, libusb, vgalib, GPIO control stuff, etc), which is kinda like the microkernel architecture.
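As an illustration of that point, below is a minimal sketch of a userland filesystem "driver" against the libfuse 3 API, serving a single read-only file. The file contents and names are invented for the example; the kernel's FUSE module forwards each VFS request to this ordinary process, much like a microkernel server handling messages. Build with something like `gcc hello_fs.c $(pkg-config fuse3 --cflags --libs)` and mount with `./a.out <mountpoint>`.

```c
/* hello_fs.c - a read-only FUSE filesystem exposing one file, /hello. */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static const char *hello_str  = "hello from userspace\n";
static const char *hello_path = "/hello";

/* Report file metadata: a root directory plus one read-only file. */
static int fs_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* List the root directory's entries. */
static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi,
                      enum fuse_readdir_flags flags)
{
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0, 0);
    fill(buf, "..", NULL, 0, 0);
    fill(buf, hello_path + 1, NULL, 0, 0);  /* skip leading '/' */
    return 0;
}

/* Serve read() calls for /hello out of the in-memory string. */
static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, hello_str + off, size);
    return (int)size;
}

static const struct fuse_operations fs_ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[])
{
    /* The kernel forwards each VFS call to this unprivileged process. */
    return fuse_main(argc, argv, &fs_ops, NULL);
}
```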
15:45 That's not the real Ken - see the footer. Ken didn't use USENET; see "Three Wise Men: Kernighan Ritchie Thompson". (But he did co-sign "Andy Tanenbaum hasn't learned anything", which reads to me like a Rob post.)
I remember running MkLinux back in the day, so for a short time there was a micro kernel version of Linux.
It's amazing that the professor's time was so worthless that he could spend it watching CNN
Great video! The "$24" tier really is a thorn in my eyes...
The jump was too high, so the bit falls apart
tbf, something like a microkernel is the "next" technology because it needs more resources. I think it needs support from the CPU for message passing.
OS X used to be a microkernel and all the Apple fanboys were saying how much better OS X was because of it. Then in 2002, someone did a benchmark and OS X got creamed by both Windows and Linux. OS X then quietly switched to a hybrid kernel.
Apple fanboys, like any other fanboys, happily ignore problems for the sake of arguing. Linux fanboys are no exception.
the thing that bothers me about "Linux vs [arbitrary kernel]" comparisons is the glossing over of what Linus Torvalds' intention was. He cobbled together his kernel as a learning exercise, not as an exemplar of an architectural paradigm or as a "product," and he wanted to share that learning experience with other similarly inclined individuals. Those individuals learned the (very) basics of kernel design and created a developer base. That base would begin to create enhancements, and those enhancements would go back into the improved kernel and be released to an increasing user base.
This is the opposite of Andy Tanenbaum's objective: he was trying to teach concepts and "proper" OS design, and he charged a not-insignificant amount of $$$ for the opportunity. It's almost as if Linus challenged folks to try their hand at making something that worked, warts and all, instead of trying to be modern or correct, and a lot of people followed his steps.
it's no wonder - to me at least - that Linux has succeeded where Mach, BSD, Hurd, Minix, et al did not.
I was taking a drink at "especially an obscure line like Intel"... my laptop is now covered in water
This was a great and entertaining dive into history. Subbed.
I believe that Linus is one of those in Finland who speak Swedish (about 5 or 6 % of the Finnish population, I think), and yes, ”there's no point” is ”det är ingen idé” in Swedish, so he just mixed up the two languages a bit. Idé means idea, but Swedish and English can't be translated word by word. We (Swedish speakers) compose our phrases differently, both the order of the words and what words we use.
The Swedish word for point is punkt (period, dot) or poäng (as in scoring points). There's actually an alternative to "det är ingen idé" using the word poäng instead of idé, but it doesn't sound right. You could however reverse it and use poänglöst, pointless: "Det är poänglöst". That sounds like something some people would actually say, but I guess most Swedes say "Det är ingen idé". I don't know about the Swedish-speaking Finns, though. Their Swedish is a little different, but close enough for us to understand each other.
This is the best Linux video I ever saw... awesome piece of history...
Microkernels are a much cleaner design. They just don't perform as well, which is why NT4 moved the display system into the kernel.
Given the strength of today's hardware, I'd love to see a high-security microkernel option for people who would prefer security over speed.
I wonder why no microkernel adopts the NIX 'Fastcall' (curried syscall) thing. Actually, Plan 9 itself should adopt it - get the IP stack back out into userspace, as well as stuff no-one ever uses like devaoe.
On the other hand, citing more powerful hardware to justify inefficient software is why we were locked into a perpetual upgrade path when Intel/Windows was the only game in town. I'll take a $75 refurbished laptop that will continue to be useful running Linux for years to come over a $3000 status symbol any day.
@@RealDevastatia ... which is why it should be a boot option. Sometimes you need to prioritise performance. Sometimes putting security first is better. Sometimes, showing up with a status symbol bags you a new client. Options are good.
The Intel architecture is weird and poorly designed. I wasn't surprised that it became fabulously popular. But that was in spite of its deeply flawed nature, not because of it.
What's kind of amusing about all this is that it didn't take Linux very long at all to become far more portable. I wouldn't be surprised if it's surpassed every other operating system in history in terms of the number of fundamentally different architectures it runs on.
Yuppers kids. I got a CD in 1994 from PC Magazine with a version of Linux on it that had a tornado as its mascot. This was before the penguin was chosen. It had a desktop, a search tool, a web browser. It had instructions to set IP and DNS. I had never done such a thing and it took quite a while for this dumb guy (me) to get a dial-up 32kbps modem to connect to any board or, at that time, AOL. It was nearly impossible to install for a non-programmer. But it was quick actually. So much was broken and buggy that it crashed often; sometimes it didn't even boot.
But it was so damned cool to have it and tinker until it broke, install, do it again.
Brodie, thank you for this.
Please, do not get me wrong. I am someone who has used PCs for ~25 years, 99.999% of it Windows, and I think I am a power user if you compare me to the average PC user. I had some adventures with macOS, and of course with Linux, mostly around 2008-ish. Since around a year ago, I started to take much more interest - and understand more - even thinking about switching, because of W11. Personally I am waiting for Vanilla 2.0 and confirmation that two of the projects I am interested in will work on Linux with Proton. And I really like how much Linux, or rather from my perspective its desktop environments, is moving in a good direction - which is a fantastic thing. Now, being realistic and putting aside computing and the whole server side of Linux, and mobile with Android - where Linux is gigantic - the desktop is unfortunately still, well... kind of nothing. It is getting bigger and it is growing (thanks Valve, and Microsoft), but it is a niche and has been a tiny part of personal computing for a long time. The magic will start to happen at around 7-10% of PCs using Linux, because that would bring more proprietary software. FOSS is great, but often it is what it is. In 2022 the Linux desktop had ~2.48%, so there is a long way to go, because it has barely gained any share in those 20 years, oscillating around ~2.5-3.5%. It gains a lot in specific groups like "developers": above 6% in 2020, going down to ~4% in 2023.