Why doesn't Linux have a centralized Monte Carlo stress test package that all Linux developers can use to test their bug fixes? If the bug fix doesn't survive the stress test, it doesn't get in. This methodology was used by navigation teams for their spacecraft and ground software.
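Even a toy version shows the spirit of the idea. Real kernel fuzzers like Trinity and syzkaller do essentially this at industrial scale; the sketch below is neither of those, just a hedged illustration in plain C (the 450 cap on syscall numbers is an arbitrary guess), and it should only ever be run in a disposable VM:

```c
/* Toy syscall stress tester -- the idea behind fuzzers like
 * Trinity or syzkaller, not their actual code.
 * WARNING: run only in a throwaway VM; it can crash the system. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    srand((unsigned)time(NULL));
    for (long i = 0; i < 1000000; i++) {
        long nr = rand() % 450;        /* arbitrary syscall number */
        long ret = syscall(nr, rand(), rand(), rand()); /* junk arguments */
        if (ret == -1 && i % 100000 == 0)
            fprintf(stderr, "syscall %ld rejected junk; kernel still alive\n", nr);
    }
    return 0;
}
```

The hard part, as the reply below notes, isn't generating the load; it's covering drivers that need real hardware attached.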
It's more than just stress testing. A lot of these bugs are in hardware drivers, and that's not such a simple test case, since you need physical platforms and there may be a requirement for human involvement to get things set up.
There could be some merit for these duplicate "downstream" use cases to switch to a hardware certification model where they focus on specific combinations, have a testing platform, and then always use the latest development head. They would then only release new kernels that pass on their certified platforms. Bug fixes would go straight to upstream, and the practice of maintaining an old kernel would die off. However this is a challenge, as RedHat, SUSE, Oracle, etc. are not going to want to provide restrictive lists of whitelisted hardware that is supported.
Part of this problem was created within the Android ecosystem, where hardware manufacturers wanted to get to market as fast as possible. To do that, they modified the Linux kernel to work with their device, and then *never* contributed those drivers upstream. This makes them a single point of failure/bottleneck in getting any kind of updates to end users. And if you didn't already know, most hardware companies abandon their released devices incredibly quickly. Google now mandates 2 years of support - and it was a coup to get it extended that long.
The Linux ecosystem has other duplication issues too, like multiple implementations of similar services that compete, multiple packaging and deployment mechanisms that compete, etc. It's both a benefit and a curse of the platform being open.
The problem of system errors occurs in Linux, Windows, Mac and so on. The problem is not the number of developers, but the size of the OS. As an OS becomes too big, it becomes more difficult to detect errors in the system. An OS should be minimal in my opinion, and only control hardware. I think in the longer term this is simply necessary to guarantee safety.
Different developers doing the same task happens because they chose to do that. Android CHOOSES to use a 3-year-old kernel (and modify it to add their own functions until they put stuff upstream). They do that because they want to, and they put in that extra effort. It has nothing to do with the core kernel devs.
That's just the kernel/OS; I can only imagine the number of software engineers needed for the other layers on top, e.g. the Boeing software used for their spacecraft.
i think it is the other way around. linux has too many developers. while one side develops, the other side produces bugs. and once everyone is exhausted from the running around, someone claims we can find the bugs. in the meantime apple, microsoft, and all the f* rest are filing for the patents.
linux's problem with wide usage is that there are 100 main distros and even more sub-distros. if i wanted to join the linux family, choosing the first one is like turning over the rice pot to choose one single grain of rice; they're all rice, but each one is a little different from the previous one...
@@tcroyce8128 Where did he say he hated it though? I only recall him saying that Debian didn't work for him in 2012 (which imo back then was legitimately annoying for certain hardware)
@@ChristopherGray00 That's him being polite, saying it didn't work for him. And he still deals with Linux itself, not the other utils that get shipped along with a distribution. The point is Linux is winning with total domination in servers and in anything custom. We all know and love that. All but desktop. All the choices and variations, yet still no significant market.
@@tcroyce8128 Yeah, it didn't work for him and he didn't like it, but that is Debian; other distros can provide more hardware compatibility out of the box, and the kernel has drastically improved its compatibility since.
Thoughtful points on how expansive the engineering effort to maintain and improve the Linux kernel across thousands of distros and platforms has become. Even as a relatively hardcore and long-time Linux fan, it's not something I had given much thought to. I think what this exposes is not a crisis, but a clear need for a recognized commons resource for better orchestrating and coordinating the maintenance and development tasks. I think the challenge will be keeping the commercial monopoly interests from trying to take over the show and control Linux from a top-down authoritarian model, since that seems to be the only model they know.
Hi Gary, great explainer video as always. Please make a video for the dummies to explain to them: open source and free software doesn't mean I work for no money!
What irritates me is the Linux developers and their obsession with updates when most of them are useless. Updates can break things, especially graphics drivers. If I want to side-load Linux on a PC I use for classic gaming, with a GPU Nvidia considers legacy, then those drivers should be carried forward no matter what Nvidia says about it. Keep the 307 and 340 drivers relevant.
I suppose a big part of this problem, where there's a requirement to backport fixes, has to do with two things: #1 lack of ABI backward compatibility with older kernels (which would break userspace applications), #2 smartphone vendors often use proprietary kernel modules (drivers) that are not part of the upstream kernel; if they decided to use newer kernels in their product, these drivers would need to be recompiled and retested.
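Point #2 is easy to see in miniature. The structs below are invented for illustration (they are not real kernel structs), but they show why a binary module built against one kernel version can silently read the wrong field on another, and hence why every kernel bump means recompiling and retesting:

```c
/* Invented example: an internal kernel struct gains a field.
 * A binary driver compiled against the old layout now reads
 * the wrong offset on the new kernel. */
#include <stdio.h>
#include <stddef.h>

struct stats_v1 { unsigned long rx; unsigned long tx; };                           /* "old kernel" */
struct stats_v2 { unsigned long rx; unsigned long rx_dropped; unsigned long tx; }; /* "new kernel" */

int main(void)
{
    printf("tx offset, old layout: %zu\n", offsetof(struct stats_v1, tx));
    printf("tx offset, new layout: %zu\n", offsetof(struct stats_v2, tx));
    /* a module built for v1 would read rx_dropped and call it tx */
    return 0;
}
```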
Commercial Linux has developed methods of patching running kernels to reduce rebooting. Actually rolling out updates to replicated services like web servers should be automated; the reboots just need to be serialized so it looks like a brief network outage. The core argument about duplication of effort is true, yet most businesses value stability and validation over rapid deployment of bug fixes.
@@AU-hs6zw They are complete operating systems, unlike Linux, which means you don't glue pieces together to make an OS. The userland is made by the same people who make the kernels. FreeBSD has ZFS, jails, DTrace. NetBSD runs on every piece of hardware you can find. They are genetically Unix (forked from 386BSD/4.4BSD-Lite2). It may not mean much to you, but many people care. Aside from that, it's always good to have competition. Many people think of GNU/Linux as the only OS of its kind. But if you are going to replicate them to look like your Linux desktop, it's not worth it.
From Montreal, Canada, hello! I'm pretty sure the b-roll you used with the police and firemen was from Montreal! Really like your videos, keep up the good work.
Imagine if all the Linux OS devs just came together and made one version of a Linux OS. The fragmentation of Linux is a really big problem, since devs are creating the same things multiple times.
With the Steam Deck bringing Arch Linux to a bunch of new people, issues with the kernel could cause huge problems with bringing people into the Linux ecosystem.
And yet the Steam Deck is going to have a relatively limited variety of hardware, so its code will be far better exercised and improved than, e.g., the MIPS support code.
The real commoners were never out only to graze as much as they could, that's a myth which supports land privatisation. In the Linux kernel we've created a shared resource where everyone is trying to get as much as they can from it, not thinking about how to do so whilst sustaining the resource, which is what the commoners did.
The tragedy of the commons has to do with nations and pollution. If your country goes green, then production will just move to another country where they are happier to pollute the air, and that air will mix with yours in a short period of time. The same thing applies to corporate tax rates, labour laws, fishing, groundwater, etc. Basically anything where there is no way to enforce that the worst actor does not benefit. With the grazing commons, I am sure you are right that most people were responsible, but all it took was one selfish farmer and the system would have problems.
@@michaelnurse9089 I'm not at all an expert on how the commons worked, but there are ways to deal with people who don't follow the rules like ostracisation etc without instituting private property, which doesn't even resolve resource depletion without a state to enforce it. I think we often conflate commons and commoning with what they call a common pool resource in the literature. What the implications for this kernel are, I'm uncertain.
How to solve the tragedy of the commons: make it private. Instead of depending on public developers, hire your own developers to fix the kernel. It will still be fine since it must remain FOSS. "But what about people who cannot pay?" They wait in the public queue, because clearly their bug doesn't affect enough users to justify hiring a private bug fixer.
This is why all the pointless distros kind of annoy me. Instead of making what we already have better, they spin off and create something else that'll likely get abandoned. This is perhaps the single biggest issue with Linux as a whole.
The blog does talk about these issues as a result of the Linux kernel being written in C. So the obvious question is: can parts of the Linux kernel be written in Rust? Or can another kernel be written in Rust that is binary compatible with Linux? Or the third option is to just pick up the Mach kernel and base Android and other servers on that; Apple does post regular updates to it. Being a microkernel, at least the security issues would be restricted to the respective service.
Companies are dumping poorly coded drivers hoping someone else will maintain them. Yeah, if your code breaks compilation of the kernel and you keep not caring, some maintainer will make those changes, because people care about the use of this driver while the company isn't paying enough attention to it.
That Google report sounded more like a way to bolster the case for Fuchsia, honestly. All OSes get major bugs. All active large software projects have insufficient developers to cover all bugs. All software must triage bugs to allocate finite resources. The difference with Linux is that (a) everybody can see the code and the bug lists and (b) anybody can fix it (or pay someone to fix it) if they want to. That's really it. As for the repeated-effort thing, many distributions use a common base kernel, e.g. RHEL-based distros, Debian or Ubuntu, etc. Not all bugfixes need backporting either, because they often don't apply to older versions of the kernel, or they apply to parts the vendor doesn't support or care about (a bug in, say, the legacy IDE driver is irrelevant to an ARM-based Android vendor, for example). And then there's the convenient conflating of bugs with security flaws. A bug can be something as simple as "comment needs to be clarified". That's obviously not going to be a security issue. Other bugs are performance related, or something not working at all that should, etc. Only a small number of those will have security implications (and some of those will have *intended* security implications too). Google is perfectly capable of hiring engineers to push security fixes upstream if it wants. Android and SoC vendors are perfectly capable of pushing drivers etc. upstream to help more modern kernels work on legacy hardware, and Google is capable of working with and pressuring vendors to make their OSes more easily updateable... That's a vendor issue, not a Linux one. As for a kernel from 2018 being in the "latest and greatest" OS... Windows Server 2019 (the current stable Windows Server version) is based on the 1809 (that's September 2018) Windows build, itself a patched version of the original Windows 10 kernel from 2015. There are newer semi-annual builds, but 2019 is still the main stable release out there and actively sold. Linux is in a pretty typical position for any software of its size and scale, and as an OS choice is only benefitting from its open, free, user-serviceable nature.
Are there any operating systems that DO have enough engineers to fix all the bugs? No. They all have bugs. All operating systems could, in theory, have fewer bugs if they had more people working on fixing their bugs. In the Linux world, discussions tend to be more public, and that's a good thing. One of the discussions around every OS that is still "alive" is the one where it is pointed out, more or less constantly, that the organizations that depend on that OS should, out of intelligent self-interest, invest in its ongoing maintenance.
IBM bought Red Hat and with it kernel development alongside Linus. The first thing they did was enforce a regime that free-thinking developers rejected, and it even pushed Linus out for a while. No wonder they don't want to work in a corporate straitjacket when Linux was all about creative freedom. God knows what constraints they're forcing developers to work under. It's only a matter of time before a second kernel developer team appears, free of corporate involvement. There are plenty of developers, just not ones willing to agree to a code of conduct or imposed moral or ethical standards or acceptance of woke preconditions affecting their working environment... especially when many do it free of charge in their own free time...
@@daveofyorkshire301 I am not here to fight or anything, and I am not on anyone's side. I genuinely wanted to know more about this, and when I searched for it all I got was IBM acquiring Red Hat, no kernel stuff. That is why, if you don't want to explain anything, please at least give some sources so I can research it myself.
Hmm, 100 engineers, a little too round. Did Gary mention Fuchsia? "Fuchsia is an open source effort to create a production-grade operating system that prioritizes security, updatability, and performance." Hmm... With that said, a S-imple, S-peedy, S-ecure core does seem the eventual future of OSes, once technological convergence makes local hardware redundant through further abstraction layers via network technology. Minimal Linux distros are currently what cloud OSes use. Google makes a statement in an incredibly stupid manner.
Speaking of needing engineers for the kernel, I have some questions regarding drivers. I hope I'm not too late on the video. #AskMrGary How are drivers for, say, a mouse, network interface card or GPU made? How does that code look? Let's say that X computer component doesn't have Linux drivers. Is it next to impossible for an average/above-average programmer (who knows C and Linux) to write their own driver? I guess it depends a lot on the complexity of the component or the work it does.
Haven't actually _looked_ at any Linux for a very long time, but at least for USB stuff, most drivers are now outside the kernel, using special interfaces that were added for that particular purpose. Those drivers will look basically however their writers & maintainers want (though that may prevent them from being brought into the kernel in some cases).
Typically, writing a driver requires intimate knowledge of the hardware device, and vendors are often very protective of their own products so they aren't copied. This means that generally speaking, vendors prefer to write the drivers for their own products and then contribute them to the kernel. There are many examples, but Intel and AMD of course are two of the largest and most important vendors. There are exceptions, though, and the GPU vendors nVidia and AMD are examples where not only are there proprietary drivers, there are also community drivers like Nouveau for nVidia. Because the kernel has strict policies accepting only open sourced contributions, by having an alternative community driver in the kernel, you can install Linux on a machine and if you have for example an nVidia card, your machine will boot up the first time with graphics support using Nouveau, and then you can keep that or switch to the proprietary nVidia drivers if you wish. If Nouveau didn't exist, the first time you boot up you'd have to deal with text-only computing, and a lot of people can't deal with that. There are also reverse engineered drivers for products like Broadcom network cards... Broadcom won't publicly license their drivers so the kernel won't take them. But an independent community works hard at creating Broadcom drivers on their own that aren't subject to the Broadcom license, for people to use freely. Nowadays, it's very convenient for drivers to be provided by the kernel; that makes it easy to detect and install proper drivers for most devices. Your question about "What if there aren't drivers provided by the kernel?" harkens back a few years (yes, not that long ago, maybe 10 years now?) when standard practice was for drivers to run in user mode (as opposed to kernel mode). Yes, it could be an adventure. You'd start with maybe nothing else besides keyboard, text-only monitor and hard drives working. You would then need to find the code for the drivers you needed for each device and compile the driver yourself. And in those days, because your network card didn't automatically work, you'd have to download at least the network driver code to another machine and "sneaker net" the code by floppy or, in later years, USB key from one machine to the other before you could compile the driver. It can be difficult but usually not impossible to compile a user mode driver and load it into your Linux. You wouldn't even need to know how to program one line of code... just have the necessary source code, a really good guide with step-by-step instructions that are easy to understand, and a little bit of good luck not to make any mistakes.
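To give a feel for the "How does that code look?" question, here is roughly the smallest loadable kernel module. It is only the scaffolding that every real driver builds on (the "hello" names are invented for the example); building it also needs the kernel headers and a small Makefile:

```c
/* Minimal loadable kernel module: the skeleton every driver starts from. */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;                       /* 0 means successful load */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");
```

A real driver's init function would then register with a subsystem (USB, network, graphics, etc.), and that registration code is where the intimate hardware knowledge comes in.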
Distros most often pick a kernel version, but in the majority of cases they do not change the kernel, thereby making this point moot. You're mixing up kernel and userspace. And even there the "big divide" is not so big. Most distros use the X server (Xorg), some may use Wayland. All distros use ALSA for sound. But then we get to WM/DE, and this is where most distros start to diverge. Still, a majority use either KDE or GNOME.
@@Ilestun Good link, this was how it was back in the day before universal packages. But notice what he is saying here: the kernel team does not have this issue, because they don't change the userspace-facing APIs in the kernel very much. He said he would strangle anyone who tried to make a major change there. He says that the distro packagers and maintainers are having to deal with a mess, and this was true before the universal package formats.
This is a "none story"! Linux has thousands of people contributing code, it's just a limited amount of paid "maintainers". Windows is a far worse situation, and who the hell would take advice from anyone at Google?
Gary, the real tragedy is that non-open OS vendors push their crap while pointing a finger at open source projects, which aren't hiding or ignoring bugs. You still get bugs in Windows and macOS and QNX; they just have no obligation to acknowledge them.
The sheer amount of effort in updating numerous Linux kernel versions and the further variants produced by individual vendors sounds analogous to Microsoft's problem of having to support multiple past versions of Windows on 32-bit and 64-bit processors from both Intel/AMD and ARM. One can understand why, for example, Microsoft wants to set hard expiry dates for previous Windows versions and persuade people to switch to Windows 10 (or 11). By comparison Apple seems to have an easier time by simply dumping old versions and moving on. Besides, it only has to support its own hardware.
This only applies to the Linux kernel. Linux is not more bug-free in every regard. The desktop experience on Linux is lackluster. Bugs are everywhere, stuff breaks all the time. Localisation settings often don't work properly (not to mention localisation/translation itself is unusable). If you use only English it's fine. Also the drivers sometimes just don't work. On my PC networking is not working despite it being a common and supported chip. Also the printer can get lost after updates (in my case it was a CUPS update).
450 bugs a month does NOT mean that many bugs in the kernel that you are running. The kernel code supports a lot of different things, but any one compiled kernel is only using a fraction of them. I am talking about different hardware, architectures, file systems, system calls, etc. I imagine that most of the bugs are in parts that few people use, so nobody is rushing to fix them.
I think of it more like Linux has a lot of unmaintained code and projects. Everybody wants to focus on the next big thing and old code gets neglected. This happens in all software.
I also blame the tone of the late 90s and early 2000s that performance trumped everything. Any regression in performance, even to address security concerns, was immediately backed out and the patches mocked on the mailing list. Too much code was created in a fragile way with no regard for maintenance or security.
Linux is imperfect, but Google will save us. *sigh of relief* Nearly 30 years on, it's Pretty Good™ so far. I don't expect to live to see "ultimately." When is that, again?
Gary, what do you see as the overall state of Linux involvement? I ask as I used and had several computers running Linux back in the early 2000s. I also remember Linux versions of computers being sold at places like Walmart and such (with Lindows and Red Hat). There was much excitement and activity. It looked like Linux was finally becoming mainstream in the consumer market. Today though, just like iOS jailbreaking, there doesn't seem to be that much excitement and people talking about Linux. Am I right in this observation?
Over 95% of all Internet web servers run on Linux. Android which is a flavor of Linux runs on approx 70% of all phones, and iPhone which runs on a close cousin to BSD Unix runs on practically everything else. Nearly 100% of all network routers from small home routers to backbone Internet routers run on Linux. Nearly 100% of all heavy load PC servers run on Linux although most corporate servers run on Windows. Practically all scientific R&D is run on Linux. Practically all Internet servers that do anything other than web services (like DNS, FTP) run on Linux.
There are plenty of developers. The problem is they are all scattered between the various distros. Ubuntu, before they made Unity, had the vast majority of the market and was in a great spot to enact a real ecosystem for everyone. Without common standards there is no way to get the OEM support that Linux needs. The "everyone can have their own distro" attitude is incredibly harmful to Linux as a whole. We need one configurable distro that is maintained and checked over by the community. IDC if it's Ubuntu, Fedora, openSUSE, Arch... or whatever. Till everyone is on board with one distro, things will never change.
Question, is this why Android has monthly security patches in comparison to iOS? Because there's so many bug fixes going on with the Linux Kernel each month in comparison to iOS? Not that I'm saying one is better than another, just something I've always wondered. Thanks for your video!
There's a bug (or a feature) in xkb that hasn't been fixed since 2004, when it was first reported. Since then users have tried to patch it numerous times, but the changes never made it into a release. This bug is very annoying to people who use 2 or more keyboard layouts (to type in different languages).
Could you do a video at some point about how people can get involved in helping maintain the Linux kernel? I have been using Linux for some years now, and I have always wanted to give back, but I never know where to start, and I also for sure know I do not have the programming skills to write the code to contribute to it. But I also know there are other ways people can get involved, and until I learn the language Linux uses, I would like to help however I can.
Backporting patches is almost always done by someone and released to all. So I don't necessarily feel like people are duplicating backporting efforts.
As for Google making these claims, excuse me for thinking they have a motive in this and it's not a benevolent one. They have had constant issues with taking open source projects and code, not supporting those projects, and then making them incompatible and inevitably trying to kill those projects for their proprietary software. And it just so happens they have a vested interest in their new Fuchsia OS. Coincidence? I think not.
I agree about being skeptical about the flurry of Google's sudden public announcements about Linux issues and security issues. Perhaps they are trying to draw attention away from the huge security nightmare that is Android. Or maybe they are trying to turn Chrome OS into a viable commercial project and charge licensing fees like Microsoft does with Windows. I am more than a bit cynical about Google's claims of there being an immediate need for 100+ developers. They make it sound like if you use Linux you are doomed.
You're right. I'm very suspicious of Google.
Well, Google did think it was a good idea to fork the master branch, like that was going to end well lol
@@aytviewer2421 yeah. And google could easily pay for 100 developers if they were needed. It'd be a drop in the bucket for them.
It's like YouTube flipping a video before you can read the comments
What I like about this channel is that Gary is an actual tech guy and not a tech youtuber (glorified marketers).
“Linux doesn’t have enough developers to sort out the bugs”.
And Windows does?????
That isn't the point. The development setup (Commercial vs Open source) is radically different. If MS wants more devs it can hire them.
That means Linux doesn't have enough developers to create more bugs.
@@GaryExplains If "That isn't the point' then DON'T make the comparison!
@@afriedrich1452 LOL
@@GaryExplains So can Red Hat/IBM, so can Canonical, etc. In fact they do, and they release bug fixes regularly. It's not like every bit of Microsoft's stack is constantly being security vetted. They'll have bits they haven't looked at for years that are shot full of holes. If no one is looking, no one is fixing them. And the more servers you've got, the more investment you should put into maintaining them. Patching 100 servers is hard; it'll mostly be a tiny team of systems administrators and some home-grown scripts. Patching 10,000 isn't so difficult; you'd have Ansible or Chef or Puppet or whatever infrastructure to manage it.
Quite surprised Gary doesn't walk around the backyard while explaining it
I see what you did there...
@Him well tbh, I am not sure if he meant exactly this, but Linux YouTubers like Luke and DT often do videos reflecting on things while walking in their backyard. It's kind of a meme in the Linux YouTube space.
@@JoelJosephReji well yeah, we all know what's going on with Linux YouTubers and the woods. They do love nature
well he wouldn't want to give up more of his privacy by showing and doing that; we already have the house number and i've already sourced his accent
@@Anon1370 makes sense
When technology looks like magic, everyone expects miracles
If this was twitter I'd retweet this.
magic is real, except in the real world we know that someone has to write the code and the control mechanisms, to be synced and programmed and provided to the user as a result.
really, all the magic stories i heard, i always wondered how they would work. today i know that it can actually be done using software. think about that voice-activated door in the story that Ali Baba opened by saying secret words :D which today actually works :D and can actually be applied to any magic you can imagine.
It doesn't really work without magic kick
💯
I'm absolutely writing that down. WISDOM!
Without Linux, there would be no Google! Without Linux, there would be no Android! Without Linux, the internet would be a very dull place. Without Linux, you would pay a lot more money for the Windows OS.
Linux is still very important for smaller businesses (start-ups).
My first Linux distro was an early Slackware version. I have been using Linux for about 25 years now. I have had no viruses. Restarting the computer because of a software bug was extremely rare; maybe it happened once a year.
In the past, Windows users were negative about Linux. Nowadays, it is the opposite. Windows users who have tried Linux are not going back to Windows. I love Linux. I love those programmers working so hard to keep Linux the best OS there is!
Be honest Gary! Tell ppl too how many monthly bugs Microsoft and Apple have in their software! I read once that Microsoft repaired a bug after 20 years.
Linux is a kernel from which you create OSes like Android or Ubuntu.
Linux is NOT an operating system.
@@Ilestun Yeah yeah, GNU/Linux. Actually XOrg/systemD/GNU/Linux. Actually KDE/Wayland/systemD/GNU/Linux. Actually...
We get it. We're talking about the kernel here.
@@Ilestun Linux is an Operating System. I don't know who told you otherwise but they're wrong.
@@SuperDraganco no you are 100% wrong Linux is the kernel. The distribution you are running is a collection of GNU tools and programs by the FSF written back in the day. A quote from Wikipedia : "GNU is an extensive collection of free software, which can be used as an operating system or can be used in parts with other operating systems. The use of the completed GNU tools led to the family of operating systems popularly known as Linux. Most of GNU is licensed under the GNU Project's own General Public License."
Oh God! This entire comment section is filled with logical fallacies of all kind!
Well Google... 100 devs is around 25 million a year. With the BILLIONS you make, feel free to fix your complaint then.
Read "The Mythical Man-Month" some day. TL,WR: 9 women can't make a baby in a single month.
@Andai 😂
Android has the even bigger issue that it always uses an outdated kernel. The SoC companies are forced to push out their SoCs every year, so they publish a low-quality Linux kernel fork, which ends up outdated on arrival.
When you have a lot of servers serving the same set of websites, it's actually easier to apply the fixes without any downtime (provided that the site has been engineered well). You can just do a rolling reboot.
When you only have one laptop, and you are watching Gary Explains, you don't want to reboot until the video is finished.
😂 so true.
With Apple you have to keep buying a new computer every x number of OS upgrades. For Windows 11, most of us will have to get new motherboards, RAM, CPUs and other components because of the TPM 2.0 and other requirements. Linux can work on everything from the oldest hardware to the latest, including all the small computer boards like the Raspberry Pi. The cost of a Windows Server license is expensive. WINE is free and allows you to run I don't know how many Windows programs. If all computers were running Linux instead of Windows or macOS, we could make a serious dent in cutting down on e-waste.
I can guarantee Windows will walk back their requirements. They do it every time. Remember when Windows 7 was going to be the last 32-bit OS?
@@MyNameIsBucket Look at Linux: it doesn't need TPM 2.0 and it is very secure. They just need to write better code to protect the operating system. Maybe they can incorporate some principles from open source and make their OS more secure. Also, there is so much more that users could do to make their computers more secure, as well as their passwords.
What does TPM 2 have that TPM 1.2 doesn’t anyway?
I think you're seriously biased. My Apple MBP 2013 is running the latest macOS 11.5.2. It's not fast, but it runs OK with the latest Xcode and other things I need. Realistically, hardware is updated more often than Apple drops support for it. I had 3 Dell laptops at work running Windows 10. You don't need to run Windows 11 because it's Windows 10 SE or something like that. Yes, Linux is great to run on some old, weird, underpowered, obsolete or custom hardware.
@@henson2k With Windows and Linux you can custom build your computers. Yes, you can buy premade Windows computers, but with custom-built desktops and towers you can swap out a board, change the processor, change the RAM, change the hard drive, change the video card. With Mac you can't do it unless you can afford the Mac Pro, which is simply beyond my budget. I'm still running a Mac mini 2011 and a MacBook Pro 2012. I'm hoping to save up for the Mac mini 2018 though. I like macOS, especially Mission Control with two monitors. I cannot get that feature with Windows 10 or Linux (Debian). I like the terminal program, although I prefer iTerm2. With my MacBook Pro 2012 and Mac mini 2011 I can at least upgrade the memory and the hard drive. If only Apple could have at least made the Mac minis like the 2012, but with better hardware, instead of soldering RAM and soldering hard drives. You are right, I am biased: I like building my machines from the ground up as much as I can.
There's no such thing as absolute security and that's a problem from reality itself. However, if all that work is not shared (upstreamed) it's wasted! That's a very important point!
Gary, will you make a video about Fuchsia OS and the Zircon kernel?
The world will be in chaos if all Linux stops working tomorrow morning.
Nah mate, the world stops turning, everything turns dark, communications fall, logistics and food chains crumble, the plundering starts, zombie devs roam the streets in search of Cheetos, everybody dies a horrible death and...
You're right, chaos...
Well, a couple of weeks ago one of the largest DNS networks went down for under an hour and the whole internet practically ceased to exist in the northern Western Hemisphere
People are using it because it doesn't do that.
@@henson2k , makes sense.
There will always be bugs. There will always be patches. There will always be engineers working on Linux. Nothing to fret about.
@gghhkm You mean many issues will be created exponentially?
@gghhkm as are the people able to contribute to the linux kernel
btw, the linux kernel in fact is very tiny. It's only in the kernel/ subdirectory of the source tree. The overwhelming majority of the kernel source code is drivers. You could in fact remove the entire drivers directory and choose only those drivers for the hardware you actually have. Those drivers by the way mostly come from chip vendors, who indeed make mistakes by actually testing some models of the IC and not others.
You would be left with something on the order of Audacity in code complexity.
@gghhkm that means we will be stuck with Rust, although I watched a 5-minute video on it and I didn't see a single thing I liked. It's not solving problems in the correct way at all. And it has a completely broken type system. If you can't name your types, you are doing something wrong.
What I think they should have done is move the Linux kernel towards a microkernel architecture. You can already do most things in user space; it's time to deprecate the old.
The Truth.
yeah, I'm confused about what the actual point of this video is. He described the system of management that exists and then says it needs to exist... like, no shit? LOL. Open source anything can have people fix things locally, and they're under no obligation to send it "upstream" if they don't want to
Gary, can you please explain the diligent work of how Linux kernels are released and the process of how bugs are identified? I know it is a big subject that cannot be contained in a single video, but you could at least give a gist of it.
Same goes for Linux apps: I'm getting a nearly 50% success rate installing apps that actually work without having to spend days trying to figure out how to maybe get them to sort of work.
I think that one of the biggest downsides of Linux is that when people talk about it, it is often too technical for n00bs and intimidates them as they think it sounds extremely technical, and yet for daily tasks, it can actually be rather easy with more modern versions of Linux and Desktop environments like Gnome thanks to improvements over the past several years. I think that it's rare I ever see people doing side by side comparisons with other common OS's doing daily normal user stuff, like using a web browser, installing apps like Skype, Telegram, Spotify, Steam and things that people are familiar with on other platforms, scrolling over the top bar to switch workspaces to increase productivity between apps. I have found that once I got used to the simple things like GUI layout, the overall advantages of Linux over Windows just made it good enough that I now use Linux as my daily driver and removed Windows entirely after I went over a month without even booting Windows.
Basically dumb idiots trying to be special; they're trying to exclude people instead of include them
@@eltonmsn yeah, most of them are just trying to act superior to these n00bs. I mean, they're technically not wrong about those technological terms, but n00bs really find it overwhelming to hear all of those things
Case in point: 90% of all smartphone users run Android Linux
@@fuseteam No, we're talking about Linux desktop. The one that you install on PC as a daily driver
@@bigl9527 does it matter? The point is Linux is not hard to use; anyone can use it. Android Linux is so easy anyone can use it, and desktop Linux can be just as easy ;)
The tragedy of the Enclosures. Always keep an eye open for those with vested interests.
There is some backporting, and it probably is done by more than one engineer sometimes, but Linux is all about standing on the shoulders of giants - far more than proprietary OS's. Linux is massively parallel development.
Interesting vision, but I can't see the relationship with the tragedy of the commons, which has more to do with the management of non-renewable resources. The Linux kernel is not a non-renewable resource, but a piece of software of enormous complexity that has to be improved somehow. On the other hand, and also important to emphasize, the success of Linux is only a consequence of the availability of cheap hardware; a good kernel is useless if you do not have somewhere to run it.
Microsoft pours millions into the salaries of their kernel developers. Yet its bugs are more pathetic than Linux's.
I wouldn't worry about it. All mature products reach a point where fix work introduces bugs at the same rate as they are fixed.
The problem with a bug fix is that it often introduces two more bugs. There is always a compromise between generally applicable and exploitable. Software that "does everything" is also necessarily full of bugs.
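A made-up C example of that pattern: version 2 fixes the reported memory leak and, in doing so, plants a double free for any caller whose cleanup path also frees the buffer:

```c
#include <stdlib.h>

struct conn { char *buf; };

/* v1 -- BUG: the error path leaks buf */
int handle_v1(struct conn *c, int fail)
{
    c->buf = malloc(64);
    if (c->buf == NULL || fail)
        return -1;                  /* buf leaked when fail is set */
    /* ... use buf ... */
    return 0;
}

/* v2 -- the leak is fixed, but a new bug appears: buf is freed
 * yet the pointer is left dangling, so a caller's usual cleanup
 * (which also frees c->buf) becomes a double free */
int handle_v2(struct conn *c, int fail)
{
    c->buf = malloc(64);
    if (c->buf == NULL)
        return -1;
    if (fail) {
        free(c->buf);               /* leak fixed... */
        return -1;                  /* ...but c->buf now dangles */
    }
    /* ... use buf ... */
    return 0;
}
```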
Which is why there should be a scaled back version of an OS that doesn't try to do everything.
Make the "commons" smaller.
Make the size of the "commons" dependent on user type, of which there should be more than two: root and normal.
If I don't need a media player, then let me select a "user type" that doesn't need the multitude of (buggy) files it calls.
See the article "Dynamic Linking Libraries" by Paul Lutus:
"Place the downloaded DLL files only in the same directory as the application that needs them."
@@artsmith1347 I agree. At least Linux users have the option to build only the components they need and even to optimize those components to a particular application... it just takes expertise and time. Someone with the expertise should offer massive online classes for a reasonable fee.
Well, I guess that's a good thing for those looking for a career in kernel development :p
I sure got excited
The thing that concerns me is that we seem to just keep adding features that we could likely go without. If you froze feature updates and just spent a year or two fixing security and bug problems, cleaning out old unimportant components that could lead to security threats, etc., that would likely make a more stable system. You might even be able to cut down on the bloat that exists. Everything seems to be getting so big and complicated for no real reason.
Focusing just on bug and security fixes would probably solve that issue, but it would also mean that Linux would fall behind feature-wise compared to other operating systems. Besides, there are Linux vendors (Google being the largest one in this case) for whom new features are more important than bug fixes. New features in Android draw more new customers (and keep old ones from switching to something else), but no one really cares about bug fixes because the average consumer doesn't even think about that.
@@thedoofguy5707 Fall behind? C'mon. Windows 11 just updated a look that could have been easily done with Linux 5-6 years ago. Functionality-wise, Linux only usually loses out in gaming compatibility and that's getting better - but do we sacrifice security for game compatibility? Seems like a bad trade to me.
I get that building adoption is important, but most people only use a fraction of the features that they even have. I think that the problem is that people like to see a list of more things, even if they don't use them. And I know that there are outliers to use-cases, heck I even used the old floppy compatibility still in Linux recently. (It's no longer supported for updates, but is still there.)
Can you think of a recent feature that came out that was a massive improvement on your computing experience?
@@sbrazenor2 There are a few system shortcuts that I hope will be implemented on Linux, just like Windows has, so that it'll be easier to navigate the system with no mouse and no terminal. E.g.: Super + X (open the system menu) -> U (select shut down) -> U (confirm your choice).
Other things I can think of are face recognition or touch ID for app or system authentication.
Next, it would be super helpful if there were a preview panel so I could see the contents of a document just by clicking on it, without having to open it in an app, which increases both CPU and RAM usage.
There are a few things that could still be added, which shows it's always good to have more features built in. The world is getting more modern and people's workflows are changing along with it. Ease of use is needed. I hope that one day Linux can be run, tweaked, etc. fully, 100%, from the GUI, without relying on the terminal unless the user wants it, as that would help us non-techy users and save us from having to browse for and memorize every single goddamn command line.
As was mentioned already, the Linux license requires anybody who modifies the kernel and distributes it to release the changes to the public, so the effort is never duplicated unless they intentionally violate the license by not releasing the changes, in which case feel free to report it to the GNU project.
Now the second point: it was stupid of Google to use an old version of the kernel in the first place and manually backport all the patches. Linus mentioned somewhere that forking Linux is an enormous effort, and that is another reason to get your changes accepted into the mainline kernel.
Well explained. Thanks Gary for shedding light on this important issue. I'm a Linux desktop user. I love it, but no software is perfect, and this is a great illustration of some of the reasons why.
The situation is even worse than simply too few programmers to fix bugs. There are too few programmers to spot deliberately introduced malware and back doors, much of which can be incredibly hard to identify. And before any self-proclaimed gods-of-open-source cast aspersions at my assertion from their lofty perches in the "I write 50 KLoC/year, no bugs" inner circle, let them riddle me this: how many years did HTTPS have a major flaw in its protocol logic that went undetected by the open source gurus out there?
Heck, remember Heartbleed? A gaping crater deep inside OpenSSL that went undetected for about two years.
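For anyone curious, the Heartbleed class of bug boils down to trusting a peer-supplied length field. A simplified, hypothetical C sketch (names invented, not OpenSSL's actual code):

```c
/*
 * A hypothetical sketch of the Heartbleed bug class: trusting a
 * peer-supplied length field when echoing a payload back.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

unsigned char *build_heartbeat_reply(const unsigned char *payload,
                                     size_t actual_len,    /* bytes actually received */
                                     uint16_t claimed_len) /* length field from the wire */
{
    unsigned char *reply;

    /* FIX (missing in the vulnerable version): bound the claim by reality.
     * if (claimed_len > actual_len) return NULL; */
    (void)actual_len; /* the buggy code never consults it */

    reply = malloc(claimed_len);
    if (!reply)
        return NULL;

    /* BUG: copies claimed_len bytes even if the peer sent fewer; the
     * excess comes from adjacent heap memory (keys, cookies, etc.). */
    memcpy(reply, payload, claimed_len);
    return reply;
}
```

Two years of every open source guru being able to read exactly that kind of code, and nobody spotted it.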
The upbeat music in combination with penguins makes me wish I had more thumbs to give positive reviews. Great analogy of the Commons
All the problems brought up in the video only happen because Linux is open source and people can see them; the problems are way worse in other OSes.
Yet AMD is on a hiring spree for Linux developers to help with the Steam Deck launch.
People are trying to ditch Google and Apple spying more than ever.
Linux may be about to take a vast part of the market. Lag in certain areas is bound to happen.
Why doesn't Linux have a centralized Monte Carlo stress-test package that all Linux developers can use to test their bug fixes? If the bug fix doesn't survive the stress test, it doesn't get in. This methodology was used by navigation teams for their spacecraft and ground software.
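For what it's worth, the idea in miniature looks something like this in plain C (the qsort call is just a stand-in for whatever code a patch touches; a real kernel harness would be far more involved):

```c
/* Monte Carlo-style stress testing: hammer the code under test with
 * random inputs and check an invariant on every iteration. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    unsigned seed = (unsigned)time(NULL);
    srand(seed);                 /* log the seed so any failure reproduces */
    printf("seed=%u\n", seed);

    for (long iter = 0; iter < 1000000; iter++) {
        int buf[64];
        size_t n = (size_t)(rand() % 64) + 1;   /* random size */
        for (size_t i = 0; i < n; i++)
            buf[i] = rand();                    /* random contents */

        qsort(buf, n, sizeof buf[0], cmp_int);  /* code under test */

        for (size_t i = 1; i < n; i++) {        /* invariant check */
            if (buf[i - 1] > buf[i]) {
                fprintf(stderr, "FAIL at iter %ld (seed %u)\n", iter, seed);
                return 1;
            }
        }
    }
    puts("survived the stress run");
    return 0;
}
```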
It's more than just stress testing. A lot of these bugs are in hardware drivers, and that's not such a simple test case, since you need physical platforms and there may be a requirement for human involvement to get things set up.
There could be some merit for these duplicate "downstream" use cases to switch to a hardware certification model where they focus on specific combinations, have a testing platform, and then always use the latest development head. They would then only release new kernels that pass on their certified platforms. Bug fixes would go straight to upstream, and the practice of maintaining an old kernel would die off. However this is a challenge, as RedHat, SUSE, Oracle, etc. are not going to want to provide restrictive lists of whitelisted hardware that is supported.
Part of this problem was created within the Android ecosystem, where hardware manufacturers wanted to get to market as fast as possible. To do that, they modified the Linux kernel to work with their device, and then *never* contributed those drivers upstream. This makes them a single point of failure/bottleneck in getting any kind of updates to end users. And if you didn't already know, most hardware companies abandon their released devices incredibly quickly. Google now mandates 2 years of support - and it was a coup to get it extended that long.
The Linux ecosystem has other duplication issues too, like multiple implementations of similar services that compete, multiple packaging and deployment mechanisms that compete, etc. It's both a benefit and a curse of the platform being open.
The problem of system errors occurs in Linux, Windows, Mac and so on.
The problem is not with the number of developers, but with the size of the OS.
Because an OS becomes too big, it becomes more difficult to detect errors in the system.
An OS should be minimal in my opinion, and only control hardware.
I think in the longer term this is just necessary to guarantee safety.
And that's mostly what the Linux kernel does. IIRC, I once read that about 70% of the Linux kernel is device drivers.
Different developers doing the same task happens because they chose to do it that way. Android CHOOSES to use a 3-year-old kernel (and to modify it with their own functions until they push the stuff upstream).
They do that because they want to, and they put in that extra effort.
It's nothing to do with the core kernel devs.
The reboot pain point is fixed with livepatch on servers.
The Linux Foundation has to focus more on maintenance than on politics.
The majority of development is backed by corporations with special interests, no chance of that happening for the foreseeable future.
And that's just the kernel/OS; I can only imagine the number of software engineers needed for the other layers on top, e.g. the software Boeing uses for its spacecraft.
I think it is the other way around: Linux has too many developers. While one side develops, the other side produces bugs. And when everyone is exhausted from the running around, someone claims we can find the bugs. In the meantime Apple, Microsoft, and all the f* rest are filing for the patents.
Finally, some one with some actual sense
Linux's problem with wide adoption is that there are 100 main distros and even more sub-distros. If I wanted to join the Linux family, choosing the first one is like turning over the rice pot to choose one single grain: they're all rice, but each one is a little different from the previous one...
This sounds like Google PR propaganda because they have a new OS coming out soon. As long as Linux is the way Linus wants it, it's good enough for me.
Your ideal world is not everyone's ideal world, stop assuming that your mindset is the same as everyone else's.
Desktop Linux is hated even by Linus. So avoid confusing the kernel with the OS. It's not a BSD project.
@@tcroyce8128 Where did he say he hated it, though? I only recall him saying that Debian didn't work for him in 2012 (which IMO back then was legitimately annoying on certain hardware).
@@ChristopherGray00 Saying it didn't work for him is putting it politely. And he still deals with the kernel, not the other utilities that get shipped along with a distribution. The point of Linux winning is total domination in servers and in anything custom. We all know and love that. All but the desktop. All the choices and variations, yet still no significant market share.
@@tcroyce8128 Yeah, it didn't work for him and he didn't like it; that was Debian, however. Other distros can provide more hardware compatibility out of the box, and the kernel's compatibility has drastically improved since.
If you think the kernel is understaffed you will be truly shocked when you look how dire the situation is in many user space projects.
Thoughtful points on how expansive the scope of engineering going on to maintain and improve the Linux Kernel across thousands of distros and platforms has become. Even as a relatively hard core and long time Linux fan, it's not something I had given much thought to. I think what this exposes is not a crisis, but a clear need to have a recognized commons resource for better orchestrating and coordinating the maintenance and development tasks. I think the challenge will be in keeping the commercial monopoly interests from trying to take over the show and control Linux from a top down authoritarian model, since that seems to be the only model they know.
Hi Gary, great explainer video as always. Please make a video for the dummies explaining that Open Source and Free Software doesn't mean I work for no money!
Thing is, it's too complex and too big. Linux needs to shrink so it can grow again.
What irritates me is the Linux developers' obsession with updates when most of them are useless. Updates can break things, especially graphics drivers. If I want to side-load Linux on a PC I use for classic gaming with a GPU Nvidia considers legacy, then those drivers should be carried forward no matter what Nvidia says about it. Keep the 307 and 340 drivers relevant.
Your perspective is quite different. Things like rebooting your PC being a pain point: I would never have guessed that.
good video. thanks
I suppose a big part of this problem, where there's a requirement to backport fixes, has to do with two things: #1 the lack of a stable in-kernel ABI across kernel versions (the userspace ABI is kept stable, but modules must match the kernel they were built for); #2 smartphone vendors often use proprietary kernel modules (drivers) that are not part of the upstream kernel, so if they decided to use newer kernels in their products, these drivers would need to be recompiled and retested.
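To make #2 concrete, here is a hedged sketch of about the smallest possible out-of-tree module (everything here is illustrative). Even this trivial code must be rebuilt against each target kernel's headers, since the loader rejects modules whose version magic doesn't match, and any in-kernel API it touches may have changed between releases:

```c
/* Minimal out-of-tree kernel module; typically built with a one-line
 * Kbuild file (obj-m := hello.o) against /lib/modules/$(uname -r)/build. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative hello-world module");

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");  /* pr_info logs to the kernel ring buffer */
    return 0;                    /* 0 = successful load */
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Now imagine that instead of two pr_info calls this was a vendor's proprietary camera driver: every kernel bump means a rebuild, a retest, and possibly source changes, which is exactly why vendors sit on old kernels.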
*GARY!!!*
*Good Afternoon Professor!*
*Good Afternoon Fellow Classmates!*
Stay safe out there everyone!
MARK!
Commercial Linux has developed methods of patching running kernels to reduce rebooting.
Actually rolling out updates to replicated services like web servers should be automated. They just need to have the reboots serialised so it looks like a brief network outage.
The core argument about duplication of effort is true, yet most businesses value stability and validation over rapid deployment of bug fixes.
I wish FreeBSD/NetBSD were more popular
Why so? Might try them if you tell me something interesting
@@AU-hs6zw They are complete operating systems, unlike Linux, which means you don't glue pieces together to make an OS. The userland is made by the same people who make the kernel.
FreeBSD has ZFS, jails and DTrace. NetBSD runs on every piece of hardware you can find. They are genetically Unix (forked from 386BSD/4.4BSD-Lite2). It may not mean much to you, but many people care.
Aside from that, it's always good to have competition. Many people think of GNU/Linux as the only OS of its kind.
But if you are going to replicate them to look like your Linux desktop, it's not worth it.
@@devsirat would try them out soon 😀
Google can just hire more kernel devs to contribute... With all the Billions of dollars, surely you can, right?
Engineers from Google and Amazon having to fix bugs for their own use... not really my problem
That's a fact!!!
From Montreal Canada hello! I'm pretty sure the b-roll you used with the police and firemen was from Montreal! Really like your videos; keep up the good work.
Imagine if all the Linux OS devs just came together and made one version of the Linux OS. The fragmentation of Linux is a really big problem, since devs are creating the same things multiple times.
With the Steam Deck bringing Arch Linux to a bunch of new people, issues with the kernel could cause huge problems with bringing people into the Linux ecosystem.
And yet the Steam Deck is going to have a relatively limited variety of hardware, so its code will be improved much more than, say, the MIPS support code.
Part of the problem is that many volunteer developers would rather work on something “cool” instead of necessary but mundane features & fixes.
The real commoners were never out only to graze as much as they could, that's a myth which supports land privatisation. In the Linux kernel we've created a shared resource where everyone is trying to get as much as they can from it, not thinking about how to do so whilst sustaining the resource, which is what the commoners did.
The tragedy of the commons has to do with nations and pollution. If your country goes green, then production will just move to another country where they are happier to pollute the air, and that air will mix with yours in a short period of time. The same thing applies to corporate tax rates, labour laws, fishing, groundwater, etc. Basically anything where there is no way to enforce that the worst actor does not benefit. With the grazing commons, I am sure you are right that most people were responsible, but all it took was one selfish farmer and the system would have problems.
@@michaelnurse9089 I'm not at all an expert on how the commons worked, but there are ways to deal with people who don't follow the rules like ostracisation etc without instituting private property, which doesn't even resolve resource depletion without a state to enforce it. I think we often conflate commons and commoning with what they call a common pool resource in the literature. What the implications for this kernel are, I'm uncertain.
How to solve the tragedy of the commons:
Make it private.
Instead of depending on public developers, hire your own developers to fix the kernel. It will be fine, since it must stay FOSS.
"But what about people who cannot pay?"
They wait in the public queue, because clearly their bug doesn't affect enough users to justify hiring a private bug fixer.
That will take us back to 1990's Microsoft, if you are old enough to remember that, son.
Some middle ground would be nice. A mere $5 or $10 per copy would allow attracting and rewarding maintainers.
This is why all the pointless distros kind of annoy me. Instead of making what we already have better, they spin off and create something else that'll likely get abandoned. This is perhaps the single biggest issue with Linux as a whole.
Ha, ha, you fooled me about the sound. I thought my headphones stopped working 🙂
If this is the truth...stupidity is going to engulf the world.
Agree with the points discussed.
That's the issue, everyone wants free stuff, but they don't want to participate to maintain that stuff.
Free users are probably 90% of them lol
Hey Gary! Nice change of location! Random question, but what kind of car do you drive?
The blog does talk about these issues as a result of the Linux kernel being written in C. So the obvious question is: can parts of the Linux kernel be written in Rust? Or could another kernel be written in Rust that is binary-compatible with Linux? The third option is to just pick up the Mach kernel and base Android and the servers on that; Apple does post regular updates to it. Being a microkernel, at least the security issues would be restricted to the respective service.
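For context, here is a deliberately buggy, hypothetical user-space C snippet (not kernel code) showing the memory-safety bug class behind the Rust question; safe Rust rejects this pattern at compile time via the borrow checker:

```c
/* An illustration of the use-after-free class of C bug that the
 * Rust-in-the-kernel effort aims to rule out at compile time. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct session {
    char name[16];
};

int main(void)
{
    struct session *s = malloc(sizeof *s);
    if (!s)
        return 1;
    strcpy(s->name, "alice");

    free(s);                 /* session torn down... */
    printf("%s\n", s->name); /* BUG: use-after-free. C happily compiles
                              * this; whatever the heap holds now leaks. */
    return 0;
}
```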
Companies are dumping poorly coded drivers, hoping someone else will maintain them. If your code breaks compilation of the kernel and you keep not caring, some maintainer will end up making those changes, because people care about using the driver while the company isn't paying enough attention to it.
I like the outdoor setting for this video
1:50 It also runs on your standard desktop PC with a GUI; one of the nice ones is called KDE.
LOL. That isn't the point I am making. 🤦♂️
That Google report sounded more like a way to bolster the case for Fuchsia, honestly.
All OSes get major bugs. All active large software projects have insufficient developers to cover all bugs. All software must triage bugs to allocate finite resources. The difference with Linux is that (a) everybody can see the code and the bug lists and (b) anybody can fix it (or pay someone to fix it) if they want to.
That's really it.
As for the repeated-effort thing, many distributions use a common base kernel, e.g. RHEL-based distros, Debian or Ubuntu, etc. Not all bug fixes need backporting either, because they often don't apply to older versions of the kernel, or they apply to parts the vendor doesn't support or care about (a bug in, say, the legacy IDE driver is irrelevant to an ARM-based Android vendor, for example).
And then there's the convenient conflating of bugs with security flaws. A bug can be something as simple as "comment needs to be clarified". That's obviously not going to be a security issue. Other bugs are performance related or something not working at all that should, etc. Only a small number of those will have security implications (and some of those will have *intended* security implications too).
Google is perfectly capable of hiring engineers to push security fixes upstream if it wants. Android and SoC vendors are perfectly capable of pushing drivers etc. upstream to help more modern kernels work on legacy hardware, and Google is capable of working with and pressuring vendors to make their OSes more easily updateable... That's a vendor issue, not a Linux one.
As for a kernel from 2018 being in the "latest and greatest" OS... Windows Server 2019 (current stable Windows Server version) is based on the 1809 (that's September 2018) Windows build, itself a patched version of the original Windows 10 kernel from 2015. There's newer semi annual builds but 2019 is still the main stable release out there and actively sold.
Linux is in a pretty typical position for any software of its size and scale, and as an OS choice is only benefitting from its open, free, user serviceable nature.
Microsoft simply outsourced internal beta testing for windows 10 to its paying user base.
Are there any operating systems that DO have enough engineers to fix all the bugs? No. They all have bugs. All operating systems could, in theory, have fewer bugs if they had more people working on fixing them. In the Linux world, discussions tend to be more public, and that's a good thing. Around every OS that is still "alive", it is pointed out, more or less constantly, that the organizations depending on that OS should, out of intelligent self-interest, invest in its ongoing maintenance.
Bro, I'm going to contribute
Are you saying computers are getting too complex and becoming impossible to support, especially in the case of Linux, which is supposedly free?
Linux seems to be divided into too many varieties of itself; it becomes tedious to fix everything.
IBM bought Red Hat, and with it kernel development alongside Linus. The first thing they did was enforce a regime that free-thinking developers rejected; it even pushed Linus out for a while. No wonder they don't want to work in a corporate straitjacket when Linux was all about creative freedom. God knows what constraints they're forcing developers to work under.
It's only a matter of time before a second kernel developer team appears, free of corporate involvement. There are plenty of developers, just not ones willing to agree to a code of conduct, imposed moral or ethical standards, or woke preconditions affecting their working environment... especially when many do it free of charge in their own free time...
can you please explain more
@@hpsmash77 its not a secret, they aren't hiding it... Its been reported widely.
@@daveofyorkshire301 I mean, what is the relationship between IBM and the kernel, not Red Hat.
@@hpsmash77 Do your own homework and don't contradict what you obviously don't know...
@@daveofyorkshire301 I am not here to fight or anything, and I am not on anyone's side. I genuinely wanted to know more about this, and when I searched for it all I got was IBM acquiring Red Hat, no kernel stuff. That is why, if you don't want to explain anything, please give some sources so I can research it myself.
Neither does anyone else; the bugs and flaws are endemic, more so with closed platforms.
From Montreal Canada! Thank you Gary for your great videos. BTW, that was a crime scene in Montreal. SPVM cars.
I wonder if Google will stop caring about Linux and move to Fuchsia for everything. Would be good to get a video on that 🙏
Not any time soon of course
They might be planning on moving Chrome OS from Linux to Fuchsia.
More flavours of Linux than there are engineers ;-)
Thanks GS.
Thanks for the insight Gary
Hmm, 100 engineers, a little too round. Did Gary mention Fuchsia?
"Fuchsia is an open source effort to create a production-grade operating system that prioritizes security, updatability, and performance."
Hmm...
With that said, a Simple, Speedy, Secure core does seem to be the eventual future of OSes, once technological convergence makes local hardware redundant through further abstraction layers via network technology.
Minimal Linux distros are currently used as cloud OSes. Google made its statement in an incredibly stupid manner.
Speaking of needing engineers for the kernel, I have some questions regarding drivers. I hope I'm not too late on the video.
#AskMrGary How are drivers for, say, a mouse, network interface card or GPU made? How does that code look? Let's say that X computer component doesn't have Linux drivers. Is it next to impossible for an average/above-average programmer (who knows C and Linux) to write their own driver? I guess it depends a lot on the complexity of the component or the work it does.
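As a taste of how that code looks, here is a hedged sketch (all names invented) of roughly the smallest possible driver: a character device whose read() returns a fixed message. Real mouse, NIC or GPU drivers add hardware register access, interrupt handling and DMA on top of a skeleton like this:

```c
/* A toy character-device driver: after loading, a device node created
 * with the returned major number yields a fixed message on read(). */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>

MODULE_LICENSE("GPL");

static int major;
static const char msg[] = "hello from a toy driver\n";

static ssize_t toy_read(struct file *f, char __user *buf,
                        size_t len, loff_t *off)
{
    /* simple_read_from_buffer handles offsets and copy_to_user for us */
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations toy_fops = {
    .owner = THIS_MODULE,
    .read  = toy_read,
};

static int __init toy_init(void)
{
    major = register_chrdev(0, "toy", &toy_fops); /* 0 = pick a free major */
    return (major < 0) ? major : 0;
}

static void __exit toy_exit(void)
{
    unregister_chrdev(major, "toy");
}

module_init(toy_init);
module_exit(toy_exit);
```

The hard part of a real driver isn't this scaffolding, it's knowing what the hardware expects, which is why the replies below focus on vendors and reverse engineering.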
Haven't actually _looked_ at any Linux for a very long time, but at least for USB stuff, most drivers are now outside the kernel, using special interfaces that were added for that particular purpose. Those drivers will look basically however their writers & maintainers want (though that may prevent them from being brought into the kernel in some cases).
Typically, writing a driver requires intimate knowledge of the hardware device, and vendors are often very protective of their own products so they aren't copied. This means that, generally speaking, vendors prefer to write the drivers for their own products and then contribute them to the kernel. There are many examples, but Intel and AMD are of course two of the largest and most important vendors. There are exceptions, though, and the GPU vendors nVidia and AMD are examples where not only are there proprietary drivers, there are also community drivers like Nouveau for nVidia. Because the kernel has strict policies accepting only open-sourced contributions, having an alternative community driver in the kernel means you can install Linux on a machine with, for example, an nVidia card and it will boot up the first time with graphics support using Nouveau; then you can keep that or switch to the proprietary nVidia drivers if you wish. If Nouveau didn't exist, the first time you booted up you'd have to deal with text-only computing, and a lot of people can't deal with that.
There are also reverse engineered drivers for products like Broadcom network cards... Broadcom won't publicly license their drivers so the kernel won't take them. But, an independent community works hard at creating Broadcom drivers on their own that's not subject to the Broadcom license for people to use freely.
Nowadays, it's very convenient for drivers to be provided by the kernel, that makes it easy to detect and install proper drivers for most devices. Your question about "What if there aren't drivers provided by the kernel?" harkens back a few years (yes, not that long ago, maybe 10 years now?) when standard practice was for drivers to run in User Mode (as opposed to kernel mode). Yes, it could be an adventure. You'd start with maybe nothing else besides keyboard, text only monitor and hard drives working. You would then need to find the code for the drivers you needed for each device and compile the driver yourself. And, in those days because your network card didn't automatically work, you'd have to download at least the network driver code to another machine and "sneaker net" the code by floppy or in later years USB key from one machine to the other before you could compile the driver.
It can be difficult but usually not impossible to compile a User Mode driver and load it into your Linux. You wouldn't even need to know how to program one line of code... Just have the necessary source code, a really good guide with step by step instructions that are easy to understand and a little bit of good luck not to make any mistakes.
@@absalomdraconis Nowadays most drivers, including USB ones, are typically distributed with the kernel.
@@tonysu8860 Hey, thanks for the detailed answer!
Far too many distros leading to far too many libraries leading to a freaking mess. Linus explained it perfectly.
Distros most often pick a kernel version, but in the majority of cases they do not change the kernel.
Thereby making this point moot.
You're mixing up kernel and userspace.
And even there the "big divide" is not so big.
Most distros use the X server (Xorg); some may use Wayland.
Nearly all distros use ALSA as the kernel sound layer.
But then we get to WM/DE; this is where most distros start to diverge.
Still, a majority use either KDE or GNOME.
@@Zandman26 He is called Linus Torvalds and he disagrees with you: th-cam.com/video/Pzl1B7nB9Kc/w-d-xo.html
@@Ilestun Good link; this was how it was back in the day before universal packages.
But notice what he is saying here: the kernel team does not have this issue, because they don't change the userspace-facing APIs in the kernel very much.
He said he would strangle anyone who tried to make a major change there.
He says that the distro packages and their maintainers are having to deal with a mess, and this was true before the universal package formats.
This is a "none story"! Linux has thousands of people contributing code, it's just a limited amount of paid "maintainers". Windows is a far worse situation, and who the hell would take advice from anyone at Google?
Gary, the real tragedy is that non-open OS vendors push their crap while pointing a finger at open source projects, which aren't hiding or ignoring bugs. You still get bugs in Windows, macOS and QNX; they just have no obligation to acknowledge them.
They do acknowledge them. Can you give me some examples of what you claim?
The sheer amount of effort in updating numerous Linux kernel versions and the further variants produced by individual vendors sounds analogous to Microsoft's problem of having to support multiple past versions of Windows on 32-bit and 64-bit processors from both Intel/AMD and ARM. One can understand why, for example, Microsoft wants to set hard expiry dates for previous Windows versions and persuade people to switch to Windows 10 (or 11). By comparison Apple seems to have an easier time by simply dumping old versions and moving on. Besides, it only has to support its own hardware.
Linux is free, yet more bug free than windows...
Only applies to the Linux kernel. It's not more bug-free in every regard. The desktop experience on Linux is lackluster. Bugs are everywhere; stuff breaks all the time. Localisation settings often don't work properly (not to mention localisation/translation itself is often unusable). If you use only English, it's fine. Also, drivers are sometimes just not working: on my PC networking is broken despite it being a common, supported chip. The printer can also get lost after updates (it was a CUPS update).
The file picker having no thumbnail previews for 30 years is a sign of understaffing, though.
If they call it a WONTFIX, it's not a bug...
450 bugs a month does NOT mean that many bugs in the kernel that you are running. The kernel code supports a lot of different things, but any one compiled kernel uses only a fraction of those. I am talking about different hardware, architectures, file systems, system calls, etc. I imagine that most of the bugs are in parts that few people use, so nobody is rushing to fix them.
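A hedged user-space sketch of the mechanism, with a made-up CONFIG_TOY_FILESYSTEM symbol standing in for a real Kconfig option; code that isn't selected at build time never gets compiled into your kernel, and neither do its bugs:

```c
/* Kernel code is fenced off by Kconfig options; a build without the
 * option never compiles the guarded code. CONFIG_TOY_FILESYSTEM here is
 * illustrative, not a real kernel symbol. */
#include <stdio.h>

#ifdef CONFIG_TOY_FILESYSTEM
static void toy_fs_init(void)
{
    puts("toy filesystem available");
}
#endif

int main(void)
{
#ifdef CONFIG_TOY_FILESYSTEM
    toy_fs_init();  /* only exists if the option was selected */
#else
    puts("built without CONFIG_TOY_FILESYSTEM: none of its code shipped");
#endif
    return 0;
}
```

Compile with cc -DCONFIG_TOY_FILESYSTEM to see the other branch; a bug inside the guarded block simply doesn't exist in a build that leaves the option off.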
I think of it more like Linux has a lot of unmaintained code and projects. Everybody wants to focus on the next big thing and old code gets neglected. This happens in all software.
I also blame the tone of the late 90s and early 2000s, when performance trumped everything. Any regression in performance, even to address security concerns, was immediately backed out and the patches mocked on the mailing list. Too much code was created in a fragile way with no regard for maintenance or security.
Linux is imperfect, but Google will save us. *sigh of relief*
Nearly 30 years on, it's Pretty Good™ so far. I don't expect to live to see "ultimately." When is that, again?
Gary, what do you see as the Linux involvement overall? I ask as I used and had several computers running Linux back in the early 2000s. I also remember Linux versions of computers being sold at places like Walmart and such (with Lindows and Red Hat). There was much excitement and activity; it looked like Linux was finally becoming mainstream in the consumer market. Today though, just like iOS jailbreaking, there doesn't seem to be that much excitement or people talking about Linux. Am I right in this observation?
Over 95% of all Internet web servers run on Linux.
Android which is a flavor of Linux runs on approx 70% of all phones, and iPhone which runs on a close cousin to BSD Unix runs on practically everything else.
Nearly 100% of all network routers from small home routers to backbone Internet routers run on Linux.
Nearly 100% of all heavy load PC servers run on Linux although most corporate servers run on Windows.
Practically all scientific R&D is run on Linux.
Practically all Internet servers that do anything other than web services (like DNS, FTP) run on Linux.
There are plenty of developers. The problem is they are all scattered between the various distros. Ubuntu, before they made Unity, had the vast majority of the market and was in a great spot to enact a real ecosystem for everyone. Without common standards there is no way to get the OEM support that Linux needs. The "everyone can have their own distro" mentality is incredibly harmful to Linux as a whole. We need one configurable distro that is maintained and checked over by the community. IDC if it's Ubuntu, Fedora, openSUSE, Arch... or whatever. Till everyone is on board with one distro, things will never change.
One thing to watch out for with these bugs is that a "bug" may be a deliberate attempt to introduce vulnerabilities into Linux, and this has happened.
Question: is this why Android has monthly security patches, in comparison to iOS? Because there are so many bug fixes going on with the Linux kernel each month compared to iOS? Not that I'm saying one is better than the other, just something I've always wondered. Thanks for your video!
Does the monthly patch include kernel updates as well? Kernel updates could be just optional
I think your base assumption might be wrong:
iOS 14.4.2 - 26 Mar 2021
iOS 14.5 - 26 Apr 2021
iOS 14.5.1 - 03 May 2021
iOS 14.6 - 26 May 2021
@@josephnevin Yes, the monthly security patches include security fixes for kernel components; I don't think they are optional, though.
There's a bug (or a feature) in xkb that hasn't been fixed since 2004, when it was first reported. Since then users have tried to patch it numerous times, but the changes never made it into a release. This bug is very annoying to people who use two or more keyboard layouts (to type in different languages).
Sounds like overfishing the ocean.
With hundreds of distros, no wonder shit happens.
If developers concentrated on two or three distros, there would be fewer bugs.
Could you do a video at some point about how people can get involved in helping maintain the Linux kernel? I have been using Linux for some years now and I have always wanted to give back, but I never know where to start, and I know for sure I do not have the programming skills to write the code to contribute to it. But I also know there are other ways people can get involved, and until I learn the language Linux uses, I would like to help however I can.