I have prepared Linux systems that run in 32MB of RAM (yes MB, not GB), with a GUI and no swap. It wasn't meant for general usage (prepared using Yocto for medical devices), but it gives an idea of what you can achieve. I felt like an artisan sculpting a gem. [EDIT] Clarification to address some random replies: The device was a medical device able to operate without interruptions for years if necessary. In the last test I know of it had been operating for 6 months without interruption, but years was the target, and it had to support OTA updates and operate with no swap. The UI was touch driven and responsive, IEC 62366-2 certified. I'm sure there are other options on the market, but I doubt many tick so many boxes. And yes, a version of Windows for Devices, Windows Embedded or Windows IoT (or whatever name it has today), was part of the platform set evaluated by the customer.
Ya.... and to point out, you built it for a single model of hardware, for a very, very, very selective set of things. Linux is awesome for that, but don't get it twisted..... that doesn't apply to your normal desktop user.
Linux has long had better memory management compared to not just Windows but a number of other OSs. I know Linus and the kernel crew were somewhat obsessive about memory management and making sure things used memory efficiently. At one point I was working for a company in Montreal, and they had been running a Web-based Java application on (at the time) Sun's Solaris and were constantly having problems with stability, which we traced to memory. Back then Sun workstation memory was very expensive, so I said, let's try putting Linux on the same workstation and see how the app runs, since I knew Linux was better with memory management. I prepared a spare Sun workstation we had lying around with Linux and all the appropriate software... and sure enough, it was totally stable. Our QA people tested it and found it stable, and faster as well, through their entire test suite. What I didn't tell them is that the workstation I had used had only half the memory of the big server the software was going to run on! BTW, that's a nice view of the Parliamentary Library in Ottawa in the background!
I was amused when toying with Illumos recently (a fork of OpenSolaris) in a VM with 4 GB RAM: just running "sudo pkg update" on a fresh install caused the package manager to run out of memory. Though it turns out it was partly my fault - Solaris wanted to use part of the hard disk as swap, but my VM was already running out of disk space. Still, you'd think 4 GB of memory would be enough to run the package manager. I note that it's written in Python, which I'm normally a big fan of, but obviously the pkg code needs a bit more work. Though admittedly this is all kind of an edge case, since if you really need to run the GUI version of Solaris in a production environment you'd want to run it on bare metal.
What a joke, "better memory management"? This topic keeps coming up. RAM consumption is always proportional to the features the OS gives you. It's a joke to compare RAM usage right after a reboot. Windows includes good-looking animations and an antivirus (Bitdefender). If you look at the RAM usage of individual applications, there's basically no difference; it only differs right after a reboot. RAM is cheap now, so spending 2GB of RAM on the OS is not a big problem these days for all the features the OS gives you.
This is yet another reason to love Linux, as it can make older hardware, even machines limited to 4GB RAM, usable, whereas Windows on the same hardware is a no-go.
You don't even need 4 GB of RAM. If you do a minimal installation of Arch Linux, for example, and install a minimalistic window manager or desktop environment, you can easily get away with 2 GB of RAM.
I have an old Dell XPS 430 from 2009 lying around; on Windows 10 it seemed like it was dying. I installed GNU/Linux in June and it's revived - now it runs Debian.
@@UltimusShadow. Debian is not bad, it just doesn't update fast enough for me, which is why I run Solus Linux, which so far has been the most stable rolling release I've run. The only major issue I've had so far that wasn't my own fault is that I can't get the system fan in my 13-inch mid-2012 MacBook Pro to spin as fast as it should, as mbpfan is not in the repos, so it runs a little warm to the touch but doesn't thermal throttle.
And that's with Pop!_OS GNOME, which is kind of known as a RAM hog in itself. I bet with KDE or Xfce the results would be even lower. But of course that's if we're only talking about raw RAM usage numbers.
@@runed0s86 Because Pop!_OS is what almost no other Linux-based OS is: made by a company that installs it on the computers it sells! That brings a quality of focus on the desktop rarely seen among pet hobby distros. And it's improving rapidly every year.
If you have the RAM for GNOME, why not? Vanilla GNOME is the best thing I've experienced in desktop environments. I prefer it over macOS and KDE. It definitely takes some time getting used to, but it is very much worth it. And yes, I am a (vanilla) GNOME fanboy.
I've seen others say Windows uses 4GB of RAM after boot but I've never seen it myself. I'm currently using 3.4GB out of 16GB, with two Firefox/YouTube tabs open and 20 hrs of uptime. It is worth noting I've seen both Windows and Linux use more RAM when more is available. With 4GB total, Windows 10 will use 1.6GB after boot, compared to 2.4GB with 16GB total.
Yeah, it's weird. When I had just installed Windows 11 on my new PC it was using only 2-3 gigs of memory, but after installing all the apps and all the updates it started sitting at 5-6 gigs. Windows must be caching a crap ton of stuff. Honestly it doesn't matter much, unused RAM is useless RAM. And if that memory usage is for caching then I'm sure Windows can clear it when it actually needs to be used.
It will only use more than 4 GB of RAM on startup if it can. I am typing this right now, yes right now, on an HP Stream 7 with 1GB of RAM running Windows 10. Using more RAM on startup, if available, is a very good thing: it means the system is optimizing and prefetching programs you use so they run faster.
The way Windows reports used RAM is interesting, to say the least. Depending on whether I enable or disable the onboard GPU on my CPU, and which PCIe GPU I fit (AMD or Nvidia, both with 8GB of video RAM), I can get the same computer and Windows 10 build to report between 2GB and 11GB of used RAM without running any application. This seems to be a result of how shared memory buffers for the GPUs, both built-in and PCIe, are reported. As such, the reported used or free RAM may or may not reflect how much actual system RAM is being used and how efficient Windows RAM management is, and may be more down to how the GPU drivers are set up on the system you are looking at ;-)
High Windows RAM usage at bootup is most likely from Windows SuperFetch - the OS is filling up available RAM with whatever programs you would have last used, so they will open quicker.
Interestingly, Windows is really well optimized at swapping to compensate for that memory usage and prevent lockups or stalls, while Linux memory usage is low but, under high memory pressure, once it starts to swap it tends to lock up (if you don't have prelockd, nohang, the le9 patch, etc.).
@@christianlockley2578 Rather than saying "Dude, you know nothing", say "I disagree, and this is why you are wrong..." and then go ahead and explain how the architecture actually works, to prove him wrong.
@@DarkGT The problem is that what they said is just wrong. I agree that saying "you know nothing" is harsh and quite frankly rude, but that still doesn't change the fact that the info is wrong. It's like if someone told you water boils at 50C: it just doesn't, and there is nothing to explain.
A fun exercise. It should be said that using more memory is not necessarily a bad thing. If memory is used to enhance speed, it's a good thing; if a program is carrying unused baggage and making memory unavailable to others, it's a bad thing.
I noticed the difference with a 2016 laptop a little over 2 years ago. I couldn't even get Windows 8.1 to load Blender. It worked perfectly fine when using Linux Mint. True, I wasn't going to be doing heavy tasks with it, as it was an entry-level laptop, but being able to load it in the first place said it all.
I completely agree with this video, and I can even add that CPU usage seems to be much lower on Linux than on Windows 10. I notice it in the battery life and fan usage: when I run Windows I frequently hear the fan running, and the battery lasts less time than when I use Linux. I even tested this on a completely fresh install of Windows, with the same result.
I'm currently using zram on a laptop with 2444MHz RAM. It doesn't really impact performance, maybe a 1-2% loss, but it adds a lot of virtual RAM without killing my SSD: 16GB -> 24GB with a 1:2 compression ratio.
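For anyone curious, a zram swap device like the one described above can be set up roughly like this on Linux. This is a sketch assuming util-linux's zramctl and the zram kernel module are available; the 8G size and zstd algorithm are example choices, not necessarily the poster's exact setup:

```shell
# Load the zram module (requires root).
sudo modprobe zram
# Allocate an 8 GiB zram device using zstd compression; at roughly 2:1
# compression this costs about 4 GiB of physical RAM when completely full.
ZDEV=$(sudo zramctl --find --size 8G --algorithm zstd)
# Format it as swap and enable it with a higher priority than any
# disk-backed swap, so the compressed RAM device is used first.
sudo mkswap "$ZDEV"
sudo swapon --priority 100 "$ZDEV"
# Verify the active swap devices and their priorities.
swapon --show
```

Distros usually wrap this in a service (e.g. zram-generator or zram-tools) so it persists across reboots rather than being run by hand.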
Yes, videos on memory are interesting. I remember being a developer in the early 90's and being particularly concerned about the impact of 'page thrashing' on the logistics system we were developing, but not truly understanding the concept, and so feeling limited in my design decisions... long time ago, fun times 😁
From my humble point of view, Gary's YT channel is one of the most exciting and consistent tech channels available. Thanks for all the videos provided so far. This one is outstanding, since I've been a happy user of great Linux since 2k. Thank you! Have a nice day!
The greatest improvement to PC performance I have ever experienced wasn't new hardware: Deepin 15 BETA loaded into a RAM disk kept up with a Ryzen build with a Gen3 NVMe drive. IDK how, but MX Linux 21 can be set up to boot to RAM.
I am now waiting for a detailed video covering MacOS 12 and Windows 11. Their differences in handling multitasking and multiple displays and the advantages and disadvantages of each.
I thought he was joking in the end when he was saying that's the proof that Linux is more efficient. Who cares about how much memory is used, what we need are benchmarks with a small amount of ram and real applications.
Why would benchmarks with low RAM be useful? Both systems will start to swap, and at that point benchmarks are useless. The point is that Windows will arrive at this low-RAM situation sooner than Linux.
@@GaryExplains Maybe the behavior of the programs would change if there is no RAM left. Isn't it even more efficient to use the available RAM and not let it sit around? I don't really think Windows would perform better, but pure RAM metrics are no proof of worse efficiency or performance. Furthermore, there are no "serious" Windows systems with less than 8GB of RAM nowadays.
In general programs don't change their behavior when there is no more RAM, they just report an error and close. Also your "serious" Windows system comment is not helpful. I can go to any major retailer and buy Windows laptops with only 4GB. How you define "serious" is completely subjective. And even if we agree on a definition does that mean that average consumers aren't important?
Actually, yeah, it'd be cool to make a video on how memory is actually organised and how the structural differences impact performance, especially before the system starts swapping (which is obviously going to happen sooner on the Windows system, even if you cheat on swappiness on the Linux system to force it to swap earlier). Basically, if you had an infinite amount of RAM on either system, would one be more performant than the other?
Thanks Gary. Nice vid. The only thing one might ask is WHY does Windows use more than Linux for the same program? Are there decisions made like "let's load the whole lot into RAM so things load quicker when the user wants them..."?
I've been test-driving Ubuntu MATE this week for work, on a 2GB RAM netbook with a conventional hard drive and an Intel Pentium; it works like a charm (general office stuff, browsing, video...). I have more powerful machines, but it feels so damn great! Now I have an amazing battle computer.
Interesting video as always, Gary. Can you please explain the Android Entropy Threshold next time? I googled it yesterday but never found any conclusive explanation. So would you be kind enough to explain it next time?
Keep in mind that Windows, since XP, only frees memory when it needs to. So if you open a ton of programs and then close them, it won't clear everything; but if you open something else that needs a lot of memory, it will clear out old memory as needed. Linux is starting to do more of this too, particularly with the filesystem cache, which will be cleared aggressively if something else needs that space.
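That "cleared aggressively if something else needs that space" behaviour is visible directly in Linux's memory accounting: the page cache is reported separately, and the kernel's MemAvailable field estimates how much a new program could get without swapping. A quick sketch using standard procfs fields:

```shell
# "Free" memory on Linux understates what's usable, because the page
# cache (Cached) is reclaimable on demand. MemAvailable is the kernel's
# estimate of memory available to new workloads without swapping.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo

# The same view via free(1): look at the "available" column, not "free".
free -h
```

This is why "Linux is using 90% of my RAM!" after boot is usually fine: most of it is cache that will be handed back the moment an application asks.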
8 channel ram is four times faster than dual channel ram. Quad channel ram is twice as fast as dual channel ram. Dual channel ram is faster than even a new NVMe storage drive and has way less latency. You guys are on a race to the bottom.
Great video Gary, and pleased to see you used my distro of choice, Pop. This is certainly a much fairer comparison than, say, using Puppy Linux. I would be interested in your thoughts on why, say, Firefox on Windows uses more RAM than on Linux. To me this was the most confusing part, unless Firefox can offload more of its work onto existing Linux processes and hence doesn't need to spawn new ones? Thanks for getting my brain thinking on a Tuesday...
Calculating memory usage is tricky. Most status reporting only counts user memory, and often doesn't count kernel memory like page table growth and system resources. On the user side, shared memory complicates things because multiple applications can share the same resources. I like using zeroed memory as the metric, as it reflects the memory actually used: if RAM were reduced below this line, it would result in a performance loss.
There has to be a reason that individual apps request a lot more memory on Windows than Linux. That's the interesting question here. I seriously doubt you've properly isolated RAM usage.
Open source for personal use will only ever be for slow hardware, because the fan club goes crazy if I so much as mention that we should get one distro for advanced hardware. Remember the anger over W11 being made for newer hardware, as if no one with new hardware had a right to get a new OS made for faster machines.
One thing to mention is that Pop!_OS is far from the lightest distro; you could take it ridiculously low with something like Ubuntu MATE, or especially those fancy window managers (though those systems are so minimal it may get a bit unfair). This proves even more that Linux is the definitive choice for older computers.
Something I think you totally missed is that a program's memory usage doesn't just come down to the OS itself, but also to the C memory allocator it uses, which differs between the two systems: Windows uses the CRT allocator, and Linux uses glibc's. But still, Linux most likely has the superior memory management either way.
He also had some interesting picks of programs to test. Firefox is a well-known (infamous at this point) memory hog on Windows. Only Google Chrome is worse at filling up memory needlessly, I believe.
It also completely misses specifying the amount of RAM on each system. The more RAM the system has, the more the OS/programs might want to use to improve general performance. Unintuitively, RAM not in use is wasted RAM. When there are many programs running and overall usage is close to the maximum, that is where memory management really takes place. The analysis here falls fairly short. I am not siding with Windows or Linux or macOS; I am just pointing out that analyzing memory usage is a lot more complex.
Very nice and informative video! I have a secondary PC with a quad-core s775 Xeon and 8GB of RAM. I only use it 4-5 times a month for internet browsing, watching video, and in the odd case some emergency Zoom call for work. For a couple of months I have been considering installing Linux (instead of Win10) on that machine to give it some more breathing room. My main concern is driver support and general compatibility, as the hardware is quite old. After watching your video I am a step closer to making that decision.
Not quite sure why I clicked on this since I knew the result from the get-go, but I am glad I did. I learned the hard way that Windows is a memory hog, that is, by making a server out of an old prebuilt. Surprise to no one, it ran out of memory eventually, and as a quick fix I switched to Linux. I did end up adding more RAM to the machine, but I like having the headroom that Linux gives me as well, since I will definitely keep expanding the software that runs on the machine.
Thanks. I kept getting the terms "pig" and "hog" mixed up. I know one refers to software that uses a lot of memory, while the other refers to software that uses a lot of storage, but I could never remember which is which, and I'm not sure younger programmers still use that slang.
Until recently, I was using a ThinkPad with 4GB RAM. It's ridiculous that it came with Windows preinstalled, since just the fresh install was enough to get it swapping. Out of curiosity, I tested how it runs, and we're talking 10+ seconds to open the menu bar.
Thumbs up on this vid, and another thumbs up for a video on the MMU and virtual memory management, etc. Also, having worked on the IBM AS/400 (or iSeries, or System i), I was always impressed with their hardware addressing via vertical and horizontal microcode, with the OS never addressing or seeing the physical memory. A very unique computer architecture.
Gary, this is a very challenging topic to do a complete comparison on. The assumption that using more memory is bad is something I'm not completely sold on. Swapping pages to disk is bad in that it slows things down, but if a system optimizes performance within the total system memory available, that seems preferable? What about a comparison of running tasks on Windows vs Linux with insufficient memory and counting the page faults/page swaps? If you can show that Linux pages less given the same physical memory, then I would find the comparison more compelling. I'm a fan of Linux myself, having started in the 1980's with HP-UX, so I'm not being critical of either OS, just thinking about what other factors affect these systems' memory management schemes.
Using more memory can't improve performance (outside of caching, which generally happens at the OS level, not the per-process level). Using more memory will cause more swapping, which, as you correctly said, is bad for performance.
@@GaryExplains I've been fooled before by lazy de-allocation schemes. Optimum overall performance doesn't always prefer cleaning up immediately, there is no penalty for using more memory until you run out. Perhaps you accounted for that. I would not be surprised to find your results are indeed the full story but I am always looking for the technical loopholes.
So Gary, when did you come and visit Ottawa Canada? You were quite clearly doing your video from the National Art Gallery, with the Parliament buildings in the background. 😉
I think you forgot to mention something when talking about booting an OS: unused memory is WASTED MEMORY. You don't benefit from not using the available memory when the OS can use it for something; this is not the same as hard drive storage. Also, memory is dynamically allocated: Windows will use it when nothing else is using it, and the minute something else needs it, Windows will reduce its own footprint as best it can.
@@GaryExplains I think it needs to be clear that it's not very beneficial, i.e. it's wasteful, for an OS booting up not to be able to use all the RAM that it has available.
Linux also works on the same philosophy, so it isn't that Windows does this and Linux doesn't. Plus, the rest of the tests also show that Windows uses more RAM, and there I am measuring the process memory usage, which doesn't include file caching etc.
@@GaryExplains I get that and don't dispute it one bit, but it's considered obvious that Windows is bloated and uses 'too much RAM', based on how I see people talk in both online and physical communities. And while that's generally true, I think context is still important. I guess I've been worn out from trying to explain this unused-RAM point to too many people; so many videos mention how much RAM is used when Windows is idle, which is actually fine.
But as I said, the rest of the tests also show that Windows uses more RAM, and there I am measuring the process memory usage, which doesn't include file caching etc.
I kind of expected these results, but not by such a margin. Interesting. Proud to have switched to Ubuntu linux, even though I am still counted as a Windows user, as the laptop came with Windows.
Thanks Gary that was interesting. I think a very interesting comparison also would’ve been comparing them to MacOS because MacOS is Unix based but a very modified version that I expect to be much heavier than any Linux distribution out there. It also has an X86_64 version for intel processors and an arm version for the new M processors from apple. That video would be especially interesting because with the M processors the memory is on chip and shared between the CPU and GPU cores, and not upgradable. Also very expensive to get a higher processor configuration to benefit from the larger memory (especially if you want 32 GB like me).
Interesting video, as lack of RAM is often the killer of an old PC. I have an old machine with both Windows 10 and Mint Xfce, and it was interesting for me to compare. Actually it was not that different: every time I start two or three modern apps (YouTube on Opera, amongst other things) I run out of RAM and swapping begins. Windows needs some fine-tuning (disabling Windows Defender, Update and such) before it runs OK, but after that it's not that bad and actually responds quite well. Under heavy usage, Linux swapping once blocked me completely and Windows did not, but I won't (yet) make generalities out of it. RAM usage is interesting, but swap management and the way applications respond to it is even more interesting, in my mind.
I've set up a Thinkpad T460 with Linux, and got it to run Monster Hunter World. I would not recommend playing Monster Hunter World on a Thinkpad T460, but if your really want to or have to, you can. Tip: Using the lowest resolution has the best performance. Take a mental note of where the resolution settings are, as you won't be able to read them when you go to change them again.
I was using OS/2 for about 10 years, before switching to Linux. I have never run Windows as my main OS on my own computers. BTW, I used to do 3rd level OS/2 support at IBM Canada, but was already using it for 5 years before I started that job.
Well, I have a very different experience with Linux than you guys, because it was the worst OS experience I've had: it crashed an awful lot on recent hardware and was unstable as hell.
@@victorhugofranciscon7899 Whenever new hardware is released you need to wait a little longer on Linux to get the necessary driver support, but this delay is improving.
Interesting, the difference between running the same programs on Windows vs Linux. Given that a string is a string and an int is an int regardless of OS, I would have thought they'd be much closer. I would have thought the difference would mostly be due to different window decorators, which would be minimal. Is it the underlying graphics libraries that account for the large difference? Obviously the OS is the part where I knew there was a vast difference: I've created a custom Linux OS based around IceWM for a Raspberry Pi that boots to desktop in around 100MB.
In fairness, he did use Pop, which if I recall correctly uses COSMIC (I think), which is based off GNOME, the heaviest of all Linux desktops, and it still came out on top compared to Windows. Not only that, toolkits are shared, which is why Firefox looks exactly the same on the two platforms, and so on. So even with the disadvantage of having a GNOME-based desktop :D
I think it might be due in part to how Linux shares memory between processes and how applications are built against libraries. On Linux you tend to build against the system libraries (so the same library serves all applications); on Windows you tend to distribute the libraries you need (so several copies of the same library serve several applications). Linux also only loads into memory the parts of an executable that are needed (in tech lingo, it maps the file into memory), and under memory pressure it does not send those pages to swap, it just discards the data in RAM and, if needed, loads it again from disk. Probably Windows does something similar, but I don't know to what extent. One thing that caught my attention is that when I have used Blender, Linux seems about 20% faster in a task that is basically GPU load... there must be more magic happening in the memory subsystem.
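You can actually see this library sharing on any Linux box by looking at a process's memory map in procfs. Each .so file listed is mapped from a single page-cache copy that every other process using that library also maps. A small sketch, run against the current shell process (`$$`):

```shell
# List the distinct shared libraries mapped into the current shell;
# /proc/$$/maps shows every file-backed mapping for process $$.
grep -o '/[^ ]*\.so[^ ]*' /proc/$$/maps | sort -u

# Count them; each one is loaded once system-wide, and its read-only
# code pages are shared by every process that links against it.
grep -o '/[^ ]*\.so[^ ]*' /proc/$$/maps | sort -u | wc -l
```

Run the same thing against two different PIDs and you'll see the same library paths in both maps, which is exactly the sharing described above.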
I'd like to see a dynamic, real-time visual representation of the memory pages, colour coded by PID to see the randomness of the distribution. Is this even a thing?
By using a window manager on the Linux machine you can further reduce the RAM used. My system, no joke, idles at around 250MB of RAM used. And this is not some cut-down distro: this is Pop!_OS with i3wm.
@@GaryExplains I'm not saying this had to be included in the video, the video covers everything needed. Putting this info in probably would confuse a lot of people. Just pointing out that with Linux you get the freedom to choose!
Years ago I found an interesting bug in Windows NT 4.0 Server. I was porting a rather large Unix application to NT. The computer I had was a real beast. I don't remember how many megabytes of RAM it had, but it was a lot for the day (maybe 512 MB? For comparison I think my home PC had about 16 MB of RAM at the time.). But the hard drive was a little more limited, so I didn't allocate much space for the page file (swap space in Unix lingo), maybe just 2 MB. But apparently when I started up the programs that made up the application, Windows would start trying to swap out existing stuff to the pagefile to make room for more stuff in RAM even though it didn't need to since it had loads of RAM. As soon as the small page file was full, performance would hit a wall and it would start showing errors about being out of memory even though there was no way it had used up 512 MB (or whatever it was). But if I wasted a full 512 MB of disk space for a pagefile that would never get used much, everything ran just fine.
1) Yes, you can use a swap partition on Linux, but it's deprecated due to:
- the cost of partition border crossing,
- not having any benefits,
- partitions wasting some disk space,
- pretty sure something else I forgot.
2) The RAM amount directly affects how much is used for caching, especially on Linux. Linux is much less hesitant about using RAM for caching, which is especially notable on a firewall distro, as they'll use it all for performance (but will give you what you ask for).
@rautamiekka Do you have a link to show that swap partitions are deprecated? Also, you know that Linux isn't inherently hesitant or eager to swap; it depends on the value you set for swappiness.
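For reference, swappiness is just a sysctl knob. A sketch of reading it and, as root, changing it; the value 10 below is only an example, and the sysctl.d path is the usual convention rather than anything mandated:

```shell
# Read the current swappiness (lower = prefer reclaiming page cache,
# higher = swap anonymous pages more eagerly; the usual default is 60).
cat /proc/sys/vm/swappiness

# Change it for the running system (requires root; 10 is just an example):
# sudo sysctl vm.swappiness=10

# Persist the setting across reboots via a sysctl.d drop-in:
# echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```

This is the "cheating on the swappiness" mentioned elsewhere in the thread: push the value up and Linux will start swapping much earlier than it otherwise would.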
It would be nice to explain what exactly you compared. Memory used by a program is not a single simple number. In Windows Task Manager you can see columns like Working set, Active private working set, Private working set, Shared working set, Commit size, Paged pool, NP pool. Which one of them did you use? How are you sure it's the right parameter to compare with Linux? On Linux, top will also give several numbers: VIRT, RES, and SHR. If you compare different things you get different results, and that's not informative at all for a comparison.
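To illustrate how different these numbers can be, here is the same process reported three ways on Linux. A sketch using ps and procfs; the Pss figure comes from smaps_rollup, which needs kernel 4.14 or newer:

```shell
# VSZ (virtual size) counts everything mapped, including untouched
# mappings and shared libraries; RSS counts pages actually resident in
# RAM; Pss splits shared pages proportionally among the processes using
# them. All values below are in KiB, reported for the current shell ($$).
ps -o pid=,vsz=,rss=,comm= -p $$
grep -E '^(Rss|Pss):' /proc/$$/smaps_rollup
```

Typically VSZ is far larger than RSS, and Pss is at most RSS, so which column a comparison picks can change the result dramatically; that is the crux of the objection above.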
Right. You can safely ignore most of this video by the fact that we already know Windows and Ubuntu perform similarly in actual usage metrics. So it's likely these numbers don't mean what he thinks they mean.
I second this. The first thing that got me was the 4GB+ memory usage after reboot on W11. I run W11 too and booted up with a few programs and stuff running in the background (antivirus, AnyDesk etc., nothing huge), and the actual physical memory in use is roughly 3.3GB. The system commit, which is at about 6GB, corresponds to my page file also being 6GB. So maybe he used that number. Speaking under correction :D
Another thing to note is that he hasn't tried testing Windows and Linux with different amounts of total RAM. This matters, as Windows somewhat modifies its behaviour (how conservative to be with RAM) based on the total. If it sees 4GB of total RAM, it behaves more conservatively than if it sees 32GB. It's not a vast difference, mind you, but there is a "less RAM = be more conservative; more RAM = stretch out a bit and prioritise performance instead, as there's room to expand into" adjustment in there, which further brings these results into question when you've not tested multiple RAM configurations (on both systems) to see whether total RAM modifies OS behaviour. If it sees more RAM, it'll use more: caching more stuff, prefetching stuff. If you run Windows on a more RAM-limited system, it uses less and is more conservative about what to keep around in RAM. (This is why some people are saying "when I boot Windows, it's not taking about 4GB of RAM, as we see in these results." Yeah, because Windows has seen the total RAM and is being more conservative with its usage. Try it on a machine with, say, 32GB of RAM and it'll be less precious about those bytes, instead prioritising performance by loading and prefetching more of the OS into RAM, because you can afford it and it'll perform better.)
Well, besides caching, the services, and how many of them get started up, will have an impact. A typical Windows install, in my opinion, will tend to have far more services than a typical Linux install. And the question becomes: how essential (really) are a lot of those Windows services?
If they were used by third-party applications, then they would be essential. But if that were the case, then opening new applications would add less additional RAM usage on Windows than on Linux, and that simply isn't the case (quite the opposite), so those services aren't that useful. The main issue is that Microsoft doesn't have any competition: no matter how bad Windows is, it won't lose market share, and no matter how much Microsoft invests in improving it, they won't gain additional market share, since they already dominate it.
I used to use Black Viper's list of unnecessary services to disable them but it's definitely not as bad as it used to be. I also have a 5800X with 32GB RAM so I don't spend a lot of time on this. Also, that's with Windows, Rockstor, and BunsenLabs running on Proxmox.
I'm not sure this is just about less RAM = better. Maybe SuperFetch uses more RAM but makes the user experience better, for example? I'm no expert, and I'm not sure how either Win11 or Pop!_OS uses and manages memory, but maybe Win11 is intentionally using more memory for good, not evil? I know that an integrated GPU uses memory for graphics, but I don't know how this is allocated or managed by the two OSs (is that strictly set in the BIOS, or does the OS dynamically change the RAM allocated to the iGPU as required?). I would find a video that looks at these things and explains the differences in how operating systems manage memory really interesting. Follow-up video, maybe?
Of course I knew that Windows would be worse than Linux at memory management but I'm surprised at how much worse it is. I would be curious to see what architectural differences cause it to be so bad.
It's amazing how much RAM LibreOffice uses compared with MS Office. One open document in LibreOffice uses 142MB, while one document open in MS Office uses only 46MB. That's on a Windows 10 Pro computer with 16GB of RAM and a Ryzen 5 2600. Sure, LibreOffice is free, but it also behaves and looks like something from 2010.
I've used a modern Linux version on a 128MB RAM VPS for a large part of the last decade. To be fair, part of the OS ram usage was probably not counted because it was run using OpenVZ containerization, but it was running several sites through apache and other services I used daily. Eventually, I upgraded to a 512MB VPS when I started running slightly heavier applications. None of that is even imaginable using windows, so I wouldn't say there's much of a competition between the two...
@@GaryExplains You're assuming I was trying to do a comparison, but I was only providing a data point. The thing is, that data point blows Windows out of the water all by itself. That's because you wouldn't even expect a Windows machine to run stably if it had 128MB available straight after boot. And because you wouldn't be taken seriously if you said that a heavy application like Apache runs using only 128MB of RAM on Windows. And because Windows itself uses far more RAM than that, even if we allow Windows to not count 50% of the RAM it uses, using that as a very generous estimate of how much it needs to run its GUI. (Also, note that the comparison by itself wouldn't be unfair - it depends on what you're comparing. If I'm comparing running an OS using the minimum amount of RAM, I don't have to handicap one OS because the other OS forces you to use a desktop even on servers.)
This is pretty interesting, especially when you configure a new system. On windows just get twice the RAM and you should be fine. Since memory is dirt cheap nowadays, this isn't really as big an issue as it was earlier (at least for a normal user, power users may feel different) but you should know what you need for each system.
Yes, it is. I use Linux Mint with the Gnome desktop on a 9-year-old PC and it works faster than any modern fully equipped Windows machine. It's a shame that a free OS works better than a paid OS. Go download Linux Mint if you want a Windows-like OS and personalize it as you want. The first view is like... this OS looks really old, but once you personalize it and add colours, change folders, change the pointer and stuff, you realize you don't want to go back to Windows. Do you want to use a Windows programme? No worries, use Wine or install a virtual machine in your Linux. It's been a pleasure to help deceived people.
@@GaryExplains Yes, you're right. There are some programmes that don't work properly. You can have a partition and use Windows only when using specific programmes. Why? Because Windows is slow and not secure at all. You can have an antivirus if you want, but it doesn't recognize all of the viruses out there. Since I use Linux, I don't dare to put my passwords in Windows, and I only use that OS to play a game.
One important piece you missed is that Windows uses page files to do memory defragmentation. Even if you have enough physical memory free to load a memory-intensive program, sometimes you will get out-of-memory errors because Windows can't defragment the memory properly without a page file. This is an issue I came across: I had 32GB of physical memory, only about 4GB was in active use (fresh reboot), and I opened a program that allocated about 8GB-12GB of memory, so total usage would have been around 16 of the 32GB available. But the program would crash with out-of-memory errors because it couldn't allocate a full contiguous block of memory, and Windows won't defragment memory without a page file!
That makes no sense. I think you have misunderstood something. All physical memory is fragmented, that is the whole point. The contiguous memory block would have been in virtual memory not physical and adding a swap file won't "defrag" the memory. Memory fragmentation in the virtual memory is indeed a thing, but it doesn't work as you describe.
@@GaryExplains Maybe the underlying cause is different, but the symptoms are as follows:
1. Turn page file off
2. Run program
3. Get out-of-memory error (memory usage doesn't even approach using all physical memory!)
4. Turn page file on
5. Run program
6. No error (memory usage still the same, not even close to using full physical memory)
If it's not using the page file to do memory defragmentation to allocate a contiguous block of memory, I don't see another explanation for getting out-of-memory errors under these circumstances. There is plenty of memory available, so the only thing I can see that would prevent allocation of new memory is that the memory it tried to allocate is used by another program or is outside of the addressable space (impossible given the program is 64-bit). This issue affected two games for me too, Cyberpunk 2077 and Halo MCC (specifically Halo 2 and Halo 3). Again, 32GB of physical memory and a fresh reboot with only about 4-5GB of memory used at the time of launching those games.
The only thing swap is used for is to free physical memory. So looking into this a bit deeper there are system calls on Windows and on Linux that ask the OS for contiguous, nonpaged physical memory. For example, on Windows, MmAllocateContiguousMemory() allocates a block of nonpaged memory that is contiguous in physical address space. Using calls like this would force the OS to swap out any pages that get in the way of a contiguous block. Later they can be swapped back in at a different address. In that sense, it is the memory defragmentation you are describing. Microsoft note that "When physical memory is fragmented on a computer that has a large amount of RAM, calls to MmAllocateContiguousMemory, which require the operating system to search for contiguous blocks of memory, can severely degrade performance." Calls like this are normally reserved for drivers. It would be interesting to understand why it happens with the games you describe. It could be the GPU driver I guess.
@@GaryExplains This describes basically what I had in mind; maybe I used the incorrect terminology. I guess I could use something like RAMMap and VMMap to see if these titles allocate contiguous blocks (particularly if memory is constrained) - that might give some insight into whether this is really the cause or if it's something more obscure.
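The failure mode discussed in this thread - "plenty of free memory in total, but no single block big enough" - can be shown with a toy model. This is purely an illustration of the fragmentation concept, not how the Windows or Linux allocators actually work.

```python
# Toy model of memory fragmentation: a fixed arena of equal-sized units,
# where True means a unit is in use and False means it is free.

def free_runs(arena):
    """Return the lengths of contiguous free (False) runs in the arena."""
    runs, current = [], 0
    for used in arena:
        if used:
            if current:
                runs.append(current)
            current = 0
        else:
            current += 1
    if current:
        runs.append(current)
    return runs

def can_alloc_contiguous(arena, size):
    """First-fit check: is there a single free run of at least `size` units?"""
    return any(run >= size for run in free_runs(arena))

# 16-unit arena where every other unit is in use: 8 units are free in total,
# yet no contiguous free run is longer than 1 unit, so a 4-unit request fails.
arena = [i % 2 == 0 for i in range(16)]
total_free = arena.count(False)
print(total_free)                       # 8
print(can_alloc_contiguous(arena, 4))   # False
```

Swapping pages out (as Gary describes with nonpaged contiguous allocations) is one way an OS can clear a path through such an arena.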
I’m actually going through the opposite issue: Linux using an unfathomable amount of RAM, forcing me to return to my default OSs (Windows and macOS respectively).
That sounds a bit strange to me as a Linux user of the last 20 years, but without knowing what version of Linux you are using, what desktop and other programs are installed, and what hardware you have, it is impossible to know what may be wrong. Linux systems do not normally take up very much RAM. My own system, for instance, uses 2GB of RAM out of 8 running Ubuntu MATE on a used Lenovo laptop from 2012; the most RAM is used by Firefox. In your case there might be some programs running in the background that you are not aware of, which could be turned off to free up memory. One thing you can do is open a terminal and type the word top to see a list of all running programs and check if one of them is taking up a lot of memory. On my other laptop, also running Ubuntu MATE, I have Microsoft Teams installed (for work), and whenever it is used it puts itself in autostart and takes up a lot of memory, and it has to be removed manually.
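Tools like `top` get per-process memory figures from the `VmRSS:` line of `/proc/<pid>/status` on Linux. A minimal sketch of that parsing, run against a hardcoded sample here so it works anywhere (the `firefox` sample values are made up for illustration; on a real Linux box you would read the text from `/proc/<pid>/status`):

```python
# Parse the resident set size (RSS) out of /proc/<pid>/status-style text.

def rss_kb(status_text):
    """Return the VmRSS value in kB, or None if absent (e.g. kernel threads)."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            # Line format: "VmRSS:     812344 kB"
            return int(line.split()[1])
    return None

sample = """Name:\tfirefox
VmPeak:\t 3200000 kB
VmRSS:\t  812344 kB
Threads:\t64
"""
print(rss_kb(sample))  # 812344
```

Walking `/proc/*/status` with this helper and sorting by the result gives you roughly the memory column `top` shows.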
I compared the RAM usage between Windows and Linux a bit over 2 years ago when I still dual-booted. I observed the same: Linux really does use less RAM for the same tasks. I also compared it for gaming: on Linux with Wine, on Windows itself, and Linux native. Linux with Wine and Windows native have roughly the same memory usage, which is what I would expect. For Linux native, both the RAM usage and the graphics card memory usage were much lower for the same game compared to Windows or Linux with Wine. That shocked me the most; I would expect the memory usage for games to be roughly the same, but it is not. I would love a deep dive in which this gets explained. I wonder if the guy from Dave's Garage could do that.
It would be interesting to do a deep dive to understand why there is the difference overall (on bootup, say) as well as in individual programs. Windows is a much larger OS (in terms of code base), and there may be more processes running - many of which may not be doing very much that is useful most of the time - but philosophically Windows is designed to support many legacy edge cases, and that comes with baggage. I know the video mentioned the issue of separating free memory from available memory; it's tricky to quantify. If you have much more code running, then those .dlls are memory-mapped, so they are "using" virtual memory pages, though those pages are available to any program that needs them. Also, the Windows desktop OS is very geared up for all the legacy frameworks needed to support legacy applications, particularly for desktop/video. It's a bit difficult to compare the two when Linux doesn't have the same legacy heritage (baggage!).
The first place to start is: Linux + a minimal version or "core". Then you have more control and visibility over what is actually in the background and whether it's needed. People might argue that with modern computers it's not such a problem, as there's plenty of RAM etc. However, it's REALLY ANNOYING having useless stuff running, and lots of it. I always liked minimal desktops in Linux, and there was a great comparison graph with Enlightenment at the top as the lightest, compared to say KDE or Unity etc. One could go further and ditch a full desktop, using only components bolted together, e.g. a window manager (Openbox) plus other pieces. Whereas with Windows: 1) It's recommended to reinstall it periodically 2) You can go to GitHub and get a "debloat script". That is all before this test, which does a like-for-like comparison of idle background RAM use and per-application RAM use, and in both Windows is worse! It makes me wonder if I should go for macOS on Apple Silicon for my next computer, where presumably hardware, OS, and apps all work together more efficiently, AND it also has the modern desktop apps that are actually useful? But I wonder how easy it is on macOS to cull the desktop down if one wants? Maybe the MacBook Air M2 is the device to go for.
I miss the information on how long, e.g., the monk render took on Windows vs Linux. Right now, it seems a bit pointless to just compare the RAM usage. Did they take approximately the same time, or did one of them finish faster?
It isn't "a bit pointless to just compare the RAM usage" when the video is called "Windows vs Linux RAM Usage". This video isn't about performance. It also isn't about cream cakes which is why I don't mention them either.
Hi Gary, good content, but I think you oversimplified the analysis: just because a program uses less RAM on one OS, it doesn't mean that OS is more efficient. For example, I could make a program that uploads everything into RAM, or tweak an OS memory manager to keep as much information as possible in RAM with the purpose of being faster.
I'm not saying the conclusion would be different (I think Linux is better!), but we need to analyze more metrics on how the RAM is used to draw conclusions about efficiency! Keep up the good work :)
Way back in the dark ages, it was common to do all sorts of tricks to minimize memory. However, those tricks that saved memory often had poorer performance. Back then you did what you had to, just to get something to run on the very limited hardware of the day.
It would be interesting to see if there is any noticeable performance difference when running on the same hardware with different amounts of memory being used.
I have prepared Linux systems that run in 32MB of RAM (yes MB, not GB), with GUI and no swap. It wasn't meant for general usage (prepared using Yocto for medical devices), but gives an idea of what you can achieve. I felt like an artisan sculpting a gem.
[EDIT] Clarification to address some random replies: The device was a medical device able to operate without interruptions for years if necessary. In the last test I know of, it had been operating for 6 months without interruption, but years was the target, and it had to support OTA updates and operate with no swap. The UI was touch-driven and responsive, IEC 62366-2 certified. I'm sure that there are other options in the market, but I doubt many tick so many boxes. And yes, a version of Windows for Devices, Windows Embedded or Windows IoT (or whatever name it has today) was part of the platform set evaluated by the customer.
My main computer runs a fully functional Linux desktop full of all the apps I use, and it only uses around 160Mb of RAM at boot
You sure you really mean Mb and not MB? That's just 4MB
Ya.... and to point out, you built it for a singular model of hw, for a very very very selective set of things.
Linux is awesome for that, but don't get it twisted..... that isn't compatible with your normal desktop user.
@@atemoc 160 mb? Awesome. What distro do you use?
@@casperes0912 they probably meant MiB
Linux has long had better memory management compared to not just Windows, but a number of other OSs. I know Linus and the Kernel crew were somewhat obsessive about memory management and making sure things used memory efficiently.
At one point, I was working for a company in Montreal, and they had been running a Web-based Java application on (at the time) Sun's Solaris, and were constantly having problems with stability, which we traced to memory. Back then, Sun workstation memory was very expensive, so I said, let's try putting Linux on the same workstation and see how the app runs, since I knew that Linux was better with memory management. I prepared a spare Sun workstation that we had lying around with Linux and all the appropriate software... and sure enough, it was totally stable. Our QA people tested it and found it stable, and faster as well, through their entire test suite. What I didn't tell them is that the workstation I had used had only half the memory of the big server the software was going to run on!
BTW, that's a nice view of the Parliamentary Library in Ottawa in the background!
I was amused when toying with Illumos recently (a fork of OpenSolaris) in a VM with 4 GB RAM: just running "sudo pkg update" on a fresh install caused the package manager to run out of memory. Though it turns out it was partly my fault - Solaris wanted to use part of the hard disk as swap, but my VM was already running out of disk space. Still, you'd think 4 GB would be enough memory to run the package manager. I note that it's written in Python, which I'm normally a big fan of, but obviously the pkg code needs a bit more work. Admittedly this is all kind of an edge case, since if you really need to run the GUI version of Solaris in a production environment you'd want to run it on bare metal.
I somewhat disagree. IMO Windows handles drive caching way better if one has a lot of memory.
What a joke. Better memory management? This topic just keeps coming up. RAM consumption always corresponds to the features an OS gives you. What a joke to compare RAM usage right after a reboot. Windows includes good-looking animations and antivirus (Bitdefender). If you look at the RAM usage of applications, there's really no difference; it only differs at reboot. RAM is cheap already, so spending 2GB of RAM on the OS is not a big problem now, given how many features the OS can give.
Linux is only good for specific jobs, like running a web server, a VPN server, and so on. For big multitasking, nothing can beat Windows.
@@davidstephen7070 ok
This is yet another reason to love Linux, as it can make older hardware, even machines limited to 4GB RAM, usable, whereas Windows on the same hardware is a no-go.
OLDER hw limited to 4 GIGABYTES?!?! Said by a Commodore64 fan?!?
Something is not right here
@@georgebetrian676 bruh
You don't even need 4GB of RAM. If you download a minimal installation of Arch Linux, for example, plus a minimalistic window manager or desktop environment, you can easily get away with 2GB of RAM.
I have an old Dell XPS 430 from 2009 lying around; on Windows 10 it seemed like it was dying. I installed GNU/Linux in June and it's revived; now it runs Debian.
@@UltimusShadow. Debian is not bad, it just does not update fast enough for me, which is why I run Solus Linux, which so far has been the most stable rolling release I've run. The only major issue I've had so far that wasn't my own fault is that I can't get the system fan in my 13in mid-2012 MacBook Pro to spin as fast as it should, because MBPFan is not in the repos, so it runs a little warm to the touch but does not thermally throttle.
And that's with PopOs Gnome, which is kind of known as a ram hog in itself. I bet with KDE or Xfce the results would be even lower. But of course if we're only talking about raw ram usage numbers.
I don't understand why people still use Pop!_OS. It's in the same league as Hannah Montana OS. It's a meme. Just use MX or Mint!
@@runed0s86 Both MX and Mint look like they're from 2004... Pop is modern and works perfectly on my Nvidia RTX 3070.
@@runed0s86 Because Pop!_OS is what almost no other Linux-based OS is: made by a company that installs it on the computers it sells! That brings a quality of focus on the desktop rarely seen among pet hobby distros. And it's improving rapidly every year.
If you have the ram for gnome, why not? Vanilla gnome is the best thing I’ve experienced in desktop environments. I prefer it over macos and kde. It definitely takes some time getting used to, but it is very much worth it. And yes, I am a (vanilla) gnome fanboy
A fresh KDE Plasma on x86_64 is at around *400-500 MB.* Not a desktop designed specifically to be lightweight, either.
I've seen others say Windows uses 4GB of RAM after boot, but I've never seen it myself. I'm currently using 3.4GB out of 16GB, with two Firefox/YouTube tabs open and 20hrs. uptime. It is worth noting I've seen both Windows and Linux use more RAM when more is available. With 4GB total, Windows 10 will use 1.6GB after boot, compared to 2.4GB with 16GB total.
Yeah, it's weird. When I had just installed Windows 11 on my new PC it was using only 2-3 gigs of memory, but after installing all the apps and all the updates it started sitting at 5-6 gigs. Windows must be caching a crap ton of stuff. Honestly it doesn't matter much, unused RAM is useless RAM. And if that memory usage is for caching then I'm sure Windows can clear it when it actually needs to be used.
It will only use more than 4 GB of RAM on startup if it can. I am typing this right now, yes right now, on an HP Stream 7 with 1GB of RAM running Windows 10. Using more RAM on startup when it is available is a very good thing: it means the system is optimizing and prefetching the programs you use, so they run faster.
The way Windows reports used RAM is interesting, to say the least. Depending on whether I enable or disable the onboard GPU on my CPU, and which PCIe GPU I fit (AMD or Nvidia, both with 8GB of video RAM), I can get the same computer and Windows 10 build to report between 2GB and 11GB of used RAM without running any application.
This seems to be a result of how shared memory buffers for the GPUs, both built-in and PCIe, are reported. As such, the reported used or free RAM may or may not reflect how much actual system RAM is being used and how efficient Windows' RAM management is, and it may come down to how the GPU drivers are set up for the system you are looking at ;-)
High Windows RAM usage at bootup is most likely from Windows SuperFetch - the OS is filling up available RAM with whatever programs you would have last used, so they will open quicker.
would love that memory (virtual) allocation video! :D
i don't know much about virtual memory so i second that
It's about 3 or 4 commands on Linux, really easy; look it up on the Arch Wiki.
It isn't about commands, it is about how virtual memory works at the kernel level.
@@GaryExplains Yeah, I've been wondering how that worked for a long time; it would be nice to make a video on that.
@@vyrsh0 for what? isn't it enabled by default in the kernel?
Interestingly, Windows is really optimized for swapping to compensate for its memory usage and prevent lockups or stalls, while Linux's memory usage is low, but under high memory pressure, once it starts to swap, it tends to lock up (if you don't have prelockd, nohang, the le9 patch, etc.).
Well, in Windows the memory management is at the kernel level.
While in Linux it's in the software itself. For security it's safer, but performance...
@@syarifairlangga4608 dude you know nothing about what you speak of
@@christianlockley2578 Rather than saying "Dude, you know nothing", say "I disagree, and this is why you are wrong...", and then go ahead and explain how the architecture actually works to prove him wrong.
Honestly more distros need to start shipping a functional OOM killer...
@@DarkGT The problem is that what they said is just wrong. I agree that saying "you know nothing" is harsh and quite frankly rude, but that still doesn't change the fact that the info is wrong. It's like if someone told you water boils at 50C; it just doesn't, and there is nothing to explain.
A fun exercise. It should be said using more memory is not necessarily a bad thing. Memory if used to enhance speed it's a good thing, if a program is carrying unused baggage and making memory unavailable to others it's a bad thing.
I noticed the difference with a 2016 laptop a little over 2 years ago. I couldn't even get Windows 8.1 to load up Blender. It worked perfectly fine when using Linux Mint. True, I wasn't going to be doing heavy tasks with it, as it was an entry-level laptop. But being able to load it in the first place said it all.
I have 16gb of memory on my laptop running Arch Linux. It still amazes me that I can use nearly all of it just by opening up PyCharm and Firefox.
I completely agree with this video, and I can even mention that CPU usage seems to be much lower on Linux than Windows 10. I notice it in the battery life and fan usage: when I run Windows I frequently hear the fan running, and the battery life is shorter than when I use Linux. I even tested this with a completely fresh install of Windows, with the same result.
I think system calls in windows are more complex than in Linux
Does zram / zswap on Linux have a significant impact, and how does that compare to pagefile.sys on Windows?
Can you please do a video on that?
I'm currently using zram on a laptop with 2444MHz RAM.
It doesn't really impact performance, maybe a 1-2% loss, but it adds a lot of virtual RAM without killing my SSD.
16 -> 24 with 1-2 ratio compression
2-1*
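The "16 -> 24" figure in this thread can be sketched as back-of-the-envelope arithmetic: dedicate part of RAM to backing a compressed zram swap device, and at a given compression ratio that backing holds more data than its physical size. The 8GB backing size below is an assumption chosen to reproduce the quoted numbers.

```python
# Effective memory capacity when part of RAM backs a compressed zram device.

def effective_ram_gb(ram_gb, zram_backing_gb, ratio):
    """Total data that fits in RAM when `zram_backing_gb` of it backs a
    zram swap device compressing at `ratio`:1."""
    uncompressed = ram_gb - zram_backing_gb   # pages kept as-is in RAM
    compressed = zram_backing_gb * ratio      # data held inside the zram device
    return uncompressed + compressed

# 16GB of RAM with 8GB of it backing zram at 2:1 compression:
# 8GB of normal pages + 16GB of compressed pages = ~24GB effective.
print(effective_ram_gb(16, 8, 2))  # 24
```

The trade-off is CPU time spent compressing and decompressing pages, which is the small performance cost the commenter mentions.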
Yes videos on memory are interesting, I remember being a developer in the early 90’s, and being particularly concerned about the impact of ‘page thrashing’ on the logistics system we were developing, but not truly understanding the concept and so feeling limited in my design decisions… long time ago, fun times 😁
Yes, I want that video on virtual memory. Thanks for this one!
From my humble point of view Gary's YT is the one of the most exciting and consistent tech channel available. Thanks for all the videos provided so far. This one is outstanding since I am happy user of great Linux since 2k. Thank you! Have a nice day!
The greatest improvement to PC performance I have ever experienced wasn't new HW. It was Deepin 15 Beta loaded to a RAM disk, which kept up with a Ryzen build with a Gen3 NVMe drive. IDK how, but MX Linux 21 can also be loaded to boot to RAM.
Can we do Linux vs FreeBSD? Is there even a difference in memory usage?
I’m curious about how this compares to MacOS
I am now waiting for a detailed video covering MacOS 12 and Windows 11. Their differences in handling multitasking and multiple displays and the advantages and disadvantages of each.
I thought he was joking at the end when he said that's the proof that Linux is more efficient. Who cares about how much memory is used? What we need are benchmarks with a small amount of RAM and real applications.
Why would benchmarks with low RAM be useful? Both systems will start to swap, and at that point benchmarks are useless. The point is that Windows will arrive at this low-RAM situation sooner than Linux.
@@GaryExplains Maybe the behavior of the programs would change if there were no RAM left. Isn't it even more efficient to use the available RAM and not let it sit around? I don't really think Windows would perform better, but pure RAM metrics are no proof of worse efficiency or performance. Furthermore, there are no "serious" Windows systems with less than 8GB of RAM nowadays.
In general programs don't change their behavior when there is no more RAM, they just report an error and close. Also your "serious" Windows system comment is not helpful. I can go to any major retailer and buy Windows laptops with only 4GB. How you define "serious" is completely subjective. And even if we agree on a definition does that mean that average consumers aren't important?
@@GaryExplains You are right, seems like I am in my high end bubble.
@@perschistence2651 You're not alone. If you pay through the nose for that high-end hardware, you'll want to use it.
Can't stop enjoying your content tbh
Actually, yeah, it'd be cool to make a video on how memory is actually organised and how the structural differences impact performance, especially before the system starts swapping (which is obviously going to happen sooner on the Windows system, even when cheating on the swappiness on the Linux system to force it to swap earlier). Basically, if you had an infinite amount of RAM on either system, would one system be more performant than the other?
Thanks Gary. Nice vid. The only thing one might be asking is WHY Windows uses more than Linux for the same program? Are there decisions made like "let's load the whole lot into RAM so things load quicker when the user wants them..."?
I've been test driving Ubuntu MATE for work this week on a 2GB RAM netbook with a classic spinning HD and an Intel Pentium; it works like a charm (general office stuff, browsing, video...). I have more powerful machines, but it feels so damn great! Now I have an amazing battle computer.
interesting video as always Gary. can you please explain Android Entropy Threshold next time?
i googled it yesterday but never have any conclusive explanation. so would you be kind enough to explain it next time?
Keep in mind that in Windows since XP, it only frees memory when it needs to, so if you open a ton of programs then close them, it won't clear everything, but if you open something else up that needs a lot of memory, it will clear out old memory as needed. Linux is starting to do more of this too, particularly in favor of filesystem cache, which will be cleared aggressively if something else needs that space.
No, it isn't quite as you describe.
It would be interesting to see some speed benchmarks. How does the memory used affect speed?
Agree!
8 channel ram is four times faster than dual channel ram. Quad channel ram is twice as fast as dual channel ram. Dual channel ram is faster than even a new NVMe storage drive and has way less latency. You guys are on a race to the bottom.
@@maxhughes5687 That's not what I meant. If an app is using more memory in one OS, is it as fast as the one running on OS using less memory?
@@woodcat7180 YES I would say the one using more memory is as fast as the one using more memory.
@@maxhughes5687 😁, edit: LESS!
Great video Gary, and pleased to see you used my distro of choice, Pop. This is certainly a much fairer comparison than, say, using Puppy Linux. I would be interested in your thoughts on why, say, Firefox on Windows uses more RAM than on Linux. To me this was the most confusing part, unless Firefox can offload more of its work onto existing Linux processes and hence doesn't need to spawn new ones? Thanks for getting my brain thinking on a Tuesday...
Calculating memory usage is tricky. Most status reporting only counts the user memory, and often doesn't count kernel memory like page table growth and system resources. On the user side, shared memory complicates things because multiple applications could share the same resources. I like using "zero"-ed memory as the measure, as it reflects the memory actually used; if RAM were reduced below this line it would result in a perf loss.
I know it is tricky, I even say so in the video!
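One concrete way the sharing problem above is handled on Linux is proportional set size (PSS, reported in `/proc/<pid>/smaps`): each process is charged its private pages plus an equal share of every shared page. A toy sketch with made-up numbers:

```python
# PSS-style accounting: private memory plus a proportional share of
# shared regions. Figures below are illustrative, not from a real system.

def pss_kb(private_kb, shared_regions):
    """shared_regions: list of (region_size_kb, number_of_sharers) tuples."""
    return private_kb + sum(size / sharers for size, sharers in shared_regions)

# Two processes each map the same 4096 kB shared library plus 1000 kB of
# private data. Naive RSS charges the library to both processes in full;
# PSS splits it evenly between them.
rss_each = 1000 + 4096
pss_each = pss_kb(1000, [(4096, 2)])
print(rss_each)  # 5096
print(pss_each)  # 3048.0
```

Summing PSS across all processes gives a figure close to actual physical usage, whereas summing RSS double-counts every shared page, which is one reason naive per-process numbers mislead.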
This was a very informative video. Thanks Gary.
You could make another video showing the processor load in each system.
There has to be a reason that individual apps request a lot more memory on Windows than Linux. That's the interesting question here. I seriously doubt you've properly isolated RAM usage.
It is useful to see some numbers put to the usual claims. Great work Gary.
I would make some smart Linux FTW comment but I'm slightly distracted by the Canadian National Gallery background. I like it!!!
Linux wins 💥💥💥
Not a Linux user.... but watching proprietary software getting blown away by open source gives u a different kinda feels 😁
Open source for personal use will only ever be for slow HW, because the fan club goes crazy if I so much as mention we should get one distro for advanced HW. Remember the anger over W11 being for newer HW: apparently no one with new HW had a right to get a new OS made for faster HW.
I'm glad I found your channel, very thorough and precise.
It also depends on which distro you're using for Linux. I use a few distros that use 64MB minimum and a few that use less than 300MB.
Well since I was trying to compare like-with-like, I didn't use a niche distro.
One thing to mention is that Pop!_OS is far from the lightest distro; you could take it ridiculously low with something like Ubuntu MATE, or especially those fancy window managers (though those systems are so minimal, it may get a bit unfair).
This proves ever more that Linux is the definitive choice for older computers.
Thank you, this was very illuminating!
I'm glad I switched to POP OS nearly two years ago.
YouTube recommends me so many random videos, so why has it never shown your channel, which I will (now) sub to?
Something I think you totally missed is that program memory usage doesn't just boil down to the OS itself, but also the C memory allocator it uses, which differs between the two systems. On Windows it's the CRT, and on Linux it's glibc.
But still, Linux most likely has the superior memory management either way.
He also had some interesting picks of programs to test. Firefox is a well-known (infamous at this point) memory hog on Windows. Only Google Chrome is worse for filling up memory needlessly, I believe.
It also completely misses specifying the amount of RAM in each system. The more RAM the system has, the more the OS/program might want to use, to improve general performance. Un-intuitively, RAM not in use is wasted RAM. When there are many programs running and overall usage is close to the maximum, that is where the memory management really takes place. The analysis here falls fairly short. I am not siding with Windows or Linux or MacOS; I am just pointing out that analyzing memory usage is a lot more complex.
@@jeanjasinczuk7543 it's also a meaningless endeavor, since the vast majority of computer users never come close to running out of free RAM
Very nice and informative video!
I have a secondary PC with a quad-core s775 Xeon and 8GB of RAM. I only use it 4-5 times a month for internet browsing, watching video, and in the odd case some emergency Zoom call from work.
For a couple of months I have been considering installing Linux (instead of Win10) on that machine to give it some more breathing room. My main concern is driver support and general compatibility, as the hardware is quite old.
After watching your video I am a step closer to making that decision.
Thanks Gary, very informative!!
not quite sure why I clicked on this since I knew the result from the get go, but I am glad I did.
I learned the hard way that Windows is a memory hog, by making a server out of an old prebuilt. Surprise to no one, it ran out of memory eventually, and as a quick fix I switched to Linux. I did end up adding more RAM to the machine, but I like having the headroom that comes with Linux as well, since I will definitely keep expanding the software that runs on the machine.
Thanks.
I kept getting the terms "pig" and "hog" mixed up. I know one refers to software that uses a lot of memory, while the other refers to software that uses a lot of storage, but I could never remember which is which, and I'm not sure younger programmers still use that slang.
Until recently, I was using a ThinkPad with 4GB RAM. It's ridiculous that it came with Windows preinstalled, since just the fresh install was enough to get it swapping. Out of curiosity, I tested how it runs, and we're talking 10+ seconds to open the menu bar.
Thumbs up on this vid and another thumbs up on a video on the MMU and virtual memory management, etc.
Also, having worked on the IBM AS/400 or iSeries or Series i, I was always impressed with their hardware addressing via vertical and horizontal microcode, with the OS never addressing or seeing the physical memory. A very unique computer architecture.
Gary,
This is a very challenging topic to do a complete comparison on. The assumption that using more memory is bad is something I'm not completely sold on. Swapping pages to disk is bad in that it slows things down, but if a system optimizes performance within the total system memory available, that seems preferable? What about a comparison of running tasks on Win vs Linux with insufficient memory and counting the page faults/page swaps? If you can show that Linux pages less given the same physical memory, then I would find the comparison more compelling. I'm a fan of Linux myself, started in the 1980's with HP-UX, so I'm not being critical of either OS, just thinking about what other factors affect these systems' memory management schemes.
Using more memory can't improve performance (outside of caching, which generally happens at an OS level, not per process level). Using more memory will cause more swapping, which as you correctly said is bad for performance.
@@GaryExplains I've been fooled before by lazy de-allocation schemes. Optimum overall performance doesn't always prefer cleaning up immediately, there is no penalty for using more memory until you run out. Perhaps you accounted for that. I would not be surprised to find your results are indeed the full story but I am always looking for the technical loopholes.
So Gary, when did you come and visit Ottawa Canada? You were quite clearly doing your video from the National Art Gallery, with the Parliament buildings in the background. 😉
😂
I think you forgot to mention something when talking about booting an OS. Unused memory is WASTED MEMORY. You don't benefit from not using the available memory when the OS can use it for something, this is not the same as hard drive storage. Also the memory is dynamically allocated, Windows will use it when nothing else is using it, the minute something else needs it Windows will reduce it's own footprint as best as it can.
No I didn't forget that at all. I even included a small segment in the video about the difference between free memory and available memory.
@@GaryExplains I think it needs to be clear that it's wasteful for an OS, when booting up, not to use all the RAM it has available.
Linux also works on the same philosophy, so it isn't that Windows does this and Linux doesn't. Plus the rest of the tests also show that Windows uses more RAM, and there I am measuring the process memory usage, which doesn't include file caching etc.
@@GaryExplains I get that and don't dispute this one bit, but it's considered obvious that Windows is bloated and uses 'too much RAM', based on how I see people talk in both online and physical communities. And while that's generally true, context I think is still important. I guess I've been worn out from trying to explain this to too many people regarding unused RAM; so many videos mention how much RAM is used when Windows is idle, which is actually fine.
But as I said, the rest of the tests also show that Windows uses more RAM, and there I am measuring the process memory usage, which doesn't include file caching etc.
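The free vs. available distinction being discussed here is visible directly on Linux: /proc/meminfo reports MemFree (RAM nobody is touching at all) separately from MemAvailable (an estimate of how much a new program could claim without swapping, which includes reclaimable page cache). A minimal Linux-only sketch:

```python
# Sketch: show the difference between "free" and "available" RAM on Linux
# by reading /proc/meminfo (values are reported in kB).

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # value in kB
    return info

m = meminfo()
print(f"MemFree:      {m['MemFree'] / 1024:.0f} MB  (RAM nobody is using at all)")
print(f"MemAvailable: {m['MemAvailable'] / 1024:.0f} MB  (claimable without swapping)")
# MemAvailable is normally larger than MemFree, because it counts
# page cache and other memory the kernel can reclaim on demand.
```

This is the same distinction `free -h` draws between its "free" and "available" columns, and it's why "Windows/Linux is using X GB at idle" numbers are easy to misread.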
That is an awesome video
Would like to see a video on virtual memory.
I kind of expected these results, but not by such a margin. Interesting. Proud to have switched to Ubuntu linux, even though I am still counted as a Windows user, as the laptop came with Windows.
Thanks Gary that was interesting.
I think a very interesting comparison also would’ve been comparing them to MacOS because MacOS is Unix based but a very modified version that I expect to be much heavier than any Linux distribution out there. It also has an X86_64 version for intel processors and an arm version for the new M processors from apple. That video would be especially interesting because with the M processors the memory is on chip and shared between the CPU and GPU cores, and not upgradable. Also very expensive to get a higher processor configuration to benefit from the larger memory (especially if you want 32 GB like me).
MacOS is unix not unix based it is certified unix dumby.
Interesting video as lack of RAM is often the killer of an old PC. I got a old thing with Windows 10 and Mint Xfce and it was interesting for me to compare. Actually it was not that different, every time I start two or three modern apps (TH-cam on Opera amongst other things) I run out of RAM and swap begins. Windows need some fine tuning (disabling Windows Defender, Update and stuff) before it runs OK but after that, it's not that bad and respond actually quite well. On heavy usage, once Linux swapping blocked me completely and Windows did not, but I don't (yet) make generalities out of it
RAM usage it interesting, but swap management and the way applications repond to it is even more interesting, in my mind
A video for how virtual memory management is done would be MOST welcome.
It is here th-cam.com/video/4e18yybPo1E/w-d-xo.html 👍
I've set up a Thinkpad T460 with Linux, and got it to run Monster Hunter World.
I would not recommend playing Monster Hunter World on a ThinkPad T460, but if you really want to or have to, you can.
Tip: Using the lowest resolution has the best performance. Take a mental note of where the resolution settings are, as you won't be able to read them when you go to change them again.
In the spirit of unused RAM is wasted RAM, Windows will incorporate ads in the file explorer. Brought to you by Carls Jr.
Been using Linux for 2 years before that I used Windows for over 15 years and have never regretted it. Linux is the way to go!
I was using OS/2 for about 10 years, before switching to Linux. I have never run Windows as my main OS on my own computers.
BTW, I used to do 3rd level OS/2 support at IBM Canada, but was already using it for 5 years before I started that job.
Well I have a very different experience with Linux than you guys, because it was the worst OS experience I had, crashed an awful lot with recent hardware and unstable as hell.
@@victorhugofranciscon7899 whenever there is new hardware released you need to wait a little longer on Linux to get the necessary driver support but this delay is improving.
Nice. Tq Gary.
Great Video.
Can you also do a memory comparison for a server setup ?
file server / web server / ...
Where one does not need a GUI.
Interesting, the difference between running the same programs on Windows vs Linux - given that a string is a string and an int is an int regardless of OS, I would have thought that they'd be much closer. I would have thought that the difference would have mostly been due to different window decorators, which would have been minimal. Is it the underlying graphics libraries which account for the large difference? Obviously, the OS is the part where I knew there was a vast difference - I've created a custom Linux OS based around IceWM for a Raspberry Pi which boots to desktop in around 100MB.
No, there is a difference in some C/C++ integer types between OSes.
For example: the "long" int type is 64-bit in Linux, and 32-bit in Windows.
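That `long` difference comes from the platform's C data model: Linux and macOS use LP64 (long is 8 bytes), 64-bit Windows uses LLP64 (long is 4 bytes). You can check it from Python via ctypes, which follows the platform C ABI:

```python
import ctypes

# sizeof(long) follows the platform's C data model:
# LP64 (64-bit Linux, macOS): long is 8 bytes.
# LLP64 (64-bit Windows):     long is 4 bytes.
size = ctypes.sizeof(ctypes.c_long)
print(f"sizeof(long) on this platform: {size} bytes")

# long long is 8 bytes under both models, which is why portable code
# prefers fixed-width types like int64_t (ctypes.c_int64).
print(f"sizeof(long long): {ctypes.sizeof(ctypes.c_longlong)} bytes")
```

So a struct full of `long` fields genuinely occupies different amounts of memory on the two OSes, though that alone doesn't explain the gaps measured in the video.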
In fairness, he did use Pop!_OS, which if I recall correctly uses COSMIC (I think), which is based off GNOME, the heaviest of all Linux desktops, and it still came out on top compared to Windows. Not only that, toolkits are shared, which is why Firefox looks exactly the same on the two platforms, and so on. So even with the disadvantage of having a GNOME-based desktop :D
@@presentarmsonlinux
Problems with Gnome?
I think it might be due in part to how Linux shares memory between processes and how applications are built against libraries. On Linux you tend to build against the system libraries (so the same library for all the applications); on Windows you tend to distribute the libraries you need (so several versions of the same library to serve several applications). Linux also only loads into memory the parts of an executable that are needed (in tech lingo, it maps the file into memory), and under memory pressure it does not send them to swap, it just discards the data in RAM and, if needed, loads it again from disk. Probably Windows does something similar, but I don't know to what extent.
One thing that caught my attention is that when I have used Blender, Linux seems about 20% faster, in a task that is basically GPU load... there must be more magic happening in the memory subsystem.
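The file-mapping behaviour described above (map the executable or library into memory, page it in on demand, and discard clean pages under pressure instead of swapping them) is the same mmap mechanism exposed to user code. A minimal Unix-only illustration using a temporary file, assuming nothing beyond the stdlib:

```python
import mmap
import os
import tempfile

# Map a file into memory, the same mechanism the kernel uses for
# executables and shared libraries: the mapping is demand-paged, and
# clean pages can be dropped and re-read from disk instead of swapped.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"shared library code" * 1000)
    os.fsync(fd)
    with mmap.mmap(fd, 0, prot=mmap.PROT_READ) as mapping:
        # No read() call happened: touching the mapping faults pages
        # in lazily, only for the parts actually accessed.
        first = mapping[:19]
        print(first)  # b'shared library code'
finally:
    os.close(fd)
    os.remove(path)
```

Because such read-only mappings are shared, a library mapped by twenty processes occupies its pages in physical RAM once, which is part of why per-process RSS numbers overstate total usage.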
@@xrafter It depends on the 64-bit memory model used by the distribution.
Windows 11: "Hold my beer"
PC: "please no"
I'd like to see a dynamic, real-time visual representation of the memory pages, colour coded by PID to see the randomness of the distribution. Is this even a thing?
By using a window manager on the linux machine you can further reduce the ram used. My system, no joke, idles at around 250MB of ram used. And this is not using some cut down distro. This is using POP OS with i3WM
Obviously there are lightweight Linux distros, but that wasn't the point of the video.
@@GaryExplains I'm not saying this had to be included in the video, the video covers everything needed. Putting this info in probably would confuse a lot of people. Just pointing out that with Linux you get the freedom to choose!
Doesn't surprise me at all. I've used LINUX for over 10 years to revive older machines that would run circles around the windows counterparts.
Video on Virtual Memory would be awesome
Years ago I found an interesting bug in Windows NT 4.0 Server. I was porting a rather large Unix application to NT. The computer I had was a real beast. I don't remember how many megabytes of RAM it had, but it was a lot for the day (maybe 512 MB? For comparison I think my home PC had about 16 MB of RAM at the time.). But the hard drive was a little more limited, so I didn't allocate much space for the page file (swap space in Unix lingo), maybe just 2 MB. But apparently when I started up the programs that made up the application, Windows would start trying to swap out existing stuff to the pagefile to make room for more stuff in RAM even though it didn't need to since it had loads of RAM. As soon as the small page file was full, performance would hit a wall and it would start showing errors about being out of memory even though there was no way it had used up 512 MB (or whatever it was). But if I wasted a full 512 MB of disk space for a pagefile that would never get used much, everything ran just fine.
I daily drive Linux Mint on 4GB RAM and it just flies. It's so stable I got no errors or crashes that gave me a bsod like on Windows.
Curious about pagefile vs swap usage in windows and Linux
1) Yes, you can use a swap partition on Linux, but it's deprecated due to
- the cost of partition border crossing.
- not having any benefits.
- partitions wasting some disk space.
- pretty sure something else I forgot.
2) The RAM amount directly affects how much is used for caching, especially on Linux. Linux is much less hesitant about using RAM for caching, which is especially notable on a firewall distro, because they'll use it all for performance (but will give you what you ask for).
I'm pretty sure swap partitions are still the main approach, with swap files being used less.
@rautamiekka Do you have a link to show that swap partitions are deprecated? Also, you know that Linux isn't inherently hesitant or eager to swap; it depends on the value you set for swappiness.
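For anyone curious, the swappiness knob mentioned here is just a sysctl; a quick Linux-only sketch to read it:

```python
# Read the kernel's swappiness setting, which biases how eagerly Linux
# swaps out process pages versus dropping page cache. The range is
# 0..200 on modern kernels; most distributions default to 60.
with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read().strip())
print(f"vm.swappiness = {swappiness}")
```

Changing it (e.g. `sysctl vm.swappiness=10` as root) is a common tweak on desktops to keep application pages resident at the expense of cache.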
It would be nice to explain what exactly you compared. Memory used by a program is not a single simple number. In Windows Task Manager you can see columns like Working set, Active private working set, Private working set, Shared working set, Commit size, Paged pool, and NP pool. Which one of them did you use? How are you sure it's the right parameter to compare with Linux? On Linux, top will also give several numbers: VIRT, RES, and SHR. If you compare different things you get different results, but that's not informative at all for a comparison.
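The measurement ambiguity raised here is real. On Linux, the numbers top shows come from /proc/&lt;pid&gt;/status: VmSize roughly corresponds to top's VIRT (address space reserved) and VmRSS to RES (pages actually resident). A Linux-only sketch for the current process:

```python
# Read memory fields for the current process from /proc/self/status.
# VmSize is virtual address space (top's VIRT); VmRSS is resident pages
# (top's RES). Comparing VIRT on one OS against a working-set figure on
# the other would make any cross-OS comparison meaningless.

def proc_memory(pid="self"):
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize", "VmRSS")):
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])  # value in kB
    return fields

mem = proc_memory()
for key, kb in mem.items():
    print(f"{key}: {kb} kB")
```

VmSize is always at least as large as VmRSS, often vastly larger, which is exactly why the choice of metric dominates any headline "OS X uses N GB" number.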
Right. You can safely ignore most of this video by the fact that we already know Windows and Ubuntu perform similarly in actual usage metrics. So it's likely these numbers don't mean what he thinks they mean.
I second this. The first thing that got me was the 4GB+ memory usage on reboot on W11. I run W11 too and booted up with a few programs and stuff running in the background (antivirus, AnyDesk etc, nothing huge), and the actual physical memory in use is roughly 3.3GB. The system commit, which is at about 6GB, corresponds with my page file also being 6GB. So maybe he used that number. Speaking under correction :D
Another thing to note is that he hasn't tried testing Windows and Linux with different amounts of total RAM.
This does matter, as Windows does somewhat modify its behaviour - how conservative to be with RAM - based on the total RAM. If it sees 4GB total RAM, then it behaves more conservatively than if it sees 32GB of total RAM.
It's not a vast difference, mind you, but there is a "less RAM = be more conservative, more RAM = stretch out a bit more and prioritise performance instead, as there's room to expand into" slightly modified behaviour in there, which further brings these results into question when you've not tested multiple RAM configurations (on both systems) to see if this does modify OS behaviour.
If it sees more RAM, then it'll use more. Cache up more stuff. Prefetching stuff.
So if you run Windows on a more RAM-limited system, then it uses less and behaves more conservatively about what to keep around in RAM.
(This is why some people are saying "when I boot Windows, it's not taking about 4GB of RAM, as we see in these results". Yeah, because Windows has seen the total RAM and is being more conservative with its usage. Try it on a machine with, say, 32GB of RAM and it'll be less "precious" about those bytes and instead prioritises better performance, by trying to load and prefetch more of the OS into RAM - because you can afford it and this'll make it perform better.)
@@klaxoncow true. Windows aggressively pushes things out of RAM to keep it exactly at no more than 80% usage if possible, for example.
@@klaxoncow I agree with all you have said here. In a lot of cases unused RAM is wasted RAM (can't remember who said that lol). Cheers mate!
Great video,thank you....
Pls compare windows and mac memory management 🙏
Well, besides caching, the services, and how many services are started up, will have an impact. A typical Windows install, in my opinion, will tend to have far more services than a typical Linux install. And the question becomes: how essential (really) are a lot of those Windows services?
The answer is not essential at all, just a lot of Microsoft bloatware.
If they were used by third-party applications, then they would be essential. But if that were the case, then opening new applications would add less RAM usage on Windows than on Linux, and that simply isn't the case; quite the opposite. So those services aren't that useful.
The main issue is that Microsoft doesn't have any competition. No matter how bad Windows is, it won't lose market share, and no matter how much Microsoft invests into improving it, they won't gain additional market share since they already dominate it.
I used to use Black Viper's list of unnecessary services to disable them but it's definitely not as bad as it used to be. I also have a 5800X with 32GB RAM so I don't spend a lot of time on this. Also, that's with Windows, Rockstor, and BunsenLabs running on Proxmox.
I'm not sure if this is just about less RAM = better. Maybe Superfetch uses more RAM but makes the user experience better (for example)? I'm no expert and I am not sure how either Win11 or Pop!_OS uses and manages memory, but maybe Win11 is intentionally using more memory for good, not evil? I know that an integrated GPU uses memory for graphics, but I don't know how this is allocated or managed by the two OSes (is that strictly set in the BIOS, or does the OS dynamically change the RAM allocated to the iGPU as required?). I would find a video that looks at these things and explains the difference between how operating systems manage memory really interesting; follow-up video maybe?
Yup. Typing this right now on a 1GB RAM tablet that runs Windows 10. And that includes having YouTube open in a web browser on that tablet.
Interested in virtual memory video. Thanks.
There was quite a lot of interest, so I published it quite quickly. Here it is: th-cam.com/video/4e18yybPo1E/w-d-xo.html
Thank You!
Virtual Memory video, yes please!
Of course I knew that Windows would be worse than Linux at memory management but I'm surprised at how much worse it is. I would be curious to see what architectural differences cause it to be so bad.
It's amazing how much RAM LibreOffice is using, compared with MS Office. One opened document in LibreOffice uses 142 MB while one document opened in MS Office uses only 46 MB. That's on a Windows 10 PRO computer, with 16 GB of RAM and a Ryzen 5 2600. Sure, LibreOffice is free, but also it behaves and looks like something from 2010.
I've used a modern Linux version on a 128MB RAM VPS for a large part of the last decade. To be fair, part of the OS ram usage was probably not counted because it was run using OpenVZ containerization, but it was running several sites through apache and other services I used daily. Eventually, I upgraded to a 512MB VPS when I started running slightly heavier applications. None of that is even imaginable using windows, so I wouldn't say there's much of a competition between the two...
Except you weren't running a desktop, not a fair comparison.
@@GaryExplains You're assuming I was trying to do a comparison, but I was only providing a data point. The thing is, that data point blows Windows out of the water all by itself. That's because you wouldn't even expect a Windows machine to run stable if it had 128MB available straight after boot. And because you wouldn't be taken seriously if you said that a heavy application like Apache runs using only 128MB of RAM on Windows. And because Windows itself uses far more RAM than that, even if we allow Windows to not count 50% of the RAM it uses, using that as a very generous estimation of how much it needs to run its GUI.
(Also, note that the comparison by itself wouldn't be unfair - it depends on what you're comparing. If I'm comparing running an OS using the minimum amount of RAM, I don't have to handicap one OS because the other OS forces you to use desktop even on servers.)
Indeed I was assuming that you were trying to do a comparison, since that is the context of the video. My bad, I guess.
This is pretty interesting, especially when you configure a new system. On windows just get twice the RAM and you should be fine. Since memory is dirt cheap nowadays, this isn't really as big an issue as it was earlier (at least for a normal user, power users may feel different) but you should know what you need for each system.
Would love to see a video on Virtual Memory
My question
Yes, it is. I use Linux Mint with the GNOME desktop on a 9-year-old PC and it works faster than any modern fully equipped Windows. It's a shame a free OS works better than a paid OS. Go download Linux Mint if you want a Windows-like OS and personalize it as you want. The first view is like... this OS looks really old, but once you personalize it and add colours, change folders, change the pointer and stuff, you realize you don't want to go back to Windows. Do you want to use a Windows programme? No worries, use Wine or install a virtual machine in your Linux.
It's been a pleasure to help deceived people.
Unfortunately Adobe Creative Cloud doesn't work with Wine and isn't practical with a VM.
@@GaryExplains Yes, you're right. There are some programmes that don't work properly. You can have a partition and use Windows only when using specific programmes. Why? Because Windows is slow and not secure at all. You can have an antivirus if you want, but it doesn't recognize all of the viruses out there. Since I use Linux, I don't dare to put my passwords in Windows, and I only use that OS to play a game.
One important piece you missed was that Windows uses page files to do memory defragmentation. Even if you have enough physical memory free to load a memory intensive program, sometimes you will get out of memory errors because Windows can't defragment the memory properly without a page file. This is an issue I came across, I had 32GB of physical memory, only about 4GB was in active use (fresh reboot) and opened a program that allocated about 8GB-12GB of memory, so total usage would have been around 16 of 32GB available. But the program would crash with out of memory errors because it couldn't allocate a full contiguous block of memory and Windows won't defragment memory without a page file!
That makes no sense. I think you have misunderstood something. All physical memory is fragmented, that is the whole point. The contiguous memory block would have been in virtual memory not physical and adding a swap file won't "defrag" the memory. Memory fragmentation in the virtual memory is indeed a thing, but it doesn't work as you describe.
@@GaryExplains maybe the underlying cause is different, but the symptoms are as follows:
1. Turn page file off
2. Run program
3. Get out of memory error (memory usage doesn't even approach using all physical memory!)
4. Turn page file on
5. Run program
6. No error (memory usage still same, not even close to using full physical memory)
If not using page file to do memory defragmentation to allocate a contiguous block of memory, I don't see another explanation for getting out of memory errors under these circumstances. There is plenty of memory available, so the only thing I can see that would prevent allocation of new memory is that the memory it tried to allocate is used by another program or outside of the addressable space (impossible given the program is 64-bit).
This issue affected two games for me too, Cyberpunk 2077 and Halo MCC (specifically Halo 2 and Halo 3). Again, 32GB of physical memory and fresh reboot with only about 4-5GB of memory used at time of launching those games.
The only thing swap is used for is to free physical memory. So looking into this a bit deeper there are system calls on Windows and on Linux that ask the OS for contiguous, nonpaged physical memory. For example, on Windows, MmAllocateContiguousMemory() allocates a block of nonpaged memory that is contiguous in physical address space. Using calls like this would force the OS to swap out any pages that get in the way of a contiguous block. Later they can be swapped back in at a different address. In that sense, it is the memory defragmentation you are describing. Microsoft note that "When physical memory is fragmented on a computer that has a large amount of RAM, calls to MmAllocateContiguousMemory, which require the operating system to search for contiguous blocks of memory, can severely degrade performance." Calls like this are normally reserved for drivers. It would be interesting to understand why it happens with the games you describe. It could be the GPU driver I guess.
@@GaryExplains this describes basically what I had in mind, maybe I used the incorrect terminology.
I guess I could use something like RAMMap and VMMap to see if these titles allocate contiguous blocks (particularly if memory is constrained) - might give some insight into if this is really the cause or if it's something more obtuse.
I’m actually going through the opposite issue: Linux using an unfathomable amount of RAM, forcing me to return to my default OSes (Windows and macOS respectively).
That sounds a bit strange to me as a Linux user of the last 20 years, but without knowing what version of Linux you are using, what desktop and other programs are installed, and what hardware you have, it is impossible to know what may be wrong.
Linux systems do not normally take up very much RAM. My own system, for instance, uses 2GB of RAM out of 8 running Ubuntu MATE on a used Lenovo laptop from 2012, and the most RAM is used by Firefox.
In your case there might be some programs running in the background that you are not aware of, which could be turned off to free up memory.
One thing you can do is open a terminal and type the command top to see a list of all running programs, and check whether one of them is taking up a lot of memory.
On my other laptop, also running Ubuntu MATE, I have Microsoft Teams installed (for work); whenever it is used it puts itself in autostart and takes up a lot of memory, and it has to be removed manually.
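The advice above (run top and look for the memory hog) can also be scripted. A minimal Linux-only sketch that walks /proc and prints the processes with the largest resident memory, similar to sorting top by the %MEM column:

```python
import os

# List the top N processes by resident memory (VmRSS), read from
# /proc/<pid>/status -- roughly what `top` shows in its RES column.

def top_memory_processes(n=5):
    procs = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            name, rss_kb = "?", 0
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split(":", 1)[1].strip()
                    elif line.startswith("VmRSS:"):
                        rss_kb = int(line.split()[1])  # kB
            procs.append((rss_kb, name, pid))
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # process exited or is inaccessible
    return sorted(procs, reverse=True)[:n]

for rss_kb, name, pid in top_memory_processes():
    print(f"{rss_kb / 1024:8.1f} MB  {name} (pid {pid})")
```

Kernel threads report no VmRSS and so sort to the bottom; for a browser-heavy desktop, expect the top entries to be browser renderer processes.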
I compared the RAM usage between Windows and Linux a bit over 2 years ago when I still dual-booted. I observed the same: Linux really does use less RAM for the same tasks. I also compared it for gaming: on Linux with Wine, on Windows itself, and Linux native. Linux with Wine and Windows native have roughly the same memory usage, which is what I would expect. For Linux native, both the RAM usage and graphics card memory usage were much lower for the same game compared to Windows or Linux with Wine. That shocked me the most; I would expect the memory usage for games to be roughly the same, but it is not. I would love a deep dive in which that gets explained. I wonder if the guy from Dave's Garage could do that.
You want vram used as much as possible. The same as wanting your GPU usage at 100%. It IS MEANT TO BE USED AND OPTIMIZED.
I think a video on virtual memory would be nice.
How much time did the tests take? It is OK for Windows to use more memory if the time to compile was faster than on Linux.
"Linux is more efficient, water is wet" is my new phrase now, thanks Gary, this is HILARIOUS
"RAM is faster, speed is king" is my new phrase now, this is elementary.
It would be interesting to do a deep dive to understand why there is the difference overall (on bootup, say) as well as the individual program differences. Windows is a much larger OS (in terms of code base), and there may be more processes running, many of which may not be doing much that is useful most of the time; but philosophically Windows is designed to support many legacy edge cases, and that comes with baggage. I know the video mentioned the issue of separating free memory from available memory, which is tricky to quantify. If you have much more code running, then those .dlls are memory-mapped, so they are "using" virtual memory pages; however, they are available to be shared by any program that needs those pages. Also, the Windows desktop OS is very geared up for all the legacy frameworks to support legacy applications, particularly for desktop/video. It's a bit difficult to compare the two when Linux doesn't have the same legacy heritage (baggage!).
Chrome listening silently about memory talks between Windows & Linux and finally commenting: "Amateurs!" 🤣
The first place to start is: Linux + minimal version or "core". Then you have more control or visibility over what is actually in the background and if it's needed. People might argue that with modern computers it's not such a problem as there's plenty of RAM etc. However it's REALLY ANNOYING having useless stuff running as well as lots of it.
I always liked minimal desktops in Linux and there was a great comparison graph with Enlightenment at the top as lightest compared to say KDE or Unity etc. One could go further and ditch a full desktop and include components only eg windows manager (openbox) and other components bolted together too.
Whereas with Windows: 1) It's recommended to reinstall it periodically 2) You can go to github and get a "debloat script"
That is all before this test which is doing a like-for-like performance comparison idling background RAM use and application like-for-like comparison and in both Windows is worse!
It makes me wonder if I should go for MacOS on Apple Silicon for my next computer where presumably hardware-os-apps are all more efficient and work together more efficiently AND also have the modern desktop apps that are in fact useful ? But I wonder how easy is it on MacOS to cull the desktop down if one wants?
Maybe the Macbook Air M2 is the device to go for coming up.
I miss the information on how long, e.g., the monk render took on Windows vs Linux. Right now, it seems a bit pointless to just compare the RAM usage. Did they take approximately the same time, or did one of them finish faster?
It isn't "a bit pointless to just compare the RAM usage" when the video is called "Windows vs Linux RAM Usage". This video isn't about performance. It also isn't about cream cakes which is why I don't mention them either.
Yes, more videos please.
Hi Gary, good content, but I think you oversimplified the analysis. Just because a program uses less RAM on one OS, it doesn't mean that OS is more efficient. For example, I can make a program that uploads everything into RAM, or tweak an OS memory manager to keep as much information as possible in RAM with the purpose of being faster.
Yes, but I am using real world programs that do real world things, not some test program.
I'm not saying the conclusion will be different (I think Linux is better!), but we need to analyze more metrics on how the RAM is used to conclude anything about efficiency! Keep up the good work :)
Way back in the dark ages, it was common to do all sorts of tricks to minimize memory. However, those tricks that saved memory often had poorer performance. Back then you did what you had to, just to get something to run on the very limited hardware of the day.
Virtual memory video 👍🏽👍🏽
It would be interesting to see if there is any noticeable performance difference when running on the same hardware with different amounts of memory being used.