@@deher9110 Its not clickbait just because it made you wanna see it, that's just good marketing Clickbait has to be false. Literally "bait" for something else
@@pumbi69 he put google drive as a drive and then assigned a pagefile to it, which means it used google drive as ram. but it crashed because of google drive's security stuff. then he actually did it with their own server that didnt have that security stuff in it
A video every day is cool and maybe it's just a division in audience but I'd rather slightly fewer videos if it meant more quality. These past videos just feel like they took a random click bait like title and then figured out how to pad it into a 10 minute video. That beginning google drive served no purpose just to pad, at least find another cloud service which allows random reads and writes. But the end result was really "swap is sometimes somewhat useful but otherwise, use ram". Who is that aimed at
5:50 You might be able to do swap on network drive if you increase /proc/sys/vm/min_free_kbytes to reserve enough buffer memory to not hang while trying to allocate buffers for pushing bytes to network drive. Maybe also increase /proc/sys/vm/swappiness and /proc/sys/vm/watermark_scale_factor to make kernel more aggressive pushing the data to swap. And you could also try increasing /proc/sys/vm/page-cluster to somewhere in range 6-10 so move bigger blocks at once - trying to move 4 KB blocks with random access would hit hard. Swapping to remote network drive could make sense on L6 or L7 level. Before that you should swap to zram, SSD, HDD and fall back network drive only after every other swap device is full. Just set slower swap storage with lower priority and it will not use if there's any free space on any higher priority storage. Of course, as you correctly figured out, getting anything back from network drive would hurt you really bad. Like getting data back may take something like 100 KB/s.
There are still use cases for virtual memory, but anymore if I'm going to be doing something that is that ram intensive that I need that much ram, I'll just use the dated machine with maxed out ram and a 1 tb ssd as a swap device. But, I also know not to expect snappy performance when using a machine to process 100 gig scenes.
@@DareDevilPhil Anymore ram is cheep and once you have 16 gig or so you are pretty much set for most games and applications. Unless you are getting really deep into it, and even then very seldom do you need to load it all into memory at once. Unless you are dealing with some poorly coded applications. Or are working with several raw video frames at a time or some crap like that. If I'm baking texture maps or doing large scene renders I'll use my older machines just because sometimes it is nice to toss 500 gig of server ram at a problem and be done with it.
When I was doing my masters degree one of the teachers actually mentioned that one of the other teachers did this back in the 80's when he was a student. You could request RAM from a mainframe server that was running at the college. Latency was bad but at least you got RAM.
@@GavinFromWeb They didn't used that anymore when I was there in around 2005. But we had another cool thing. Our lab computers had a boot loader where you could select which OS you wanted to run. The image would then get downloaded from a server and you would have a fresh OS installed in less than two minutes. It would be a native install, not a virtual machine. This was all developed at the university, way before any of this stuff was commercially available.
Back then, if you had a class requiring mainframe/mini access, you got a weekly budget in dollars to spend on processing time. Something like $25 and cpu time cost about 50¢/hr.
Memcachedb does this! It's lower latency than spinners usually. Now that we have not-expensive NVMe ssds and expensive RAM I imagine datacenters use ssds instead though.
I did something like this in my PHD. Several things you might want to try: 1. Using cgroup to explicitly control processes can eliminate most of the crashes and it improves the performance of the system overall. 2. Using Intel Optane (even the low-end 16G model) as swap is much faster than swapping to a local SSD. A lot of larges models that needs tens of TBs of memory relies on Optanes. 3. The performance of swapping inside a VM is better than swapping outside a VM.
as someone who spent days days optimizing his swap i can tell you this is correct but reality is you should not in any way hope for swap for actualy doing tasks its a waiting place at best. this days its main uses are VMs and hibernation
@@bigpod Yes! Swap is definitely going to be slow no matter how you optimizes it. Swap is accessed in the size of memory pages (2K-4K) while CPUs access memory in the size of bytes. Which means even you "swap to a memory", it is still going to be slow. But sometimes, the goal of swap is to simply makes it possible to run some programs (especially those programs that needs 10TB+ memory).
"Using cgroup to explicitly control processes" - what do you mean by that? Do you mean setting per-process- (or per-cgroup-) RAM limits, so to delegate the memory management to the applications instead? If so, wouldn't that be the same as turning swap completely off, in other words, do you suggest we shouldn't use swap at all? And of course, what if those processes actually *need* the memory and then ... die? Also, why do cgroups improve the system performance overall? Just curious, thanks for your input! :-)
@@entropyxu4044 while yes it can allow you to run those programs like SAP Hanna for example (being the most apt example) performance of such app will most likely be horrible (and is horrible) and will most likely only work for small examples(where 10TB+ is not needed) or sparsly accessed programs. but lets remember there are fairly small amount of such programs(databases and research stuff) where many of those programs can be relegated to a server room or even a computing cluster or in some cases even supercomputing clusters(colloquially known as supercomputers)
You might have missed the "vm.swappiness" kernel variable... It kinda controls how likely the kernel is to move data into the swap file/partition. (btw, no - i'm not recommending actually using swap as part of your system memory, I'm just adding a little thing you could've tested)
This reminds me of the scammy “memory enhancement software” in early days, some of them actually used LZ77 compression for user space memory, but most of them only changed the size of the page file.
I used some of them back pre-pentium days. Well it wasn't a perfect solution, at least it gave your hardware a little more breathing room until the next generation came around. Also could be used for games because that's what I did as well. That was the ones that actually use compression
Scientific tasks are actually an area where swap, especially tiered swap, really shines. Specifically if the modeling software needs to generate a huge dataset but only rarely needs to look back at data from near the start of the simulation. I sometimes do computational fluid dynamics (CFD) on systems where the working set of data reaches into the low 100GB range, on a system with 32 GB of ram. 20GB is enough to hold the volatile working set easily, so I actually configure it to use the next 12GB as compressed ram swap (zram), which manages a 3:1 ratio, and trades CPU time to still have low latency. This gets me to 56GB of effective ram, before touching the next storage system. For a while I actually then layered my GPUs VRAM in, as it is higher performance than my SSD, and it's a 16GB card, with zram on top of it that's an extra 45GB (leave a bit to actually drive your display), for a total of 101GB before you have to touch a disk. (Unfortunately, the GPU userspace memory system has gotten a bit unstable, so I had to drop this level). Data which doesn't compress well, or the oldest data once the compressed space is full, gets written out to disk, needing between 10 and 30 GB for most of my projects. If you are using the system this way, you need to set up memory cgroups or you're in for a bad time.
@@Wyld1one Unfortunately it's the kind of prolem that is rather resistant to automation (and if you're working in a domain where it's needed, you likely know what to do). There are lots of knobs to tweak, and what is sensible behavior in one case may be exactly wrong in another. For example, if you don't have swap, and run out of memory, the OOM killer picks the program most likely to be leaking or hogging memory and kills it. This keeps the system responsive, but obviously killing a semi-random program can be bad (you can give the OOM-k hints so it gets the right program, but it's fiddly). On the other hand, if you try to have the system just swap, and slam it with too heavy of a load, the system will go unresponsive for potentially hours as it tries to figure out what can be swapped out or thrashes the CPU. It likely *will* recover, unless there is a runaway program, but you're often better off hard resetting the computer at that point as a reboot is much faster. That said, there is a decent way to configure a machine for general use, when swap is present, that I think *should* be default for user-friendly distros. If you push the user-level programs into a memory (c)ontrol group, which is configured to use say 95% of the system memory max, then if the user launches something that tries to use too much memory, the system will go unresponsive, but the tty login (or ssh) will remain available. Trouble is c-groups are relatively new, so adoption is still rather slow.
I wonder how much performance you are leaving "on the table" though. CFD calculations are mostly ram speed limited so making ram painfully slow by replacing it with compressed SSD sounds like big loss, much bigger than anyone reading your comment would expect (in your comment it sounds like good replacement). You would need to test it but it looks like upgrading to 64 or 128 gb of ram (if possible) would give you massive advantage.
@@cyjanek7818 Depends a lot on the particulars of the project, and the machine used. If you are ram-speed limited, you can actually get better performance from compressed zram, as the compression is done in CPU caches, so the data sent to and from ram is reduced. That obviously trades more CPU time, but if the calculation can only effectively use a portion of your CPU (in the obvious case if it is single threaded), you can get a bit extra effective I/O performance by devoting a couple of cores to reducing the bandwidth used. Even then, you do pay a pretty significant *latency* hit, so it's only useful with certain workloads. Specifically, when you are doing time-dependant CFD, and the program doesn't manage tiered storage itself, it will keep data from the first time slice around until the end, even though it may well never need to reference it again. In this case, compressing that data and swapping it out will keep the program happy, and give good performance. The swap algorithm itself picks the least recently used data (disk cache or program memory) to compress/write out, so it will naturally pick the unused/old pages. This can be fine tuned by giving the cgroup a low point to *start* swapping, so that it never ends up with significant memory pressure. It's also important to tune the amount of compressed vs non-compressed memory so that the actual working set has enough memory. For my last project, that required keeping at least 24 gb of normal memory for the program, and then 24gb of compressed high speed ram, with the balance written out to nvme, compressed. I worked out those numbers by turning on some extra metrics to observe how often data was getting pulled *back* from swap. It would read data back from the compressed ram at the start of each new "frame", but not within a frame. It only read data back from the nvme storage at the end. Would having an extra $200 of ram sped the process up? Maybe by a couple minutes, out of a week of computation. At some point I may get some extra ram just long enough to compare, but I honestly don't think it would make that big a difference. Also, the same argument will hold the whole way up. 32gb of ram using zram lets you do CFD needing up to 100gb of virtual memory. 128gb of ram would let you reach 512gb projects without needing to go to threadrippers. 2tb eypc platforms would let you have an 8tb simulation.
You could’ve set the swappiness to something like 99 instead of the default 60. That would’ve put stuff into swap at 1% ram usage (instead of the default 40%)
Poping out for 15 minutes to get milk 🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛
I remember setting up a page file back in windows 95 on an extra 500Mb drive. It actually sped up my system quite a bit. But this was back when 32Mb RAM was considered a good system.
Do you remember Windows 7 came with a feature to use a pendirive as a virtual memory. Maybe it worked the same way. You should still be able to do it. If i remember right it requires minimum 4gb capacity pendirves to work(could be wrong).
and I actually think there needs te be a follow up video . By just moving the pagefile to the secondary ssd in windows there would be a significant improvement to having it on the same disk as windows. But why stop at one ssd ? Shouldn't it be possible to instead of one ssd use a raid 0 of multiple ssd's ? And instead of using sata ssd's maybe use m.2 pci express gen4 since I am pretty sure Linux has acces to motherboard with a lot of m.2 slots.
@@Gurj101 You're referring to ReadyBoost which does work in a similar way. Any size will do, as long as it has a couple hundred megs of space, though you need at least a gig to see any benefit. You can only use now it if you have a 100% hard drive system though. If you have any ssds installed at all it'll hide the feature - since swapping to the slowest ssd will always be faster than the fastest usb drive. It's still around though, even on Windows 11
Way back in the MS-DOS days, there was a trick for having "expanded memory" keep the bulk of the stuff in upper RAM compressed and also do a swap file if needed. If made programs that needed it go very slow indeed. There also was a trick mostly done by Borland, that allowed you to make a program where most of the program didn't load from disk unless needed. Stubs for all your subroutines in the main program and kept a list of the ones it currently had in RAM. If a subroutine was needed but wasn't in RAM one of the ones you haven't used in a while was removed from RAM and replaced with the one you needed. The result worked surprisingly quickly because you could often break a program into parts that mostly didn't interact.
Nowadays programs are placed into memory by mmap. Basically you tell the OS I want this file accessible from RAM. It doesn’t actually load it into RAM though, but it promises it will be there when you need it. Then, when you touch a part of the file (for example by executing code from that file), the memory management unit in the CPU will trap the execution, pass control to the OS, which will load in a “page” (usually 4Kb) of the file to RAM and the let the program continue executing. If a 4kb page hasn’t been accessed for a long time, the OS might unload it to free up RAM. If a part of a file is not used, it doesn’t need to be loaded either. And if another program has already loaded a program library, both programs can share that library in memory (it doesn’t need to be kept in memory twice).
@@nsa3967 zram is great. It does take a while to initialize on boot though, so on a system with "enough" ram I would still advise against it. And of course there is some overhead due to the compression, but that runs in hardware these days so its not that big of a deal if you need more RAM.
DOS had only 640k addressable memory and the extended memory was not compressed at all. DOS had no swap, real time OS-es can not have a swap by design. Stop misinforming people. The dynamic load in Borland was a thing but it was just annoying mostly, especially when running from floppy disks.
At 5:50 -- This setting can be tweaked around by the "Swapiness" value. It essentially tells the kernel how likely it should use Swap. Since Linux also does eager swapping to clear out *actual* RAM, this is quite a handy feature.
Reminds me of "no replacement for displacement"; at the level of performance we expect today, you can't really cheat your way out of using the correct hardware.
@@flandrble It's funny you bring this up, because compressed pages are literally equivalent to turbochargers. Sure, you will get more available RAM space and little performance hit in optimal situations, but there ain't such thing as a free lunch, so you're paying for that extra space by having to spend CPU cycles decompressing it on a page fault. Sure, if you have lots of RAM to begin with, on average it will probably actually speed up your system. However, compressing pages on a RAM-constrained system will tank your performance because you'll just end up thrashing at lower utilization and hitting the expensive decompression path a lot instead of using all your available RAM as it is. There's no replacement for more place to store ones and zeroes. More RAM is more better, and (unlike going with a bigger engine) it does not result in any significant disadvantage outside of higher cost. This is very similar to the argument for/against turbos. A turbo can deliver much of the same power in a smaller engine as a bigger engine could, but it has drawbacks such as having to wait a bit for all the power to be immediately available. If you have a very small engine that you're always redlining, slapping a turbo on it will not help you much. However, they are useful for squeezing extra performance out of engines when they're optimally specced out.
8:00 you can see this in action if you have 8 GB of ram and while playing a game you tab out to another program like your browser. It will often hang or slow down for a few moments because the OS has moved the browser's stuff (especially background, inactive tabs) to the page file to free up the ram for your game. This kind of stuttering when multitasking is easily solved by adding more ram and is why pc builders recommend 16 GB even though you could play most games with less.
@@yesed You should just swap to Brave. My system admittedly only has 4 cores and I don't play games on it. I do use it for circuit design simulation though which is a pretty heavy load. 16 gigs and I've never seen it use the swap file. To the extent that I turned off the swap about 12 months ago. Haven't had a crash yet.
My experience with network-based swap is usually like: - The swap is provided by some userspace software (like sshfs, and here I've heard of smb, not sure if it's completely in-kernel), - This software gets swapped, - Unable to unswap pages which are needed to unswap pages.
Part 2 idea: Zram. It compresses less used stuff in the memory in real time with less CPU overhead than you'd imagine, giving you an effective ~2.5x the system memory. It really works!
@@MiniRockerz4ever ZRAM is a tradeoff, you're saving ram at the expense of CPU usage as CPU needs to constantly zip and unzip the data in RAM. AND RPi 400 is more CPU-limited
Very non-deterministic, and it burns cpu at a time when the cpu tends to be busy. Not recommended. It's actually better to just page the data to a fast SSD these days, with as low a CPU overhead as possible. There have also been attempts in the past to have compressed swap (effectively compressed memory when the swapfile is on a tmpfs filesystem). NeXT famously had compressed swap, and it was a disaster. These days you just want to configure a SSD partition and point swap at it. A swapfile works too but isn't quite as deterministic (in linux its about as fast as a partition, but it goes through a filesystem layer that has to record all the block ranges on the underlying raw device, which is a bit risky because the filesystem can't reallocate those swapfile blocks. You have to trust that the filesystem implements it properly). Another reason to use a swap partition instead of a swap file is that you can TRIM the swap partition at boot time. Thus your swap partition serves as extra SSD space for wear leveling since 99% of the time your machine doesn't actually have to use much swap. Highly deterministic on all fronts and very desirable. -Matt
Funny thing is, zram actually expects the compression to be half or even a third (so double or triple the space) That is, if you had 8GB of RAM, you could have 16GB of zram-swap. It will be slower, but way better than storage-swap. And I really recommend Zstd as your comp alg, it's fast and decent
@@shinyhappyrem8728 those files are already compressed and they cant be compressed further, even a high level zstd can only compress with a ratio of 99/98%
There were programs that we installed on our system in the '90s such as "virtual ram" that worked without crashing your system. I imagine the single processor systms and operating systms of those days were so slow that it was not as big of deal (they did mention that while improving your multitasking, your computer would take a speed bump.). In those days we were used to waiting for things to load, so it was quite tolerable.
I remember when I was trying to get an an oldish mini PC that only had 2 gigs of RAM to not lock up all the time, I configured Windows to swap to a 16GB mSATA SSD that I had for some reason whose only function was to act as the page file. It worked wonders and that computer basically never crashed after that.
It would have been nice to mention that having a swap allows you enable hibernation by writing the contents of your ram to your drive so that it can cut power to the DIMM slots. It has been helpful for me on my laptop.
I am an IT and telecommunications student and these videos are so much more interesting to me than regular reviews and which gaming laptop has better speakers. Looking forward to lab content.
Cool to see more content involving linux! When linux runs out of memory it is supposed to invoke the "OOM killer" which just ends a process forcefully. I'd expect desktop linux to freeze or dump you to tty in some edge cases. I think there must be a bug or some kind of problem, because it should not be crashing at all.
in the 'dump to tty' case, I would also expect the display manager get restarted automatically by systemd. So you might see the tty for a couple of seconds, but then get the login screen.
@@YeaSeb. I agree, for some reason default OOM killer is useless in most cases. It tries its "best" but actually makes it worse by allowing all caches to be disabled. When it takes >10 minutes to just switch to a VT you know that recovery is not really an option.
And windows is supposed to do that as well in oom but I have gotten bluesceeen so no you are wrong not always the case thing is oom killer probably needs ram but if everything is full including swap it will crash. I work with 100s of gigabyte of memory at work for my software development.
Its more difficult to accomplish than people think. It is fairly easy for the kernel to deadlock on the auxiliary kernel memory allocations that might be required in order to accomplish something that relieves user memory. Paging is a great example. To page something out the kernel has to manage the swap info for the related page, must allocate swap space, might have to break-up a big-page in the page table, might have to issue the page write through a filesystem or device that itself needs to temporarily allocate kernel memory in order to operate properly. The pageout daemon itself has to be careful to avoid deadlocking on resources that processes might be holding locked while waiting for memory to become available. The list is endless. Avoiding a low-memory deadlock in a complex operating system is not easy. This is yet another good reason why swap space should not be fancy. Swap directly to a raw partition on a device. Don't compress, don't run through a filesystem (though linux does a good job bypassing the filesystem when paging through a swapfile), don't let memory get too low before the pager starts working, don't try to page to a swapfile over the network (NFS needs to allocate and free temporary kernel memory all over the place). The list goes on. Those of us who work on kernels spend a lot of time trying to make these mechanism operate without having to allocate kernel memory. It is particularly difficult to do on linux due to the sheer flexibility of the swap subsystem. The back-off position is to try to ensure that memory reserves are available to the kernel that userland can't touch. But even so, it is possible (even easy) for the kernel to use up those resources without resolving the paging deadlock. Regardless of that, paging on linux and the BSDs has gotten a lot better over the years and is *extremely* effective in modern day. To the point where any workstation these days with decently configured swap space can leave chrome tabs open for months (slowly leaking user memory the whole time) without bogging the machine down. Even if 25GB of memory winds up being swapped out over that time. As long as you have the swap space configured, it works a lot better than people think it does. -Matt
Seems useful in a pinch for edge-case workloads, but I imagine constantly hitting an SSD swap space would dramatically reduce the SSD's lifespan, while DRAM basically lasts forever.
I used a cheap DRAM-less Kingston A400 120GB just for Windows pagefile for a year and after that as a OS drive. It still works fine but according to CrystalDiskInfo the health status is only 64%, it has 18TB host reads, 19TB host writes, 37TB NAND writes, 20k power on hours and 161 power on count
I have used a 750 GB NVME SSD as "extra RAM" for probably 3 years by now, at times using it hard (i.e. it writing 200-300 MB/s for hours straight several times per week) and I have not noticed any problems yet. I was thinking it would probably start breaking after a few months but no, still going strong. Getting that SSD was the best hardware investment I've ever made, that amount of RAM would have cost a fortune.
9:36 this is probably why we're collectively moving to zram over normal swap on Linux because from the looks of it compressed ram is still better than swap (outside of hibernate).
@@zenstrata The amount of work it would require to get actual information from raw RAM data makes it not worth it anyway (even if they did send data to google which they didn't cause it didn't work x))
You can do amazing things with the Linux virtual memory subsystem. Fun facts: if you need a fucked-up amount of swap (I have no data on 10tb of network swap though!) you can make the performance degrade a little more smoothly by increasing the swappiness. Basically if you know you are going to be needing swap no matter what, you can instruct the kernel to proactively swap when memory pressure is low. It will slow down sooner, but it will be a smoother decline rather than "wow it just locked up for 2 hours immediately after I started using a bunch of RAM" If you have a bunch of different slow media, you can put a swap partition or file on each of them and assign them the same priority in /etc/fstab. The kernel will automatically stripe swap data across them. So basically a RAID 0 array just for your swap. You can also tell the kernel to keep certain files or entire directories cached in RAM, which does something very similar to profile-sync-daemon or anything-sync-daemon, just with fewer steps.
@@Iandefor I found another comment about adjusting the 'min_free_kbytes' parameter to resolve page-fault related freezes helpful. My "new" computer has 8GB of DDR3 ECC RAM (maxed out essentially). I had been blaming occasional freezes on the possibility of double bit errors: until I learned today that Linux is supposed to be able to recover from that if kernel memory is unaffected. 8GB is adequate enough that I don't want to buy new until the chip shortages are resolved.
Ok, hear me out: 6 workstations, 1 set of RAM. Setup a server with a massive amount of RAM, setup a RAM disk on the server, share that RAM disk to the workstations using an inifiband network (lower latency than ethernet) and then setup a swap space on the network drive.
At this point you basically just do what a lot of retail stores do with their till systems (even if most the staff don't actually know it), and just have every workstation just be a client device that accesses a virtual machine on the same server.
It has been tested but basically the answer is... its a waste of money. Pageouts to swap are asynchronous, so latency is well absorbed. Pageins do read-ahead, so latency is fairly well absorbed (random pageins are the only things that suffer).... but honestly, unless the system is being forced to pagein tons of data, the difference wont be that noticable. The latency of the page-fault itself winds up being the biggest issue and you get that both ways. Now 'optane as memory' (verses 'optane as swap device') is a different beast entirely because there is no page fault. The CPU essentially stalls on the memory operation until the optane module can bring the data in from the optane backing store. So latency in this case is far lower. Still not nearly as low as main memory, of course, but significantly faster than making the operating system take a page fault. -Matt
@@junkerzn7312 That's interesting. And as optane is insanely expensive that result is believeable. If a workload depends on insane amount of ram, last generation memory is quite affordable. (My photogrammetry server is still using DDR3) Fast ssd or optane might be more worth it if installed as zfs cache on hard drives
@@jimbo-dev the difference is the 'insane amounts of memory' part. With Pmem 200 series and a dual CPU config, you can have 6TB of optane as memory in a single machine, assuming 12 slots per cpu. You could then also have your 768GB of DDR4 per cpu. Optane is expensive, but I think generally it's less than the equivalent in DDR4.
This is a really good video for an introduction to swap space. One thing worth mentioning is another factor reducing the use of swap space besides cheaper RAM is the prevalence of SSD. When your swap space was on a HDD it was slow but didn't have any side affects other than a loss of 16 - 32GB of capacity to a swap partition, however since swap experiences a lot of random reads and writes there is a good chance it'll impact the lifespan of an SSD it's put on.
fun fact: you can also do the opposite and mount a specified amount of RAM as a temporary filesystem (tmpfs) to use it as a disk. You could try to install a small game on it which should have blazing fast loading times :-)
My ASUS motherboard had a similar program for Windows bundled on the CD. It automatically syncs data to the hard drive, but keeps it in RAM when the system is running. However, most games don't really benefit from a ramdisk beyond what you can achieve with an SSD as you are typically CPU limited at that point anyway.
if you do lots of high memory usage tasks, I always recommend having about as much swap as you have ram for workstation configurations. modern operating systems are actually really good about what gets swapped and what remains in ram and it prevents systems from crashing when you run out of main system memory. my workstation has 128gb of ram, so I have 128gb of swap (spread out over my 3 nvme ssd's). this has improved my memory handling drammatically because stuff that isn't used RIGHT NOW gets swapped out leaving the remaining memory free for applications that require the lowest possible latency and highest possible bandwidth.
There is also another somewhat funky configuration you can apply to swap. You can basically also swap to RAM, which sounds completely stupid, but if you combine that with compression, it actually makes some sense. Assuming a good compression ratio, which can be as much as 50% in many typical workloads, you can basically end up with a fast (you only pay for compression/decompression) swap space and can technically squeeze more stuff into RAM, giving you "extra RAM for free".
A lot of distros do this by default. Fedora uses zram which is almost exactly what you described with a few small differences. It's worth a read (and setting up on distros that don't have it on by default.)
ACTUALLY... In the distant future you might be able to download pretty much anything over a network that feeds it into your 3d matter printer. So you could download as much memory as you need for your new PC.
I remember Windows had some feature that allowed USB drives to be used as extra RAM. I remember using it back in the day, when "Spider-Man 3" game was released and it did help a little bit, by giving me maybe like 2 FPS to original 20-something I was getting in the game.
Am, I expected to see more from you guys. I used 2Tb of swap (which was placed on NAS server using NFS protocol) a few years ago (seems it was 2018) and it wasn't something surprising for me. Another thing is that you shouldn't recommend to turn off the swap, on the contrary I expect that you should know that linux kernel is smart enough for not using SWAP when it is not necessary, i.e. it always speeds up your system and it will never work slower than without using swap. Also swap is used for hibernation, so I recommend to leave it enabled (but I don't recommend to use network storage for swapping because it can crash your system since connection will be dropped for some reason, e.g. during hibernation).
One big advantage of swap is when you have stuff that takes (lots of) RAM but doesn't actually use it to often. (Think VMs) There the performance hit isn't to dramatic since it's infrequently accessed anyways.
If you tried cod warzone on a swap you would see incredibly low framerates. I had a friend where warzone was hitting the swap file and tanking his fps. Bumping his ram from 8 to 16 gigs worked wonders to fix it.
The computer I'm using right now has only 8GB. It seems to work fine. I can record streaming audio, print documents, and have a bunch of browser windows open simultaneously, along with a number of other running programs, all without a problem. I do a lot of work on my computer. I currently have 20 browser instances running with multiple tabs on each, genealogy software (2 instances), a couple of file folders open, and Winamp shuffle playing a 200+ playlist of 70s hits. After a couple of weeks like this, I will finally gag the memory and have to save and close everything for a reboot. I guess I could go months between reboots with 32GB of RAM. I built the computer myself over 10 years ago, so I suppose I could add more RAM. I did replace the hard drive with an SSD. I've been at this long enough that my very first hard drive was 20MB and cost about $700.
This is seriously a great introduction to understanding the memory model that computers use-and way more entertaining than the 400-level computer architecture course I had to sit through when I was in college. Nice work!
@@iMasterchris the keyword is “introduction”. The specifics of caching, paging, and virtual addressing really don’t need to be included here; most programmers barely spend any time thinking about them unless they’re doing kernel/embedded/high-performance development
When I first go on the Internet in 1988, it was on Sun 3/50 workstations. We were running them diskless. They had no hard drive (or local storage of any kind besides RAM) at all. All files were served over Ethernet running at the amazing 10 Mbit/second. 10BaseT, which means subnets all shared that same 10Mbit/second (with collision loss and too). They DID have swap space. ON THE NETWORK. Swap was required back then, not optional. Swap had to at least match the memory. So swapfiles were network mounted. If these systems ran out of memory, they could continue to run, swapping over the network. The system was basically unusable at this point, but if a user was desperate because they had unsaved work, we would spend the half hour to run the commands needed to kill off processes until the machine became usable again.
Swaps are very important when you compile with all threads and use pipes, or when your app starts writing more and more data without stopping. I recommend people have their swap partition as big or twice the size or RAM, it's simply a good practice that'd save you more crashes than frames.
@@hawkanonymous2610 if you spend 5 minutes to look at statistics, most gamers have either 8 or 16 gb or RAM, but I'm also counting servers and virtual machines, so I assume the average is 8. Obviously if you have 32 gb of RAM or more you shouldn't be mirroring all of your memory, I'm not assuming people's lack of common sense.
You will never see this... But to prevent Linux from crashing with swap you need to configure the reserved portion of system memory that the Kernel will keep free no matter how many programs try to allocate memory. These parameters are controlled through sysctl, involved settings include lowmem_reserve_ratio and user_reserve_ratio.
is it generally necessary to alter these properties? If so, why aren't they the default? In Linux kernel world, I have generally learned to trust the default parameters.
@@philuhhh It isn't unless you try to open half a dozen programs at once even though your memory is already full. Which is what the benchmark / stresstest they have is doing. These settings have performance implications as you can't fully utilize all of system memory anymore, so not having these as defaults outweighs the niche usecase of people opening 10 memory heavy apps at once with full system memory. Everyone is free to configure the memory subsystem for their usecase which is one of many things what makes Linux so flexible.
What Linus might didn't know: swap does not extend your memory it only works when the system is able to swap. See if your active programs (so called processes) need more than 8 GB of ram - let's say compiling or gaming they allocate on the heap. If this can't be swapped out the process crashes (or your system) so having Problem with chrome tabs? They get swapped to disk.. having all occupied by one process - like video editing - nope it will crash
It does... I only have 8gb of ram. My games crash and it says "windows ran out of system memory." Games used to run on more ram... Now I can see the ram filling up to 99% when I play and then sometimes hits 100 and crashes. Also when I close/crash the Avengers game I see the ram usage go to 20%, that means everything else has been swapped/paged out of system memory. Then the ram gets filled back upto a normal 50-60% usage that I generally see in everyday usage. With chrome or edge open it reaches 92-93% usage but never goes beyond that, so I assume its swapping my tabs. Pov: I had 12 gb of ram but one stick stopped working. Now I am left with 8gb.
0:15 Oh you just increased your swap partition. Meh. Windows uses a Page file, linux uses either a swapfile or a swap partition. On old spinning disks,you could set the swap partition to be on the outer edge of the disk, where read/writes are done faster than the center of the disk. Or so I've been told.
As an aside I remember a time when hard drives were slow enough that compressing data transfers was useful for speed as well as space ...and there was a time when compressing RAM was a reasonable way to get more memory 'space'.
The intention of SSD swap is to avoid system crashes when running out of RAM. The intention is not to replace RAM, because it is much slower. If not even crashes can be avoided by enabling SSD swap (like in the video), I can see no point in using it at all.
9:00 what is actually happening is that the system is using extra available RAM to cache the drive access - because there is extra available to do that. It won't try to cache the drive access using the swap file, because the swap file is ON the drive (or at least a similarly slow device).
The section on memory hierarchy is something we cover on our Higher (year... 12? Dunno how it compares to US/Canadian schools) course, and nails it in such a nice, short, compact format that I'm putting it into our class resources. Thanks!
Just make sure when minimizing your swap space/page file/whatever that you leave enough memory to dump error information. I think for windows it's 800 MB which is barely anything out of your 128 gig boot drive and isn't enough for windows to use if it somehow runs out of ram there's also the consideration that you don't want to constantly running swap read/writes on an SSD but I'm not sure that's as much of a problem nowadays than before. Either way if you are actually running out of system memory (and not say, trying to play 4K 120 fps on a GPU where that simply isn't going to happen) then please consider upgrading the memory I have an odd configuration of 64+32 = 96 gigs of ram. But that was because I was using ffmpeg instead of a more efficient program to combine 4K mp4 files. I don't use that program any more so i guess technically i don't or didn't ever need that much ram. shrug
read/write errors/corruption/destruction can still pop up from ssd. It's really about how you write to what the gold standard is large arm cache with further dram cache, but there's virtual slc cache on tlc and qlc, and the question of corrupted eprom is hopefully solved. Flash drives were a typical example of a device where os style read writes used to over hit singular sections, like the usb firmware part, and fry the device while the rest is good. Better firmware that acts like auto trim and treats tlc/qlc like slc has fixed some of the wear leveling issues
One point that I feel should be touched on at the end about forgoing the swap file is that swap is used to hibernate your system. RAM is considered volatile memory, because without power you can't guarantee what's stored there after the computer is powered off. When your computer hibernates, it writes state information and dumps the information stored in ram to the swap file before fully powering off. While some people prefer not to use a swap file, they are also the same people who don't want to hibernate their computer, and know enough to make this decision. If you don't know whether you want/should hibernate or not, you lose very little by having a swap file until you learn enough to feel confident with your decision one way or the other.
I have activated another 8 GB vRAM in addition to my 8 GB RAM. With Samsung cell phones, this is easy to do with the software. (Original) With my s21 I was able to double my ram.
SWAP should just be considered as a RAM backup - so that if you max out a server’s ram, for example, it makes use of SWAP instead of crashing. But if a server is regularly needing its SWAP, it’s time to upgrade the RAM (or reduce the servers processes, optimise RAM utilisation, etc)
Using NFS instead of SMB would definitely help a lot with latency on smaller files. That said - it would still lose badly to any local SSD due to the latency involved in any network protocol, so there probably isn't any way to make such a thing viable.
A mate of mine many years back made a really dumb library, Basically it made used SDL2 to make an OpenGL Context, and use an extension (I think it was ARB_Pinned_Memory or something like that), and overwrite libc's malloc with his own malloc. The end result being that if the libc malloc failed, it would then try to allocate a buffer in VRAM instead.
With Linus it's never "click-bait". It's more like "click-ahhhh..ok..makes sense"
Its click bait tho🧐
@@deher9110 Its not clickbait just because it made you wanna see it, that's just good marketing
Clickbait has to be false. Literally "bait" for something else
@@stitchfinger7678 he said he would use Google drive and proceeded to make a video about pagefile
@@pumbi69 he put google drive as a drive and then assigned a pagefile to it, which means it used google drive as ram. but it crashed because of google drive's security stuff. then he actually did it with their own server that didnt have that security stuff in it
A video every day is cool and maybe it's just a division in audience but I'd rather slightly fewer videos if it meant more quality. These past videos just feel like they took a random click bait like title and then figured out how to pad it into a 10 minute video. That beginning google drive served no purpose just to pad, at least find another cloud service which allows random reads and writes. But the end result was really "swap is sometimes somewhat useful but otherwise, use ram". Who is that aimed at
with the current prices of GPUs, I’d love to download one
account buyer
Xbox Cloud, Stadia, etc
facts
*GeForce now enters the chat*
You can stream one so…
5:50 You might be able to do swap on network drive if you increase /proc/sys/vm/min_free_kbytes to reserve enough buffer memory to not hang while trying to allocate buffers for pushing bytes to network drive. Maybe also increase /proc/sys/vm/swappiness and /proc/sys/vm/watermark_scale_factor to make kernel more aggressive pushing the data to swap. And you could also try increasing /proc/sys/vm/page-cluster to somewhere in range 6-10 so move bigger blocks at once - trying to move 4 KB blocks with random access would hit hard.
Swapping to remote network drive could make sense on L6 or L7 level. Before that you should swap to zram, SSD, HDD and fall back network drive only after every other swap device is full. Just set slower swap storage with lower priority and it will not use if there's any free space on any higher priority storage. Of course, as you correctly figured out, getting anything back from network drive would hurt you really bad. Like getting data back may take something like 100 KB/s.
We went from always having virtual memory to "what's virtual memory?" within like a decade. Wild
There are still use cases for virtual memory, but anymore if I'm going to be doing something that is that ram intensive that I need that much ram, I'll just use the dated machine with maxed out ram and a 1 tb ssd as a swap device.
But, I also know not to expect snappy performance when using a machine to process 100 gig scenes.
Says more about today's average user's knowledge level than the computer hardware to be honest.
@@DareDevilPhil Anymore ram is cheep and once you have 16 gig or so you are pretty much set for most games and applications. Unless you are getting really deep into it, and even then very seldom do you need to load it all into memory at once.
Unless you are dealing with some poorly coded applications. Or are working with several raw video frames at a time or some crap like that.
If I'm baking texture maps or doing large scene renders I'll use my older machines just because sometimes it is nice to toss 500 gig of server ram at a problem and be done with it.
Your C drive is defaulted to a Page File on Windows. It still has virtual memory...
@@CheapSushi unless you.. change the settings from default?
When I was doing my masters degree one of the teachers actually mentioned that one of the other teachers did this back in the 80's when he was a student.
You could request RAM from a mainframe server that was running at the college.
Latency was bad but at least you got RAM.
That’s actually pretty damn cool. I wonder if that college you went to still has that.
@@GavinFromWeb They didn't used that anymore when I was there in around 2005.
But we had another cool thing. Our lab computers had a boot loader where you could select which OS you wanted to run. The image would then get downloaded from a server and you would have a fresh OS installed in less than two minutes. It would be a native install, not a virtual machine.
This was all developed at the university, way before any of this stuff was commercially available.
Back then, if you had a class requiring mainframe/mini access, you got a weekly budget in dollars to spend on processing time. Something like $25 and cpu time cost about 50¢/hr.
What's the point of RAM if the latency is bad?
Is the use for that that if a program requires more ram than you have?
Memcachedb does this!
It's lower latency than spinners usually.
Now that we have not-expensive NVMe ssds and expensive RAM I imagine datacenters use ssds instead though.
So even LINUS knows the pillow is over priced! I knew it!
We all know!
@BZE3255Officials yes?
@@Rabbit_lol. yes??
@@brysonwestmoreland7359 i changed my name
@@brysonwestmoreland7359 yes???
I did something like this in my PHD. Several things you might want to try:
1. Using cgroup to explicitly control processes can eliminate most of the crashes and it improves the performance of the system overall.
2. Using Intel Optane (even the low-end 16G model) as swap is much faster than swapping to a local SSD. A lot of larges models that needs tens of TBs of memory relies on Optanes.
3. The performance of swapping inside a VM is better than swapping outside a VM.
as someone who spent days days optimizing his swap i can tell you this is correct but reality is you should not in any way hope for swap for actualy doing tasks its a waiting place at best. this days its main uses are VMs and hibernation
@@bigpod Yes! Swap is definitely going to be slow no matter how you optimizes it. Swap is accessed in the size of memory pages (2K-4K) while CPUs access memory in the size of bytes. Which means even you "swap to a memory", it is still going to be slow. But sometimes, the goal of swap is to simply makes it possible to run some programs (especially those programs that needs 10TB+ memory).
I don't understand a single thing between these Einstein's but I'm here anyways
"Using cgroup to explicitly control processes" - what do you mean by that? Do you mean setting per-process- (or per-cgroup-) RAM limits, so to delegate the memory management to the applications instead? If so, wouldn't that be the same as turning swap completely off, in other words, do you suggest we shouldn't use swap at all? And of course, what if those processes actually *need* the memory and then ... die? Also, why do cgroups improve the system performance overall?
Just curious, thanks for your input! :-)
@@entropyxu4044 while yes it can allow you to run those programs like SAP Hanna for example (being the most apt example) performance of such app will most likely be horrible (and is horrible) and will most likely only work for small examples(where 10TB+ is not needed) or sparsly accessed programs.
but lets remember there are fairly small amount of such programs(databases and research stuff) where many of those programs can be relegated to a server room or even a computing cluster or in some cases even supercomputing clusters(colloquially known as supercomputers)
You might have missed the "vm.swappiness" kernel variable... It kinda controls how likely the kernel is to move data into the swap file/partition.
(btw, no - i'm not recommending actually using swap as part of your system memory, I'm just adding a little thing you could've tested)
I actually needed swap memory a couple of years ago, when my 32 GB of RAM weren't enough for a complex database job.
Yeah I'm surprised he didn't talk about the swappiness variable
@@waldolemmer he personally doesnt, but he has a team of writers and well, anthony :P
Yes. And he needs to set the command to 100 - for aggressive swapping! Muahaha.. Keeping in context with the video lol
with full real memory utilization, it very likely swap would be used without any variable. Why to use all these synthetics and settings cheating?
This reminds me of the scammy “memory enhancement software” in early days, some of them actually used LZ77 compression for user space memory, but most of them only changed the size of the page file.
I used some of them back pre-pentium days.
Well it wasn't a perfect solution, at least it gave your hardware a little more breathing room until the next generation came around. Also could be used for games because that's what I did as well.
That was the ones that actually use compression
Scientific tasks are actually an area where swap, especially tiered swap, really shines. Specifically if the modeling software needs to generate a huge dataset but only rarely needs to look back at data from near the start of the simulation.
I sometimes do computational fluid dynamics (CFD) on systems where the working set of data reaches into the low 100GB range, on a system with 32 GB of ram. 20GB is enough to hold the volatile working set easily, so I actually configure it to use the next 12GB as compressed ram swap (zram), which manages a 3:1 ratio, and trades CPU time to still have low latency. This gets me to 56GB of effective ram, before touching the next storage system. For a while I actually then layered my GPUs VRAM in, as it is higher performance than my SSD, and it's a 16GB card, with zram on top of it that's an extra 45GB (leave a bit to actually drive your display), for a total of 101GB before you have to touch a disk. (Unfortunately, the GPU userspace memory system has gotten a bit unstable, so I had to drop this level). Data which doesn't compress well, or the oldest data once the compressed space is full, gets written out to disk, needing between 10 and 30 GB for most of my projects.
If you are using the system this way, you need to set up memory cgroups or you're in for a bad time.
It needs to be automatic as well. If a user doesent know about it, it won't be used.
@@Wyld1one Unfortunately it's the kind of prolem that is rather resistant to automation (and if you're working in a domain where it's needed, you likely know what to do). There are lots of knobs to tweak, and what is sensible behavior in one case may be exactly wrong in another. For example, if you don't have swap, and run out of memory, the OOM killer picks the program most likely to be leaking or hogging memory and kills it. This keeps the system responsive, but obviously killing a semi-random program can be bad (you can give the OOM-k hints so it gets the right program, but it's fiddly). On the other hand, if you try to have the system just swap, and slam it with too heavy of a load, the system will go unresponsive for potentially hours as it tries to figure out what can be swapped out or thrashes the CPU. It likely *will* recover, unless there is a runaway program, but you're often better off hard resetting the computer at that point as a reboot is much faster.
That said, there is a decent way to configure a machine for general use, when swap is present, that I think *should* be default for user-friendly distros. If you push the user-level programs into a memory (c)ontrol group, which is configured to use say 95% of the system memory max, then if the user launches something that tries to use too much memory, the system will go unresponsive, but the tty login (or ssh) will remain available. Trouble is c-groups are relatively new, so adoption is still rather slow.
I wonder how much performance you are leaving "on the table" though. CFD calculations are mostly ram speed limited so making ram painfully slow by replacing it with compressed SSD sounds like big loss, much bigger than anyone reading your comment would expect (in your comment it sounds like good replacement).
You would need to test it but it looks like upgrading to 64 or 128 gb of ram (if possible) would give you massive advantage.
@@cyjanek7818 Depends a lot on the particulars of the project, and the machine used. If you are ram-speed limited, you can actually get better performance from compressed zram, as the compression is done in CPU caches, so the data sent to and from ram is reduced. That obviously trades more CPU time, but if the calculation can only effectively use a portion of your CPU (in the obvious case if it is single threaded), you can get a bit extra effective I/O performance by devoting a couple of cores to reducing the bandwidth used. Even then, you do pay a pretty significant *latency* hit, so it's only useful with certain workloads.
Specifically, when you are doing time-dependant CFD, and the program doesn't manage tiered storage itself, it will keep data from the first time slice around until the end, even though it may well never need to reference it again. In this case, compressing that data and swapping it out will keep the program happy, and give good performance. The swap algorithm itself picks the least recently used data (disk cache or program memory) to compress/write out, so it will naturally pick the unused/old pages. This can be fine tuned by giving the cgroup a low point to *start* swapping, so that it never ends up with significant memory pressure. It's also important to tune the amount of compressed vs non-compressed memory so that the actual working set has enough memory.
For my last project, that required keeping at least 24 gb of normal memory for the program, and then 24gb of compressed high speed ram, with the balance written out to nvme, compressed. I worked out those numbers by turning on some extra metrics to observe how often data was getting pulled *back* from swap. It would read data back from the compressed ram at the start of each new "frame", but not within a frame. It only read data back from the nvme storage at the end.
Would having an extra $200 of ram sped the process up? Maybe by a couple minutes, out of a week of computation. At some point I may get some extra ram just long enough to compare, but I honestly don't think it would make that big a difference. Also, the same argument will hold the whole way up. 32gb of ram using zram lets you do CFD needing up to 100gb of virtual memory. 128gb of ram would let you reach 512gb projects without needing to go to threadrippers. 2tb eypc platforms would let you have an 8tb simulation.
You could’ve set the swappiness to something like 99 instead of the default 60. That would’ve put stuff into swap at 1% ram usage (instead of the default 40%)
That isn't what the swappiness value means. It's much more complex than that...
At 1% _swap_ usage?
I love that 'swappiness' is an official term that grown Linux users are required to use.
Perhaps also the vm.vfs_cache_pressure to a high value, like 200
@@TheFrantic5 No one discusses Swap on a daily basis. You either use it or you don't. I personally don't bother with it.
LTT is a god at roasting 6:33
The way his expression never changed
While he casually mentions where everyones dad is.
Poping out for 15 minutes to get milk 🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛🥛
I remember setting up a page file back in windows 95 on an extra 500Mb drive. It actually sped up my system quite a bit. But this was back when 32Mb RAM was considered a good system.
Just not having it on your main drive is a bit boost
Do you remember Windows 7 came with a feature to use a pendirive as a virtual memory. Maybe it worked the same way. You should still be able to do it. If i remember right it requires minimum 4gb capacity pendirves to work(could be wrong).
not that anybody would need to do it since most systems have 8gb ram these days
and I actually think there needs te be a follow up video . By just moving the pagefile to the secondary ssd in windows there would be a significant improvement to having it on the same disk as windows. But why stop at one ssd ? Shouldn't it be possible to instead of one ssd use a raid 0 of multiple ssd's ? And instead of using sata ssd's maybe use m.2 pci express gen4 since I am pretty sure Linux has acces to motherboard with a lot of m.2 slots.
@@Gurj101 You're referring to ReadyBoost which does work in a similar way. Any size will do, as long as it has a couple hundred megs of space, though you need at least a gig to see any benefit. You can only use now it if you have a 100% hard drive system though. If you have any ssds installed at all it'll hide the feature - since swapping to the slowest ssd will always be faster than the fastest usb drive. It's still around though, even on Windows 11
Way back in the MS-DOS days, there was a trick for having "expanded memory" keep the bulk of the stuff in upper RAM compressed and also do a swap file if needed. If made programs that needed it go very slow indeed.
There also was a trick mostly done by Borland, that allowed you to make a program where most of the program didn't load from disk unless needed. Stubs for all your subroutines in the main program and kept a list of the ones it currently had in RAM. If a subroutine was needed but wasn't in RAM one of the ones you haven't used in a while was removed from RAM and replaced with the one you needed. The result worked surprisingly quickly because you could often break a program into parts that mostly didn't interact.
zram
Nowadays programs are placed into memory by mmap. Basically you tell the OS I want this file accessible from RAM. It doesn’t actually load it into RAM though, but it promises it will be there when you need it. Then, when you touch a part of the file (for example by executing code from that file), the memory management unit in the CPU will trap the execution, pass control to the OS, which will load in a “page” (usually 4Kb) of the file to RAM and the let the program continue executing. If a 4kb page hasn’t been accessed for a long time, the OS might unload it to free up RAM. If a part of a file is not used, it doesn’t need to be loaded either. And if another program has already loaded a program library, both programs can share that library in memory (it doesn’t need to be kept in memory twice).
if you are referring to "lh" or "loadhigh" back in the day, that was to use the ram above 640k, it was not compressed.
@@nsa3967 zram is great. It does take a while to initialize on boot though, so on a system with "enough" ram I would still advise against it. And of course there is some overhead due to the compression, but that runs in hardware these days so its not that big of a deal if you need more RAM.
DOS had only 640k addressable memory and the extended memory was not compressed at all. DOS had no swap, real time OS-es can not have a swap by design. Stop misinforming people. The dynamic load in Borland was a thing but it was just annoying mostly, especially when running from floppy disks.
At 5:50 -- This setting can be tweaked around by the "Swapiness" value. It essentially tells the kernel how likely it should use Swap. Since Linux also does eager swapping to clear out *actual* RAM, this is quite a handy feature.
Reminds me of "no replacement for displacement"; at the level of performance we expect today, you can't really cheat your way out of using the correct hardware.
their is a replacement for displacement. Look at my PFP 😏
@@Jtzkb without a massively oversized heavy engine weighing you down
@@Jtzkb Shoving a big fat turbo in because natural aspiring an engine is just dumb if you want horses
except there is, it's called compression, and often it means access is even faster as there's less to read/write in compressed form.
@@flandrble It's funny you bring this up, because compressed pages are literally equivalent to turbochargers. Sure, you will get more available RAM space and little performance hit in optimal situations, but there ain't such thing as a free lunch, so you're paying for that extra space by having to spend CPU cycles decompressing it on a page fault. Sure, if you have lots of RAM to begin with, on average it will probably actually speed up your system. However, compressing pages on a RAM-constrained system will tank your performance because you'll just end up thrashing at lower utilization and hitting the expensive decompression path a lot instead of using all your available RAM as it is. There's no replacement for more place to store ones and zeroes. More RAM is more better, and (unlike going with a bigger engine) it does not result in any significant disadvantage outside of higher cost.
This is very similar to the argument for/against turbos. A turbo can deliver much of the same power in a smaller engine as a bigger engine could, but it has drawbacks such as having to wait a bit for all the power to be immediately available. If you have a very small engine that you're always redlining, slapping a turbo on it will not help you much. However, they are useful for squeezing extra performance out of engines when they're optimally specced out.
8:00 you can see this in action if you have 8 GB of ram and while playing a game you tab out to another program like your browser. It will often hang or slow down for a few moments because the OS has moved the browser's stuff (especially background, inactive tabs) to the page file to free up the ram for your game. This kind of stuttering when multitasking is easily solved by adding more ram and is why pc builders recommend 16 GB even though you could play most games with less.
I would say at todays standards, 16gigs I have are not quite enough anymore, chrome likes it too much
@@yesed You could just close chrome while playing if you experience a bottleneck. But I also would recommend 32 Gigs nowadays to be sure.
I think scaling it in case of needs above 64 GB for Virtual Machines that do procesess on them own for hours/days.
@@yesed You should just swap to Brave.
My system admittedly only has 4 cores and I don't play games on it. I do use it for circuit design simulation though which is a pretty heavy load. 16 gigs and I've never seen it use the swap file. To the extent that I turned off the swap about 12 months ago. Haven't had a crash yet.
Ok finally some explaination at what the hell is goin on on my PC with some heavy RAM games.
Linus doesn't need stock videos or photos to explain something because his crew does it for him and it's brilliant
The editing, extra graphics, and cute live-action, few-second vignettes really elevate this video to the next level - well done LTT!
The animated explanation of ram was awesome 👍.
How much did you pay for your channel?
So realistic cgi
around 7k i think
Bots all around
Fr.
Unbelievable,
Crowning,
Kudos to
You for this
Outstanding
Understanding of the
Broadcast
Of
The Great Linus
My experience with network-based swap is usually like:
- The swap is provided by some userspace software (like sshfs, and here I've heard of smb, not sure if it's completely in-kernel),
- This software gets swapped,
- Unable to unswap pages which are needed to unswap pages.
Part 2 idea: Zram. It compresses less used stuff in the memory in real time with less CPU overhead than you'd imagine, giving you an effective ~2.5x the system memory. It really works!
Literally SoftRAM but real.
Didn't work out too good with Raspberry Pi 400, but then again I was running latest beta Bullseye OS version.
@@MiniRockerz4ever ZRAM is a tradeoff, you're saving ram at the expense of CPU usage as CPU needs to constantly zip and unzip the data in RAM. AND RPi 400 is more CPU-limited
It's been in windows and macos for quite some years ... I wonder if it's useful these days though
Very non-deterministic, and it burns cpu at a time when the cpu tends to be busy. Not recommended. It's actually better to just page the data to a fast SSD these days, with as low a CPU overhead as possible. There have also been attempts in the past to have compressed swap (effectively compressed memory when the swapfile is on a tmpfs filesystem). NeXT famously had compressed swap, and it was a disaster.
These days you just want to configure a SSD partition and point swap at it. A swapfile works too but isn't quite as deterministic (in linux its about as fast as a partition, but it goes through a filesystem layer that has to record all the block ranges on the underlying raw device, which is a bit risky because the filesystem can't reallocate those swapfile blocks. You have to trust that the filesystem implements it properly).
Another reason to use a swap partition instead of a swap file is that you can TRIM the swap partition at boot time. Thus your swap partition serves as extra SSD space for wear leveling since 99% of the time your machine doesn't actually have to use much swap. Highly deterministic on all fronts and very desirable.
-Matt
Three words: swap on zram. Set the limit to 100/150% RAM capacity, set swappiness to 200, get more RAM from your RAM.
I have swap on zram on my Pi Zero 2 and it made it so much more responsive and usable
Funny thing is, zram actually expects the compression to be half or even a third (so double or triple the space)
That is, if you had 8GB of RAM, you could have 16GB of zram-swap.
It will be slower, but way better than storage-swap. And I really recommend Zstd as your comp alg, it's fast and decent
So what happens if you try to load 16 GB of MP3s/JPGs/MKVs into 8 GB of zram'd RAM?
@@shinyhappyrem8728 those files are already compressed and they cant be compressed further, even a high level zstd can only compress with a ratio of 99/98%
@@shockwaverc1369: I know, that's why I asked about them. The program ought to handle cases like that somehow.
8:16 love that they used icons of what you'd use in linux instead of word, chrome and photoshop lol
The editing on this video was crazy good. Well done, Dennis!
But its not high frame rate despite tons of expensive equipments ltt has
@@mzamroni This is not a fast paced race or something that needs 60FPS or more, 30 is enough for LTT videos.
@@chiroyce 60 fps looks better. End of story. No, Mr. Editor. It's better 😂
@@chiroyce watch gamers nexus recent videos for comparison
the computer SFX made me feel like i was watching bill nye lmao
Lol
hi
There were programs that we installed on our system in the '90s such as "virtual ram" that worked without crashing your system. I imagine the single processor systms and operating systms of those days were so slow that it was not as big of deal (they did mention that while improving your multitasking, your computer would take a speed bump.). In those days we were used to waiting for things to load, so it was quite tolerable.
I remember when I was trying to get an an oldish mini PC that only had 2 gigs of RAM to not lock up all the time, I configured Windows to swap to a 16GB mSATA SSD that I had for some reason whose only function was to act as the page file. It worked wonders and that computer basically never crashed after that.
It would have been nice to mention that having a swap allows you enable hibernation by writing the contents of your ram to your drive so that it can cut power to the DIMM slots. It has been helpful for me on my laptop.
And every OS and filesystem I'm aware of now defaults to using a file for swap. You can still use a swap partition, I do.
is it me or the lighting and colouring on this video is ON POINT today? it looks better than normal
I am an IT and telecommunications student and these videos are so much more interesting to me than regular reviews and which gaming laptop has better speakers. Looking forward to lab content.
Couldn’t imagine the system performance on cloud RAM especially on glitchy Wi-Fi connections 😂
8:34 There's a few notes in that music that is just like one of the default alarms on my phone. It's giving me anxiety.
Hey, I remember seeing this as an idea a few months ago on a linux subreddit. So cool to see you guys actually give it a try!
This is exactly why I love Linux, you can take full control of your computer, heck even delete you GUI or your bootloader.
I bought it, I wanna own it. That's why I go with linux on every pc I own
Yea problem is when you do this on accident xD
Just like Linus did when installing Steam?
@@ignoolio12nera96 he’s literally illiterate. It told him many times not to do it. It’s entirely his fault
sudo rm -rf /
This is now my favorite LTT videos
Cool to see more content involving linux!
When linux runs out of memory it is supposed to invoke the "OOM killer" which just ends a process forcefully. I'd expect desktop linux to freeze or dump you to tty in some edge cases. I think there must be a bug or some kind of problem, because it should not be crashing at all.
in the 'dump to tty' case, I would also expect the display manager get restarted automatically by systemd.
So you might see the tty for a couple of seconds, but then get the login screen.
nohang does a pretty good job at it, the default oom killer doesn't do much killing 'til after you hang.
@@YeaSeb. I agree, for some reason default OOM killer is useless in most cases.
It tries its "best" but actually makes it worse by allowing all caches to be disabled.
When it takes >10 minutes to just switch to a VT you know that recovery is not really an option.
And windows is supposed to do that as well in oom but I have gotten bluesceeen so no you are wrong not always the case thing is oom killer probably needs ram but if everything is full including swap it will crash. I work with 100s of gigabyte of memory at work for my software development.
Its more difficult to accomplish than people think. It is fairly easy for the kernel to deadlock on the auxiliary kernel memory allocations that might be required in order to accomplish something that relieves user memory. Paging is a great example. To page something out the kernel has to manage the swap info for the related page, must allocate swap space, might have to break-up a big-page in the page table, might have to issue the page write through a filesystem or device that itself needs to temporarily allocate kernel memory in order to operate properly. The pageout daemon itself has to be careful to avoid deadlocking on resources that processes might be holding locked while waiting for memory to become available. The list is endless. Avoiding a low-memory deadlock in a complex operating system is not easy.
This is yet another good reason why swap space should not be fancy. Swap directly to a raw partition on a device. Don't compress, don't run through a filesystem (though linux does a good job bypassing the filesystem when paging through a swapfile), don't let memory get too low before the pager starts working, don't try to page to a swapfile over the network (NFS needs to allocate and free temporary kernel memory all over the place). The list goes on.
Those of us who work on kernels spend a lot of time trying to make these mechanism operate without having to allocate kernel memory. It is particularly difficult to do on linux due to the sheer flexibility of the swap subsystem. The back-off position is to try to ensure that memory reserves are available to the kernel that userland can't touch. But even so, it is possible (even easy) for the kernel to use up those resources without resolving the paging deadlock.
Regardless of that, paging on linux and the BSDs has gotten a lot better over the years and is *extremely* effective in modern day. To the point where any workstation these days with decently configured swap space can leave chrome tabs open for months (slowly leaking user memory the whole time) without bogging the machine down. Even if 25GB of memory winds up being swapped out over that time. As long as you have the swap space configured, it works a lot better than people think it does.
-Matt
Seems useful in a pinch for edge-case workloads, but I imagine constantly hitting an SSD swap space would dramatically reduce the SSD's lifespan, while DRAM basically lasts forever.
This ^^^
agreed,and with how cheap RAM is nowadays this seems even stupider.
I used a cheap DRAM-less Kingston A400 120GB just for Windows pagefile for a year and after that as a OS drive. It still works fine but according to CrystalDiskInfo the health status is only 64%, it has 18TB host reads, 19TB host writes, 37TB NAND writes, 20k power on hours and 161 power on count
SSD read/write lifetimes are not that short these days. There's no realistic harm in using SSDs as swap space.
I have used a 750 GB NVME SSD as "extra RAM" for probably 3 years by now, at times using it hard (i.e. it writing 200-300 MB/s for hours straight several times per week) and I have not noticed any problems yet. I was thinking it would probably start breaking after a few months but no, still going strong. Getting that SSD was the best hardware investment I've ever made, that amount of RAM would have cost a fortune.
9:36 this is probably why we're collectively moving to zram over normal swap on Linux because from the looks of it compressed ram is still better than swap (outside of hibernate).
Title Correction
We ACTUALLY downloaded more unusable RAM
You could try changing the swappines. So that the system pushes earlier and more aggressively to the swap.
@@chy4e431 the amount of people like that who either didn't watch or don't understand the video but are commenting is giving me cancer.
@@zenstrata The amount of work it would require to get actual information from raw RAM data makes it not worth it anyway (even if they did send data to google which they didn't cause it didn't work x))
@@zenstrata it's no problem to encrypt the swap file, at least on linux and macos.
@@zenstrata did you even watch the whole video?
You can do amazing things with the Linux virtual memory subsystem.
Fun facts: if you need a fucked-up amount of swap (I have no data on 10tb of network swap though!) you can make the performance degrade a little more smoothly by increasing the swappiness. Basically if you know you are going to be needing swap no matter what, you can instruct the kernel to proactively swap when memory pressure is low. It will slow down sooner, but it will be a smoother decline rather than "wow it just locked up for 2 hours immediately after I started using a bunch of RAM"
If you have a bunch of different slow media, you can put a swap partition or file on each of them and assign them the same priority in /etc/fstab. The kernel will automatically stripe swap data across them. So basically a RAID 0 array just for your swap.
You can also tell the kernel to keep certain files or entire directories cached in RAM, which does something very similar to profile-sync-daemon or anything-sync-daemon, just with fewer steps.
Of course, if you have like, any money, most of these tricks are useless, but not everybody can just buy more RAM!
@@Iandefor I found another comment about adjusting the 'min_free_kbytes' parameter to resolve page-fault related freezes helpful.
My "new" computer has 8GB of DDR3 ECC RAM (maxed out essentially). I had been blaming occasional freezes on the possibility of double bit errors: until I learned today that Linux is supposed to be able to recover from that if kernel memory is unaffected.
8GB is adequate enough that I don't want to buy new until the chip shortages are resolved.
Ok, hear me out: 6 workstations, 1 set of RAM. Setup a server with a massive amount of RAM, setup a RAM disk on the server, share that RAM disk to the workstations using an inifiband network (lower latency than ethernet) and then setup a swap space on the network drive.
I think Supercomputers do this, no?
@@CheapSushi I think each node in SuperComputer have it's own ram, because convienience
At this point you basically just do what a lot of retail stores do with their till systems (even if most the staff don't actually know it), and just have every workstation just be a client device that accesses a virtual machine on the same server.
It would be interesting to test if Intel's Optane would perform better in this test, since Intel claims such low latency and high iops on that memory
According to intel it's 10% the latency (or speed?) of RAM
It has been tested but basically the answer is... its a waste of money. Pageouts to swap are asynchronous, so latency is well absorbed. Pageins do read-ahead, so latency is fairly well absorbed (random pageins are the only things that suffer).... but honestly, unless the system is being forced to pagein tons of data, the difference wont be that noticable. The latency of the page-fault itself winds up being the biggest issue and you get that both ways.
Now 'optane as memory' (verses 'optane as swap device') is a different beast entirely because there is no page fault. The CPU essentially stalls on the memory operation until the optane module can bring the data in from the optane backing store. So latency in this case is far lower. Still not nearly as low as main memory, of course, but significantly faster than making the operating system take a page fault.
-Matt
@@junkerzn7312 That's interesting. And as optane is insanely expensive that result is believeable. If a workload depends on insane amount of ram, last generation memory is quite affordable. (My photogrammetry server is still using DDR3)
Fast ssd or optane might be more worth it if installed as zfs cache on hard drives
@@jimbo-dev the difference is the 'insane amounts of memory' part. With Pmem 200 series and a dual CPU config, you can have 6TB of optane as memory in a single machine, assuming 12 slots per cpu. You could then also have your 768GB of DDR4 per cpu.
Optane is expensive, but I think generally it's less than the equivalent in DDR4.
@@junkerzn7312 Been tested where? Because Optane seems like the perfect swap for huge database workloads even with lots of memory.
This is a really good video for an introduction to swap space. One thing worth mentioning is another factor reducing the use of swap space besides cheaper RAM is the prevalence of SSD.
When your swap space was on a HDD it was slow but didn't have any side affects other than a loss of 16 - 32GB of capacity to a swap partition, however since swap experiences a lot of random reads and writes there is a good chance it'll impact the lifespan of an SSD it's put on.
You can mount your VRAM as a device and use it as swap space, or just a handy super fast temp directory.
Really? Cool, should try that
1:37
"To understand that, we need to talk about..."
Pannenkoek fans: *trembling*
@Iaonnis oh no
I like the water glass, people bumping into each other etc. Visual analogies. Would be neat to see more to explain technical details 😀
fun fact: you can also do the opposite and mount a specified amount of RAM as a temporary filesystem (tmpfs) to use it as a disk. You could try to install a small game on it which should have blazing fast loading times :-)
I think they’ve done that before, every time you shut down your PC will “uninstall” the game tho. Since RAM gets cleared
@@TheBurritoLord oh, I missed that. Yeah that's the downside of it as it's volatile storage hehe
They also had a PCI RAM card that came with a battery, iirc.
Yeah I read that someone installed crysis 3 on a RTx 3090’s vram
My ASUS motherboard had a similar program for Windows bundled on the CD. It automatically syncs data to the hard drive, but keeps it in RAM when the system is running. However, most games don't really benefit from a ramdisk beyond what you can achieve with an SSD as you are typically CPU limited at that point anyway.
if you do lots of high memory usage tasks, I always recommend having about as much swap as you have ram for workstation configurations. modern operating systems are actually really good about what gets swapped and what remains in ram and it prevents systems from crashing when you run out of main system memory. my workstation has 128gb of ram, so I have 128gb of swap (spread out over my 3 nvme ssd's). this has improved my memory handling drammatically because stuff that isn't used RIGHT NOW gets swapped out leaving the remaining memory free for applications that require the lowest possible latency and highest possible bandwidth.
I completely agree with you.
Did he just call his own pillow... overpriced? 3:05 *shock*
Ik it's as if he is telling a joke!!
@@vrgamer6644 it is. 69.99 for a pillow lol
Well what do you expect from linus
They just don’t care if you buy it or not coz they getting paid from sponsors and videos
This was a splendid explanation of how swap storage works, and then it makes sense to use it. A+ animations to go along with the narration.
There is also another somewhat funky configuration you can apply to swap. You can basically also swap to RAM, which sounds completely stupid, but if you combine that with compression, it actually makes some sense. Assuming a good compression ratio, which can be as much as 50% in many typical workloads, you can basically end up with a fast (you only pay for compression/decompression) swap space and can technically squeeze more stuff into RAM, giving you "extra RAM for free".
I've got an absolutely stupid idea that might work: swap inside of a raid array.
A lot of distros do this by default. Fedora uses zram which is almost exactly what you described with a few small differences. It's worth a read (and setting up on distros that don't have it on by default.)
Android phones do this nowadays, with swappiness set to 100.
"But what if we use it in storage attached locally?" THEN IT ISN'T DOWNLOADING RAM, LINUS!
When memes become reality……kinda. I didn’t think downloading RAM would actually be possible (even to this extent).
it its not, its just a bait title
@@okktok That’s why I said kinda.
He aint downloading ram tho and this aint a substitute for ram by a long shot
ACTUALLY...
In the distant future you might be able to download pretty much anything over a network that feeds it into your 3d matter printer. So you could download as much memory as you need for your new PC.
What if you download the schematics for a ram stick and then 3d printed it?
congrats, keep the quality vids up
I love seeing linus using the linux terminal properly to mount filesystems and make swap space. He's come so far from "yes, do as i say!"
3:59 Well… That’s how it feels on the PlayStation…
I remember Windows had some feature that allowed USB drives to be used as extra RAM. I remember using it back in the day, when "Spider-Man 3" game was released and it did help a little bit, by giving me maybe like 2 FPS to original 20-something I was getting in the game.
ReadyBoost?
@@FriskGamer1 yes, that's the one
Am, I expected to see more from you guys. I used 2Tb of swap (which was placed on NAS server using NFS protocol) a few years ago (seems it was 2018) and it wasn't something surprising for me. Another thing is that you shouldn't recommend to turn off the swap, on the contrary I expect that you should know that linux kernel is smart enough for not using SWAP when it is not necessary, i.e. it always speeds up your system and it will never work slower than without using swap. Also swap is used for hibernation, so I recommend to leave it enabled (but I don't recommend to use network storage for swapping because it can crash your system since connection will be dropped for some reason, e.g. during hibernation).
One big advantage of swap is when you have stuff that takes (lots of) RAM but doesn't actually use it to often. (Think VMs) There the performance hit isn't to dramatic since it's infrequently accessed anyways.
For VMs, using KSM is quite a bit more lucrative though. You generally don't want to swap on VMs.
Yeah!! I'm proud of you Linus you can finally open a second page at Chrome.
Lol
With that much memory, you can maybe just maybe, open 4 google chrome tabs.
If you tried cod warzone on a swap you would see incredibly low framerates. I had a friend where warzone was hitting the swap file and tanking his fps. Bumping his ram from 8 to 16 gigs worked wonders to fix it.
Not to mention if he went from single to dual channel, that'll help too
A blast from the past. It's been a very long time since I've put together a PC with less than 32GB of ram. I remember swap, I remember...
The computer I'm using right now has only 8GB. It seems to work fine.
I can record streaming audio, print documents, and have a bunch of browser windows open simultaneously, along with a number of other running programs, all without a problem. I do a lot of work on my computer.
I currently have 20 browser instances running with multiple tabs on each, genealogy software (2 instances), a couple of file folders open, and Winamp shuffle playing a 200+ playlist of 70s hits. After a couple of weeks like this, I will finally gag the memory and have to save and close everything for a reboot. I guess I could go months between reboots with 32GB of RAM.
I built the computer myself over 10 years ago, so I suppose I could add more RAM. I did replace the hard drive with an SSD.
I've been at this long enough that my very first hard drive was 20MB and cost about $700.
8GB of RAM is more than enough for anything these days. You don't need 32GB of RAM unless you're using it for work.
i'm sorry, but these sponsor segues are so funnily predictable
ikr
The whole point of that is because its a "horrible transition" into todays sponsor!
I thought the same
This is seriously a great introduction to understanding the memory model that computers use-and way more entertaining than the 400-level computer architecture course I had to sit through when I was in college. Nice work!
This also wasn’t anywhere near as in depth as a 400 level OS course lol, it’s like 5 minutes of a 100 level course.
@@iMasterchris the keyword is “introduction”. The specifics of caching, paging, and virtual addressing really don’t need to be included here; most programmers barely spend any time thinking about them unless they’re doing kernel/embedded/high-performance development
Man Linus's Segways are always amusing 🤣
When I first go on the Internet in 1988, it was on Sun 3/50 workstations. We were running them diskless. They had no hard drive (or local storage of any kind besides RAM) at all. All files were served over Ethernet running at the amazing 10 Mbit/second. 10BaseT, which means subnets all shared that same 10Mbit/second (with collision loss and too).
They DID have swap space. ON THE NETWORK. Swap was required back then, not optional. Swap had to at least match the memory. So swapfiles were network mounted. If these systems ran out of memory, they could continue to run, swapping over the network. The system was basically unusable at this point, but if a user was desperate because they had unsaved work, we would spend the half hour to run the commands needed to kill off processes until the machine became usable again.
Swaps are very important when you compile with all threads and use pipes, or when your app starts writing more and more data without stopping.
I recommend people have their swap partition as big or twice the size or RAM, it's simply a good practice that'd save you more crashes than frames.
That is terrible advice. Most workstations have today 64gb and more ram, setting up a 128GB swap space for that is just stupid.
@@hawkanonymous2610 if you spend 5 minutes to look at statistics, most gamers have either 8 or 16 gb or RAM, but I'm also counting servers and virtual machines, so I assume the average is 8.
Obviously if you have 32 gb of RAM or more you shouldn't be mirroring all of your memory, I'm not assuming people's lack of common sense.
That was valid in the 32 bit era when your ram was still in Mo.
@@tomf3150 it's still valid when you accidentally place a time bomb in your pc. Plus extra swap space never hurts, does it?
By this definition you could technically download RAM, CPU and GPU power through a remote desktop
Alright at least he acknowledges it’s an overpriced pillow. That fact makes it not so bad
You will never see this... But to prevent Linux from crashing with swap you need to configure the reserved portion of system memory that the Kernel will keep free no matter how many programs try to allocate memory. These parameters are controlled through sysctl, involved settings include lowmem_reserve_ratio and user_reserve_ratio.
is it generally necessary to alter these properties? If so, why aren't they the default? In Linux kernel world, I have generally learned to trust the default parameters.
@@philuhhh It isn't unless you try to open half a dozen programs at once even though your memory is already full. Which is what the benchmark / stresstest they have is doing. These settings have performance implications as you can't fully utilize all of system memory anymore, so not having these as defaults outweighs the niche usecase of people opening 10 memory heavy apps at once with full system memory.
Everyone is free to configure the memory subsystem for their usecase which is one of many things what makes Linux so flexible.
@@DantalionNl I see, thanks!
What I want to know is if no ram and only swap space would work at all. I'd like to see a video testing that.
Linus, you're the only guy on the internet that I enjoy going through sponsors! Kudos my man
What Linus might didn't know: swap does not extend your memory it only works when the system is able to swap. See if your active programs (so called processes) need more than 8 GB of ram - let's say compiling or gaming they allocate on the heap. If this can't be swapped out the process crashes (or your system) so having Problem with chrome tabs? They get swapped to disk.. having all occupied by one process - like video editing - nope it will crash
It does... I only have 8gb of ram. My games crash and it says "windows ran out of system memory." Games used to run on more ram... Now I can see the ram filling up to 99% when I play and then sometimes hits 100 and crashes. Also when I close/crash the Avengers game I see the ram usage go to 20%, that means everything else has been swapped/paged out of system memory. Then the ram gets filled back upto a normal 50-60% usage that I generally see in everyday usage. With chrome or edge open it reaches 92-93% usage but never goes beyond that, so I assume its swapping my tabs.
Pov: I had 12 gb of ram but one stick stopped working. Now I am left with 8gb.
No, just increase page file
@@milesfarber of course it has
guys, please read again. I am talking about a process not your entire stack ..criticism is welcome if you are sure what you are talking about
He did brush that topic at 08:30. I think it was implied that it might crash at this point.
0:15 Oh you just increased your swap partition. Meh. Windows uses a Page file, linux uses either a swapfile or a swap partition. On old spinning disks,you could set the swap partition to be on the outer edge of the disk, where read/writes are done faster than the center of the disk. Or so I've been told.
As an aside I remember a time when hard drives were slow enough that compressing data transfers was useful for speed as well as space ...and there was a time when compressing RAM was a reasonable way to get more memory 'space'.
The intention of SSD swap is to avoid system crashes when running out of RAM. The intention is not to replace RAM, because it is much slower. If not even crashes can be avoided by enabling SSD swap (like in the video), I can see no point in using it at all.
you should use zswap to compress the swapped memory and double the bandwidth
But wouldn't that increase latency even more
@@pumbi69 cpu is faster than storage, so no
9:00 what is actually happening is that the system is using extra available RAM to cache the drive access - because there is extra available to do that. It won't try to cache the drive access using the swap file, because the swap file is ON the drive (or at least a similarly slow device).
Now get 100T
Now draw it giving b
Love this Linux based content. Keep using Linux instead of Windows for your stuff! Help people see it being used. Thank you thank you thank you!
LTT is one step away from setting up Remote Direct Memory Access and I want that video so badly
3:00 At least he knows it XD
I love the production value, you can really see the increase from a few years ago.
The section on memory hierarchy is something we cover on our Higher (year... 12? Dunno how it compares to US/Canadian schools) course, and nails it in such a nice, short, compact format that I'm putting it into our class resources. Thanks!
1:26 this example is genius
I really enjoyed the memory hierarchy explanation.
Took me months to learn what you basically showed in minutes and I learned something too!
Just make sure when minimizing your swap space/page file/whatever that you leave enough memory to dump error information. I think for windows it's 800 MB which is barely anything out of your 128 gig boot drive and isn't enough for windows to use if it somehow runs out of ram
there's also the consideration that you don't want to constantly running swap read/writes on an SSD but I'm not sure that's as much of a problem nowadays than before. Either way if you are actually running out of system memory (and not say, trying to play 4K 120 fps on a GPU where that simply isn't going to happen) then please consider upgrading the memory
I have an odd configuration of 64+32 = 96 gigs of ram. But that was because I was using ffmpeg instead of a more efficient program to combine 4K mp4 files. I don't use that program any more so i guess technically i don't or didn't ever need that much ram. shrug
Interesting, what program did you switch to for combining the 4k mp4's?
read/write errors/corruption/destruction can still pop up from ssd. It's really about how you write to what the gold standard is large arm cache with further dram cache, but there's virtual slc cache on tlc and qlc, and the question of corrupted eprom is hopefully solved. Flash drives were a typical example of a device where os style read writes used to over hit singular sections, like the usb firmware part, and fry the device while the rest is good. Better firmware that acts like auto trim and treats tlc/qlc like slc has fixed some of the wear leveling issues
If it were for the thumbnails and titles I would watch ZERO LTT videos!
One point that I feel should be touched on at the end about forgoing the swap file is that swap is used to hibernate your system. RAM is considered volatile memory, because without power you can't guarantee what's stored there after the computer is powered off. When your computer hibernates, it writes state information and dumps the information stored in ram to the swap file before fully powering off. While some people prefer not to use a swap file, they are also the same people who don't want to hibernate their computer, and know enough to make this decision. If you don't know whether you want/should hibernate or not, you lose very little by having a swap file until you learn enough to feel confident with your decision one way or the other.
I have activated another 8 GB vRAM in addition to my 8 GB RAM.
With Samsung cell phones, this is easy to do with the software. (Original)
With my s21 I was able to double my ram.
My guess before watching: a program that turns storage space into ram
Edit: I was wrong, but now I wonder if a PC can be booted without ram sticks
Technically you don't need any ram at all
All it is is a faster version of your storage
@@szymex8341 sounds like a Linux project? Might even be deeper, like in the CPU itself
"With this power, I can.. I can tell you about our spons-'
**Skips**
SWAP should just be considered as a RAM backup - so that if you max out a server’s ram, for example, it makes use of SWAP instead of crashing. But if a server is regularly needing its SWAP, it’s time to upgrade the RAM (or reduce the servers processes, optimise RAM utilisation, etc)
Using NFS instead of SMB would definitely help a lot with latency on smaller files.
That said - it would still lose badly to any local SSD due to the latency involved in any network protocol, so there probably isn't any way to make such a thing viable.
A mate of mine many years back made a really dumb library,
Basically it made used SDL2 to make an OpenGL Context, and use an extension (I think it was ARB_Pinned_Memory or something like that), and overwrite libc's malloc with his own malloc.
The end result being that if the libc malloc failed, it would then try to allocate a buffer in VRAM instead.
hey!
Hi
hi
Hi
Hello
hi
POV: You when you were 6 trying to download more RAM then getting 5 trojans.