Appreciated all the effort you put into these. Your explanations of the various workloads alone make the video worthwhile, completely independent of the results. Shame about ZFS though. Personally, I'm never running a kernel as recent as 6.7 is now, having long ago realized that my overall productivity is much better served by avoiding surprises than by chasing features/performance. So I, for one, would find benchmarks on longterm kernels (i.e. 6.6, and including OpenZFS) rather more useful. Still, there's no reason to assume my preferences are typical, and I can see the value of trying to track development progress by using the latest versions.
Ah, I see. Well, it's a good reminder for me that these sorts of things (while interesting) are a long way divorced from the context I actually live in day-to-day. Had ZFS (which I've been using for years) been included and performed well, I might have taken that as some kind of vindication. If it had performed poorly I may have entertained a change. In either case I'd probably be putting too much stock in not-very-comparable testing. In any case, I do appreciate the coverage of bcachefs' (and to a lesser extent btrfs') progress. Quality, thoughtful content, as ever. Thanks @@CyberGizmo
Much thanks for the video. It's always a lot of fun to watch discussions and benchmarks on different filesystems. I'm curious how well FreeBSD performs in comparison to Linux. Personally I'm using ZFS on Linux (NixOS these days), and have been for a long time, maybe with ext4 on top of it if I need volumes, e.g. for virtual machines. If you want to give a new ZFS a try, you could try a NixOS bootable ISO. You'll get bleeding edge for a lot of filesystems, ZFS included, and you won't have to deal with compile issues. Most filesystems are nice, but it sucks when a filesystem can't be easily resized. Btrfs and ext4 do a good job there; ZFS can only be grown. XFS, I'm not sure.
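(For reference, a rough sketch of how resizing looks across these filesystems; device names and mount points below are placeholders. XFS, like ZFS, can broadly only be grown, not shrunk.)

```shell
# Growing (all online):
resize2fs /dev/vg0/data                  # ext4: grow to fill the underlying device
btrfs filesystem resize +10G /mnt/data   # btrfs: grow (or shrink with -10G) while mounted
xfs_growfs /mnt/data                     # XFS: grow only; shrinking is not supported
zpool online -e tank /dev/sdb            # ZFS: use the extra space after a device grew

# Shrinking: btrfs can shrink online; ext4 only offline
# (umount first, then resize2fs with an explicit smaller size);
# XFS filesystems and ZFS pools cannot be shrunk.
```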
At least as it is right now, Bcachefs can be grown (I think both online and offline), but not shrunk. That said, as someone who's never actually used LVM, is filesystem resizing really that much of a game-changer?
@@CyberGizmo It's true! For example, I set up a backup volume for incremental backups. I had no idea how big to make it, so I started by giving it 200G. Now I know 70G is enough, with headroom. So: long live ext4.
Thank you for the video. I think bcachefs has lots of potential for replacing ZFS, which I have been using for my home storage system since 2008 (started on OpenSolaris, then OpenIndiana, Nexenta, FreeBSD and finally, GNU Linux. You wouldn't believe how much I prefer the GNU userland over these other systems'). It has just landed in the mainline kernel, which is good; now we have to wait for the pending features to be implemented and stabilized. Bugs are appearing in its GitHub repo, and in particular there's a user reporting huge performance regressions since August '23. Even then, I am using ZFS for its features, not necessarily performance (although it would be great if it were faster, it doesn't impact my use case, which is storage for photos, documents, etc.).
For a normal laptop/PC with NVMe drives I use either ext4 or XFS, and I'm very happy with them :) Each use case is best suited by a different FS: ZFS for NAS storage, where I value snapshots, ZFS send/receive, mirroring, scrubbing, etc., while on my root and home partitions it's better to have speed as a priority (I would like snapshots on my laptop like NILFS2 provides, but I don't trust it enough and I've read it has some performance issues in certain situations).
Great video. I wonder how JFS stacks up in these tests. I remember about 20 years ago it was my default filesystem, as I had the least trouble with it; it was faster than ext3 in many applications and traded blows with XFS. Recently my Slackware server with ext4 had a filesystem error for some unknown reason, so I am considering migrating.
What compression (if any) did you use for the BTRFS benchmark? I have seen one benchmark that had way better results when using light compression (lzo, zstd:3) on a btrfs system.
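(For anyone wanting to reproduce that: compression on Btrfs is a mount option, set per mount or in fstab; the device and UUID here are hypothetical.)

```shell
# Mount with light zstd compression (level 3):
mount -o compress=zstd:3,noatime /dev/sdb1 /mnt/data

# Or persistently in /etc/fstab:
#   UUID=xxxx-xxxx  /data  btrfs  compress=zstd:3,noatime  0 0
```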
Definitely interested in what's the best open cluster FS. The idea of getting a few cheap mini PCs with a few NVMe drives in each, then running a cluster FS across them, sounds fun :D
I am really impressed with bcachefs: a new FS, just merged into the kernel, and already performing this well. I've just been reading about it, and its caching/backing-device feature is quite interesting.
I find it interesting that F2FS is in here at all, given it's largely optimized for the mediocre NAND flash on phones and _maybe_ sd cards. And given how it scales, it seems to make the most sense for that use case and little else.
In my experience, on a 7200 RPM HDD, Btrfs was so darn slow when it comes to traversing directories, especially ones with many files. Sometimes in excess of 10 seconds to load a folder! XFS was so much better. Felt almost instant in comparison. Remained so even after migrating the data to a 5400 RPM laptop HDD. Shame really, because I love how you can set (unfortunately named) raid1 on btrfs and just throw disks at it. No need to perfectly manage a RAID array with all same drive sizes and whatnot. Even if the disks are not the same size it figures out the most efficient way to use the space on all of them while ensuring there is always another copy on another drive.
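(The throw-disks-at-it setup described above is a short sequence of commands; device names and mount points are illustrative.)

```shell
# Mirror data and metadata across a mixed bag of drive sizes;
# btrfs ensures the two copies of each chunk land on different devices.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Add another disk later and rebalance existing chunks across it:
btrfs device add /dev/sde /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```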
I have the SN750X; I bought it after the SN850X came out because the difference in performance is relatively small, and I got 2 TB for around €130, which was a pretty good deal comparatively. I am happy with the performance. As for latency due to the electron having a finite traveling speed, I have thought about this: how much impact would it have on performance if you put RAM further away from the CPU to make room for the coolers? How much impact would a riser cable have? Those kinds of questions. You can't make a sensible estimate without knowing how often per second data goes in one direction or the other.
Hmm, in my opinion it's not only about the speed of the filesystem, unless that is your only criterion. I tend to choose filesystems that are both performant AND safe. I cannot live without snapshots. They have saved my ass many times already :) I have local snapshots and also replicate them to a backup machine.
This is pedantic and I apologize, but an electron's drift velocity is only ~23 µm/s, nowhere near 100 m/µs. The electrical field generated by the movement of electrons propagates at ~300,000 km/s (i.e. ~300 m/µs), but the electron itself barely moves at all after 1 second.
I used to use F2FS for some time, but considering the relatively poor error recovery and no performance gains, I have mostly switched to ext4 now. Still, I may go to btrfs in the future, especially seeing the limited performance penalty now.
I've had F2FS on 2 drives including root for over 4 years now, with numerous unclean shutdowns, and I have not had a single corruption or problem. I think it does a very good job at what it is designed for, which is longevity for flash storage; my drives show barely any wear. I would not trust important data to any filesystem which isn't mirrored though.
@@kukuri1234 The fsck of F2FS is known to be weaker; that does not mean you will actually encounter corruption, but it has been tested to be significantly worse at repairing errors. Can't remember the sources though. As for the longevity part: perhaps for eMMC storage or a USB drive / SD card, but for a decent SSD it is no big factor of consideration, from my understanding.
_"An electron will travel 100 meters in 1 µs"_ - it seems you are confusing electrons with photons, or electrons with the electrical field. An electron will travel this-or-that distance depending on its speed, which is always less than the speed of light; most often it travels at surprisingly low speeds, about 1 mm per second. An electrical signal, however, travels at the speed of light in the relevant medium, say copper. But that is the electrical field, not the elementary particle.
Same procedure as every year .. EXT4 is the best overall, including availability, usability, data integrity and performance. It's bad and good at once .. bad because there seems to be no progress in filesystems, but good because maybe this means that EXT4 is so good that there's not much room for improvement. Of course, if you want no-compromise maximum data integrity, there's no way around clumsy ZFS.
If they would be so kind as to add snapshots and namespaces to EXT4, that would be killer. If they also added the features ZFS has to manage devices and pools of devices directly (probably on the md RAID driver foundation), it would be the be-all-end-all filesystem.
@@jthoward Btrfs is and has been the default on Fedora. Btrfs still has some large users, SUSE and Synology specifically. Unfortunately it hasn't really lived up to the hope that it would compete with ZFS. Synology doesn't use its RAID capabilities at all, for example, and has some custom hacks to somehow pass btrfs's self-healing features through mdadm. Now with bcachefs effectively starting over, and RHEL developing something else entirely (I forget what it's called), I suspect btrfs's future is dim.
I think you should rely more on tools like IOzone, mdtest and IOR. These are the de facto standard for testing large (parallel, Lustre/GPFS) file systems like those used for HPC (aka some of the biggest filesystems in the world). The only issue you may have is that they rely on MPI, but that can easily be worked around.
While that may seem like a good idea, and I have seen a lot of YouTubers use that method of comparing... the only thing I could show you is what power efficiency looks like for me; it would not necessarily translate to your experience. This benchmark was collected on a Meteor Lake CPU. That model of Intel CPU calls in the new Intel hybrid cluster scheduling algorithm, which directs processes to a group of processors based on what it sees load-wise and which type of core is available to execute on (P-core, E-core or LP E-core), and it is possible that part of the algorithm is affecting F2FS's poor scores in this video.
@@CyberGizmo you completely schooled me, Sir. I try my best to run hibernation mode on my productive machines and f2fs wherever possible. It could be because I saw many benefits of Android phones using f2fs and then some testing that was done on HDDs, which showed several benefits, especially with power consumption numbers.
Do you do these logged into a desktop environment? I would wonder if, at times, the DE is doing some goofy housekeeping or other background task while the benchmark is running, possibly skewing a specific measurement at the moment it's doing that task. Just curious; obviously doing it while in a DE is more "real world" for many people, rather than "theoretical" (unless you're benchmarking for server workloads, of course).
I do this on a desktop because that is what most of us run, if I wanted to show server performance it would be a totally different benchmark...servers do not work the way workstations do today, that's a totally different world these days. Go check out Level1Tech video from today to see what I mean
@@CyberGizmo Fair enough. I run KDE myself, although I know some people have bucked the trend of "environments" and use tiling window managers and whatever else. I would assume those have less housekeeping/background crap scheduled than something like GNOME or KDE, but I dunno.
@@mercster AFAIK the only background tasks KDE runs are occasional update checks by Discover or whatever graphical package manager you have and daemons like bluetoothd (unless you have any background apps running). Dunno about GNOME though
@@jthoward There's Baloo, which is the indexing engine daemon that scrapes metadata for Dolphin's (and other apps', I guess?) search functionality. Depending on what applets you have in your bar, there are many things that could be happening... weather applet updates, etc. Desktop environments can get incredibly complex in terms of the cookie jars they have their hands in at any one time.
@@jthoward (Granted, DJ Ware is probably not doing a whole lot to the DE before running the benchmark... still, these are incredibly large frameworks doing a lot of stuff. If the goal is to evaluate raw performance between different filesystems, I'd want to eliminate as much going on in the background as possible. But I'm not doing the benchmarks, and DJ Ware is no fool, so I accept his rationale for doing so inside of a DE. It was just a consideration I thought I'd ask about.)
Excellent video as usual. Shame that so many people are put off by the ZFS license. I’d love to see Debian easily support root-on-ZFS easily out of the box.
If you want the best overall performance, it looks like Ext4 for the win. F2FS is a niche use case file system, why consider it for a root filesystem on a desktop or server? Btrfs and BcacheFS are only useful if you want copy-on-write features. They are inherently riskier. Why use them unless you really want one of the features, otherwise you are just tolerating sub-par performance and lower reliability for nothing. Thanks for the update.
"Btrfs and BcacheFS are only useful if you want copy-on-write features." I would say that if you only want copy-on-write features, then stick with Btrfs. Bcachefs I would mainly recommend for situations where you would otherwise run a traditional filesystem over top of Bcache, that being in setups with highly heterogeneous drive configurations (like my case with 2 SSDs and 2 HDDs, all of which have different speeds). TLDR: Btrfs for copy-on-write. Bcachefs for a heterogeneous multi-device filesystem.
Just anecdotal evidence of seeing and hearing about failures to recover broken filesystems on btrfs-formatted drives. Also, BcacheFS is way too new to be trustworthy. It's been over ten years and btrfs is barely trustworthy (in a corporate setting with professional admins it's probably fine, but with home users who have little idea how to properly set it up, I would not consider btrfs to be a good option). @@0LoneTech
@@0LoneTech Would love to hear the explanation, too. I suppose @eznix knows what he's talking about, considering he's the author of eznixOS. My take was that the whole purpose of CoW filesystems is data protection...
Snapshots and compression are pretty nice on btrfs. You can always disable CoW at the directory level if you run into performance issues. That kind of workload only really appears with VMs and databases, so it's easy to eliminate 95% of the bottlenecks while keeping snapshots, which can cut remote backup times and sizes by a considerable amount.
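(The per-directory CoW opt-out mentioned here is the `C` file attribute. Note it only affects files created after the flag is set, and on btrfs it also disables data checksumming for those files; the path is just an example.)

```shell
# Mark a VM-image directory NOCOW so new files in it skip copy-on-write:
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images

# Verify the attribute is set ('C' appears in the flags):
lsattr -d /var/lib/libvirt/images
```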
As real-life filesystem workloads are completely different from fio load, and as almost every server uses some kind of RAID to bundle disks, fio benchmarks against a single disk are unfortunately misleading about the productive experience an admin/user will actually get. Better to measure the time of some kind of single- or multi-load application-like benchmark, keep datasets between roughly 1 and 10 TB and 5 to 20 M files, and measure metadata handling too: long listings, find and du. I prefer HW RAID, XFS and some knobs in /sys/block/... and /proc/sys/..., as that's unbeatable then. Nevertheless, everybody has a filesystem they like, but then please don't claim something is fast when it isn't in reality, sorry.
Maybe you should call this the "fastest" filesystem, as speed alone doesn't make a good filesystem. Also, there is no best filesystem, just many different ones for different use cases. So the question should be: what is best for which task?
No ZFS? (Get it built/compiled if you are not able to make it work yourself!). Useless info (non groupies will always use ZFS). Speed matters, but resiliency and features matter most!
And what happened to OpenZFS? Did it die 2 days ago? Why would you need to compile it? You just install it with a package (at least on Debian-like systems). Plus, ZFS is perfectly working/implemented on FreeBSD; see enterprise TrueNAS systems as an example. No, removing ZFS from the comparison is like asking "which car is the best?" and testing only Fiat and Ford...
If data throughput is the only variable that matters to you, yes. If you also care that your data is consistent after interruption, you may want to switch on journal_checksum, data=journal and metadata_csum.
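(Of those three, `metadata_csum` is a filesystem feature flag while the other two are mount options; the device and fstab entry below are illustrative.)

```shell
# Enable metadata checksums on an existing (unmounted, fsck'd) ext4 fs:
tune2fs -O metadata_csum /dev/sdb1

# Journal checksumming and full data journaling go in the mount options,
# e.g. in /etc/fstab:
#   UUID=xxxx  /data  ext4  journal_checksum,data=journal  0 2
```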
I suggest you read up on the existing problems of ReiserFS. It has already been phased out by SUSE, who also suggested dropping Reiser from the mainline kernel in 2025. Their reasoning was that maintaining Reiser takes too much effort, and there are good alternatives available. No need to keep beating a dead (or at least almost quarter-century-old and ailing) horse any longer.
On the ZFS compile error: Linux devs need to stop releasing busted updates as stable. Last year MySQL also released busted versions, and people switched to MariaDB. The more Linux devs push bad updates that break people's systems, the more people we will see switching to immutable distros. People are tired of this nonsense and want to be able to quickly roll back from a busted release to a working one.
EndeavourOS does not have official support for ZFS. It's not a matter of pushing out busted stuff. ZFS is not part of the Linux kernel, so if the distro pushes out a kernel that ZFS hasn't been tested on, then you can have these breakages. So you don't do that: if you're using ZFS, you should use a distro that includes it. Secondly, MariaDB has been the successor to the old MySQL project for like 14 years, ever since Oracle acquired the brand; most distros don't even ship MySQL. MariaDB is superior to MySQL in almost every respect. Why are you even using it?
I would assume that the compile error is because of the heavy refactoring that is going on, which is a positive thing, and is only an issue because ZFS is out of tree.
@@russchristensen3808 Breaking peoples systems isn't a positive. It's just plain old deceitful to say release X is safe and stable when it is unsafe, unstable and untested.
@@ericneo2 You're full of it. ZFS's latest release doesn't claim support for 6.7. Live on the bleeding edge, get cut. If you're going to use ZFS in production, then use a distro that explicitly supports it, so that you know they won't advance their kernel past what ZFS supports. I agree with you in principle that breaking changes = bad, but you're not complaining about a real example.
@@entelin You are projecting like Cathy Newman. The stable MySQL releases that broke production had very real consequences for web hosting companies last year. You can stick your head in the sand, lie and make excuses all you want but people are tired of this nonsense.
Linux, Windows and Mac. So WebDAV is missing from the story. Clients like Konqueror have this built in. I'd rather have SMB built in instead of using WinSCP. #vfs #vde #volume
No ZFS? (Thus a useless comparison of outdated (XFS) or unreliable file systems (btrfs, BcacheFS, F2FS).) Please use a mainstream distro for testing next time, and use a kernel that is supported by OpenZFS, like 6.5 or 6.6. Avoid rolling distros for finding the best (= reliable) file system. Of course ext4 and OpenZFS are the best file systems; BcacheFS might be the best in 5 years; F2FS is made for phones; and btrfs has been trying for 10 years to deal reliably with the more complex RAID configurations. So btrfs is nice for a desktop or laptop, but not for servers.
Yep, it broke on install...Linux kernel module failed to compile and I wanted to push the video out and not troubleshoot today...been doing that all month with the new box
To be honest, I just use btrfs because of the snapshot feature; it sounded like a great tool to a noob who switched to Linux. Glad to see that it does well for the most part in these performance tests. Thank you for the video! As I get deeper into the technical side of things, this channel is a trove of useful and interesting info.
Same. Combined with auto-snap it's great. Saved me a few times on Arch.
I use it for the snapshots, being able to send the snapshots to my NAS over ssh for backups, and using btrfs-raid1 on my NAS for the safety from bit rot or other corruption.
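(The snapshot-to-NAS flow described here looks roughly like this; hostnames, paths and dates are placeholders.)

```shell
# Snapshots must be read-only (-r) to be sent:
btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-02-01

# First full send to the NAS:
btrfs send /home/.snapshots/home-2024-02-01 | ssh nas 'btrfs receive /backup/home'

# Later, send only the changes relative to the previous snapshot:
btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-02-08
btrfs send -p /home/.snapshots/home-2024-02-01 \
           /home/.snapshots/home-2024-02-08 | ssh nas 'btrfs receive /backup/home'
```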
Same use for it 👍
I would say it is CoW itself. You don't really need to explicitly make snapshots to do that. You can simply copy files/directories and they will not take extra space.
@@mk72v2oq Only if your copy tool uses the appropriate calls like reflinks, or you're deduplicating the data on write. Tools like snapper provide an easy way to do that systematically.
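(A quick way to see this reflink behavior; `--reflink=auto` falls back to a regular copy on filesystems without CoW support, so the snippet runs anywhere, it just only saves space on btrfs/XFS/bcachefs.)

```shell
printf 'some data\n' > original.txt

# On a CoW filesystem this shares extents with the original and takes
# no extra space until one copy is modified; elsewhere it's a plain copy.
cp --reflink=auto original.txt clone.txt

cmp original.txt clone.txt && echo "byte-identical"
```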
As every year, thanks for comparing filesystems!
I would love to see Hammer2 and ZFS!
me 2!
Worth mentioning that Btrfs, Bcachefs and ZFS are about data integrity first. As you said, the Copy-on-Write mechanism can have significant performance overhead, in exchange for ensuring that the filesystem is always in a valid state even on unexpected power loss.
They also perform data checksumming, i.e. they compute and write checksums along with every data block, then read and verify them on every access to that data block. This obviously also isn't free and requires extra CPU time. But it protects you from silent data corruption (due to faulty disk or some other reasons), which other filesystems without the checksumming are unable to detect.
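(On filesystems without built-in checksumming you can only approximate this by hand; here is a manual analogue of the write-checksum/verify-on-read cycle, using ordinary files.)

```shell
printf 'important data\n' > data.bin

# "Write time": record a checksum alongside the data...
sha256sum data.bin > data.bin.sum

# "Read time": verify before trusting the data; a silent bit flip
# anywhere in the file makes this check fail.
sha256sum -c data.bin.sum
```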
How do you read about all this?
@@divyamsharma5198 what exactly? Particular filesystems have documentation and lots of info around the internet.
Happy to see good old XFS still doing the thing. I've used it on and off for years, starting with SGI machines way back in the day.
I like your filesystem comparison videos👍👏 Used ext4 forever and never failed. I love zfs, but it is at home in pro scenarios. Use it only when distribution supports it out of the box like proxmox. Btrfs is very well supported and has all the bells and whistles. Love Linux for the choice it gives me.
The most useful annual comparison; I will always watch these. THANK YOU for your work, and I appreciate the time spent benchmarking.
Thank you for all the work you do for us❤
Thank you for taking the time to do all this testing in such an easy to understand way.
I'll remain with ext4... Good and reliable...
Always good to know one is not "missing out" with ext4
Ironically, I've only ever broken ext4. ZFS, btrfs, NTFS, XFS and exFAT have never broken on me. There was a power failure involved, but losing power shouldn't break your filesystem, only corrupt files.
@@__Brandon__ A new (Win 8.1) machine broke on me with NTFS. Probably because, after enabling full disk compression, I shrank the Windows partition as much as it would allow, to give Linux as much space as possible without deleting that partition yet. It worked for a little while playing the default installed games. Then it didn't.
XFS is more reliable overall and is also the more readable codebase. Ext4 is the one that scares new kernel devs and is a fairly difficult codebase to read, it just has had decades of work put into it to iron out one kink at a time (which is also true of xfs but with a more maintainable upfront design).
@@__Brandon__ ZFS would be great for Linux, but they're afraid of getting sued by Oracle. They want it, but they just don't use it, to avoid a lawsuit. Changes in the Linux kernel happen slowly, especially in Debian waters. I am eyeing btrfs, but will stick with ext4 for years to come, until btrfs reaches the reliable and stable stage that ext4 offers today.
This video shows another advantage that Linux distributions have. Until this video I did not know that there were multiple file systems available for Linux. With Microsoft Windows the only file system for the storage drive is NTFS, and with the latest Apple Mac computers it is APFS. So with those operating systems, the main file system is simply whatever comes with the operating system.
Really appreciate the chapters. Thank you for marking!
At home: ext4 with monthly backups.
At work: xfs with nightly backups.
ZFS does work on 6.7.2 but not the latest stable version, the zfs-dkms for Arch Linux does compile and work with that kernel.
However, it hasn't trickled down to some distros yet.
I'm in no rush; when it gets here I will install it and test it. EDIT: I just checked, and the current version of what I use is 2.2.2, which only supports 6.6 kernels or earlier. Which version are you referring to?
@@CyberGizmo I am using the ZFS version for Arch Linux; archzfs on GitHub is where the project is hosted.
It has the same version number too, but it seems they have pulled in a few of the kernel-version fixes from the upstream git version, I think.
I haven't tried compiling their source code on other distros, but I am running it on 6.7.3 at the moment.
@@CMDRSweeper OK thanks, will give it a go; I'm on 6.7.3 as well.
Now we trust you even more, Oh wise most honorable bearded Guru! 👍
hahaha....
Been using Bcachefs, I have absolutely run into the problem you mentioned at the end (but kept using it anyways like an idiot). The devs are aware of it at least since I posted it on the subreddit.
I can definitely say that, in spite of these benchmarks, my computer has absolutely been much faster since switching to Bcachefs, but I also know that is directly due to factors not accounted for in the benchmarks: namely, that I was switching from Btrfs on a hard disk to Bcachefs on a pool consisting of the same disk and two SSDs (one NVMe, one SATA). True to its legacy as Bcache, Bcachefs really shines in multi-device configurations, letting you keep frequently used files on the SSDs while less frequently used files stay archived on the (ideally larger) hard disks, without having to plan ahead which directories you want on which devices. This is especially noticeable when booting up and also when loading large games.
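(For the curious, a pool like that is declared at format time. This sketch uses hypothetical device names and the target options from the bcachefs documentation; exact flags may differ between bcachefs-tools versions.)

```shell
# One filesystem across two SSDs and a hard disk: writes land on the
# SSDs (foreground), hot data gets promoted to them on read, and cold
# data migrates to the HDD in the background.
bcachefs format \
    --label=ssd.nvme /dev/nvme0n1 \
    --label=ssd.sata /dev/sdb \
    --label=hdd.big  /dev/sdc \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd

mount -t bcachefs /dev/nvme0n1:/dev/sdb:/dev/sdc /mnt/pool
```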
Ah that's a good use case for my setup too. I'll keep an eye out for bcachefs when it's more mature
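The multi-device tiering described above can be sketched roughly like this (device paths and labels here are hypothetical, and bcachefs's CLI is still evolving, so check `bcachefs format --help` on your version before trusting any of it):

```shell
# Group devices into an "ssd" tier and an "hdd" tier via labels,
# then point the foreground/promote targets at the fast tier and
# the background target at the slow tier.
bcachefs format \
  --label=ssd.ssd1 /dev/nvme0n1 \
  --label=ssd.ssd2 /dev/sdb \
  --label=hdd.hdd1 /dev/sda \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd

# Multi-device filesystems are mounted by joining the members with ':'
mount -t bcachefs /dev/nvme0n1:/dev/sdb:/dev/sda /mnt
```

Writes land on the SSDs first and are migrated to the HDD in the background, which matches the "hot data on flash, cold data on spinning rust" behavior described above.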
Very informative and helped a lot in choosing a filesystem. Subscribed, thanks.
Thank you! This was very supportive; I'm installing Arch for the first time and had no idea which file system to choose. XFS is going to be the one.
I love seeing these comparisons. Been considering switching to Btrfs and saw other channels praising it and its new improvements. Advice or thoughts from anyone who uses it regularly would be appreciated!
BTRFS + Snapshots = God mode
I've been using BTRFS since August and will not go back
Been using it on Arch for about 3 years. Saved me a few times already. I use it with auto-snap which is on the AUR. You can easily rollback your system in literally seconds.
As for performance, well my main desktop uses btrfs and my laptop ext4 and I don't notice any difference at all. At least on my use case. I did try timeshift/auto-snap on ext4 and it takes AGES to rollback or do a snapshot. On btrfs, as I already mentioned, seconds.
Also, just in case you don't know about this: if you try to chroot at any point, mounting btrfs volumes is slightly different than mounting ext4. Not that different, just thought I'd mention it.
Other than that, I don't notice or think about whether it is ext4 or btrfs. It makes no difference for my use case.
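For the curious, the snapshot/rollback and chroot points above look roughly like this (this assumes a common Arch-style layout with the root filesystem on a subvolume named `@`; adjust paths to your own setup):

```shell
# Take a read-only snapshot of the root subvolume (near-instant, CoW)
sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)

# List subvolumes/snapshots to see what you have
sudo btrfs subvolume list /

# When chrooting, the subvolume has to be named explicitly,
# unlike a plain ext4 mount:
sudo mount -o subvol=@ /dev/sda2 /mnt
```

Tools like snapper or the auto-snap package mentioned above just automate this same snapshot mechanism.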
Thanks for the contribution. The statement about copy-on-write is only correct if you refer to the original definition, which applies to Windows NTFS or Linux Btrfs, for example. File systems such as NetApp WAFL, or even more so ZFS, do not have to copy the data away, as they write to free areas and then relink them. Both approaches have advantages and disadvantages. NTFS can keep defragmenting thanks to this copying away, which is an advantage; the disadvantage is that every rollback is a copy back and therefore costs time. A rollback in WAFL or ZFS is an action that usually takes less than a second.
Appreciated all the effort you put into these. Your explanations of the various workloads alone make the video worthwhile completely independent of the results.
Shame about ZFS though. Personally, I'm never running a kernel as recent as 6.7 is now, having long ago realized that my overall productivity is much better served by avoiding surprises than by chasing features/performance. So I, for one, would find benchmarks on longterm kernels (i.e. 6.6, and including OpenZFS) rather more useful. Still, there's no reason to assume my preferences are typical, and I can see the value of trying to track development progress by using the latest versions.
The kernel version is necessary to support 1) the laptop, and 2) bcachefs, which isn't offered on earlier kernel versions (yet)
Ah, I see. Well, it's a good reminder for me that these sorts of things (while interesting) are a long way divorced from the context I actually live in day-to-day. Had ZFS (which I've been using for years) been included and performed well, I might have taken that as some kind of vindication. If it had performed poorly I may have entertained a change. In either case I'd probably be putting too much stock in not-very-comparable testing.
In any case, I do appreciate the coverage of bcachefs' (and to a lesser extent btrfs') progress. Quality, thoughtful content, as ever. Thanks @@CyberGizmo
Much thanks for the video. It's always a lot of fun to watch discussions and benchmarks of different filesystems. I'm curious how well FreeBSD performs in comparison to Linux. Personally I'm using ZFS on Linux (NixOS these days), and have been for a long time, maybe with ext4 on top of it if I need volumes, e.g. for virtual machines.
If you want to give a new zfs a try, you could try a nixos bootable iso. You'll get bleeding edge for a lot of filesystems, zfs included, and you won't have to deal with compile issues.
Most filesystems are nice, but it sucks when they can't be easily resized. Btrfs and ext4 do a good job there, ZFS can only be grown, and XFS I'm not sure about.
At least as it is right now, Bcachefs can be grown (I think both online and offline), but not shrunk. That said, as someone who's never actually used LVM, is filesystem resizing really that much of a game-changer?
wait... people actually shrink their storage!?!?!? Imagine that LOL
@@CyberGizmo It's true! For example, I set up a backup volume for incremental backups. No idea how big to make it, so I started by giving it 200G. Now I know 70G is enough, with headroom. So: long live ext4.
Thank you for the video.
I think bcachefs has lots of potential for replacing ZFS, which I have been using for my home storage system since 2008 (started on OpenSolaris, then OpenIndiana, Nexenta, FreeBSD and finally GNU/Linux; you wouldn't believe how much I prefer the GNU userland over those other systems').
It has just landed in the mainline kernel, which is good; now we have to wait for the pending features to be implemented and stabilized. Bugs are appearing in its GitHub repo, and in particular there's a user reporting huge performance regressions since August '23.
Even then, I am using ZFS for its features, not necessarily performance (although it would be great if it were faster, but it doesn't impact my use case, which is as storage for photos, documents, etc.).
For a normal laptop/PC with NVMe drives I use either ext4 or XFS; I'm also very happy with them :) Each use case is best suited by a different FS: ZFS for NAS storage, where I value snapshots, ZFS send/receive, mirroring, scrubbing, etc., while on my root and home partitions it's better to have speed as a priority (I would like snapshots on my laptop like NILFS2 offers, but I don't trust it enough and I've read it has performance issues in certain situations).
Yeah, I've been stuck on 6.5, which zfs-kmod 2.2.2 will compile against
I would appreciate if you could cover and explain this new scheduler
I am working on it...the EEVDF part is pretty straight forward, the Intel Cluster Hybrid stuff, not so much
Very nice video, thanks for the information! I moved to btrfs recently; it got much better than it was a few years ago. I'm really glad btrfs keeps getting better.
Great video. I wonder how JFS stacks up in those tests. I remember that about 20 years ago it was my default filesystem, as I had the least trouble with it; it was faster than ext3 in many applications and traded blows with XFS. Recently my Slackware server with ext4 had a filesystem error for some unknown reason, so I am considering migrating.
What compression (if any) did you use for the BTRFS benchmark? I have seen one benchmark that had way better results when using light compression (lzo, zstd:3) on a btrfs system.
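For reference, the light compression levels mentioned here are just mount options (device path below is an example):

```shell
# Mount with light zstd compression (level 3); lzo is the other cheap option
sudo mount -o compress=zstd:3 /dev/sda2 /mnt

# Or persistently via /etc/fstab:
# /dev/sda2  /  btrfs  compress=zstd:3  0 0

# Existing files can be recompressed in place with:
sudo btrfs filesystem defragment -r -czstd /mnt
```

Compression only applies to data written after the option is set, hence the defragment pass for existing data.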
I ran JFS for years and years until one day I didn't check the box and ended up with ext4... I'd love to test HAMMER at some point
Definitely interested in what the best open cluster FS is. The idea of getting a few cheap mini PCs with a few NVMe drives in each, then running a cluster FS somehow, sounds fun :D
I am really impressed with bcachefs: a new FS, just merged into the kernel, and performing so well. Just reading about it, its backing-device feature is quite interesting.
Also it is basically a one-man project
I find it interesting that F2FS is in here at all, given it's largely optimized for the mediocre NAND flash on phones and _maybe_ sd cards.
And given how it scales, it seems to make the most sense for that use case and little else.
great as usual 👍
Would be nice to see zfs included as a lot of people are comparing btrfs and zfs when moving away from ext4
In my experience, on a 7200 RPM HDD, Btrfs was so darn slow when it comes to traversing directories, especially ones with many files. Sometimes in excess of 10 seconds to load a folder! XFS was so much better. Felt almost instant in comparison. Remained so even after migrating the data to a 5400 RPM laptop HDD. Shame really, because I love how you can set (unfortunately named) raid1 on btrfs and just throw disks at it. No need to perfectly manage a RAID array with all same drive sizes and whatnot. Even if the disks are not the same size it figures out the most efficient way to use the space on all of them while ensuring there is always another copy on another drive.
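The "just throw disks at it" behavior described above looks roughly like this (device names are examples; btrfs raid1 simply keeps two copies of every chunk on different devices, so sizes don't have to match):

```shell
# Create a btrfs raid1 filesystem across two differently sized disks
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Later, add another disk to the mounted filesystem and rebalance;
# the 'soft' filter only converts chunks that aren't already raid1
sudo btrfs device add /dev/sdd /mnt
sudo btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```

Note the caveat in the comment: btrfs "raid1" means two copies, not a mirror of whole devices, which is why mismatched sizes work.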
I have the SN750X; I bought it after the SN850X came out because the difference in performance is relatively small, and I got 2 TB for around €130, which was a pretty good deal comparatively. I am happy with the performance. About that latency due to signals having a finite travel speed, I have thought about this: how much impact would it have on performance if you put RAM further away from the CPU to make room for coolers, and how much impact would a riser cable have? Those kinds of questions. You can't make a sensible estimate without knowing how often per second data goes in one direction or the other.
It all adds up
I use F2FS, not for speed but because it writes to sequential new blocks each time, putting less stress on the SSD
What about filesystem reliability (in case of faults)?
NTFS.
After watching: glad to see Ext4 still doing great.
Hmm, in my opinion it's not only about the speed of filesystem, unless this is your only criteria. I tend to choose filesystems that are both performant AND safe. I cannot live without snapshots. They saved my ass many times already :) I have local snapshots and also replicate them on backup machine.
This is pedantic and I apologize, but an electron's drift velocity is only ~23 µm/s, not 100 m/s. The electrical field generated by the movement of electrons propagates at ~300,000 km/s (i.e. ~300 m/µs), but the electron itself barely moves at all in 1 second.
If you just focus on the single and two threaded workloads f2fs does actually pretty well. But anything more and it falls flat.
Thank you for the video!
Amazing job. Wish it included Zfs and hammer2.
I used F2FS for some time, but considering the relatively poor error recovery and no performance gains, I have mostly switched to ext4 now. Still, I may move to btrfs in the future, especially seeing the limited performance penalty now.
I've had F2FS on 2 drives, including root, for over 4 years now, with numerous unclean shutdowns, and I have not had a single corruption or problem. I think it does a very good job at what it is designed for, which is longevity for flash storage; my drives show barely any wear. I would not trust important data to any filesystem that isn't mirrored, though.
@@kukuri1234 The fsck of F2FS is known to be weaker; that does not mean you will actually encounter corruption, but it has been tested to be significantly worse at repairing errors. Can't remember the sources though.
As for the longevity part, perhaps for eMMC storage or a USB drive / SD card, but for a decent SSD it is no big factor from my understanding.
Good job DJ
Thank you, Sir!
_"An electron will travel 100 meters in 1 µs"_ - seems you are confusing electrons with photons or electrons with the electrical field. An electron will travel this-or-that distance depending on its speed, which is always less than the speed of light, most often it will travel at surprisingly low speeds, about 1mm per second. The speed of an electrical signal, however, is traveling at the speed of light in the relevant medium, say copper. But that is the electrical field, not the elementary particle.
Same procedure as every year... EXT4 is the best overall, including availability, usability, data integrity and performance. It's bad and good at once: bad because there seems to be no progress in filesystems, but good because maybe it means EXT4 is so good that there's not much room for improvement. Of course, if you want no-compromise maximum data integrity, there's no way around clumsy ZFS.
If they would be so kind as to add snapshots and namespaces to EXT4, that would be killer. If they also added the features ZFS has for managing devices and pools of devices directly (probably on the md RAID driver foundation), it would be the be-all-end-all filesystem.
For 2024, I will still rely on XFS, although Fedora is pushing hard for ButterFS
Hard pushing? I doubt it; Red Hat dropped Btrfs entirely. I don't see much of a future for Btrfs, to be honest.
@@entelin Pretty sure Fedora is (or at least recently was) planning to make it default
@@jthoward Btrfs is and has been the default on Fedora. Btrfs still has some large users, SUSE and Synology specifically. Unfortunately it hasn't really lived up to the hope of competing with ZFS. Synology doesn't use its RAID capabilities at all, for example, and has some custom hacks to somehow pass btrfs's self-healing features through mdadm. Now with bcachefs effectively starting over, and RHEL developing something else entirely (I forget what it's called), I suspect btrfs's future is dim.
@@entelin Did you try Fedora lately? No? Then your doubt is worth nothing
@@entelin "Synology doesn't use it's raid capabilities at all for example". I call BS.
I think you should rely more on tools like IOzone, mdtest and IOR. These are the de facto standard for testing large (parallel, Lustre/GPFS) file systems like those used for HPC (i.e. some of the biggest filesystems in the world). The only issue you may have is that they rely on MPI, but that can easily be worked around.
He did collect this data using IOzone.
@@0LoneTech Yes, and I think this should continue.
Are we looking at the time to complete or a chart where more is better?
At first I thought it was the former (time to complete fixed-size workloads).
It's MB/sec on all of the tests, hope that helps (there should be text at the bottom of each slide with the benchmark results)
EROFS and HMDFS (OpenHarmony/HarmonyOS Distributed File System)
I would test these all again for power efficiency. I think you'd find F2FS saves the most power, both for ssd and HDD drives.
While that may seem like a good idea, and I have seen a lot of YouTubers use that method of comparing, the only thing I could show you is what power efficiency looks like for me; it would not necessarily translate to your experience. This benchmark was collected on a Meteor Lake CPU. That model of Intel CPU calls in the new Intel Hybrid Cluster scheduling algorithm, which directs processes to a group of processors based on what it sees load-wise and which type of CPU is available to execute on (P-core, E-core or LP E-core), and it is possible that part of the algorithm is affecting F2FS's poor scores in this video
@@CyberGizmo you completely schooled me, Sir.
I try my best to run hibernation mode on my productive machines and F2FS wherever possible. It could be because I saw the many benefits for Android phones using F2FS, and then some testing done on HDDs which showed several benefits, especially in power consumption numbers.
I always appreciate your deep dive into this, but I'm a bit surprised that OpenZFS isn't included.
It would have been, had it not failed to install
Do you do these logged into a desktop environment? I would wonder if, at times, the DE is doing some goofy housekeeping or other background task while the benchmark is running, possibly skewing a specific measurement at the time it's doing that task. Just curious, obviously doing while in a DE is more "real world" for many people, rather than "theoretical" (unless you're benchmarking for server workloads, of course.)
I do this on a desktop because that is what most of us run, if I wanted to show server performance it would be a totally different benchmark...servers do not work the way workstations do today, that's a totally different world these days. Go check out Level1Tech video from today to see what I mean
@@CyberGizmo Fair enough. I run KDE myself, although I know some people have bucked the trend of "environments" and use tiling window managers and whatever else. I would assume those have less housekeeping/background crap scheduled than something like GNOME or KDE, but I dunno.
@@mercster AFAIK the only background tasks KDE runs are occasional update checks by Discover or whatever graphical package manager you have and daemons like bluetoothd (unless you have any background apps running). Dunno about GNOME though
@@jthoward There's Baloo, which is the indexing engine daemon that scrapes metadata for Dolphin's (and other apps', I guess?) search functionality. Depending on what applets you have in your bar, there are many things that could be happening... weather applet updates, etc. Desktop environments can get incredibly complex in terms of the cookie jars they have their hands in at any one time.
@@jthoward (Granted, DJ Ware is probably not doing a whole lot to the DE before running the benchmark... still, these are incredibly large frameworks doing a lot of stuff. If the goal is to evaluate raw performance between different filesystems, I'd want to eliminate as much going on in the background as possible. But I'm not doing the benchmarks, and DJ Ware is no fool, so I accept his rationale for doing so inside of a DE. It was just a consideration I thought I'd ask about.)
Excellent video as usual. Shame that so many people are put off by the ZFS license. I’d love to see Debian easily support root-on-ZFS easily out of the box.
Thanks Mike, what is even a bit ironic...I use Debian to host my ZFS storage (sshhh, don't tell anyone)
Great info thanks matey
If you want the best overall performance, it looks like Ext4 for the win. F2FS is a niche use case file system, why consider it for a root filesystem on a desktop or server? Btrfs and BcacheFS are only useful if you want copy-on-write features. They are inherently riskier. Why use them unless you really want one of the features, otherwise you are just tolerating sub-par performance and lower reliability for nothing. Thanks for the update.
Good to see you eznix and welcome
"Btrfs and BcacheFS are only useful if you want copy-on-write features." I would say that if you only want copy-on-write features, then stick with Btrfs. Bcachefs I would mainly recommend for situations where you would otherwise run a traditional filesystem over top of Bcache, that being in setups with highly heterogeneous drive configurations (like my case with 2 SSDs and 2 HDDs, all of which have different speeds).
TLDR: Btrfs for copy-on-write. Bcachefs for a heterogeneous multi-device filesystem.
What basis do you have for calling the principle of not overwriting current data "inherently riskier" and "lower reliability"?
Just anecdotal evidence of seeing and hearing about failures to recover broken filesystems on btrfs-formatted drives. Also, BcacheFS is way too new to be trustworthy. It's been over ten years and btrfs is barely trustworthy (in a corporate setting with professional admins it's probably fine, but with home users who have little idea how to properly set it up, I would not consider btrfs a good option). @@0LoneTech
@@0LoneTech Would love to hear the explanation, too. I suppose @eznix knows what he's talking about, considering he's the author of eznixOS. My take was that the whole purpose of CoW filesystems is data protection...
Rocking the beard.
Is it possible that bcachefs developers haven't.. Tested... Rebooting a machine?
Gonna give xfs a try due to this video. Thank you
My favorite file system is DJ Ware
TL;DR: Keep using XFS or EXT4
Snapshots and compression are pretty nice on btrfs. You can always disable CoW at the directory level if you run into performance issues. That kind of workload only really appears with VMs and databases, so it's easy to eliminate 95% of the bottlenecks while keeping snapshots, which can reduce remote backup times and sizes considerably.
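Disabling CoW per directory is done with the no-CoW file attribute; a sketch (the libvirt path is just an example of a VM-image directory):

```shell
# The +C (no-CoW) flag only affects files created afterwards,
# so set it on an empty directory before putting images there.
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images

# Verify: the 'C' flag should show up in the attribute list
lsattr -d /var/lib/libvirt/images
```

Be aware that no-CoW files on btrfs also lose data checksumming and don't compress, which is the trade-off for the database/VM performance.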
As the real-life filesystem workload is completely different from fio load, and as almost every server uses some kind of RAID to bundle disks, fio-style benchmarks on a single disk are unfortunately misleading about the productive experience of the admin/user. Better to measure the time of single- and multi-load application-like benchmarks, keep datasets between roughly 1 and 10 TB with 5 to 20 M files, and also measure metadata handling such as long listings, find and du. I prefer HW RAID, XFS and some knobs in /sys/block/... and /proc/sys/..., as that is unbeatable then. Nevertheless, everyone likes some kind of filesystem, but then please don't claim something is fast when it isn't in reality, sorry.
Is OpenZFS a thing?
it masquerades as ZFS on Linux, but oh yes
@@CyberGizmo Sorry for the bother, I forgot to mention: ZFS is still good on FreeBSD, right? I want to try BSD for building a NAS! Thank you!
Maybe you should call this the "fastest" filesystem, as speed alone doesn't make a good filesystem. Also, there is no best filesystem, just many different ones for different use cases. So the question should be: which is best for which task?
reiserfs?
A filesystem which will be removed from the Linux kernel soon, sadly
@@CyberGizmo yes, really sad.
👍DJ!
No ZFS? (Get it built/compiled if you are not able to make it work yourself!).
Useless info (non groupies will always use ZFS).
Speed matters, but resiliency and features matter most!
I am troubleshooting-weary after a month of getting a new machine running... will look into it once I relax for a day or two.
Data resides on the NAS or the SAN, not the PC, so ZFS is useless there, considering how much RAM ZFS needs.
Linux uses Btrfs; ZFS is not built into the Linux kernel.
ZFS will give you all kinds of trouble, like: ZFS not compatible with kernel version x.x
@@CaptainDangeax You can have hundreds of gigs in a standard desktop PC now. And high capacity RAM DIMMs aren't insanely expensive.
@@shadow7037932 That's not a capacity problem, that's an organisation problem. Makes me believe you don't work in IT as a professional.
And what happened to OpenZFS? Did it die 2 days ago?
Why would you need to compile it?
You just install it with a package (at least on Debian-like systems).
Plus, ZFS is perfectly working/implemented on FreeBSD; see enterprise TrueNAS systems as an example.
No, removing ZFS from the comparison is like asking "which car is the best?" and testing only Fiat and Ford...
Looks like EXT4 wins.
If data throughput is the only variable that matters to you, yes. If you also care that your data is consistent after an interruption, you may want to switch on journal_checksum, data=journal and metadata_csum.
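Of the three, the first two are mount options and metadata_csum is a filesystem feature flag; a sketch (device, UUID and mountpoint are placeholders):

```shell
# Example /etc/fstab line; data=journal writes all data through the
# journal, which is safer but costs throughput:
# UUID=xxxx-xxxx  /data  ext4  data=journal,journal_checksum  0 2

# metadata_csum is enabled per-filesystem (offline, unmounted),
# and is already the default on filesystems made by recent mkfs.ext4:
sudo tune2fs -O metadata_csum /dev/sdb1
```

Worth benchmarking with your own workload, since data=journal roughly doubles data writes.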
HFS+
Okay, bye. Have fun!
Bcachefs is a really promising FS. Less featureful than btrfs, but perhaps more stable.
A sure way to lose your job is to install any of these filesystems, other than ext4 or xfs, in a production environment.
true
Winner?
If a filesystem is stable and doesn't require any changes, why delete it from Linux? That is a really dumb decision.
I suggest you read up on the existing problems of ReiserFS. It has already been phased out by SUSE, who also suggested dropping Reiser from the mainline kernel in 2025. Their reasoning was that maintaining Reiser takes too much effort, and there are good alternatives available. No need to keep beating a dead (or at least almost quarter-century-old and ailing) horse any longer.
On the ZFS compile error: Linux devs need to stop releasing busted updates as stable. Last year MySQL also released busted versions, and people switched to MariaDB. The more Linux devs push bad updates that break people's systems, the more people we will see switching to immutable distros; people are tired of this nonsense and want to be able to quickly roll back from a busted release to a working one.
EndeavourOS does not have official support for ZFS. It's not a matter of pushing out busted stuff. ZFS is not part of the Linux kernel, so if the distro pushes out a kernel that ZFS hasn't been tested on, then you can have these breakages. So you don't do that: if you're using ZFS, then you should use a distro that includes it. Secondly, MariaDB has been the successor to the old MySQL project for like 14 years, ever since Oracle acquired the brand; most distros don't even ship MySQL. MariaDB is superior to MySQL in almost every respect. Why are you even using it?
I would assume that the compile error is because of the heavy refactoring that is going on, which is a positive thing, and is only an issue because ZFS is out of tree.
@@russchristensen3808 Breaking people's systems isn't a positive. It's just plain old deceitful to say release X is safe and stable when it is unsafe, unstable and untested.
@@ericneo2 You're full of it. ZFS's latest release doesn't claim support for 6.7. Live on the bleeding edge, get cut. If you're going to use ZFS in production, then use a distro that explicitly supports it, so that you know they won't advance their kernel past what ZFS supports. I agree with you in principle, breaking changes = bad, but you're not complaining about a real example.
@@entelin You are projecting like Cathy Newman. The stable MySQL releases that broke production had very real consequences for web hosting companies last year.
You can stick your head in the sand, lie and make excuses all you want but people are tired of this nonsense.
Linux, Windows and Mac.
So that's WebDAV missing from the story.
Clients like Konqueror have this built in. I'd rather have SMB built in instead of using WinSCP.
#vfs #vde #volume
No ZFS? (Thus a useless comparison of outdated (XFS) or unreliable file systems (btrfs, bcachefs, F2FS).) Please use a mainstream distro for testing next time, and a kernel that is supported by OpenZFS, like 6.5 or 6.6. Avoid rolling distros when looking for the best (= reliable) file system.
Of course ext4 and OpenZFS are the best file systems; bcachefs might be the best in 5 years; F2FS is used for phones; and btrfs has been trying for 10 years to deal reliably with the more complex RAID configurations. So btrfs is nice for a desktop or laptop, but not for servers.
Yep, it broke on install... the Linux kernel module failed to compile, and I wanted to push the video out and not troubleshoot today. Been doing that all month with the new box.
I use zfs-linux on 6.7.2-arch1-1. Not sure about the zfs-dkms one, but they should work without problems, right?
ZFS upstream is officially still on 6.6; one needs to apply a ZFS compat patch. Manjaro has it already.
XFS outdated?
@@benjy288 Some people can't distinguish old from bad.
What is happening with F2FS, though?