Because every time btrfs is mentioned, five more people come out of the woodwork saying that it irreparably lost all their data. Sorry, but there are just too many stories for it to be mere coincidence.
Your statement is misleading. No one is using btrfs on servers. Debian and Ubuntu use ext4 by default. RHEL removed support for btrfs long ago, and it's not coming back:
> Red Hat will not be moving Btrfs to a fully supported feature. It was fully removed in Red Hat Enterprise Linux 8.
They do, but this is misleading due to a number of caveats.
The first is that they don't use btrfs's own RAID (aka btrfs-raid/volume management). They actually use hardware RAID, so they don't experience any of the stability/data integrity issues people hit with btrfs-raid. On top of this, Facebook's servers run in data centers with effectively 100% electricity uptime (these places have diesel generators for backup power).
Synology likewise offers btrfs on their NAS, but it sits on top of mdadm (software RAID).
The main benefits Facebook gets from btrfs are transparent compression and snapshots, and that's about it.
In my experience, btrfs just doesn't seem to be very resilient to hardware faults. Everything works great as long as you stay on the golden path, but when you fall off that path, it gets into a confused state and things start going very wrong and there is no way to recover (short of wiping the whole filesystem, because fsck doesn't fix the faults).
So yes, if you are Facebook, and put it on a rock-solid block layer, then it will probably work fine.
But outside the world of hyperscalers, we don't have rock-solid block layers. [1] Consumer drives occasionally do weird things and silently corrupt data. And on top of the drives, nobody uses ECC memory, so the occasional bit flip will corrupt data/metadata before it's even written to disk.
At this point, I don't even trust btrfs on a single device. But the more disks you add to a btrfs array, the more likely you are to encounter a drive that's a little flaky.
And Btrfs's "best feature" really doesn't help it here, because it encourages users to throw a large number of smaller cheap/old spinning drives at it. Which is just going to increase the chance of btrfs encountering a flaky drive. The people who are willing to spend more money on a matched set of big drives are more likely to choose zfs.
The other paradox is that btrfs ends up in a weird spot where it's good enough to actually detect silent data corruption (unlike ext4/xfs and friends, where you never find out your data was corrupted), but its metadata is complex and large enough that it seems to be extra vulnerable to those same issues.
---------------
[1] No, mdadm doesn't count as a rock-solid block layer; it still depends on the drives to report a data error. If there is silent corruption, mdadm just forwards it. I did look into using a Synology-style btrfs-on-mdadm setup, but I found more than a few stories from people whose Synology filesystem borked itself.
In fact, you might actually be worse off with btrfs+mdadm, because now data integrity is done at a completely different layer to data redundancy, and they don't talk to each other.
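To make the layering point concrete, here's a toy Python sketch (purely illustrative, not how btrfs or mdadm are actually implemented): a checksumming layer stacked on a mirroring layer that hides which copy it served. Detection still works, but there's no channel to ask the layer below for the other copy.

    # Toy model, illustrative only -- not real btrfs or mdadm behaviour.
    import hashlib

    class MirrorBlockLayer:
        """Stands in for an mdadm RAID1 mirror: two copies, serves one of them."""
        def __init__(self, data: bytes):
            self.copies = [bytearray(data), bytearray(data)]

        def read(self) -> bytes:
            # The caller gets whichever copy the mirror picks; there's no
            # interface to say "that one was bad, give me the other copy".
            return bytes(self.copies[0])

    class ChecksumFsLayer:
        """Stands in for a checksumming filesystem on top (btrfs-style detection)."""
        def __init__(self, block: MirrorBlockLayer):
            self.block = block
            self.expected = hashlib.sha256(block.read()).hexdigest()

        def read(self) -> bytes:
            data = self.block.read()
            if hashlib.sha256(data).hexdigest() != self.expected:
                # Detection works, but repair would need the intact copy that
                # lives one layer down, and this layer can't reach it.
                raise IOError("checksum mismatch: corruption detected, cannot self-heal")
            return data

    mirror = MirrorBlockLayer(b"important data")
    fs = ChecksumFsLayer(mirror)
    mirror.copies[0][0] ^= 0xFF   # silent corruption: the "drive" reports no error
    try:
        fs.read()
    except IOError as err:
        print(err)                # the filesystem can only complain, not repair

When the checksums and the redundancy live in the same layer (btrfs's own RAID, or ZFS), a failed checksum can instead trigger a read of the other copy and a rewrite of the bad one.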
In a scenario where they don't have to worry about data going poof, because it's used to run stateless containers (taking advantage of CoW to reduce startup time, etc.)
And they almost always 'forget' to mention "that was in 2010" or "I was using the BTRFS feature marked 'do not use, unstable'".
It's really difficult to get a real feel for BTRFS when people deliberately omit critical information about their experiences. Certainly I haven't had any problems (unless you count the time it detected some bitrot on a hard drive and I had to restore some files from a backup - obviously this was in "single" mode).
Some of the most catastrophic ones were 3 years ago or earlier, but the latest kernel bug (point 5) was with 6.16.3, ~1 month ago. It did recover, but I had already mentally prepared for a night of restores from backups...
> We had a few seconds of power loss the other day. Everything in the house, including a Windows machine using NTFS, came back to life without any issues. A Synology DS720+, however, became a useless brick, claiming to have suffered unrecoverable file system damage while the underlying two hard drives and two SSDs are in perfect condition. It’s two mirrored drives using the Btrfs file system
Synology does not use vanilla btrfs; they use a modified btrfs that runs over an mdraid mirror, which somehow communicates with the btrfs layer to supposedly fix errors when they occur. It's not clear how far behind that fork is.
And also, I've read plenty over the years about how hard btrfs has been to maintain. It's never really felt like the future.
Plus I needed zvols for various applications. I've used ZFS on BSD for even longer, so when OpenZFS reached a decent level of maturity, the choice between that and btrfs was obvious for me.
Not really data loss per se, but let me add my own story to the pile: just last week, I had a btrfs filesystem error out and go permanently read-only simply because the disk became full. Hours of searching turned up no solution; it had to be reformatted.
I don't understand how btrfs is considered by some people to be stable enough for production use.
I know somebody is going to say otherwise, but BTRFS seems genuinely rock solid in single-disk setups. OpenSUSE defaults to it so I've been using it for years. No problems, it's not even something I worry about.
I've been running Btrfs on Fedora for a decade now (and it's been the default since 2020). I have basically never done any of those things and it's been fine. I've had to do more babysitting with my ZFS systems than I did my Btrfs ones.