I was literally recommended to use BTRFS because it has been battle-tested and proven. I should probably have done more research, but given its maturity (we're in 2023, after all), I feel a bit let down.
BTRFS is far better designed than other recent Linux filesystems. It's just that Linux always was and always will be very, very buggy. If you want fewer bugs, use the oldest possible FS that is still usable. That's probably ext2.
The main problem with btrfs is that it fails to mount in many situations and there's no automated fsck to fix it, so it needs manual intervention; in fact, running fsck-style repair on btrfs is considered a very bad thing.
This makes it OK for desktops, but not at all suitable for headless servers and other unsupervised machines.
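In practice, the "manual intervention" looks something like this. A rough sketch, assuming the volume is on a placeholder device /dev/sdb1 (exact options vary by kernel version):

    # Try a read-only mount from a backup copy of the tree roots
    # (kernel 5.9+ spells it "rescue=usebackuproot"; older kernels use "usebackuproot")
    mount -o ro,rescue=usebackuproot /dev/sdb1 /mnt

    # Read-only consistency check; unlike e2fsck, this never runs automatically at boot
    btrfs check --readonly /dev/sdb1

    # Last resort: scrape files off the broken volume instead of repairing in place
    # (btrfs check --repair is the "very bad thing" and can make the damage worse)
    btrfs restore /dev/sdb1 /mnt/recovered

None of that is something an unattended machine can do for itself, which is the point.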
Speaking of bad design: F2FS, for example, a filesystem designed for flash storage, keeps both the primary and backup superblocks in the same flash erase block. If that block gets corrupted, the entire filesystem is lost.
That's interesting. I've always thought ext3 would at least be safer than ext2, seeing that ext2 isn't a journaling filesystem. What happens if you abruptly shut it down, e.g., 1000 times?
What happens? I don't know... fsck would run automatically on reboot, fix the errors, and recover parts of the last file written into /lost+found/.
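If you want to reproduce that boot-time behavior by hand, it's roughly this (assuming an ext2 volume on a placeholder device /dev/sdb1):

    # Preen mode: what the boot scripts run; silently fixes only "safe" problems
    e2fsck -p /dev/sdb1

    # If preen gives up, re-run answering "yes" to every repair prompt
    e2fsck -y /dev/sdb1

    # Orphaned files land here, named after their inode numbers (e.g. "#12345")
    mount /dev/sdb1 /mnt && ls /mnt/lost+found/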
But why would a server abruptly shut down 1000 times?? I already said btrfs is good enough for desktops and other interactive/supervised systems. And data loss on servers is recovered from backups, not journals!
Restoring from backups can be tedious and time-consuming. For example, if you have a database with point-in-time recovery (PITR), you may have to replay write-ahead logs (WALs) until you get to the point in time where the server failed.
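To make that concrete, here's roughly what a PITR restore looks like with PostgreSQL 12 or later; the archive path and target timestamp are placeholders:

    # In postgresql.conf (restored alongside a base backup):
    #   restore_command = 'cp /mnt/wal_archive/%f "%p"'
    #   recovery_target_time = '2023-06-01 04:13:00'

    # recovery.signal puts the server into archive-recovery mode at next start
    touch "$PGDATA/recovery.signal"

    # Startup then replays every archived WAL segment up to the target time,
    # which can take hours if the last base backup is old
    pg_ctl -D "$PGDATA" start

The server isn't serving traffic until that replay finishes, which is exactly the tedious part.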
XFS is known to be extremely robust on servers, and it's journaled. Most servers nowadays run on SSDs.