cross-posted from: https://lemm.ee/post/4274796

Just wanted to share some love for this filesystem.

I’ve been running a btrfs raid1 continuously for over ten years, on a motley assortment of near-garbage hard drives of all different shapes and sizes. None of the original drives are still in it, and that server is now on its fourth motherboard. The data has survived it all!

It’s grown to 6 drives now, and most recently survived the runtime failure of a SATA controller card that four of them were attached to. After replacing it, I was stunned to discover that the volume was uncorrupted and didn’t even require repair.

So knock on wood — I’m not trying to tempt fate here. I just want to say thank you to all the devs for their hard work, and add some positive feedback to the heap, since btrfs gets way more than its fair share of flak, which I personally find undeserved. Cheers!

  • exu@feditown.com · 34 points · 1 year ago

    Agreed, RAID 1 (and 10) are pretty stable.

    Moderately fun fact: RAID 1 in btrfs is not really RAID 1 in the traditional sense. Rather, it’s a guarantee that every piece of your data lives on two separate drives. You don’t know which ones, though. You could have one copy of everything on a 12TB drive, with the various secondary copies distributed across three 4TB drives.
    Traditional RAID 1 works with exactly two drives, with the capacity of the smaller drive as the upper limit. The way to extend a traditional RAID 1 array is by adding two new drives and creating a RAID 10 with all four (multiple RAID 1 pairs, striped).
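    For anyone curious, a mixed-size setup is roughly a sketch like this (device names and mount point are just placeholders):

    ```
    # make a btrfs "raid1" across four mismatched drives: every chunk of data
    # and metadata gets mirrored onto two of them, whichever have free space
    mkfs.btrfs -L storage -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # after mounting, check how much usable space the mix of sizes gives you
    btrfs filesystem usage /mnt/storage
    ```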

    • geoff@lemm.eeOP · 12 points · 1 year ago

      This right here is what has made it so flexible for me to reuse salvaged equipment. You can just chuck a bunch of randomly sized drives at it, and it will give you as much storage as it can while guaranteeing you can lose any one drive. Fantastic.

  • flux@lemmy.world · 10 points · 1 year ago

    Any specific advice you would give to others to prevent corruption, or to keep drives healthy?

    • geoff@lemm.eeOP · 2 points · 1 year ago

      I’m not sure I know enough to be giving out advice, but I can tell you what I do. I do have a cron job to run scrub, to keep the bitrot away. I also tend to replace my drives proactively when they get REALLY old — the flexibility of btrfs raid1 lets me do that one drive at a time instead of two, making it much more affordable. You can plan out your storage with the btrfs calculator.
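      Concretely, the scrub job is just something along these lines (mount point, schedule, and device names are placeholders):

      ```
      # /etc/cron.d/btrfs-scrub -- monthly scrub to catch bitrot early
      0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/storage

      # and replacing an old drive one at a time, with the array staying online:
      #   btrfs replace start /dev/old-disk /dev/new-disk /mnt/storage
      #   btrfs replace status /mnt/storage
      ```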

  • blashork [she/her]@hexbear.net · 11 up / 3 down · 1 year ago

    I’m glad it’s working well for you, but I don’t think it’s true to say that btrfs gets more than its fair share of flak. It gets exactly the right amount of flak for what it is. Every place I have worked at that wanted to deploy a COW fs on, like, a NAS or server has always gone with ZFS; btrfs is such a mess it never even enters the conversation. Even if its bugs can be ironed out, the bcache dev was right in pointing out that its on-disk formats are poorly designed for their job and cannot be revised except in a new version of the entire fs. I hope bcachefs gets merged into the kernel next year; that’s a filesystem I would actually trust with my data.

    • ProtonBadger@kbin.social · 8 points · 1 year ago

      Btrfs does get a lot of flak based on hearsay or on experiences that are out of date. It works well in a lot of scenarios and is widely used now; ZFS is also a good fs for many use cases, especially in enterprise situations.

      I can’t comment on the on-disk formats as I have no experience there, but Btrfs works well in a lot of use cases for a lot of users.

      Bcachefs sounds promising, but it has a long way to go and will need a lot of testing. It’s getting into the kernel to get more testing mileage and to encourage more developers; it currently has only one guy working on it (except for the casefolding submission), which is a big problem for both the present and the future. Hopefully it’ll get more devs interested.

      Never trust any filesystem, or the storage media. Consider anything that holds your data to be fallible.

  • GnuLinuxDude@lemmy.ml · 7 points · 1 year ago

    I’ve been using it in Fedora since they switched to it as the default FS. I have not done anything special. I am not trying anything fancy except compress-force=zstd:1. Seems good to me!
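    For reference, that’s just a mount option, e.g. in /etc/fstab (the UUID is a placeholder):

    ```
    # compress-force=zstd:1 tries zstd level 1 on every write instead of
    # relying on btrfs's compressibility heuristic
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress-force=zstd:1  0  0
    ```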

    • deadcatbounce@reddthat.com · 1 point · 1 year ago

      Why just :1? The default is :3, and looking at the timings for zstd compression speed vs. compression level (Google for it …), it only really slows down at around level 7.

      Doesn’t mean shit to me, but I suggest you reconsider.

      • GnuLinuxDude@lemmy.ml · 1 point · 1 year ago

        Slow relative to what? Any zstd compression, while really fast, will still be slower than native write speeds to my NVMe. A tiny bit of ratio gain isn’t worthwhile to me.

  • Grass@geddit.social · 5 points · 1 year ago

    I use it on my Steam Deck microSD to cram more shit in via compression. The main drive is left as ext4, though, so case folding can be used for particularly janky Windows games or mods.
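    Roughly, that setup looks like this (device names and paths are placeholders):

    ```
    # microSD: btrfs with zstd compression to cram more in
    mkfs.btrfs -L deck-sd /dev/mmcblk0p1
    mount -o compress=zstd /dev/mmcblk0p1 /run/media/deck-sd

    # main drive: ext4 with the casefold feature enabled, so individual
    # directories can be flagged case-insensitive (chattr +F) for Windows games
    mkfs.ext4 -O casefold /dev/nvme0n1p8
    ```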

  • lloram239@feddit.de · 5 points · 1 year ago

    Same experience for me, except without the RAID. It’s the only filesystem I’ve ever used that just worked and didn’t self-destruct. With ext2/3/4 I always ended up with all my files scattered in lost+found sooner or later. In the early days XFS couldn’t handle system crashes without deleting important files, and I even managed to corrupt ZFS on a USB drive. Never had anything catastrophic happen with BTRFS; quite the opposite, it warned me of broken RAM or drives a few times.

    That said, BTRFS can get a bit finicky when it gets full. It has gotten a lot better over the years, but it’s still a situation best avoided.
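    If it does start to fill up, a filtered balance is the usual way to hand allocated-but-underused chunks back to the pool (mount point is a placeholder):

    ```
    # rewrite only data chunks that are less than half full, returning the
    # reclaimed space to the unallocated pool
    btrfs balance start -dusage=50 /mnt/storage
    ```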

  • Revan343@lemmy.ca · 1 point · 1 year ago

    I’ve been wanting to build a RAID array for a while. What RAID controller do you use, or would you recommend?

    • geoff@lemm.eeOP · 2 points · 1 year ago

      For a software RAID like this, you don’t want a hardware RAID controller, per se – you just want a bunch of ports. After my recent controller failure, I decided to try one of these. It’s slick as hell, sitting close to the motherboard, and seems rock solid so far. We’ll see!

  • TCB13@lemmy.world · 5 up / 14 down · 1 year ago

    Yeah, BTRFS is way more reliable than Ext4. A simple power failure or other hardware fuckup with Ext4 and you can be sure all your data is gone; with BTRFS your data will survive a lot of shit.

      • TCB13@lemmy.world · 1 up / 3 down · 1 year ago

        Yes it is :) but comparatively I’ve never lost a volume / disk to BTRFS in years of the same scenarios.

    • Quazatron@lemmy.world · 12 points · 1 year ago

      My experience says otherwise.

      Ext4 is rock solid and will survive power loss without a problem.

      I love btrfs for the compression and snapshot capabilities, but it still has a long way to go to reach ext4 maturity.

      That’s not a shot at btrfs, it’s just that filesystem maturity and reliability take time.

      • TCB13@lemmy.world · 4 up / 1 down · 1 year ago

        I can’t share your enthusiasm about Ext4’s safety. I’ve had multiple disks lost to simple power failures at home and to more complex hardware failures at datacenters. At the time I migrated to XFS, which also always handled failures better than Ext4, and then moved to BTRFS when it became mostly stable.

    • ExLisper@linux.community · 4 points · 1 year ago

      I’ve been using Ext4 for over 150 years now and I’ve never had any issues with it. It not only survived multiple power failures but also a house fire, a couple of direct EMP hits, and a zombie apocalypse.

    • Wispy2891@lemmy.world · 3 up / 1 down · 1 year ago

      I had the exact opposite experience. A power loss destroyed my btrfs boot drive; it couldn’t be mounted anymore.