I have actually been running Btrfs on our home NAS for a while. At first I was somewhat turned off by the name and a bit by the folks behind it, but a nice article over at Ars Technica convinced me to give it a go. Previously, I was using MDRAID + LVM + XFS, a setup that is still solid. Btrfs can replace that entire stack, but really only if you use Btrfs exclusively (at least on that particular set of drives). In some cases that is not ideal – I would say MySQL on XFS is probably still the best fit there. In multi-filesystem cases, Btrfs can live happily on top of MDRAID and/or LVM, but you miss some of the really neat features that way.
Regardless of those cases, Btrfs seemed like something worth trying out on my NAS, where data integrity is rather important. Setting it up is definitely different. I won’t cover that here (the Btrfs wiki should get most folks going), but it is easy if a tad bit alien. Instead, I thought I’d share my experience with going from a RAID1 of 2x 1TB drives to a monster (for me) RAID10 of 4x 4TB drives. I had been wanting to do that for a while, particularly since the new line of consumer NAS drives came out. I opted to go with Seagate’s NAS drives, having been a long-time Seagate fan. Other hard drive vendors make similar products these days.
As a bit of an aside, while I opted for a RAID10, Btrfs has a pretty neat approach to mirroring – Btrfs will generally figure out the optimal way to store the data while keeping two copies of each chunk on different drives. That means configurations of more than 2 drives, or even drives of different sizes, are possible while striking a balance between data integrity and space usage. I didn’t go with this option because I had 4 of the exact same drive, and my hunch is that RAID10 will still be faster (it usually is on classic RAID).
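For anyone curious about that flexible mirroring, here is a minimal sketch of what creating such a mixed-size pool might look like. The device names and label are placeholders, not my actual setup:

```shell
# Create a Btrfs RAID1 pool across three drives of different sizes.
# Btrfs mirrors at the chunk level, so it can combine, say, 1TB + 2TB
# + 3TB drives and still keep two copies of every chunk on different
# devices, using more of the total capacity than classic RAID1 would.
mkfs.btrfs -L nas -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Mount by any member device, then check what the profile actually yields.
mount /dev/sdb /mnt/btrfs
btrfs filesystem usage /mnt/btrfs
```

Note that `btrfs filesystem usage` requires a reasonably recent btrfs-progs.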
So how did I make all this happen? I started from a healthy RAID1 setup, so the first step was to add 2 of the 4TB drives and set up a mirror. I only have 4 bays available, so I wasn’t able to just chuck all 4 of the new drives in, although conceivably that would have removed a few steps if I could have. My system supports SATA hotplug, so it was just a matter of putting the drives in caddies and shoving them in, making sure the system saw the new drives, and then adding them to Btrfs. In brief, that was more or less the following:
# echo "- - -" > /sys/class/scsi_host/host0/scan
# partprobe
# mount /mnt/btrfs
# btrfs device add /dev/sdd /mnt/btrfs
# btrfs device add /dev/sde /mnt/btrfs
# btrfs filesystem df /mnt/btrfs/
Note that host0 may be different depending on your configuration (I actually had two buses in my case). You can figure that out by doing an ls -l /sys/class/scsi_host/. For those not familiar with Btrfs subvolumes, I use them extensively but normally do not keep the root Btrfs file-system mounted (as there is no reason to during normal operations). Also note that I’m not partitioning the drives, though that certainly can be done if preferred. I got in the habit of not doing so when dealing with LVM and SSDs, where cluster alignment was a headache with old-style partitions. It just seemed easier to use the entire drive unless I have a need to partition, and Btrfs makes that need rarer since it supports subvolumes anyway. After doing the above, the next step was to remove the old drives.
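For context on that subvolume setup, a hedged sketch of the pattern of keeping the root unmounted might look like this. The subvolume and mount-point names are illustrative, not my actual layout:

```shell
# Mount the root of the Btrfs file-system temporarily to manage subvolumes.
mount /dev/sdb /mnt/btrfs

# Create one subvolume per share; these names are just examples.
btrfs subvolume create /mnt/btrfs/media
btrfs subvolume create /mnt/btrfs/backups

# Mount an individual subvolume where it is needed, then unmount
# the root again so only the subvolumes stay mounted day to day.
mount -o subvol=media /dev/sdb /srv/media
umount /mnt/btrfs
```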
That is where I ran into a minor snag. It turns out that removing drives requires mounting the file-system in degraded mode; otherwise you get a somewhat cryptic error message. So removing the drives required:
# mount -o remount,degraded /mnt/btrfs
# btrfs device delete /dev/sdb /mnt/btrfs
# btrfs filesystem df /mnt/btrfs/
(wait for migration to complete)
# btrfs device delete /dev/sdc /mnt/btrfs
According to the documentation, you can remove more than one device at once, so I could have removed both 1TB drives in a single command. Either way, Btrfs migrates the data over to the remaining drives for you. In my case, I opted to do it one at a time. Once that was done, I yanked the old drives out, slotted in the remaining two 4TB drives, and then converted Btrfs over to a RAID10:
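Had I removed both at once, the sketch would look something like this (btrfs device delete accepts multiple devices followed by the mount point):

```shell
# Remove both 1TB drives in one go; Btrfs migrates their data
# to the remaining devices before the command returns.
btrfs device delete /dev/sdb /dev/sdc /mnt/btrfs

# Confirm only the new drives remain in the pool.
btrfs filesystem show /mnt/btrfs
```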
# echo "- - -" > /sys/class/scsi_host/host0/scan
# partprobe
# btrfs device add /dev/sdb /dev/sdc /mnt/btrfs
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/btrfs
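The balance runs in the background and can take hours on a multi-terabyte pool, so it is worth checking in on it. The status subcommand shows progress, and pause/resume exist if you need the I/O back for a while:

```shell
# Check how far along the conversion is.
btrfs balance status /mnt/btrfs

# If the balance is hurting foreground I/O, it can be paused
# and resumed later without losing progress.
btrfs balance pause /mnt/btrfs
btrfs balance resume /mnt/btrfs
```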
I didn’t think to grab screenshots of the Btrfs output from the previous steps, but I do have what the conversion to RAID10 looks like:
# btrfs filesystem df /mnt/btrfs/
Data, RAID10: total=220.00GiB, used=109.49GiB
Data, RAID1: total=818.00GiB, used=817.99GiB
System, RAID10: total=64.00MiB, used=148.00KiB
Metadata, RAID10: total=1.00GiB, used=294.68MiB
Metadata, RAID1: total=2.00GiB, used=1.54GiB
Once that is done, I’ll have around 8TB (not accounting for compression) of usable space! A minor quirk with Btrfs is that conventional tools like df don’t give you the whole story, so df actually looks more impressive than it is:
root@filedawg:~# df -hT /mnt/btrfs/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdc       btrfs  15T  1.9T   12T  14% /mnt/btrfs
This is both a good and a bad thing. Given the way Btrfs works, alongside compression and de-duplication, providing a simple answer to how much space is available is no longer trivial. In my opinion, that, plus the other minor quirks of Btrfs, is totally worth it given how easy it was to convert from RAID1 to RAID10 without any downtime. I could have done the same thing using MDRAID + LVM, but it would have involved more steps. I don’t think Btrfs is a one-size-fits-all sort of file-system – at least not yet – but for my NAS it has done a pretty good job and I’ve been quite happy with it!
All that said, Btrfs is still experimental, and I take backups of my important data (in my case to an XFS file-system running on an ioSafe), but *knock on wood*, it has served me very well so far!