Btrfs Experiment
After watching this video and reading this blog post, I decided to have a play with the new btrfs (ButterFS) filesystem.
So I downloaded OEL6u2 and did a minimal install in VirtualBox (which doesn’t even include wget or scp!)
I decided not to use LVM, but to use LUKS encryption for my 8GB ext4 /, a 500MB unencrypted ext4 /boot, and no swap partition.
Upon first boot I installed the Base Yum repo from here, which is essentially just what’s on the DVD, and also the beta repo from here. I then installed the Unbreakable Enterprise Kernel 2 (UEK2) and the btrfs tools:
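(a sketch; I’m assuming the repo files from the links above are already in /etc/yum.repos.d/, and that the beta channel ships the kernel as kernel-uek)

    # install the UEK2 beta kernel plus the btrfs userspace tools, then reboot into it
    yum install kernel-uek btrfs-progs
    reboot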
I then booted from the Fedora 16 LiveCD (not the install DVD) to do the ext4-to-btrfs conversion, as the OEL installer hasn’t yet been updated to include btrfs, although eventually it will allow direct btrfs root installation.
Next I had to decrypt the LUKS partition and do the conversion of the underlying ext4 filesystem:
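(roughly the following; the device name /dev/sda2 and the mapper name are assumptions)

    # unlock the LUKS container to expose the ext4 filesystem inside it
    cryptsetup luksOpen /dev/sda2 luks-root
    # btrfs-convert insists on a clean filesystem, so force a check first
    fsck.ext4 -f /dev/mapper/luks-root
    # convert in place; the original ext4 metadata is saved so it can be rolled back
    btrfs-convert /dev/mapper/luks-root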
As you need to edit the fstab, make a temporary directory inside root’s $HOME and mount your root filesystem there:
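(the mount point name is arbitrary)

    mkdir /root/sysroot
    mount /dev/mapper/luks-root /root/sysroot
    vi /root/sysroot/etc/fstab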
Simply replace “ext4” with “btrfs” on the “/” line and :wq
For some reason, when I tried to boot at this stage I got all sorts of permission errors and it wouldn’t boot. The fix is to disable SELinux:
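(a sketch, still from the live environment with the root filesystem mounted as above; the sed pattern assumes the stock SELINUX=enforcing setting)

    # switch SELinux off in the converted root's config, then reboot into it
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /root/sysroot/etc/selinux/config
    umount /root/sysroot
    reboot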
Prove we’ve booted into our btrfs root filesystem:
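(either of these will show / as type btrfs)

    mount | grep ' / '
    df -T /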
Remove the old ext4 snapshot and defragment:
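(btrfs-convert leaves the original filesystem behind as an ext2_saved subvolume)

    # delete the saved ext4 image to reclaim its space
    btrfs subvolume delete /ext2_saved
    # then defragment the root filesystem
    btrfs filesystem defragment /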
Next I thought I’d try out the yum plugin that takes a snapshot every time you run yum, so you can roll back (the Debian equivalent is apt-btrfs-snapshot):
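(on RHEL6-family distros the plugin is packaged as yum-plugin-fs-snapshot; installing Apache then gives us a transaction to roll back)

    yum install yum-plugin-fs-snapshot
    # every yum transaction now takes a btrfs snapshot of / first
    yum install httpd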
Check for the snapshot, note its ID and reboot:
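(something like)

    # list subvolumes and note the ID of the snapshot the plugin just took
    btrfs subvolume list /
    reboot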
To boot into the snapshot made before you installed Apache, press “e” at the GRUB prompt to edit the parameters and insert the following into the kernel line:
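(generic rootflags syntax, with <ID> being the subvolume ID noted above; rootflags=subvol=<name> works too)

    rootflags=subvolid=<ID>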
Then press “b” to boot. Once booted, prove Apache isn’t installed any more, and reboot back into the default subvolume:
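(for example)

    # should report that package httpd is not installed
    rpm -q httpd
    reboot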
Check Apache is installed again:
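(the same check, now against the default subvolume)

    rpm -q httpd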
Magic! So now we’ve converted an ext4 filesystem to btrfs on top of a LUKS partition, and have proven that yum snapshots work and that we can boot into a snapshot without needing any backup/restore system.
The snapshot is just regular files, so it could be tarred up and moved to an external drive for backup, I guess:
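(a sketch; the snapshot path and the destination are made up for illustration)

    # archive the snapshot's contents; --numeric-owner preserves uid/gid exactly
    tar czf /mnt/external/root-snapshot.tar.gz --numeric-owner -C /path/to/snapshot .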
I tried to simulate a catastrophic drive failure – i.e. if you’ve got a tarball of a snapshot, can you restore it to a blank disk with an empty ext4 /boot and a btrfs / partition? Well, the answer is no if you’re using LUKS. By using dd to back up the MBR and /boot partition I got somewhere; however, when I restored the snapshot (untarred it into an empty btrfs filesystem) it booted without prompting for the LUKS key, and then died.
So btrfs still seems best suited to rolling back to a previous point in time, or to being used with one or more drives in a RAID array, but it’s no use when a single drive fails.
Update: I found that because btrfs stores the filesystem UUID in every metadata block, you can never actually move a btrfs filesystem to another disk! The only way to do it (as I have) is to dd the entire disk to another disk, so that their UUIDs end up the same; that also solves the problem of trying to piece together the /boot partition and grub in the MBR.
The problem, however, is that as you can’t mount two devices with the same UUID at the same time, you can’t use rsync to keep the two disks in sync. So for backing up btrfs you’re stuck with dd’ing the entire disk, or dropping LUKS and using hardware RAID (not btrfs RAID or software RAID) instead, which is pretty crap.