Btrfs RAID on top of LUKS
@ ヨハネス · Monday, Jul 20, 2020 · 3 minute read · Update at Feb 3, 2021

I have an encrypted btrfs drive, but I want to turn it into a RAID-1

Motivation

I already have a big encrypted storage drive, and so far it has been working pretty well. Whenever I restart my server, I run a script to unlock it and mount some of the subvolumes, then start the services that rely on those mounts. Since hard drives are bound to fail at some point, I’d like to set up a RAID as an additional safety measure.

Btrfs is supposed to support conversion into a RAID by simply adding devices later and then rebalancing the data.

But will it blend?

Whatever, take me to the TL;DR already

Setting up the second LUKS device

Using cryptsetup and LUKS to encrypt the entire drive is pretty straightforward: a few commands and we are ready for the next step.

When prompted to enter YES and a passphrase, just do it.

cryptsetup luksFormat /dev/sdc

And that’s it:

❯ blkid
...
/dev/sdc: UUID="2f439c44-1d60-4e26-a47f-44cc2ae5d100" TYPE="crypto_LUKS"

Opening the two drives

Now we are going to use cryptsetup open twice to unlock the encrypted devices; note the UUID from the previous step.

First open and mount the existing device

mkdir /mnt/raid

CRYPTO_UUID="2a94831d-0606-47dc-b212-1d342725c1e4"
MAPPER_NAME="sea1"
cryptsetup open /dev/disk/by-uuid/$CRYPTO_UUID $MAPPER_NAME
mount -v /dev/mapper/$MAPPER_NAME /mnt/raid
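
Before adding anything, it doesn’t hurt to confirm that the mounted filesystem really is a single-device btrfs so far. btrfs filesystem show lists the devices backing it:

btrfs filesystem show /mnt/raid

At this point it should report exactly one device, /dev/mapper/sea1.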

Then open the new device

CRYPTO_UUID="2f439c44-1d60-4e26-a47f-44cc2ae5d100"
MAPPER_NAME="sea2"
cryptsetup open /dev/disk/by-uuid/$CRYPTO_UUID $MAPPER_NAME
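
Both mapper devices should now exist. A quick sanity check before touching the filesystem is lsblk, which shows the open crypt mappings underneath their parent drives:

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

sea1 and sea2 should both show up with TYPE crypt; only sea1 carries a filesystem and a mount point at this stage.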

Creating the RAID

Finally, add the newly opened device to the existing one in a RAID-1 fashion. Extensive documentation is available in the btrfs wiki.

This can take some time, so it’s better to run it in a byobu session.

btrfs device add /dev/mapper/sea2 /mnt/raid
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/raid
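
The conversion can be watched from a second shell: btrfs balance status shows the progress, and btrfs filesystem df shows the block group profiles, which should read RAID1 for both data and metadata once the balance has finished.

btrfs balance status /mnt/raid
btrfs filesystem df /mnt/raid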

In my case, the amount of data stored was rather low, only about 290 GB, and the whole operation finished after about 95 minutes. So the RAID was formed at about 50 MB/s (290,000 MB over roughly 5,700 seconds), which, in my opinion, ain’t too bad for the SMR drives I am using.

I copied another 270 GB of big files I had stored elsewhere after the RAID was formed, and that operation took about 76 minutes, or roughly 59 MB/s. I hope I don’t run into the trouble that people get when rebuilding ZFS RAIDs with SMR drives.

Mounting the RAID after a reboot

And finally, write a small script to do all the opening and mounting. With btrfs it is pretty simple, since we only need to mount one device to mount the entire array. So: unlock the LUKS devices, then mount, and finally start the services.

#!/bin/sh

# Abort the script if the previous command failed
onfailexit () {
  if [ $? -ne 0 ]; then
    echo "!! FAILED !!"
    exit 1
  fi
}

# Unlock a LUKS device: $1 = UUID, $2 = mapper name
openluks () {
  echo "unlock crypto drive $2"
  cryptsetup open "/dev/disk/by-uuid/$1" "$2"
  onfailexit
}

# Mount a btrfs subvolume: $1 = subvolume, $2 = mount point
mountsub () {
  mount -v -o "subvol=/$1" /dev/mapper/sea1 "$2"
  onfailexit
}

echo "unlock raid devices"
openluks "2a94831d-0606-47dc-b212-1d342725c1e4" "sea1"
openluks "2f439c44-1d60-4e26-a47f-44cc2ae5d100" "sea2"

echo "mount to expected place"
mountsub "" "/mnt/raid"
mountsub "@misc" "/mnt/misc"
mountsub "@onedrive" "/home/synsi/od_backup"
mountsub "@embylib" "/home/synsi/OneDrive"

echo "mount additional binds"
mount -v --bind /mnt/misc/share /var/snap/lxd/common/lxd/storage-pools/default/containers/utor/rootfs/root/share
onfailexit

echo "starting dependent services"
echo "start syncthing"
systemctl start syncthing@synsi.service
echo "start emby"
systemctl start emby-server.service
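
Assuming the script is saved somewhere like /usr/local/bin/mount-raid.sh (the path and name are arbitrary), it only needs to be made executable and run as root after each reboot; cryptsetup will prompt for the passphrase of each device in turn.

chmod +x /usr/local/bin/mount-raid.sh
sudo /usr/local/bin/mount-raid.sh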

TL;DR

Maybe there won’t be a TL;DR for this post.
