2013-07-09

Encrypted ZPOOL on top of ZRAID under CentOS 6

While setting up my new CentOS server based on "ZFS on Linux" (ZoL), I wanted to have completely encrypted file systems.

But ZoL currently does not support native ZFS encryption. The only solution to date is to use some sort of layered encryption, e.g. encfs, dm-crypt or LUKS.

Although there are a lot of how-to guides on this topic, most of them do it the same way: build a zpool on top of encrypted block devices.

There is nothing wrong with these approaches, but I prefer to have ZFS's checksumming as close as possible to the real disks. I also do not want to play with any other file system, and I do not want to set up 6 encrypted LUKS devices just to get one encrypted pool.

What came to my mind was "encrypted ZFS on top of ZFS". This gives ZFS RAID and checksumming as close as possible to the physical disks, while still providing all other ZFS features on the encrypted side. I do not want to discuss performance today, as it works out pretty well for my setup: CentOS 6.4 with ZoL 0.6.1 on a quad-core AMD A8 system with a raidz1 pool (6x SATA3) and a single-disk zpool.
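Schematically, the layering described below looks like this (the names are the ones used in the commands further down):

## /dev/sdc ... /dev/sdh               physical disks
##   -> zpool "bigz" (raidz1)          outer pool, raid + checksums near the disks
##     -> zvol bigz/vm0                block device /dev/zvol/bigz/vm0
##       -> LUKS mapping bigz-vm0-luks (dm-crypt)
##         -> zpool "zc-bigz-vm0"      inner, encrypted pool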

First step: create the unencrypted zpool bigz with raidz1 (raidzX or mirror also possible):

$ su -
$ zpool create -m /zraw bigz raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
$ zfs list bigz
NAME   USED  AVAIL  REFER  MOUNTPOINT
bigz  12.8T  220M   217K  /zraw/bigz
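
If you want to double-check the vdev layout and disk health at this point, zpool status will show the raidz1 vdev with its six member disks:
## optional: check pool layout and health
$ zpool status bigz
$ zpool list bigz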

Now you need to decide how much of the available space you would like to encrypt. This decision cannot be changed easily afterwards ;)

We are going to create a ZFS volume vm0 of the desired size in the pool bigz, because ZFS volumes can be used just like any other block device:
$ zfs create -V 11500G bigz/vm0

This creates a volume of 11500 GiB (roughly 11.2 TiB), which is available as the block device /dev/zvol/bigz/vm0.
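
A quick optional check that the volume and its device node exist before putting LUKS on it:
## optional: confirm the zvol and its device node
$ zfs list -t volume bigz/vm0
$ ls -l /dev/zvol/bigz/vm0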

Next we encrypt this block device:
## create a random key
$ dd if=/dev/urandom of=/root/master.key bs=1M count=1
## encrypt block device
$ cryptsetup luksFormat -c aes-cbc-essiv:sha256 -s 256 --key-file /root/master.key /dev/zvol/bigz/vm0
## open encrypted device
$ cryptsetup luksOpen -d /root/master.key /dev/zvol/bigz/vm0 bigz-vm0-luks
## create pool on top of the LUKS device
$ zpool create -m /zcrypt/zc-bigz-vm0 zc-bigz-vm0 /dev/mapper/bigz-vm0-luks

Now we have an encrypted zpool named zc-bigz-vm0 with mountpoint /zcrypt/zc-bigz-vm0.
The last step is to add the LUKS device to /etc/crypttab so that it gets unlocked after boot and the pool can be mounted.

For this to work we need the UUID of the LUKS container on the ZFS volume. Using /dev/zdX or /dev/zvol/... would work too, but I'm not sure these names are stable under all circumstances.
## find out which /dev/zdX we are using
$ find /dev/zvol -type l -exec ls -l {} \;|cut -d/ -f4-
bigz/vm0 -> ../../zd0
$ blkid | grep zd0
/dev/zd0: UUID="8d9ea780-fab9-47a9-9c3d-445ad0dc9060" TYPE="crypto_LUKS"

This UUID (without the quotes) goes into /etc/crypttab:
$ echo "bigz-vm0-luks  UUID=8d9ea780-fab9-47a9-9c3d-445ad0dc9060  /root/master.key  luks" >>/etc/crypttab

That's it, so far. Of course you must protect the master.key file by setting restrictive access rights, and it also makes perfect sense to have an encrypted rootfs (which is not covered here).
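
A minimal way to lock the key file down to root (adjust to your own policy):
## key file should be readable by root only
$ chown root:root /root/master.key
$ chmod 600 /root/master.key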

Some performance tuning: because the "outer" zpool already takes care of checksumming, we can disable it on the inner pool:
$ zfs set checksum=off zc-bigz-vm0
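
A quick way to confirm that the outer pool still checksums while the inner one does not:
## checksum should be "on" for bigz and "off" for zc-bigz-vm0
$ zfs get checksum bigz zc-bigz-vm0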

I will not cover the installation of ZoL on CentOS 6.4 here, but there are a few pitfalls I'd like to note.
  • Disable SELinux in /etc/selinux/config (see the sketch after this list).
  • If you have Webmin installed, go to System / Bootup and Shutdown and check that the zfs service is enabled to start at boot. Without Webmin you are on your own to check that (see the sketch after this list).
  • I have been root all the time above.
  • I have not taken care of permissions on file systems and devices.
  • I do not know what happens if one writes data to the unencrypted volume bigz/vm0. I will test that another day and report.
  • I will not take any responsibility for data loss from trying anything of the above. For me it has been working for some days now...
  • Please note: this is a very rough version of my how-to.
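The first two points above in command form, as a sketch for CentOS 6. I'm assuming here that the ZoL packages installed an init script named zfs; adapt the name if your installation differs:

## disable SELinux permanently (takes effect after a reboot)
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
## enable the zfs init script at boot (the non-Webmin way)
$ chkconfig zfs on
$ chkconfig --list zfs
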
Update Oct. 2013: The system has been running 24/7 for a couple of months now and has also survived power outages without any hassle, so I consider it stable for my purposes. It provides space for a file server and a couple of virtual machines without any trouble :)
