Linux RAID + dm-crypt: Poor write performance
I’ve been running a RAID 6 with four active disks for quite some time now, and performance always seemed adequate, achieving about 230MB/s read and 90MB/s write (obviously in rather artificial tests using dd; actual file operations may vary).
Now it occurred to me that encrypting my data might actually be a good
idea, so I looked into cryptsetup to add LUKS encryption on top of my
RAID. Anyone familiar with RAID should know that alignment is key to
proper performance, and cryptsetup offers an option --align-payload to
align the payload at a multiple of 512-byte sectors.
Based on the chunk size I am using (512KB) and the number of data
disks, 2048 seemed to be the right value for the setting.
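For reference, here is how I arrived at that value (a sketch; the chunk size and disk count are from my setup, and the variable names are my own):

```shell
# RAID 6 with 4 disks leaves 4 - 2 = 2 data disks per stripe.
chunk_kb=512                                    # md chunk size in KB
data_disks=2                                    # 4 disks minus 2 parity
stripe_bytes=$((chunk_kb * 1024 * data_disks))  # one full stripe of data
align_sectors=$((stripe_bytes / 512))           # --align-payload counts 512-byte sectors
echo "$align_sectors"                           # prints 2048
```

The result would then be passed to cryptsetup as --align-payload=2048 when formatting the LUKS container.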
Weirdly enough, though, it didn’t matter what value I used: 2048,
1024, none at all, or completely arbitrary values. Write performance
would always suffer, hovering around 30MB/s, with my CPU almost
idle. I spent hours trying to figure out what I might’ve been doing
wrong and why the option wasn’t having any effect, until I found an
alignment-unrelated setting called stripe_cache_size, which
defaults to 256 on many systems, mine included. After setting that
option to a higher value of 8192, my write speeds rocketed up to
190MB/s, for plain md0 as well as for the encrypted device.
The option can be set by simply writing a value to
/sys/block/<your_md_device>/md/stripe_cache_size, so for example:
echo 8192 > /sys/block/md0/md/stripe_cache_size
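Keep in mind that values written to sysfs do not survive a reboot. One way to reapply the setting automatically is a udev rule; the file name below is my own choice, and you should adjust the KERNEL match to your md device:

```shell
# /etc/udev/rules.d/60-md-stripe-cache.rules (file name is arbitrary)
# Re-applies the stripe cache size whenever md0 appears or changes.
SUBSYSTEM=="block", KERNEL=="md0", ACTION=="add|change", ATTR{md/stripe_cache_size}="8192"
```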
Please be aware that the setting is measured in pages per member device, so the memory consumed is roughly stripe_cache_size times the page size times the number of disks; a higher value will consume noticeably more memory.
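To put a number on that, here is a rough estimate for my four-disk array, assuming the usual 4KiB page size:

```shell
entries=8192     # stripe_cache_size
page_bytes=4096  # typical x86 page size
disks=4          # member disks in the array
# Approximate cache memory in MiB:
echo $(( entries * page_bytes * disks / 1024 / 1024 ))   # prints 128
```

So the 8192 setting costs on the order of 128MiB on this array, which is fine for me but worth checking on low-memory machines.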
Also note that I did not rely on the speed reported by dd but instead measured with nmon, to avoid inflated numbers caused by write buffering. In fact, the system keeps writing data after dd already thinks it is done.
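If you would rather stick with dd’s own output, conv=fdatasync makes it flush the data before reporting its rate, which largely avoids the buffering problem. A sketch, writing to a scratch file for illustration (the path and size are arbitrary):

```shell
# conv=fdatasync forces an fdatasync() on the output file before dd
# prints its statistics, so the reported speed includes the time it
# took for the data to actually reach the disk.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
stat -c %s /tmp/ddtest   # 67108864 bytes (64 MiB) written
rm -f /tmp/ddtest
```

For benchmarking the actual array you would of course point of= at a file on the filesystem backed by the RAID, not /tmp.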