Expanding the size of a Debian Linux software RAID 5 array

Not long ago I built a new storage and mail server to replace several aging servers. I knew that to provide space for all my files, with a little room to grow, I would need a bit more than a terabyte of storage, and I wanted this to be in a redundant RAID array. I mentioned in a previous post how I created a software RAID + LVM setup for this. The one catch is that my motherboard had fewer SATA ports than I initially thought, so I had to leave one drive out of the array while I got things up and running.

A few days ago the PCI Express SATA controller came in, so I needed to add the fifth drive to the existing array, ideally without breaking anything. The first few sites I checked stated this was not yet possible, but after doing a bit more digging I found that if you have a current kernel along with recent mdadm and LVM2 tools, it actually is. I based my procedure on information from the Gentoo wiki, with a few changes for my specific scenario.
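The overall procedure might look roughly like the sketch below. This is an illustration, not a transcript of the exact commands run here: the array name (/dev/md0) and partition number (sde1) are assumptions, and your device names will differ. Make sure you have backups before reshaping a live array.

```shell
# Copy the partition table from an existing array member (sdd)
# to the new drive (sde).
sfdisk -d /dev/sdd | sfdisk /dev/sde

# Add the new partition to the array as a spare, then grow the
# array from four to five active devices, triggering a reshape.
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=5

# Watch the reshape progress; this can take many hours.
cat /proc/mdstat

# Once the reshape finishes, grow the LVM physical volume so LVM
# can see the new space.
pvresize /dev/md0
```

The reshape runs in the background while the array stays online; the filesystem itself is resized separately afterwards.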

Really the only additional information I needed was how to copy the partition table from one of the other drives (sdd) to the new drive (sde): “sfdisk -d /dev/sdd | sfdisk /dev/sde”.

I also used “lsof /home” to find which processes had open files on the volume before I unmounted it, and I set the stride flag on resize2fs to 16 based on my block and chunk sizes. Apparently getting this right has a great bearing on speed and efficiency. For the record, the stride should be the chunk size (from /proc/mdstat) divided by the filesystem block size. In my case the chunk size was 64k and the block size 4k, giving a stride of 16.
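The stride arithmetic and the final resize step can be sketched as follows; the values are the ones from this post, and the logical volume path (/dev/vg0/home) is a hypothetical example:

```shell
# Stride = RAID chunk size / filesystem block size.
# Here: 64 KiB chunk (from /proc/mdstat) / 4 KiB block = 16.
chunk_kb=64
block_kb=4
stride=$((chunk_kb / block_kb))
echo "$stride"   # prints 16

# After extending the logical volume, grow the filesystem,
# passing the stride explicitly with resize2fs's -S option:
#   lvextend -l +100%FREE /dev/vg0/home
#   resize2fs -S 16 /dev/vg0/home
```

If -S is not given, resize2fs tries to determine the RAID stride heuristically, so specifying it explicitly avoids a wrong guess.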
