Linux Software RAID and LVM

Wouter configured my NAS with two RAID 1 arrays and two LVM logical volumes, like this:

md0 = sdb1 + sdd1
md1 = sda1 + sdc1

/ mounted on wolf/root on md1
swap on wolf/swap on md0

What I wanted was RAID 5 across all four disks, but without rebooting (and possibly bricking) the Intel SS4000. Like this:

md0 = sda1 + sdb1 + sdc1 + sdd1
/ mounted on wolf/root on md0
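The payoff is capacity: two two-disk RAID 1 mirrors leave only half the raw space usable, while a four-disk RAID 5 loses just one disk's worth to parity. A quick sketch of the arithmetic, assuming four equal disks of 500 GiB each (the post doesn't state the actual disk size):

```shell
# Usable capacity of the two layouts, for four equal disks.
# disk_gib is a hypothetical size; the real disks may differ.
disk_gib=500
raid1_usable=$(( 2 * disk_gib ))        # two mirrors: half the raw space
raid5_usable=$(( (4 - 1) * disk_gib )) # RAID 5: one disk's worth of parity
echo "raid1 pairs: ${raid1_usable} GiB"
echo "raid5 x4:    ${raid5_usable} GiB"
```

With these numbers the migration turns 1000 GiB usable into 1500 GiB on the same four disks.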

Step 1: remove the swap logical volume:

swapoff /dev/wolf/swap
vi /etc/fstab    # delete the swap entry
lvremove wolf/swap
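The fstab edit is there to delete the swap entry before the logical volume disappears; the line to remove would look something like this (device path per the wolf/swap volume above, mount options assumed):

```
/dev/mapper/wolf-swap  none  swap  sw  0  0
```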

Step 2: remove md0 from LVM and stop its RAID 1 array:

vgreduce wolf /dev/md0
pvremove /dev/md0
mdadm --stop /dev/md0

Step 3: free one disk from the remaining RAID 1 array (md1 = sda1 + sdc1, so sdc1 goes; sdb1 and sdd1 are already free since md0 was stopped):

mdadm /dev/md1 --fail /dev/sdc1
mdadm /dev/md1 --remove /dev/sdc1

Step 4: build a degraded RAID 5 over the three free disks, with one slot left open:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
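The trailing "missing" tells mdadm to assemble the array degraded from the start. That works because RAID 5 parity is a per-stripe XOR: any single absent member can be reconstructed from the surviving data blocks plus the parity. A toy sketch with hypothetical byte values:

```shell
# Parity is the XOR of the data blocks in a stripe.
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))
parity=$(( d1 ^ d2 ^ d3 ))
# With d2 "missing", XORing the survivors with the parity recovers it:
recovered=$(( d1 ^ d3 ^ parity ))
printf 'lost=0x%02X recovered=0x%02X\n' "$d2" "$recovered"
```

This prints lost=0x3C recovered=0x3C, which is also why the array can later absorb the fourth disk and resync it from the other three.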

Step 5: add the RAID 5 to LVM and move the root filesystem onto it (pvmove relocates the extents while the filesystem stays mounted):

pvcreate /dev/md0
vgextend wolf /dev/md0
pvmove -v /dev/md1 /dev/md0

Step 6: remove md1 from LVM and stop it:

vgreduce wolf /dev/md1
pvremove /dev/md1
mdadm --stop /dev/md1

Step 7: add the last freed disk (sda1) to complete the RAID 5, and update the initrd:

mdadm /dev/md0 --add /dev/sda1
update-initramfs -k $(uname -r) -u

1 comment:

Paul Cobbaut said...

*It didn't work* ...

After the first reboot it bricked :(
You can avoid this by creating a proper /etc/mdadm/mdadm.conf with "dpkg-reconfigure -plow mdadm" (which also regenerates the initramfs).
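For reference, a working /etc/mdadm/mdadm.conf carries one ARRAY line per array; "mdadm --detail --scan" prints the real one for your system (the UUID below is a placeholder):

```
# /etc/mdadm/mdadm.conf (fragment; UUID is a placeholder)
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

Without this line the initramfs has no idea which devices make up md0, so the box fails to assemble its root array at boot.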