How I set up a 2TB RAID on my Linux box
I currently have a server at home with ten 250GB drives in a RAID 5 configuration. The original install on it was Gentoo 2008.0, which over time and multiple updates is now on the 10.0 profile. Things have changed since the last time I set up RAID on this machine, and upgrading is going to be fun. To begin with, GRUB can only boot from RAID 1, since each mirror looks like an ordinary partition. That has always been the case, so no change there. However, newer mdadm defaults to metadata version 1.2, which puts the superblock near the start of the partition where GRUB doesn't recognize it, and kernel raid autodetect is now deprecated in favor of assembling arrays from an initramfs. I have decided to replace my ten 250GB drives with three 1TB drives in a RAID 5 configuration in an effort to save electricity. This article is not really a HOWTO, but more an aggregation of notes and links to real HOWTOs, and something of a journal of how I proceeded.
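As a quick sanity check on the sizing: RAID 5 gives you (n - 1) drives' worth of usable space, since the equivalent of one drive goes to distributed parity. A small sketch, using nominal sizes in GB:

```shell
#!/bin/sh
# RAID 5 usable capacity: (n - 1) * drive_size, since the equivalent of
# one drive is consumed by distributed parity.
raid5_usable_gb() {
    drives=$1; size_gb=$2
    echo $(( (drives - 1) * size_gb ))
}

echo "old array (10 x 250GB): $(raid5_usable_gb 10 250) GB usable"
echo "new array (3 x 1TB):    $(raid5_usable_gb 3 1000) GB usable"
```

So the new three-drive array lands right at the 2TB in the title, while burning a lot less power than ten spindles.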
Things to read before proceeding
- https://raid.wiki.kernel.org/index.php/RAID_Boot
- https://raid.wiki.kernel.org/index.php/RAID_setup
- http://en.gentoo-wiki.com/wiki/Initramfs
Setting up Raid
I created two partitions on each drive: a 100MB partition for /boot, from which GRUB can read my kernel and my initramfs image, and a second partition for the RAID 5 array.
Device Boot Start End Blocks Id System
/dev/sda1 2048 206847 102400 fd Linux raid autodetect
/dev/sda2 206848 1953525167 976659160 fd Linux raid autodetect
I made sure to use metadata 1.0 for the boot array; as I point out above, this is necessary so that GRUB can read the filesystem (the 1.0 superblock lives at the end of the partition). To avoid some confusion: if you reboot into a System Rescue CD or something similar, you may notice the arrays coming up as /dev/md126 and /dev/md127. Don't freak out, this is normal. If you look in /dev/md/ you'll see names like sysresccd:0, which is also completely normal.
mdadm -C /dev/md0 -l 1 -n 2 -x 1 --metadata=1.0 /dev/sda1 /dev/sdb1 /dev/sdc1
For the data array I let mdadm pick the default metadata, and created the RAID 5 array:
mdadm -C /dev/md1 -c 128 -l 5 -n 3 /dev/sda2 /dev/sdb2 /dev/sdc2
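Once both arrays are created you can watch the initial sync and double-check the geometry. These are just the commands I'd run to verify; the exact output depends on your hardware:

```
# watch the RAID 5 resync progress
cat /proc/mdstat

# confirm metadata version, chunk size and member disks
mdadm --detail /dev/md0
mdadm --detail /dev/md1
```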
Setting up Grub
Fairly simple and straightforward; just be sure that all files are referenced from the root of the boot partition, since that is GRUB's reference point.
default 0
timeout 5
splashimage=(hd0,0)/grub/splash.xpm.gz
title 2.6.34-gentoo-r12
root (hd0,0)
kernel /vmlinuz-2.6.34-gentoo-r12
initrd /initramfs.cpio.gz
/etc/fstab
About as simple as you can get.
/dev/md1 / ext3 noatime 0 1
/dev/md0 /boot ext3 noatime 0 2
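For completeness, those filesystems have to exist before any of this can mount. I'm showing plain ext3 here to match the fstab; the staging mount point is just the usual Gentoo convention, not anything required:

```
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt/gentoo    # or wherever you stage the install
```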
initramfs
I followed the instructions on http://en.gentoo-wiki.com/wiki/Initramfs, which are really quite well done.
- I chose to use busybox; it makes life a piece of cake
- I re-emerged mdadm with the USE="static" flag
- I didn't bother setting up mdadm.conf; instead I assemble the arrays directly in the init script. IMHO it's the same amount of work, and this way there's only one file to modify
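If you prefer the mdadm.conf route, the usual trick is to let mdadm generate the ARRAY lines itself with `mdadm --detail --scan`. A sketch of what ends up in the file (the UUIDs below are placeholders you'd fill in from that command's output):

```
# Appended via: mdadm --detail --scan >> /etc/mdadm.conf
DEVICE /dev/sd[abc]1 /dev/sd[abc]2
ARRAY /dev/md0 metadata=1.0 UUID=<from mdadm --detail --scan>
ARRAY /dev/md1 metadata=1.2 UUID=<from mdadm --detail --scan>
```

With that in place (and copied into the initramfs), the init script could call `mdadm --assemble --scan` instead of listing every member device.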
Quick initramfs construction
mkdir /usr/src/initramfs
cd /usr/src/initramfs
mkdir -p bin lib dev etc mnt/root proc root sbin sys
cp -a /dev/* /usr/src/initramfs/dev/
USE="static" emerge -av busybox mdadm
cp -a /bin/busybox /usr/src/initramfs/bin/busybox
cp -a /sbin/mdadm /usr/src/initramfs/sbin/mdadm
touch /usr/src/initramfs/init # grab the init script below
chmod +x /usr/src/initramfs/init
find . -print0 | cpio --null -ov --format=newc | gzip -9 > /boot/initramfs.cpio.gz
This is my init script for busybox
For the record, https://raid.wiki.kernel.org/index.php/RAID_Boot has a much better init script example, but for my purposes the following is all I needed.
#!/bin/busybox sh
rescue_shell() {
    echo "$1"
    busybox --install -s
    exec /bin/sh
}
# Mount the /proc and /sys filesystems.
mount -t proc none /proc || rescue_shell "failed to mount proc"
mount -t sysfs none /sys || rescue_shell "failed to mount sysfs"
mount -t devtmpfs none /dev || rescue_shell "failed to mount dev"
# Do your stuff here.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 || rescue_shell "failed to assemble /dev/md0"
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 || rescue_shell "failed to assemble /dev/md1"
# Mount the root filesystem.
mount -o ro /dev/md1 /mnt/root || rescue_shell "failed to mount root"
# Clean up.
umount /proc
umount /sys
umount /dev
# Boot the real thing.
exec switch_root /mnt/root /sbin/init
Make the necessary kernel changes (namely adding **initramfs** support and **devtmpfs** support) and you should be good. At this point everything *should* work. Good luck!
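For reference, these are the kernel options I believe matter here. I'd build them in rather than as modules, since this initramfs doesn't copy any modules into itself; your exact list may differ:

```
CONFIG_BLK_DEV_INITRD=y   # initramfs/initrd support
CONFIG_DEVTMPFS=y         # the init script mounts devtmpfs on /dev
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y       # md (software RAID) core
CONFIG_MD_RAID1=y         # for /boot (/dev/md0)
CONFIG_MD_RAID456=y       # for the RAID 5 root (/dev/md1)
CONFIG_EXT3_FS=y          # root and boot filesystems
```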