This would be a variation of the procedure described in:
https://raid.wiki.kernel.org/index.php/SATA_RAID_Boot_Recipe

Since your root filesystem is on LVM, you can use pvmove to migrate your vg00 onto the RAID set while it is in use, so there will be no need to use a Live CD.
WARNING: if the system is rebooted while in the critical phase of this procedure, you may need to use the RHEL 5 installation CD/DVD in rescue mode to boot the system and complete the mirroring procedure.
DISCLAIMER: I have not tested this procedure.
# Copy the partition table of /dev/sda to /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
# Change the type of the partitions on /dev/sdb to "Linux raid autodetect":
fdisk /dev/sdb
ENTER "t" to change a partition's type
SELECT "1" for partition 1
ENTER "fd" for the partition type
Repeat for partition 2
ENTER "w" to write the changes and exit
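(Untested alternative: the type change can also be made non-interactively. This assumes the sfdisk shipped with RHEL 5 supports the --change-id option; "fd" is the Id for "Linux raid autodetect".)
sfdisk --change-id /dev/sdb 1 fd   # assumes --change-id is available in this sfdisk version
sfdisk --change-id /dev/sdb 2 fd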
# Make sure your system sees the new partition table on /dev/sdb by checking the list of visible partitions:
cat /proc/partitions
# If necessary, use the "partprobe" command to refresh the partition information:
partprobe
# Create "degraded mirrored RAIDs" on /dev/sdb1 and /dev/sdb2:
mdadm -Cv -ayes /dev/md0 -n2 -l1 missing /dev/sdb1
mdadm -Cv -ayes /dev/md1 -n2 -l1 missing /dev/sdb2
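(Note: GRUB legacy and the kernel's RAID autodetection expect the old 0.90 superblock format, which RHEL 5's mdadm should use by default. If you want to be explicit, the metadata format can be requested with -e; this is an untested variation of the commands above.)
mdadm -Cv -ayes -e 0.90 /dev/md0 -n2 -l1 missing /dev/sdb1   # -e 0.90 made explicit
mdadm -Cv -ayes -e 0.90 /dev/md1 -n2 -l1 missing /dev/sdb2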
# Confirm that the RAIDs are running:
cat /proc/mdstat
# Format /dev/md0: this will be your new /boot.
mkfs.ext3 /dev/md0
# Mount /dev/md0 temporarily to /mnt:
mount /dev/md0 /mnt
# Copy everything from /boot (on /dev/sda1) to /mnt (on /dev/md0):
cp -a /boot/* /mnt/
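(The glob /boot/* skips dot-files at the top level of /boot; there normally aren't any, but copying the directory contents directly avoids the issue.)
cp -a /boot/. /mnt/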
# Unmount /mnt and /boot
umount /mnt
umount /boot
# Edit your /etc/fstab to mount /boot from /dev/md0 instead of /dev/sda1:
vi /etc/fstab
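(For example, if your current fstab mounts /boot by device name or LABEL, the /boot line would end up looking something like the following; keep whatever filesystem type and mount options your fstab already uses.)
/dev/md0   /boot   ext3    defaults        1 2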
(At this point, the old /boot on /dev/sda1 is still as it was before, but it will no longer be mounted when the system comes up; the new /boot on /dev/md0 will be mounted instead.)
# Mount /boot again, now using /dev/md0:
mount /boot
# Initialize /dev/md1 as LVM PV:
pvcreate /dev/md1
# Join /dev/md1 into vg00:
vgextend vg00 /dev/md1
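(Optionally, confirm that vg00 now contains both PVs and that /dev/md1 has enough free space before starting the move:)
pvs -o pv_name,vg_name,pv_size,pv_free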
# (Critical phase begins: *do not reboot* after this point until all the remaining steps are complete)
# Migrate all vg00 content from /dev/sda2 to /dev/md1:
pvmove -i 10 /dev/sda2 /dev/md1
(After this, the old initrd cannot mount the root filesystem, because it is not prepared to activate the RAID devices first)
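(Before removing /dev/sda2 from the VG, you can verify that the move really finished and that no logical volume still has extents on it:)
pvs /dev/sda2
lvs -o +devices vg00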
# Remove the now-empty /dev/sda2 from vg00:
vgreduce vg00 /dev/sda2
# Copy the partition layout of /dev/sdb to /dev/sda:
sfdisk -d /dev/sdb | sfdisk /dev/sda
# Add the corresponding partitions of /dev/sda to the mirror sets:
mdadm /dev/md0 -a /dev/sda1 ; mdadm /dev/md1 -a /dev/sda2
(This will overwrite the old /boot filesystem: your system is definitely not bootable at this point. That will be fixed shortly...)
# The system will automatically resynchronize the mirrors. You can monitor this procedure with "cat /proc/mdstat". Wait for the resync to complete.
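(A convenient way to watch the resync progress without retyping the command:)
watch -n 5 cat /proc/mdstat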
# Re-create your initrd: when you do this, mkinitrd will detect that your root VG is on a RAID array and add the necessary RAID support modules to the new initrd.
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
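(Optionally, check that the new initrd really contains the RAID module; this assumes the RHEL 5 initrd is a gzip-compressed cpio archive, which it normally is.)
zcat /boot/initrd-$(uname -r).img | cpio -it | grep raid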
# Re-install your bootloader... twice.
The Master Boot Record is located outside /dev/md* devices, so it won't be mirrored automatically. Therefore you must install GRUB separately to both disks.
The GRUB identifier "(hd0)" means "the first disk detected by the BIOS", so the bootloader on /dev/sdb needs to be installed with the assumption that if/when /dev/sdb is actually used to boot the system (i.e. when the current /dev/sda is damaged or removed), it will be (hd0) as far as GRUB is concerned.
The bootloader on /dev/sda must be reinstalled because the physical positions of /boot/grub/* files on the disk have been altered by this procedure.
Commands:
grub
grub> device (hd0) /dev/sdb
grub> setup (hd0)
grub> device (hd0) /dev/sda
grub> setup (hd0)
grub> quit
# (Critical phase ends: your system should now be capable of booting from RAID without a rescue CD.)
# Create /etc/mdadm.conf; add a line with the keyword MAILADDR and the mail address that should receive notifications of disk failures:
echo "MAILADDR Nejad@your.work.email.example" >/etc/mdadm.conf
# Send a test message:
mdadm --monitor --scan -1 --test
(You should receive a TestMessage alert email for both md0 and md1. The actual RAID disk failure alarms will use a similar message format.)
# Configure the RAID monitoring service to start at boot, and start it now:
chkconfig mdmonitor on
service mdmonitor start
MK