
RHEL4 Linux boot disk mirroring.

 
SOLVED
Gulam Mohiuddin
Regular Advisor

RHEL4 Linux boot disk mirroring.

On RHEL4 I would like to (software) mirror the root disk/partitions to a second, identical SCSI hard drive installed in the server. (We don't have a hardware RAID card installed in this server.)

On HP-UX I used to do LVM mirroring using Mirror/UX, but I am not sure whether the same can be done on Red Hat Linux 4.

Thanks,

Gulam.
Everyday Learning.
8 REPLIES
Ivan Ferreira
Honored Contributor

Re: RHEL4 Linux boot disk mirroring.

Here is how I did it once:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=904635

The missing step is to set the partition type to software RAID for the vfat partition.
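For reference, that type change can be done with fdisk's t command; in this sketch the disk and the partition number are only placeholders for wherever your vfat partition lives:

# fdisk /dev/sda
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Command (m for help): w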
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Ivan Ferreira
Honored Contributor

Re: RHEL4 Linux boot disk mirroring.

You will probably need to substitute mdadm for raidtools as well, but that's the idea.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Huc_1
Honored Contributor

Re: RHEL4 Linux boot disk mirroring.

Hi,

I found this with a Google search; maybe it will help you along!

http://www.tek-tips.com/viewthread.cfm?qid=1318503&page=1

But beware: I have never tried any of this, and I would not try it unless I had a good, valid, tested backup of my original disk!

enjoy, life.

Jean-Pierre Huc
Smile I will feel the difference
Andrea Rossi
Frequent Advisor

Re: RHEL4 Linux boot disk mirroring.

Hi

In my personal experience, software RAID on the x86 platform is not as reliable as on HP-UX, and Linux is no exception. Besides, hardware RAID has become quite affordable.
Ivan Ferreira
Honored Contributor

Re: RHEL4 Linux boot disk mirroring.

Hi Andrea Rossi, sorry to hear about your bad experience, but I have had several servers with software RAID working without problems for years.

I can say that Linux software RAID is reliable.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Ralph Grothe
Honored Contributor
Solution

Re: RHEL4 Linux boot disk mirroring.

Hi,

We set up all our Linux servers with Linux MD software RAID, and we have never experienced any problems with it; we are absolutely satisfied.
In fact, from my point of view, Linux MD RAID is even more reliable than some odd hardware RAID controller whose state is often difficult to monitor if you don't have a suitable MIB (some vendors don't seem to care).
I say this because with Linux MD RAID you can easily enable mdadm's --monitor mode,
and integrating it to send passive checks to a Nagios server is absolutely painless.
I have heard that newer versions of Linux LVM have started to incorporate LVM mirroring, as known from HP-UX LVM,
but so far this is still considered experimental, whereas LVM volumes on top of MD RAID devices are well-proven and tested technology.
You can set up MD RAID mirroring during the installation (which is probably easier if you are new to the mdadm command),
but you can also set up MD devices at any time on a running Linux that was installed onto plain partitions.
Here's how I would convert such a partitioned Linux.
First, your kernel must support MD_RAID.
The stock RHEL kernels have supported it for quite a while, but one never knows.
You can check your kernel's config file:

# grep MD_RAID /boot/config-$(uname -r)
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID5=m
CONFIG_MD_RAID6=m

If you're unlucky, you have to build a new kernel from the sources.
You can shorten that a lot by first running "make oldconfig"
and only afterwards searching the deeply nested menuconfig for MD support
(it really can be hard to find, because the Linux kernel is bloated with configurable options and I can no longer remember the menu path by heart).
You can mark the different RAID levels in menuconfig either as "y", which links them in statically, or as "m" (as in my output), which requires you to build an initial RAM disk later (easy thanks to mkinitrd), because RAID1 must be available at an early boot stage.
Then run "make bzImage", "make modules" and "make modules_install", move the kernel into /boot, adapt your boot loader, and boot the new kernel.
Once booted, check that you can load the raid1 module:

# modprobe raid1

# lsmod|grep raid
raid1 19521 5

This is done automatically later by the init script from the initial RAM disk.

Ah, I forgot: your kernel also requires LVM (device-mapper) support.
So check it like this:

# grep BLK_DEV_DM /boot/config-$(uname -r)
CONFIG_BLK_DEV_DM=m

Let's assume you have two identical SCSI disks: /dev/sda, where your running Linux is installed, and /dev/sdb, which is free.

You need to create a partition scheme on /dev/sdb that contains at least three partitions of type Linux raid autodetect (0xfd).
You can use fdisk to create it manually.
It could look similar to mine:

# fdisk -l /dev/sdb

Disk /dev/sdb: 73.4 GB, 73407820800 bytes
255 heads, 63 sectors/track, 8924 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *            1          17      136521   fd  Linux raid autodetect
/dev/sdb2               18        1263    10008495   fd  Linux raid autodetect
/dev/sdb3             1264        1762     4008217+  fd  Linux raid autodetect
/dev/sdb4             1763        8924    57528765    5  Extended
/dev/sdb5             1763        6743    40009851   fd  Linux raid autodetect
/dev/sdb6             6744        6868     1004031   82  Linux swap
/dev/sdb7             6869        8924    16514788+  fd  Linux raid autodetect


/dev/sdb1 will carry your /boot filesystem
and thus only needs to be small, some 100 MB.
I would strongly advise you to later mount /boot from /dev/md0 rather than from an LVM volume (which, although it should theoretically work, would make your life in recovery mode far too complicated).
/dev/sdb2 becomes the PV for our root VG,
so about 5-10 GB is recommended.
Finally we need a swap device,
which also should not reside in an LVM volume, for performance reasons (this is different from HP-UX LVM).
Apart from that you may partition more RAID slices for devices beyond /dev/md2,
but they aren't required.

Now you can start building the raid devices.
At this stage they must be created as degraded arrays because the later mirror slices on /dev/sda are still used by our running Linux.
e.g.

# mdadm -C -n2 -l1 /dev/md0 /dev/sdb1 missing

-C is create mode, -n2 sets the number of components (slices), -l1 sets the RAID level to 1 (mirroring), and "missing" is given because our /dev/sda1 is still in use.

You repeat this for all the devices you require.

e.g.

# mdadm -C -n2 -l1 /dev/md1 /dev/sdb2 missing

# mdadm -C -n2 -l1 /dev/md2 /dev/sdb3 missing


Now you can already create an /etc/mdadm.conf
for the degraded arrays.

# cat <<EOF >/etc/mdadm.conf
DEVICE /dev/sdb[123]
#
MAILADDR root
#
EOF

# mdadm -QDs >>/etc/mdadm.conf
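Before going on, it is worth a quick look at /proc/mdstat; each new array should show up with one active and one missing member (a degraded RAID1 appears as [_U] or [U_]):

# cat /proc/mdstat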

Now create an ext3 filesystem to later host /boot.

# mkfs.ext3 /dev/md0

and disable the periodic mount-count and interval checks, since we have a journalling FS:

# tune2fs -c 0 -i 0 /dev/md0

Mount it and copy the contents of /boot
into it:

# mkdir -p /mnt/tmp{1,2,3,4,5}

# mount /dev/md0 /mnt/tmp1

# cd /boot && tar cf - .|(cd /mnt/tmp1 && tar xf -)

# df -i /boot /mnt/tmp1

In case your current initial RAM disk has neither LVM nor MD support, better create a new one, like this:

# mkinitrd /mnt/tmp1/initrd-raid-$(uname -r).img $(uname -r)

Depending on your boot loader you then need to edit either /etc/lilo.conf or /mnt/tmp1/grub/grub.conf

Until not so long ago LILO was the preferred loader for MD RAID boots.
But now GRUB can cope with it too, and I prefer the latter because of its wider feature set.

You need to make the "kernel" directive point to your RAID-enabled kernel (probably the same one as before).
And, importantly, root= must point to the LV that will carry /, e.g. root=/dev/vgroot/lv_root.
The other options can remain the same.
Finally, "initrd" must point to the file we just created with mkinitrd.
It is best to copy a whole existing stanza in the menu and edit only what changes.
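Such a stanza could look roughly like this (the kernel version and initrd name are placeholders; (hd0,0) assumes the BIOS boots from the disk carrying this copy of /boot, and the paths have no /boot prefix because /boot is a separate filesystem):

title RHEL4 on MD RAID (example)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-xx.EL ro root=/dev/vgroot/lv_root
        initrd /initrd-raid-2.6.9-xx.EL.img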

If all's done

# umount /mnt/tmp1

Then you can create LVM volumes on top of /dev/md1 to carry your OS.

# pvcreate /dev/md1

# vgcreate vgroot /dev/md1

# lvcreate -L 1024m -n lv_root vgroot
# lvcreate -L 512m -n lv_var vgroot
# lvcreate -L 512m -n lv_tmp vgroot
# lvcreate -L 4096m -n lv_usr vgroot
# lvcreate -L 256m -n lv_home vgroot
# lvcreate -L 356m -n lv_opt vgroot

etc.

The volumes and their sizes depend heavily on your needs.
If you need more space later, no problem.
Current RHEL releases have the ext2online command, which lets you extend even a mounted filesystem.
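For example, growing a mounted /var later might look like this (the size is arbitrary):

# lvextend -L +1024m /dev/vgroot/lv_var
# ext2online /dev/vgroot/lv_var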

Put ext3 filesystems on them:

# vgdisplay -v vgroot 2>/dev/null

# lvs --noheadings --separator=,|cut -d, -f1|xargs -n1 -i mkfs.ext3 /dev/vgroot/{}

# lvs --noheadings --separator=,|cut -d, -f1|xargs -n1 -i tune2fs -i 0 -c 0 /dev/vgroot/{}

After that you need to copy all data from your current mounts into the new LVs.
You could do this either in single user mode
or produce snapshots and copy from them
if your current Linux already uses LVM.
I only demonstrate this for the / filesystem.
You must repeat for every filesystem.

I would use dump/restore for copying

e.g.

# mount /dev/vgroot/lv_root /mnt/tmp1

# dump -lf - /|(cd /mnt/tmp1 && restore -rf -)


Then you need to edit the fstab in your new root filesystem to reflect the new devices and mount points.

e.g.

/dev/md0 /boot ext3 defaults 1 2
/dev/vgroot/lv_root / ext3 defaults 1 1
/dev/md2 swap swap defaults 0 0
...
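Note that /dev/md2 still has to be initialized as swap space before that entry can be used (it will then be activated automatically at boot):

# mkswap /dev/md2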


Finally, you need to write grub into the MBR of /dev/sdb

# grub

grub> root (hd1,0)
grub> setup (hd1)
grub> quit

You can then try to boot your new LVM on top of RAID.

If all worked well, as hopefully it did,
you can destroy your old Linux on /dev/sda
and complete the mirror.

e.g.

# sfdisk -d /dev/sdb | sfdisk /dev/sda

# mdadm /dev/md0 -a /dev/sda1

You can check syncing with

# mdadm -QD /dev/md0

If everything is clean, continue with /dev/md1 and /dev/md2.
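Following the example partition layout from above, that would be:

# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3

The kernel resyncs the arrays one after the other; /proc/mdstat shows the progress.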

Finally you must update your mdadm.conf.

Edit it, throw away only the lines starting with ARRAY, and execute

# mdadm -QDs >>/etc/mdadm.conf

Add your new partitions to the DEVICE line so that it reads

DEVICE /dev/sd[ab][123]

Finally update the MBR on /dev/sda.

# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Now, if I haven't forgotten or overlooked something decisive, you should have a Linux installation on SW MD RAID.

Finally you can set up MD monitoring by writing a short script that does some sort of notification (e.g. submitting a passive check to a Nagios server), putting its path on a PROGRAM line in mdadm.conf,
and running

# service mdmonitor start
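Just as an illustration, not a finished solution (the script path and its contents are hypothetical; mdadm calls the program with the event name, the md device and, where applicable, the affected component device):

PROGRAM /usr/local/sbin/md-notify

# cat <<'EOF' >/usr/local/sbin/md-notify
#!/bin/sh
# Hypothetical mdadm event hook: mail the event to root.
# Replace the mail call with a Nagios passive-check submission if you use Nagios.
EVENT=$1; MDDEV=$2; COMPONENT=$3
echo "$(date): $EVENT on $MDDEV $COMPONENT" | mail -s "MD RAID event on $(hostname): $EVENT $MDDEV" root
EOF
# chmod +x /usr/local/sbin/md-notify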

Madness, thy name is system administration
Ralph Grothe
Honored Contributor

Re: RHEL4 Linux boot disk mirroring.

Hi Gulam,

I noticed a crucial typo I made yesterday while writing.
But I'm sure you have already noticed.
The dump command should take a level 0 dump (i.e. full backup).
Therefore, it should read "dump -0f -" before the pipe.
But you could of course also use any other backup mechanism you are familiar with, like piping output from find /fs -xdev into a cpio in passthrough mode.
e.g.

# lvcreate -L 256m -n lvsnap -s /dev/vgrootX/lv_root

# mount -r /dev/vgrootX/lvsnap /mnt/tmp1

# df -ik / /mnt/tmp1
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgrootX-lv_root
131072 6915 124157 6% /
/dev/mapper/vgrootX-lvsnap
131072 6915 124157 6% /mnt/tmp1

# lvdisplay /dev/vgrootX/lv_root | awk '/Current LE/{print$NF}'
256
# lvcreate -l 256 -n lv_root_copy vgrootX
Logical volume "lv_root_copy" created

# mkfs.ext3 -q /dev/vgrootX/lv_root_copy
max_blocks 268435456, rsv_groups = 8192, rsv_gdb = 63
inode.i_blocks = 2528, i_size = 4243456

# tune2fs -i 0 -c 0 /dev/vgrootX/lv_root_copy
tune2fs 1.35 (28-Feb-2004)
Setting maximal mount count to -1
Setting interval between check 0 seconds

# mount /dev/vgrootX/lv_root_copy /mnt/tmp2

# cd /mnt/tmp1 && find . -xdev |cpio -pmuda /mnt/tmp2
234814 blocks

# df -i /mnt/tmp[12]
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgrootX-lvsnap
131072 6915 124157 6% /mnt/tmp1
/dev/mapper/vgrootX-lv_root_copy
131072 6915 124157 6% /mnt/tmp2

# df -k /mnt/tmp[12]
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vgrootX-lvsnap
1032088 170388 809272 18% /mnt/tmp1
/dev/mapper/vgrootX-lv_root_copy
1032088 170408 809252 18% /mnt/tmp2


Of course, you could also use pax,
which is more flexible in on-the-fly path manipulation etc.

I just wanted to demonstrate what a great feature snapshots are: they can be taken on live systems (e.g. a database that is being updated)
to make very fast copies for test environments etc.
After use you simply unmount the snapshot and destroy it.

Btw, an lvdisplay on the snapshot shows you how much of it is currently used up to store the differences from the origin volume since the snapshot was taken.
If you have snapshotted a very active filesystem and run the risk of filling the snapshot up (at which point it is rendered useless), you can always lvextend the snapshot to accommodate more space.
As you can see, my snapshot was far too large at 256 MB.
During the time I took the dump I used up less than 5% of it.

# lvdisplay /dev/vgrootX/lvsnap |grep -i allocated
Allocated to snapshot 3.47%
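If you ever do need to grow a snapshot that is filling up, it is a one-liner (the size here is just an example):

# lvextend -L +128m /dev/vgrootX/lvsnap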

# pwd
/mnt/tmp1
# cd
# umount /mnt/tmp1
# lvremove -f /dev/vgrootX/lvsnap
Logical volume "lvsnap" successfully removed

Madness, thy name is system administration