Andrew Young_2
Honored Contributor

SAN Migration

Hi.

I have a two-node Service Guard cluster on pSeries blades running Red Hat ES 4u6, connected through Cisco switches to EMC storage. It is a single-path installation, and the Linux nodes boot from the SAN. I am trying to migrate this to a new EMC DMX. My first attempt resulted in kernel panics on both nodes.

I would appreciate any assistance in this regard.

Andrew Y
Si hoc legere scis, nimis eruditionis habes
9 REPLIES
Thomas Callahan
Valued Contributor

Re: SAN Migration

How are your filesystems set up? Are your root disk and the other devices managed under LVM?

LVM makes this kind of migration easier, because you can present the new disks from the EMC, add them to the existing volume groups, migrate the data across, then remove the old disks. You will still need to do some GRUB work before rebooting, to install the MBR on the new disks as well.
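A rough sketch of that sequence, assuming the new LUN shows up as /dev/sdq1, the old one is /dev/sdc1, and the volume group is called vg01 (all three names are just placeholders):

    pvcreate /dev/sdq1            # initialise the new disk/partition for LVM
    vgextend vg01 /dev/sdq1       # add it to the existing volume group
    pvmove /dev/sdc1 /dev/sdq1    # migrate the extents off the old disk
    vgreduce vg01 /dev/sdc1       # drop the old disk from the volume group
    pvremove /dev/sdc1            # wipe the LVM label from the old disk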

If it's not under LVM, there are other tricks that can be done, but most require longer downtime in order to be consistent.

Let me know how your disks are mounted/laid out, and I'll see what I can assist with.
Andrew Young_2
Honored Contributor

Re: SAN Migration

Hi Thomas.

Any help would be much appreciated.

The boot disks are not LVM, nor is the Service Guard lock disk, but the cluster disks are all LVM.

The boot disk on the original server is /dev/sdc, but I suspect it will change on the new SAN.

So far my plan of action is to boot from the rescue disk, change the (hd0) entry in /boot/grub/device.map, and then make a new boot image with mkinitrd.
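In outline, something like this from the rescue environment (the mount point and kernel version are just examples):

    chroot /mnt/sysimage                                    # switch into the installed system
    vi /boot/grub/device.map                                # point (hd0) at the new boot disk
    mkinitrd -f /boot/initrd-2.6.9-67.EL.img 2.6.9-67.EL    # rebuild the initrd for the installed kernel
    exit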

Is that all that needs to happen to get the server to boot?

Once the server has booted I will need to fix the cluster lock definition in the cluster configuration, get the cluster up and running, and then try to convince the customer, for the third time, to buy PowerPath licenses.
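Roughly, from memory (the lock directive name and file paths should be checked against the Serviceguard for Linux templates for this version):

    cd /etc/cmcluster
    vi cluster.conf               # update CLUSTER_LOCK_LUN (if that is the directive here) for each node
    cmcheckconf -C cluster.conf   # validate the edited configuration
    cmapplyconf -C cluster.conf   # apply it and initialise the lock LUN
    cmruncl -v                    # start the cluster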

Is there anything I have forgotten or missed? Or should I just wait for my next attempt to find out about any more unpleasant surprises?

Regards

Andrew Y
Si hoc legere scis, nimis eruditionis habes
Rob Leadbeater
Honored Contributor

Re: SAN Migration

Hi Andrew,

This is likely to be happening because the WWNs of the boot disks get written into the initrd.

Take a look at the following Red Hat article, which explains how to get round the issue:

http://kbase.redhat.com/faq/docs/DOC-17660

(although big chunks of Red Hat's site appear to be offline at the time of writing).

Cheers,

Rob
Andrew Young_2
Honored Contributor

Re: SAN Migration

Hi Rob

Thanks. I did read that article earlier, but as we are not using device-mapper-multipath on these servers it doesn't (shouldn't) apply.

Briefly, I set this up as a single-path configuration because the client did not have PowerPath licenses and their EMC storage did not support device-mapper-multipath.

Thanks for the assist though.

Regards

Andrew Y
Si hoc legere scis, nimis eruditionis habes
Thomas Callahan
Valued Contributor

Re: SAN Migration

> So far my plan of action is to boot from the rescue disk, change the (hd0) entry in /boot/grub/device.map, and then make a new boot image with mkinitrd.
>
> Is that all that needs to happen to get the server to boot?

This isn't necessary. If you boot from a rescue disk, here's what you can do.

In this example, /dev/sdc is your existing pre-SAN-migration boot disk, and /dev/sdq will be your boot disk on the new SAN storage.

I would dd over the start of the new boot disk (/dev/sdq) to clear its partition table. Then use sfdisk ( sfdisk -d /dev/sdc | sfdisk /dev/sdq ) to copy the partitioning over so the disks are laid out identically.
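As a rough sketch (double-check the device names before running anything destructive):

    dd if=/dev/zero of=/dev/sdq bs=512 count=1    # wipe the MBR/partition table on the new disk
    sfdisk -d /dev/sdc | sfdisk /dev/sdq          # replicate the old partition layout onto it
    sfdisk -l /dev/sdq                            # confirm the copy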

Format the partitions on /dev/sdq the same way they are formatted on /dev/sdc, so you have identical layouts.

Mount both disks at temporary locations, and rsync all the data from /dev/sdc to /dev/sdq.
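For example, for an ext3 /boot and root (the filesystem types and partition numbers here are assumptions):

    mkfs.ext3 /dev/sdq1                  # match whatever /dev/sdc1 uses
    mkfs.ext3 /dev/sdq2
    mkdir -p /mnt/old /mnt/new
    mount /dev/sdc2 /mnt/old
    mount /dev/sdq2 /mnt/new
    rsync -avxH /mnt/old/ /mnt/new/      # copy root; repeat for /boot and any other partitions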

Once all this is done and the disks are "mirrored", you can use grub to install the MBR onto the new disk, as below:

On the command line, run "grub". At the "grub>" prompt, type "device (hd0) /dev/sdq". Then, if /dev/sdq1 is the partition containing /boot, type "root (hd0,0)" (GRUB counts partitions from 0), substituting whichever partition /boot is actually on. The next step is to type "setup (hd0)", which will install the MBR correctly. You should see no errors.
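The whole session looks roughly like this (assuming /boot is the first partition on the new disk):

    grub
    grub> device (hd0) /dev/sdq
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit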

You are now able to assign the new SAN storage disk as your primary bootable disk on your server, and boot from that disk. I would unpresent (NOT DELETE, yet) the /dev/sdc disk to make sure everything is working as intended.

One caveat to watch for: sometimes filesystems in /etc/fstab are referenced by "LABEL=" clauses. If so, you will need to use "e2label" to set up those labels on the new SAN storage disk.
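For example (label names and partition numbers are only illustrative; use whatever your fstab actually references):

    grep LABEL= /etc/fstab      # see which labels are expected
    e2label /dev/sdq1 /boot     # recreate the labels on the new partitions
    e2label /dev/sdq2 /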

Thanks,
Tom Callahan
Andrew Young_2
Honored Contributor

Re: SAN Migration

Hi Tom

What we will be doing is cloning the disks onto the new storage. The server will then be disconnected from the old SAN switch and connected to the new SAN infrastructure. At no stage will the old and new disks be presented to the server at the same time. However, as the disks are cloned at array level, the contents should be the same and only the disk addressing will have changed.
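For reference, once the server can only see the new array I plan to sanity-check what the cloned LUNs show up as before touching GRUB (nothing here is destructive):

    cat /proc/scsi/scsi                  # list the LUNs the HBA currently sees
    fdisk -l 2>/dev/null | grep '^Disk'  # device names and sizes as the kernel sees them
    grep -v '^#' /etc/fstab              # compare against what the OS expects to mount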

How will this change your setup?

Regards

Andrew Young
Si hoc legere scis, nimis eruditionis habes
Thomas Callahan
Valued Contributor

Re: SAN Migration

By kernel panic, do you mean that they were unable to boot because they could not "pivot" to the real root during the initrd startup process?

Any chance you could include the kernel panic output?

Also, please copy your currently used initrd file from /boot/ to a temporary directory, open it up, pull out the file called "init", and attach it to this thread. That will help determine whether anything custom is happening in your initrd.
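On RHEL 4 the initrd should be a gzipped cpio archive, so something along these lines should get you the file (the kernel version is just an example):

    mkdir /tmp/initrd && cd /tmp/initrd
    cp /boot/initrd-2.6.9-67.ELsmp.img .
    zcat initrd-2.6.9-67.ELsmp.img | cpio -idmv    # unpack the archive
    cat init                                       # the script run at boot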
Andrew Young_2
Honored Contributor

Re: SAN Migration

Hi Tom

I will get the info to you when I can. Currently the customer is having more serious problems: data integrity issues with their new array. All additional work has been halted until the vendor can fix them.

Regards

Andrew Young
Si hoc legere scis, nimis eruditionis habes
Alzhy
Honored Contributor

Re: SAN Migration

Tom,

Timely reply. I am about to do something similar, but instead of migrating to another SAN disk, I will be copying to another local (hardware-RAIDed) disk to implement an independently bootable "alternate boot disk" for rapid fallback in case of a bad patch or a broken primary OS environment.

My primary OS is on /dev/sda (LVM), in a volume group named vg00.

I plan to set up /dev/sdb as a new LVM VG named vgbak, partitioned exactly the same as vg00, with a /boot partition labelled "vgboot". I will then regularly cpio or tar the OS filesystems over to the vgbak copies, and have a routine edit /etc/fstab in the vgbak environment to reflect the new entries.

So I can likely follow your recipe for making the other disk bootable, no? Do I need to tweak my initrd, though?
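For what it's worth, this is the kind of copy routine I have in mind (mount point and LV names are placeholders):

    mkdir -p /mnt/bak
    mount /dev/vgbak/lvroot /mnt/bak        # the backup root LV
    ( cd / && tar --one-file-system -cf - . ) | ( cd /mnt/bak && tar -xpf - )
    sed -i 's|/dev/vg00/|/dev/vgbak/|g' /mnt/bak/etc/fstab    # point the copy at its own VG
    umount /mnt/bak
    # /boot would need a similar copy, plus the grub "setup" step from your recipe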


Thanks.

Hakuna Matata.