06-20-2010 07:02 AM
SAN Migration
I have a two-node Service Guard cluster on pSeries blades running Red Hat ES 4u6, with Cisco switches and EMC storage. It is a single-path installation, and the Linux nodes boot from the SAN. I am trying to migrate this to a new EMC DMX. My first attempt resulted in kernel panics on both nodes.
I would appreciate any assistance in this regard.
Andrew Y
06-21-2010 05:20 AM
Re: SAN Migration
LVM makes this kind of migration easier: you can present the new disks from the EMC, add them to the existing volume groups, migrate the extents across with pvmove, and then remove the old disks. You will still need to do some GRUB work before rebooting to install the MBR on the new disks as well.
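For example, something like this (a sketch only; vg00 and the device names are illustrative assumptions, not your actual layout):

    # Bring the new EMC LUN into LVM
    pvcreate /dev/sdq2
    vgextend vg00 /dev/sdq2
    # Move all extents off the old disk onto the new one
    pvmove /dev/sdc2 /dev/sdq2
    # Drop the old disk from the volume group
    vgreduce vg00 /dev/sdc2
    pvremove /dev/sdc2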
If it's not under LVM, there are other tricks that can be done, but most require longer downtime in order to be consistent.
Let me know how your disks are mounted/laid out, and I'll see what I can assist with.
06-21-2010 05:49 AM
Re: SAN Migration
Any help would be much appreciated.
The boot disks are not LVM, nor is the Service Guard lock disk, but the cluster disks are all LVM.
The boot disk on the original server is /dev/sdc, but I suspect it will change on the new SAN.
So far my plan of action is to boot with the rescue disk, change the (hd0) entry in the /boot/grub/device.map file, and then make a new boot image with mkinitrd.
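Roughly this, I think (the kernel version and mount point are placeholders):

    # From the rescue environment, with the installed system mounted
    chroot /mnt/sysimage
    vi /boot/grub/device.map    # point (hd0) at the new boot disk
    # Rebuild the initrd for the installed kernel (version is a placeholder)
    mkinitrd -f /boot/initrd-2.6.9-67.EL.img 2.6.9-67.EL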
Is that all that needs to happen to get the server to boot?
Once the server has booted I will need to fix the cluster lock disk definition in the cluster configuration, get the cluster up and running, and then try to convince the customer, for the third time, to buy PowerPath licenses.
Is there anything I have forgotten or missed? Or should I just wait for my next attempt to reveal any more unpleasant surprises?
Regards
Andrew Y
06-22-2010 02:02 AM
Re: SAN Migration
This is likely happening because the WWNs of the boot disks get written into the initrd.
Take a look at the following Red Hat article, which explains how to get round the issue:
http://kbase.redhat.com/faq/docs/DOC-17660
(although big chunks of Red Hat's site appear to be offline at the time of writing).
Cheers,
Rob
06-22-2010 02:57 AM
Re: SAN Migration
Thanks. I did read that article earlier, but as we are not using device-mapper-multipath on these servers it doesn't (or shouldn't) apply.
Briefly, I set this up as a single-path installation because the client did not have PowerPath licenses and their EMC storage did not support device-mapper-multipath.
Thanks for the assist though.
Regards
Andrew Y
06-22-2010 04:23 AM
Re: SAN Migration
[QUOTE]Is that all that needs to happen to get the server to boot?[/QUOTE]
That isn't necessary. If you boot from a rescue disk, here's what you can do instead.
In this example, /dev/sdc is your existing pre-SAN-migration boot disk, and /dev/sdq will be your boot disk on the new SAN storage.
I would dd over the start of the new boot disk (/dev/sdq) to clear its partition table, then use sfdisk (sfdisk -d /dev/sdc | sfdisk /dev/sdq) to copy the partitioning over so the disks match.
Format the partitions on /dev/sdq with the same filesystems they have on /dev/sdc, so you have identical layouts.
Mount both disks on temporary locations, and rsync all data from /dev/sdc to /dev/sdq.
Once all this is done and the disks are "mirrored", you can use grub to install the MBR onto the new disk as below:
On the command line, run "grub". At the "grub>" prompt, type "device (hd0) /dev/sdq". Then, if /dev/sdq1 is the partition containing /boot, type "root (hd0,0)" (note that GRUB counts partitions from zero), substituting the number for whichever partition holds /boot. Finally, type "setup (hd0)", which installs the MBR correctly. You should see no errors.
You can now assign the new SAN disk as the primary boot disk on your server and boot from it. I would unpresent (NOT delete, yet) the /dev/sdc disk until you are sure everything is working as intended.
One caveat to watch for: sometimes filesystems in /etc/fstab are referenced by "LABEL=" clauses. If so, you will need to use "e2label" to set up those labels on the new SAN storage disk.
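Put together as a rough command sequence (a sketch only; the device names, partition numbers, and ext3 filesystem type are assumptions):

    # WARNING: destroys /dev/sdq. Clear the old partition table on the new disk.
    dd if=/dev/zero of=/dev/sdq bs=512 count=1
    # Copy the partition table across from the old boot disk.
    sfdisk -d /dev/sdc | sfdisk /dev/sdq
    # Recreate the filesystems to match the old layout (ext3 assumed); repeat per partition.
    mkfs.ext3 /dev/sdq1
    # Mount both copies and mirror the data; repeat per partition.
    mkdir -p /mnt/old /mnt/new
    mount /dev/sdc1 /mnt/old
    mount /dev/sdq1 /mnt/new
    rsync -aHx /mnt/old/ /mnt/new/
    # If /etc/fstab uses LABEL= entries, recreate them, e.g.:
    e2label /dev/sdq1 /boot
    # Install the MBR on the new disk (GRUB partitions count from zero).
    grub
    grub> device (hd0) /dev/sdq
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit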
Thanks,
Tom Callahan
06-22-2010 04:47 AM
Re: SAN Migration
What we will be doing is cloning the disks onto the new storage at the array level. The server will then be disconnected from the old SAN switch and connected to the new SAN infrastructure. At no stage will the old and new disks be presented to the server at the same time. However, since the disks are cloned at the array level, the contents should be identical; only the disk addressing will have changed.
How will this change your setup?
Regards
Andrew Young
06-23-2010 12:04 PM
Re: SAN Migration
Any chance you could include the kernel panic output?
Also, please copy your currently used initrd file from /boot/ to a temp directory, open it up, and pull out the file called "init"; attach it to this thread. That will help determine whether anything custom is occurring in your initrd.
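On RHEL 4 the initrd is normally a gzipped cpio archive, so something like this should get at it (the kernel version is a placeholder):

    mkdir /tmp/initrd && cd /tmp/initrd
    cp /boot/initrd-2.6.9-67.EL.img .
    zcat initrd-2.6.9-67.EL.img | cpio -idmv
    cat init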
06-23-2010 11:43 PM
Re: SAN Migration
I will get the info to you when I can. Currently the customer is having more serious problems: data integrity issues with their new array. All additional work has been halted until the vendor can fix it.
Regards
Andrew Young
06-28-2010 09:47 AM
Re: SAN Migration
Timely reply; I am about to do something similar, but instead of migrating to another SAN disk, I am going to another local (hardware-RAIDed) disk to implement an independently bootable "Alternate Boot Disk" for rapid fallback in case of a bad patch or a broken primary OS environment.
My primary OS is on /dev/sda (LVM), in a VG named vg00.
I plan to set up /dev/sdb as a new LVM VG named vgbak, partitioned exactly the same as vg00, with a /boot partition labeled "vgboot". Then I will regularly cpio or tar the OS filesystems over to the vgbak ones, with a routine that edits /etc/fstab in the vgbak environment to reflect the new entries.
So I can likely follow your recipe for making the other disk bootable, no? Do I need to tweak my initrd, though?
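For reference, the periodic sync routine I have in mind would look roughly like this (all device, VG, and LV names here are hypothetical):

    # Mount the backup root LV (names are hypothetical)
    mount /dev/vgbak/lvroot /mnt/vgbak
    # Copy the running root filesystem across, staying on one filesystem
    (cd / && tar --one-file-system -cpf - .) | (cd /mnt/vgbak && tar -xpf -)
    # Point the backup copy's fstab at the vgbak volumes
    sed -i 's/vg00/vgbak/g' /mnt/vgbak/etc/fstab
    umount /mnt/vgbak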
Thanks.