TruCluster

Migrate single node cluster from EMA to EVA

Adam Garsha
Valued Contributor

Migrate single node cluster from EMA to EVA

We are migrating all of our legacy data and systems from an EMA (HSG80s) to our EVA (HSV110s), which is already in production.

I seek advice on steps for migrating a single-node trucluster system's OS/boot/swap disks from the EMA to the EVA.

Assuming this is done:

1.) adjust fabric zones as necessary
2.) present necessary storage/LUN from EVA (and document new device file names)

So far my plan looks something like this:
3.) the system is booted from CD (5.1B) and the data from the root, boot, usr, and var partitions is vdumped to equivalent new partitions under the following mount points (see the sketch after this list):

/mnt/new_root1
/mnt/new_cluster_root
/mnt/new_usr
/mnt/new_var

4.) /mnt/new_root1/cluster/members/member1/boot_partition/etc/sysconfigtab is edited to reflect the new swap device
5.) /mnt/new_cluster_root/etc/fdmns links for root, boot, usr, and var are adjusted to point to the new disk partitions
6.) console vars are adjusted to boot from new cluster boot disk
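
For reference, the copy in step 3 would use the same vdump | vrestore pipeline that shows up in the full procedure later in this thread. A minimal sketch, assuming the old and new AdvFS domains have both been made mountable from the CD environment (the /mnt/old_root mount point is illustrative):

vdump -0 -f - /mnt/old_root | (cd /mnt/new_root1; vrestore -x -f -)
# repeat for the cluster_root, usr, and var filesets into their new mount points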

Am I missing anything? Do I need to do any wwidmgr stuff?
Do you know of the availability of any whitepapers that document steps of such a migration?

Thanks much.
4 REPLIES
Ivan Ferreira
Honored Contributor

Re: Migrate single node cluster from EMA to EVA

You need to restore the CNX partition data with clu_bdmgr. There are instructions on this forum to do that.
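
A minimal sketch, based on the command used in the full procedure later in this thread (dskNN stands for the new member boot disk):

/usr/sbin/clu_bdmgr -h dskNN   # copy the CNX h-partition data onto the new boot disk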
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Michael Schulte zur Sur
Honored Contributor

Re: Migrate single node cluster from EMA to EVA

Hi,

Information about cluster restore (your migration is, to a certain degree, a subset of it) can be found in the cluster admin guide:
http://h30097.www3.hp.com/docs/pub_page/cluster51B_list.html

greetings,

Michael
Uwe Zessin
Honored Contributor

Re: Migrate single node cluster from EMA to EVA

You need WWIDMGR to reconfigure the boot path from the EMA to the EVA. That can be a bit 'tricky', because the AlphaServer console can store only 4 target ports.

If you have not done it already, I suggest you write down your current boot configuration so that you can revert in case the move to the EVA fails.
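
A minimal sketch of what to record at the SRM console before making changes (variable names can differ slightly between console firmware versions):

>>> wwidmgr -show wwid    # devices currently known to wwidmgr
>>> show boot*            # bootdef_dev, boot_osflags, ...
>>> show wwid*            # the stored unit WWIDs
>>> show n*               # the stored FC target port names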

You can clean up the old configuration with
>>> wwidmgr -clear all
Adam Garsha
Valued Contributor

Re: Migrate single node cluster from EMA to EVA

I worked with backline support to iron out the details. Here it is:

Migrate OS disk for single node cluster from EMA to EVA storage.

###########################################################################
0. Present the necessary storage from the EVA, adjust zones as appropriate (so the box can see the EVA), and back everything up.

hwmgr -scan scsi
dsfmgr -v
# if the output isn't clean, run "dsfmgr -vVF" a couple of times.

###########################################################################
1. Use addvol/rmvol to update cluster_usr, cluster_var, cluster_root domains with disk partitions from the EVA.
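
A minimal sketch of step 1, assuming dskXX is the new EVA disk and dskOLD is the EMA disk it replaces (partition letters are illustrative):

addvol /dev/disk/dskXXb cluster_root    # add the EVA partition to the domain
rmvol  /dev/disk/dskOLDb cluster_root   # migrate data off and drop the EMA partition
# repeat the addvol/rmvol pair for cluster_usr and cluster_var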

###########################################################################
2. Delete, then re-add the quorum disk (where dskXX is an EVA-based disk). Currently we have a single node and the
quorum disk isn't counting a vote, so we can do this with the system up and online without any impact.

clu_quorum -f -d remove
clu_quorum -f -d add dskXX 0
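
A quick sanity check (sketch): running clu_quorum with no options should display the current quorum configuration, so you can confirm the new disk and the expected votes before continuing.

clu_quorum    # verify the quorum disk and vote settings look right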

###########################################################################
3. Build copy of boot disk onto EVA disk

clu_bdmgr -c dskYY 2 # 2 is a non-existent cluster member

# Mount member2's root domain (now on dskYY) so you can edit member2's /etc/sysconfigtab and restore the boot partitions:

mount root2_domain#root /mnt

# Restore the boot partition:

vdump -0 -f - /cluster/members/member1/boot_partition | (cd /mnt; vrestore -x -f -)

# Adjust disk entries and swap disk entry in sysconfigtab

vi /mnt/etc/sysconfigtab # update sysconfigtab with information about boot disk
...
vm:
swapdevice=/dev/disk/dskYYb
clubase:
cluster_seqdisk_major=19 # Get correct number from file /dev/disk/dskYYh
cluster_seqdisk_minor=175 # Get correct number from file /dev/disk/dskYYh
...
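
The major/minor numbers referenced above can be read directly from the device special file, for example (19 and 175 are just the values used in this sketch):

ls -l /dev/disk/dskYYh   # the two numbers printed where the size normally appears are major,minor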

# Restore the h partition CNX information:

/usr/sbin/clu_bdmgr -h dskYY

The h partition information is copied from the cluster member where you run the clu_bdmgr command to the h partition on dskYY.

# Unmount the new boot domain:

umount /mnt

# Edit disklabel and change "label: clu_member2" to "label: clu_member1":

disklabel -r -e dskYY

# Adjust /etc/fdmns links:

cd /etc/fdmns
mv root1_domain root1_domain_original
mv root2_domain root1_domain

# Set auto_action to HALT so the system stops at the SRM console after shutdown; bootdef_dev is set at the console in step 4 (it could also be set here with "consvar -s bootdef_dev disk_name").

consvar -s auto_action HALT
consvar -l
consvar -a

# Shutdown

shutdown -h now

###########################################################################
4. Adjust SRM's view of the world.

wwidmgr -show wwid # note that the old boot disk OS_Identifier (UDID) was 20.
wwidmgr -clear all
init
wwidmgr -quickset -udid YY_UNIQUE_ID
wwidmgr -show wwid
init
show device
set bootdef_dev dga20,dgb20 (or whatever)

###########################################################################
5. Hold onto your hat

boot

###########################################################################
6. Reset auto_action to BOOT

consvar -s auto_action BOOT
consvar -a
consvar -l