 
Senthil_N
Advisor

Host based storage migration on Linux SG cluster nodes

Hi All,

 

I have created a document for host-based storage migration on Linux SG cluster nodes. There are a couple of clusters running RHEL 4 / RHEL 5 with Serviceguard A.11.18.06 / A.11.18.01.

 

I have created migration steps for the lock LUN and for the SG cluster packages' file systems.

 

 

Please review the steps below and let me know if anything needs to be added or changed.

 

 

My Questions:

 

 

1) Please let me know if we can do an online migration of the lock LUN on HP SG version A.11.18.06 / A.11.18.01.

2) I understand that the migration of the SG cluster packages' file systems can be done online.

 

 

Steps created for the lock LUN migration and the SG cluster package file system migration:

 

 

1. Lock LUN Migration (Offline):

 

1.1. Take a server configuration backup:

#/opt/linuxdepots/tcs/scripts/linux_cfg_bkup.sh

 

      #cmviewcl > /root/backup/cmviewcl

      #cmviewcl -v > /root/backup/cmviewcl-v

      #cmviewconf > /root/backup/cmviewconf

 

1.2. Storage team needs to assign a 1 GB LUN (for the lock LUN) to all nodes of the cluster.

 

1.3. Unix team needs to scan the 1 GB LUN (lock LUN) on all three servers:

 

#/opt/hp/hp_fibreutils/hp_rescan -a

#powermt check

#powermt config

#powermt save

 

Note: Now you can identify the new device. For example, assume the new device name is /dev/emcpowerXY.
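
If it is not obvious which emcpower device is new, one hedged way to spot it is the same before/after comparison used later in step 2.3 (the file names here are only illustrative):

#powermt display dev=all > /tmp/lock_lun.before.out      (capture before the rescan in step 1.3)
#powermt display dev=all > /tmp/lock_lun.after.out       (capture after the rescan)
#diff /tmp/lock_lun.before.out /tmp/lock_lun.after.out | grep -i "Logical device"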

 

1.4. On Node 1, create one partition (partition id 83):

The partition name will be /dev/emcpowerXY1
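
A hedged sketch of the interactive fdisk session (keystrokes shown for illustration; /dev/emcpowerXY is the placeholder name from step 1.3):

#fdisk /dev/emcpowerXY
      n        (new partition)
      p        (primary)
      1        (partition number 1)
      <Enter>  (accept the default first cylinder)
      <Enter>  (accept the default last cylinder, i.e. the whole 1 GB LUN)
      t, 83    (set the partition id to 83 - Linux)
      w        (write the partition table and exit)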

 

1.5. On all three nodes, run the command below and confirm that the newly created partition is visible:

#partprobe
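
To confirm (a hedged check; XY is the placeholder device name):

#fdisk -l /dev/emcpowerXY      (the output should list /dev/emcpowerXY1 with Id 83)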

 

1.6. Halt all the packages, all the nodes, and the entire cluster:

 

1.6.1. First, halt all the packages after getting confirmation from the application team:

#cmhaltpkg -v <package_name>

 

1.6.2. Second, halt all the nodes and stop the entire cluster:

#cmhaltcl -v

 

1.6.3. Verify that the cluster is stopped:

#cmviewcl

 

1.7. On Node 1, run the following steps to edit the cluster configuration and add the new device as the lock LUN:

#mkdir /root/storage_migration

#cd /root/storage_migration

#cmviewconf > cmviewconf_original

#cmgetconf -c <clustername> <clustername>.ascii

      Note: Now edit the file <clustername>.ascii and update the new device for the lock LUN on all nodes (the lock LUN device name will be the same on all nodes).

      Example:

            Old:

            cluster lock lun name:            /dev/emcpowerl1

            New:

            cluster lock lun name:            /dev/emcpowerXY1
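
            If you prefer to script the edit, a hedged one-liner using the placeholder device names above (keep a copy of the original file first):

            #cp <clustername>.ascii <clustername>.ascii.orig
            #sed -i 's|/dev/emcpowerl1|/dev/emcpowerXY1|g' <clustername>.ascii
            #grep -i lun <clustername>.ascii      (confirm the new device appears for every node)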

 

1.8. Verify that the configuration file has been edited properly:

#cmcheckconf -v -C <clustername>.ascii      (note: -C is uppercase)

 

1.9. Apply / distribute the new configuration changes to all nodes of the cluster:

#cmapplyconf -v -C <clustername>.ascii      (-C is uppercase)

 

1.10. Now start the cluster and have all the nodes join it:

#cmruncl -v

Note: The cluster will now start and all the nodes will join it.

            If a particular node has not joined, run the command below to join it to the cluster:

#cmrunnode -v <hostname>

            Verify that the cluster has started and all the nodes have joined:

            #cmviewcl

 

1.11. Now start all the packages that were running earlier on their respective nodes:

#cmrunpkg <package name> -n <node name>

 

Note: A package cannot be started if it is not enabled on the particular node. Run the command below to enable the package:

#cmmodpkg -n <hostname> -e <package name>

 

1.12. Verify that all the packages are started and running on the same nodes as before the migration:

#cmviewcl      OR      #cmviewcl -v

 

2. Migrate Serviceguard cluster packages' file systems / volumes:

 

2.1. Ensure that a full backup of the file systems is in place before you start the SG cluster file system migration. In addition, it may help to capture the current LVM and mount information, as sketched below.
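
Example (a hedged sketch; the output paths are only illustrative):

#mkdir -p /root/storage_migration
#vgcfgbackup      (LVM metadata backup to /etc/lvm/backup)
#vgdisplay -v > /root/storage_migration/vgdisplay_before.out
#df -hP > /root/storage_migration/df_before.out
#cp -p /etc/fstab /root/storage_migration/fstab.before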

 

2.2. Storage team needs to assign the required LUNs for the SG cluster file system migration.

 

2.3. Scan for new disks on Node 1:

# powermt display dev=all > /tmp/power.before.out

 

#/opt/hp/hp_fibreutils/hp_rescan -a

 

# powermt config

 

#powermt save

 

# powermt check

 

# powermt display dev=all > /tmp/power.after.out

 

# diff /tmp/power.before.out /tmp/power.after.out > /tmp/diff.out

 

# grep -i "Logical Device" /tmp/diff.out      (this command will show the new LUN IDs)

 

New LUN IDs (example):

 

> Logical device ID=14B6 -- emcpowerd  -- 112

> Logical device ID=14B7 -- emcpowerf  -- 112

> Logical device ID=14B8 -- emcpowerc  -- 112

> Logical device ID=14B9 -- emcpowerg  -- 112

 

 

 

2.4. Scan for new disks on Nodes 2 and 3:

Repeat the same command sequence as in step 2.3 on each remaining node (powermt display before/after, hp_rescan -a, powermt config / save / check, diff, grep) and confirm that the same new LUN IDs (14B6 - 14B9 in this example) are visible on every node.

 

 

2.5. Create partitions on all the new disks on Node 1:

# fdisk /dev/emcpowerXX

 

2.6. Scan for the partition changes on Nodes 2 and 3:

 

# partprobe

 

#fdisk -l /dev/emcpowerXX

 

 

2.7. Run pvcreate on all the new partitions on Node 1:

 

# pvcreate /dev/emcpowerXX1
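
If several LUNs were presented, the same pvcreate can be looped over them; a hedged sketch using the example devices from step 2.3 (replace the names with the actual new partitions):

#for d in emcpowerc1 emcpowerd1 emcpowerf1 emcpowerg1; do pvcreate /dev/$d; done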

 

 

 

2.8. Extend the VG with the new disks on Node 1:

 

#vgextend VolGroup01 /dev/emcpowerXX1 /dev/emcpowerXY1

 Volume group "VolGroup01" successfully extended

 

 

2.9. Confirm that the new LUNs are part of the VG on Node 1:

 

# vgdisplay -v <VG name> | more

2.10. Now mirror the existing LVs onto the new disks on Node 1:

# lvconvert -m1 --corelog /dev/<VG name>/<LV name> /dev/emcpowerXX1 /dev/emcpowerXY1
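
If the VG contains more than one LV, each LV needs the same lvconvert; a hedged loop over all LVs in the VG (the VG name and the new devices are placeholders):

#for lv in $(lvs --noheadings -o lv_name <VG name>); do lvconvert -m1 --corelog /dev/<VG name>/$lv /dev/emcpowerXX1 /dev/emcpowerXY1; done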

 

2.11. Verify that the mirroring has reached 100% on Node 1:

#lvs --noheadings /dev/<VG name>/<LV name>
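
On LVM2 versions that support the copy_percent field, the sync progress can also be listed for the whole VG (a hedged alternative):

#lvs -a -o lv_name,copy_percent <VG name>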

 

2.12. Split the mirror from the old disks once the mirror is complete on Node 1:

 

# lvconvert -m0 /dev/<VG name>/<LV name> /dev/emcpowerYX1 /dev/emcpowerYY1

 (emcpowerYX1 and emcpowerYY1 are the old disks)

 

2.13. Verify that the new LUNs are now part of the LV on Node 1:

# lvdisplay -m /dev/<VG name>/<LV name>

 

2.14. Run partprobe on all nodes to make the above changes effective on the other nodes:

#partprobe

 

2.15. Verify that the cluster packages' file systems work on the other nodes:

Note: Halt each package on its running node and start it on the other nodes one by one. Verify that the package works and that its file systems are mounted when it starts on each node. Once the package has been verified on all nodes, start it back on its original node.

# cmhaltpkg -v <package_name>

       #cmrunpkg -v -n <nodename> <packagename>

       #vgdisplay

        #lvdisplay      (check that lvdisplay is showing the new devices)
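
A quick hedged check after each failover (the mount point is whatever the package defines):

#df -hP | grep <mount point>
#lvs -o +devices <VG name>      (the Devices column should now list only the new emcpower partitions)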

 

 

2.16. Remove the old LUNs from the VG on Node 1:

Note: This should be done only after you get confirmation from the application team that the mirrored LVs are running fine.

# vgreduce <VG name> /dev/emcpowerYX1
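
To confirm before moving on (a hedged check), the old PVs should now show an empty VG column:

#pvs | grep emcpower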

 

2.17. Remove the LVM header from the old disks:

# pvremove /dev/emcpowerYX1

 

2.18. Remove the partitions from the old disks with the fdisk command, as sketched below.
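
A hedged sketch of the interactive fdisk session on each old device (YX is a placeholder for an old disk):

#fdisk /dev/emcpowerYX
      d        (delete partition 1)
      w        (write the partition table and exit)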

 

2.19. Ask the SAN team to unmap the old LUNs after they have been removed from LVM and fdisk.

 

2.20. Once the SAN team confirms that the disks have been removed, remove the old paths on all nodes; a sketch of the typical commands follows.
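
A hedged sketch of the typical PowerPath and SCSI cleanup on each node (the device names are placeholders; on RHEL 4 the native paths may need to be removed through /proc/scsi/scsi instead of sysfs):

#powermt display dev=emcpowerYX      (note the native sdX paths behind the pseudo device)
#powermt remove dev=emcpowerYX
#echo 1 > /sys/block/sdX/device/delete      (repeat for each native path noted above)
#powermt save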

 

1 REPLY
Stephen Doud
Honored Contributor

Re: Host based storage migration on Linux SG cluster nodes

"1)Please let me know if we can do online migration of Lock LUN on HP SG version A.11.18.06 / A.11.18.01."

 

No - Per page 264 in "Managing HP Serviceguard for Linux, Eighth Edition"

 

Change Lock LUN Configuration   --> Cluster must not be running.

 

 

"2) I understand that migration of SG cluster package's file system can be done online. "

 

There are two forms of packages to consider:

modular style:  Per page 294:

Add a file system   ->  Package should not be running.

Remove a file system -> Package should not be running.

 

legacy-style:  Per page 293:

Change run script contents (legacy package) ->      Package should not be running. Timing problems may occur if the script is changed while the package is running.