Operating System - HP-UX

New VG addition on Cluster.

 
sineesh
New Member

New VG addition on Cluster.

I have a 3-node cluster running on HP-UX 11.00.
I need to add a new VG to one of the packages.
Please suggest a step-by-step procedure.



6 REPLIES
Devender Khatana
Honored Contributor

Re: New VG addition on Cluster.

Hi,

Have a look at this thread, which covers a similar task.

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=585694

Regards,
Devender
Impossible itself mentions "I m possible"
Geoff Wild
Honored Contributor

Re: New VG addition on Cluster.

Managing MC/SG:

http://docs.hp.com/en/B3936-90079/index.html

Basic steps:

pvcreate
mkdir /dev/vgXX
mknod /dev/vgXX/group c 64 0xHH0000
vgcreate
lvcreate
vgchange -c y /dev/vgXX

Add the mount point to the package control script...copy script to other nodes....

Mount manually on node running the package - or cmhaltpkg then cmrunpkg.....
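
For example, a rough sketch of the whole sequence, assuming a new LUN seen as c5t0d0, a new group vg03 with an unused minor number 0x030000, and a 1 GB vxfs filesystem (all names here are placeholders):

pvcreate /dev/rdsk/c5t0d0              # initialize the disk for LVM
mkdir /dev/vg03
mknod /dev/vg03/group c 64 0x030000    # minor number must be unused and identical on every node
vgcreate /dev/vg03 /dev/dsk/c5t0d0
lvcreate -L 1024 -n lvol1 /dev/vg03    # 1024 MB logical volume
newfs -F vxfs /dev/vg03/rlvol1         # build the filesystem on the raw lvol
vgchange -a n /dev/vg03                # deactivate before marking it cluster-aware
vgchange -c y /dev/vg03                # set the cluster-aware flag (cluster must be running)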

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
sineesh
New Member

Re: New VG addition on Cluster.

If I edit the existing clconfig.ascii file and make entries for the new VG (after creating the VG) and then do a cmapplyconf, is that OK?

1. I am trying to do a cmgetconf to collect the cluster information and it takes a very long time.

2. How will I know whether an existing VG is cluster-aware or not?

D Block 2
Respected Contributor

Re: New VG addition on Cluster.

Sineesh-

This note is long, but it should cover most of your concerns. Before doing anything I suggest, please verify and re-verify.

best of luck,
Tom

In a nutshell:

I. Perform the VG work first, and verify you can do the import on each node.

II. Place the new VG to be used by the Application (Package) in the ".cntl" file, and increase the [indx] number (see the sketch after this list). Example:

note: the VG activation in this file should already be set: VGCHANGE="vgchange -a e"

Just add your new VG and LV entries:
VG[1]=vg02

LV[1]="/dev/vg02/my_lvol_name"; FS[1]="/my_mnt_point/my_dir"; FS_MOUNT_OPT[1]="-o rw"

III. Copy the updated .cntl and .conf to each node's package directory, and run cmapplyconf.

IV. Start the Package up again.

tail the syslog file.
tail the /etc/cmcluster/<pkg>/<pkg>.cntl.log file.
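
For instance, a minimal sketch of the .cntl additions when the package already uses one VG at index 0 and you are adding a second one (vg02, lvol1, /data and /newdata are placeholder names):

VG[0]=vg01                                                          # existing entry, unchanged
VG[1]=vg02                                                          # new volume group

LV[0]="/dev/vg01/lvol1"; FS[0]="/data"; FS_MOUNT_OPT[0]="-o rw"     # existing entry, unchanged
LV[1]="/dev/vg02/lvol1"; FS[1]="/newdata"; FS_MOUNT_OPT[1]="-o rw"  # new lvol and mount point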

******************************************
**You Ask** If I edit the existing clconfig.ascii file and make entries for the new VG (after creating the VG) and do a cmapplyconf, is it OK?
*******************************************

My thought is, you are making an update for a "Package" right?

Bring the Package down on all nodes:
cmhaltpkg -v <pkg_name>

So, how to add a VG to a "package" ?

Step 1 - Update the Package Control File
========================================

/etc/cmcluster/<pkg>/<pkg>.cntl

It is here you define the shared VG and Lvol names used by the Package in the ".cntl" file.

Edit this .cntl file, manually push it to the proper directory on all of your nodes, and verify its mode is readable and executable. It is this ASCII file that is used to start up the Package.

Use the "-k" option to cmcheckconf, so you don't have to check all Volume Groups on the system (may take a long time to run).

cmcheckconf -v -k -P packagename.conf

Finally, run the apply

cmapplyconf -v -k -P packagename.conf

This will automatically push the "binary" config to all your nodes.

verify using: cmviewcl -v

start the package up:
cmrunpkg -v <pkg_name>
cmmodpkg -e <pkg_name>
cmviewcl -v

You might verify you have node-switching enabled; the "-e" option above should do this for you. Also, the Package is set to start up on the first node defined by "NODE_NAME" in your .conf file (the original node). This node is where I would do all your edits/updates and testing.
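
For reference, a hypothetical fragment of a package .conf showing that node ordering (pkg1 and the node names are made-up examples):

PACKAGE_NAME            pkg1
NODE_NAME               node1     # first node listed -- the package starts here by default
NODE_NAME               node2
NODE_NAME               node3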



Step 2. Test By-Hand the Package
Start/Monitor/Stop scripts:
=======================================

Do this on the one node, and for any changes
to the .cntl, make certain you push all your changes to the other nodes.


1. Test using your Monitor Script
=================================

1.1 start the Application up on one-node:
-----------------------------------------

/etc/cmcluster/<pkg>/<start_script>
This should return you a prompt; verify that the monitor and the Application are running.

1.2 monitor the Application is up
---------------------------------

run: <monitor_script>
This will *not* return a prompt to you.

1.3 Halt the Application
-------------------------

run: <halt_script>
This will stop your Application.

Do the steps above on each node; this way
you will be sure things work when your Application is down and happens to be moved to an adoptive node.

In order for this test to work with the shared VG, you will have to do some manual tasks first on each node (see the sketch after this list):

a. vgimport -m mymap.file <vg_name> /dev/dsk/...
vgchange -a y <vg_name>

if this does not work, then on the other nodes make certain the <vg_name> is NOT active:
vgchange -a n <vg_name>

b. mount up your file-systems

c. when finished testing on this node, be certain to make the <vg_name> inactive:
vgchange -a n <vg_name>
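
Put together, a rough per-node test sequence might look like this (vg02, lvol1, /newdata and the disk device are placeholder names; the /dev/dsk path may differ on each node):

mkdir /dev/vg02                         # only needed if the group file is not on this node yet
mknod /dev/vg02/group c 64 0x020000     # use the same minor number on every node
vgimport -m /tmp/vg02.map /dev/vg02 /dev/dsk/c5t0d0
vgchange -a y /dev/vg02                 # activate here (make sure it is deactivated everywhere else)
mkdir -p /newdata                       # mount point must exist on every node
mount /dev/vg02/lvol1 /newdata          # mount and inspect the data
umount /newdata
vgchange -a n /dev/vg02                 # deactivate again before testing the next node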

OK, there you have it; the Package now works on each node manually, so ServiceGuard should work once you have configured the Control File (.cntl) and Config (.conf). And don't forget to run once: vgchange -c y <vg_name>




******************************************
**You Asked** 2. How will I know whether an existing VG is cluster-aware or not?
******************************************
You run the vgchange command once on this volume group on one node:

vgchange -c y <vg_name>
This will set the cluster flag and make it "cluster-aware".

But you still have to manually verify that you can vgimport this VG on each node.
Also, your Package Control file ".cntl" will have the activation using "-a e", which makes the <vg_name> exclusively used when the Package becomes active.

When ServiceGuard starts the Package on a node, it performs the activation "-a e" on that node. If the Package has failed, then the adoptive node will do the activation. So, test this out manually first.


First, I would verify and re-verify that you can do a manual VG import and that you have the exact disk device names and mount points correct on each node. Actually do the mount if needed, and then run vgchange -a n <vg_name> when you are done.

Also, save a copy of what is under /etc/lvmconf so you can see the VG name and its associated devices - do this on each node.

Also, back up your <vg_name> when it is active; do this on each node:
vgcfgbackup <vg_name>


More details about disk-devices that could be different on all three nodes..

Do your "ext_bus" instances match up on each node? (Not required, but it makes your work with the disk devices cleaner!)

Note: your new VG disk devices /dev/dsk/cYtXd0 will most likely be different on all three of your nodes. (Some SysAdmins take the time to correct this by making the "ext_bus" number the same on each node -- but that is another topic.) So, verify you can see the VG on each node, and keep in mind that you should test the "vgimport" command using the correct /dev/dsk/cYtXd0 on that one node, and do the same for the other nodes.

To see your /dev/dsk/c'Y' (ext_bus 'Y') instances, run:

ioscan -fnC ext_bus

the ext_bus is the disk's C#


On the one node, create the "map file" while the <vg_name> is active.

- create a Map.File from the Active Volume Group (no -s opt)

vgexport -p -m /tmp/<vg_name>.map /dev/<vg_name>

note: -p (preview) is all you need; don't worry about any error message. Send the <vg_name>.map file to all nodes for the vgimport.

What is the PVRA field of the disk device?
Dump the PVRA of the LVM disk on each of the three nodes; then you know that you are dealing with the exact same disk device.

Look at the PVRA field on the disk device, and do this on the other two nodes (it should be the exact same one).

- get the PVRA's 1st field: LVM Record, skip the boot area

echo "0x2008?4X" | adb /dev/dsk/cXtYdZ

- verify the output from the other nodes is the same!
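
If remsh access is set up between the nodes, one way to compare the PVRA output in one pass is something like this (node names and the device file are placeholders; adjust the /dev/dsk path per node if the instance numbers differ):

for node in nodeA nodeB nodeC
do
    echo "== $node =="
    remsh $node 'echo "0x2008?4X" | adb /dev/dsk/c5t0d0'
done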


Next, push the map file to all your nodes and see if you can vgimport; when done on each node, run vgchange -a n <vg_name>.





on second-node:

Make the directory and the group special file. The minor number you pick, e.g. 0x010000, should be the same on all three nodes.

You might have to deactivate the VG on your first node:
vgchange -a n vg01


on the second node:
mkdir /dev/<vg_name>
mknod /dev/<vg_name>/group c 64 0x010000
vgimport /dev/<vg_name> /dev/dsk/c9t9d1 /dev/dsk/c10t0d1
vgchange -a y /dev/<vg_name>
vgcfgbackup /dev/<vg_name>
vgchange -a n /dev/<vg_name>

note: no -s on vgimport/vgexport -- the -s option takes time to run, scanning each disk on the system for the CPU-ID+VG-ID values of each disk.

Do not use the "-s" option, so it does *not* place the CPU field in the map file when you go to import it.




- Document the disk layouts (also useful for recovery purposes); do this for each node. You can also do this:

strings /etc/lvmtab
and note the <vg_name> and the disk devices.


One final note:

you only have to run the -c y option once!!!

vgchange -c y <vg_name>



Golf is a Good Walk Spoiled, Mark Twain.
Babu A
Frequent Advisor

Re: New VG addition on Cluster.

Hi Sineesh,

To add the new VG in SG.

Assume the new disk is cXtYdZ and the new VG is vgXX.

In Node A do the following steps

# ls -l /dev/vg*/group - to verify the existing Volume Group IDs (minor numbers)

# pvcreate /dev/rdsk/cXtYdZ

# mkdir /dev/vgXX
# mknod /dev/vgXX/group c 64 0xYY0000, where YY is the new Volume Group ID

# vgcreate /dev/vgXX /dev/dsk/cXtYdZ - Create the Volume Group

--- Create the new Logical Volumes and the new File Systems

# vgexport -p -s -m /tmp/vgXX.map /dev/vgXX - Create the map file

In Node B do the following steps

--- Copy the mapfile /tmp/vgXX.map from Node A to Node B.

# rcp nodeA:/tmp/vgXX.map /tmp/vgXX.map
# mkdir /dev/vgXX
# mknod /dev/vgXX/group c 64 0xYY0000, where YY is the same Volume Group ID as on Node A
# vgimport -s -m /tmp/vgXX.map /dev/vgXX
# vgchange -a y vgXX
# vgchange -a n vgXX

--- Create the mount points in Node B.

--- Add the new Volume Group to the configuration files on Node A:

/etc/cmcluster/cmclconfig.ascii and
/etc/cmcluster/"package"/"package".cntl.

--- Update the mount points in the control file:

/etc/cmcluster/"package"/"package".cntl

--- Transfer the files from Node A to Node B.

Follow the steps in Node A:

--- If the cluster is running:

# vgchange -c y /dev/vgXX -- to indicate that the new Volume Group will be used by ServiceGuard.

--- If the cluster is down:
# cmcheckconf -C /etc/cmcluster/cmclconfig.ascii -P /etc/cmcluster/"package"/"package".conf

# cmapplyconf -C /etc/cmcluster/cmclconfig.ascii -P /etc/cmcluster/"package"/"package".conf

--- Start the package

# cmrunpkg -v "package"
# cmmodpkg -e "package"
# cmviewcl -v

Regards,

Babu


vinod_25
Valued Contributor

Re: New VG addition on Cluster.

Hi Sineesh,

I would like to give you two options for doing this:
Option 1: creating a VG on a newly added disk
Option 2: creating a VG on an existing shared disk

If you are going to do this on an existing shared disk, skip option 1.

Option 1:
1. Add the new disk to the shared disk array...
2. Use this command to check that the nodes are using the same device files:
# lssf /dev/dsk/*
3. pvcreate the new disk:
# pvcreate -f /dev/rdsk/cXtXdX

Option 2:
4. Create the new VG (follow the process):
# mkdir /dev/newvg
# mknod /dev/newvg/group c 64 0xhh0000
# vgcreate /dev/newvg /dev/dsk/cXtXdX
5. Continue the process for additional volume groups...
6. Create the logical volumes:
# lvcreate -L 500 /dev/newvg
7. Create the filesystem:
# newfs -F vxfs /dev/newvg/rlvol1
8. Create the mount point:
# mkdir /mnt1
9. Mount the LV and verify:
# mount /dev/newvg/lvol1 /mnt1
10. Verify the configuration:
# vgdisplay -v /dev/newvg
11. Deactivate the VG:
# umount /mnt1
# vgchange -a n /dev/newvg
12. Edit the cluster .config file in /etc/cmcluster.
Ensure all of the volume groups common to the cluster nodes' lvmtab files are listed at the bottom of the file:

MAX_CONFIGURED_PACKAGES 10

# List of cluster aware Volume Groups. These volume groups
# will be used by package applications via the vgchange -a e command.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
VOLUME_GROUP /dev/vg01
VOLUME_GROUP /dev/vg02
VOLUME_GROUP /dev/newvg

13. Update the package control file with the new VG and LV info...
14. Distribute the VG map file, the package config files, and the cluster config file among the nodes:
# vgexport -p -s -m /tmp/newvg.map /dev/newvg
# rcp /tmp/newvg.map node2:/tmp/newvg.map
# rcp -pr /etc/cmcluster/pkg node2:/etc/cmcluster/

# rcp /tmp/newvg.map node3:/tmp/newvg.map
# rcp -pr /etc/cmcluster/pkg node3:/etc/cmcluster/
15. Create the volume group information on node 2 and node 3:
# mkdir /dev/newvg
# mknod /dev/newvg/group c 64 0xhh0000
16. vgimport the VG information from the map file:
# vgimport -s -m /tmp/newvg.map /dev/newvg
17. Check the config on both node 2 and node 3.

Make sure that you have deactivated the volume group on node 1. Then activate the volume group on node 2, and do the same on node 3:
# vgchange -a y /dev/newvg
Create a directory to mount the disk:
# mkdir /mnt1
Mount and verify the volume group on node 2:
# mount /dev/newvg/lvol1 /mnt1
Unmount the volume group on node 2:
# umount /mnt1
Deactivate the volume group on node 2:
# vgchange -a n /dev/newvg
If satisfied, activate the lock VG and check the cluster config file:

# vgchange -a y (vg_lock)
# cmcheckconf -C <cluster_config_file>
18. If this succeeds, proceed to create the cluster binary file:
# cmgetconf -v -c cluster_name filename

# vi filename

and add the new volume group to it:
VOLUME_GROUP /dev/newvg

# cmcheckconf -v -C filename
# cmapplyconf -v -C filename

Make changes to the package startup script and add the volume group and the file system mount points.

Now you manually activate the VG and mount the logical volumes.


Step 18 is to update the VG information on all the nodes without restarting the cluster/package...

Hope these steps serve the purpose.

Regards

Vinod K