07-04-2005 02:32 AM
New VG addition on Cluster.
I need to add a new VG to one of the packages.
Please suggest a step-by-step procedure.
07-04-2005 02:45 AM
Re: New VG addition on Cluster.
Have a look at this thread, which covers a similar task:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=585694
Regards,
Devender
07-04-2005 03:00 AM
Re: New VG addition on Cluster.
http://docs.hp.com/en/B3936-90079/index.html
Basic steps:
pvcreate
mkdir /dev/vgXX
mknod /dev/vgXX/group c 64 0xHH0000
vgcreate
lvcreate
vgchange -c y /dev/vgXX
Add the mount point to the package control script and copy the script to the other nodes.
Mount manually on the node running the package, or cmhaltpkg then cmrunpkg.
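The basic steps above can be sketched end-to-end as a dry run. The names here are placeholders (vg02, disk c4t0d0, minor number 0x020000): `run` echoes each command instead of executing it, so nothing changes until you drop the wrapper on a real node.

```shell
#!/bin/sh
# Dry-run sketch of the basic steps; all names are placeholders.
run() { echo "$@"; }           # replace with direct execution on a real node

run pvcreate /dev/rdsk/c4t0d0              # initialize the physical volume
run mkdir /dev/vg02                        # VG directory
run mknod /dev/vg02/group c 64 0x020000    # group file; minor must match on all nodes
run vgcreate /dev/vg02 /dev/dsk/c4t0d0     # create the volume group
run lvcreate -L 500 -n lvdata /dev/vg02    # a 500 MB logical volume (name assumed)
run vgchange -c y /dev/vg02                # mark the VG cluster-aware (run once)
```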
Rgds...Geoff
07-04-2005 03:57 AM
Re: New VG addition on Cluster.
1. I am trying to run cmgetconf to collect the cluster information, and it takes a very long time.
2. How will I know whether an existing VG is cluster-aware or not?
07-04-2005 09:01 AM
Re: New VG addition on Cluster.
This note is long, but it should cover most of your concerns. Before doing anything I suggest, please verify and re-verify.
best of luck,
Tom
In a nutshell:
I. Perform the VG work first, and verify you can do the import on each node.
II. Place the new VG to be used by the Application (Package) in the ".cntl" file, and increase the [index] number. Example:
note: the VG Activation in this file should already be set: VGCHANGE="vgchange -a e"
Just add your new entries:
VG[1]=vg02
LV[1]="/dev/vg02/my_lvol_name"; FS[1]="/my_mnt_point/my_dir"; FS_MOUNT_OPT[1]="-o rw"
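A minimal sketch of the full control-file edit, assuming an existing entry at index 0 (vg01 on /data is a placeholder; only the index-1 lines come from the post above):

```shell
# Hypothetical .cntl fragment -- vg01 at index 0 is assumed pre-existing;
# the new VG is appended at the next free index.
VGCHANGE="vgchange -a e"    # exclusive activation, already set in the file

VG[0]=vg01
LV[0]="/dev/vg01/lvol1"; FS[0]="/data"; FS_MOUNT_OPT[0]="-o rw"

VG[1]=vg02
LV[1]="/dev/vg02/my_lvol_name"; FS[1]="/my_mnt_point/my_dir"; FS_MOUNT_OPT[1]="-o rw"
```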
III. update the
IV. start the Package up again.
tail syslog file.
tail /etc/cmcluster/
******************************************
**You Ask** If I edit the existing clconfig.ascii file and make entries for the new VG (after creating the VG), and do a cmapplyconf, is it OK?
*******************************************
My thought is, you are making an update for a "Package", right?
Bring the Package down on all nodes:
cmhaltpkg -v
So, how to add a VG to a "package" ?
Step 1 - Update the Package Control File
========================================
/etc/cmcluster/
It is here you define the shared VG and Lvol names used by the Package in the ".cntl" file.
Edit this .cntl file, and manually push it to all of your nodes in the proper directory, and verify its mode is readable and executable. It is this ascii file that is used to start up the Package.
Use the "-k" option to cmcheckconf, so you don't have to check all Volume Groups on the system (may take a long time to run).
cmcheckconf -v -k -P packagename.conf
Finally, run the apply
cmapplyconf -v -k -P packagename.conf
This will automatically push the "binary" config to all your nodes.
verify using: cmviewcl -v
start package up:
cmrunpkg -v
cmmodpkg -e
cmviewcl -v
You might verify you have node-switching; the "-e" option above should do this for you. Also, the Package is set to start up on the first node defined by "NODE_NAME" in your .conf file (the original node). This is where I would do all your edits, updates, and testing -- on this node.
Step 2. Test By-Hand the Package
Start/Monitor/Stop scripts:
=======================================
Do this on the one node; for any changes to the .cntl, make certain you push all your changes to the other nodes.
1. Test using your Monitor Script
=================================
1.1 start the Application up on one-node:
-----------------------------------------
/etc/cmcluster/
This should return you a prompt; verify that the monitor and the Application are running.
1.2 monitor the Application is up
---------------------------------
run:
This will *not* return a prompt to you.
1.3 Halt the Application
-------------------------
run:
This will stop your Application.
Do the steps above on each node; this way you will be sure things work when your Application goes down and happens to be moved to an Adoptive Node.
In order for this test to work with the shared VG, you will have to do some manual tasks to begin on each node:
a. vgimport -m mymap.file
vgchange -a y
if this does not work, then on the other nodes make certain the
vgchange -a n
b. mount up your file-systems
c. when finished testing on this node, be certain to deactivate the VG:
vgchange -a n
OK, there you have it: the Package now works on each node manually, so ServiceGuard should work if you configured the Control File (.cntl) and Config (.conf) correctly.
Don't forget to run once: vgchange -c y
******************************************
**You Asked** 2. how will i know whether a existing Vg is cluster aware or not?
******************************************
You run the vgchange command once on this volume group on one node:
vgchange -c y
This will set the Cluster Flag and make it "cluster-aware".
But you still have to manually verify that you can vgimport this VG on each node.
Also, your Package Control file ".cntl" will have the Activation using "-a e", which makes the activation exclusive.
When ServiceGuard starts the Package on a Node, it performs this exclusive activation for you.
First, I would verify and re-verify that you can do a manual VG import and that you have the exact disk-device names correct on each node, and that the mount points are correct. Actually do the mount if needed, and then run vgchange -a n when you are done.
Also, save a copy of the /etc/lvmconf files so you can see the VG name and its associated devices; do this on each node.
Also, back up your VG configuration:
vgcfgbackup
More details about disk-devices that could be different on all three nodes..
Does your "ext_bus" instances match up on each node ? (Not required, but makes your work with the disk-devices cleaner!)
Note: your new VG disk devices /dev/dsk/cYtXd0 will most likely be different on all three of your nodes. (Some SysAdmin's take time to correct this by having the "ext_bus" number to be the same on each node -- this is another topic.) So, verify you can see the VG on each node and keep in mind that you should test the "vgimport" command using the correct /dev/dsk/cYtXd0 on that one-node, and do the same for the other nodes.
to see your /dev/dsk/c'Y' or 'Y' instances run:
ioscan -fnC ext_bus
the ext_bus
On the one node, create the "map file" while the VG is active:
- create a Map.File from the Active Volume Group (no -s opt)
vgexport -p -m /tmp/
note: -p preview is all you need, don't worry about any error message. Send to
What is the PVRA field of the disk device?
Dump the PVRA of the LVM header on each of the three nodes; then you know that you are dealing with the exact same disk device.
Look at the PVRA field on the disk device, and do this on the other two nodes (it should be the exact same one).
- get the PVRA's 1st field: LVM Record, skip the boot area
echo "0x2008?4X" | adb /dev/dsk/cXtYdZ
- verify output from other nodes are the same !
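One way to run that comparison from a single node, sketched as a dry run (the node names, the device file, and the use of remsh are all assumptions; `run` echoes the commands instead of executing them):

```shell
#!/bin/sh
# Dry-run: print the PVRA-dump command for each node so the outputs can
# be compared by eye. Node names and the device file are placeholders.
run() { echo "$@"; }

for node in nodeA nodeB nodeC; do
  run remsh $node 'echo "0x2008?4X" | adb /dev/dsk/c4t0d0'
done
```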
Next, push the map.file to all your nodes and see if you can vgimport, when done on each node, vgchange -a n
on second-node:
Make the directory and the group special file; the minor number you pick (e.g. 0x010000) should be the same on all three nodes.
You might have to deactivate the VG on your first node:
vgchange -a n vg01
on second-node:
mkdir /dev/
mknod /dev/
vgimport /dev/
vgchange -a y /dev/
vgcfgbackup /dev/
vgchange -a n /dev/
note: use no -s on vgimport/vgexport; the -s option takes time to run, scanning each disk on the system for the CPU-ID+VG-ID values.
Do not use the "-s" option, so it does *not* place the CPU field in the "map.file" when you go to import it.
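Putting the export/import flow together as a dry run (vg02, the minor number, node and device names are placeholders; no -s, per the note above; `run` echoes instead of executing):

```shell
#!/bin/sh
run() { echo "$@"; }   # dry run -- remove the wrapper on a real node

# on the node where the VG is active: preview-export and write the map file
run vgexport -p -m /tmp/vg02.map /dev/vg02
run rcp /tmp/vg02.map nodeB:/tmp/vg02.map

# on nodeB: recreate the group file with the SAME minor number, then import
run mkdir /dev/vg02
run mknod /dev/vg02/group c 64 0x020000
run vgimport -m /tmp/vg02.map /dev/vg02 /dev/dsk/c5t0d0   # device name as seen on nodeB
run vgchange -a y /dev/vg02    # activate to verify
run vgcfgbackup /dev/vg02
run vgchange -a n /dev/vg02    # leave it deactivated when done
```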
- Document the disk layouts (also used for recovery purposes); do this for each node. You can also do:
strings /etc/lvmtab
and note the VG names and their devices.
One final note:
you only have to run the -c y option once!
vgchange -c y VG
07-04-2005 01:14 PM
Re: New VG addition on Cluster.
To add the new VG in SG:
Assume the new disk is cXtYdZ and the new VG is vgXX.
In Node A do the following steps
# ls -l /dev/vg*/group - to verify the existing Volume Group IDs
# pvcreate /dev/rdsk/cXtYdZ
# mkdir /dev/vgXX
# mknod /dev/vgXX/group c 64 0xYY0000 , where YY is the new Volume Group ID
# vgcreate /dev/vgXX /dev/dsk/cXtYdZ - Create the Volume Group
--- Create the new Logical Volumes and the new File Systems
# vgexport -p -s -m /tmp/vgXX.map /dev/vgXX - Create the Mapfile
In Node B do the following steps
--- Copy the mapfile /tmp/vgXX.map from Node A to Node B.
# rcp nodeA:/tmp/vgXX.map /tmp/vgXX.map
# mkdir /dev/vgXX
# mknod /dev/vgXX/group c 64 0xYY0000 ,where YY is the Volume Group ID
# vgimport -s -m /tmp/vgXX.map /dev/vgXX
# vgchange -a y vgXX
# vgchange -a n vgXX
--- Create the mount points in Node B.
--- Update the new Volume Group in the configurations files in Node A:
/etc/cmcluster/cmclconfig.ascii and
/etc/cmcluster/"package"/"package".cntl.
--- Update the mount points in the control file:
/etc/cmcluster/"package"/"package".cntl
--- Transfer the files from Node A to Node B.
Follow the steps in Node A:
--- If the cluster is running:
# vgchange -c y /dev/vgXX -- to indicate that the new Volume Group will be used by ServiceGuard.
--- If the cluster is down:
# cmcheckconf -C /etc/cmcluster/cmclconfig.ascii -P /etc/cmcluster/"package"/"package".conf
# cmapplyconf -C /etc/cmcluster/cmclconfig.ascii -P /etc/cmcluster/"package"/"package".conf
--- Start the package
# cmrunpkg -v "package"
# cmmodpkg -e "package"
# cmviewcl -v
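Babu's two-node sequence can be sketched the same way, this time with the -s (shared) option so vgimport locates the disks by VGID instead of needing device names (vgXX, 0xYY0000, and node names are placeholders; `run` echoes instead of executing):

```shell
#!/bin/sh
run() { echo "$@"; }   # dry run -- remove the wrapper on a real node

# Node A: create the map file and push it over
run vgexport -p -s -m /tmp/vgXX.map /dev/vgXX
run rcp /tmp/vgXX.map nodeB:/tmp/vgXX.map

# Node B: same minor number, then import by VGID (no device list needed)
run mkdir /dev/vgXX
run mknod /dev/vgXX/group c 64 0xYY0000
run vgimport -s -m /tmp/vgXX.map /dev/vgXX
run vgchange -a y vgXX   # activate to verify ...
run vgchange -a n vgXX   # ... then deactivate again
```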
Regards,
Babu
07-04-2005 06:44 PM
Re: New VG addition on Cluster.
I would like to give you two options for doing this:
option 1: creating a VG on a newly added disk
option 2: creating a VG on an existing shared disk
If you are going to use an existing shared disk, skip option 1.
option 1:
1. Add the new disk to the shared disk array.
2. Use this command to check that both nodes are using the same device files:
# lssf /dev/dsk/*
3. pvcreate the new disk:
# pvcreate -f /dev/rdsk/cXtXdX
option 2 (continue here for an existing shared disk):
4. Create the new VG (follow the process):
# mkdir /dev/newvg
# mknod /dev/newvg/group c 64 0xhh0000
# vgcreate /dev/newvg /dev/dsk/cXtXdX
5. Continue the process for additional volume groups.
6. Create logical volumes:
# lvcreate -L 500 /dev/newvg
7. Create the filesystem:
# newfs -F vxfs /dev/newvg/rlvol1
8. Create the mount point:
# mkdir /mnt1
9. Mount the LV and verify:
# mount /dev/newvg/lvol1 /mnt1
10. Verify the configuration:
# vgdisplay -v /dev/newvg
11. Deactivate the VG:
# umount /mnt1
# vgchange -a n /dev/newvg
12. Edit the .config file in /etc/cmcluster.
Ensure all of the volume groups common to the cluster nodes' lvmtab files are listed at the bottom of the file:
MAX_CONFIGURED_PACKAGES 10
# List of cluster aware Volume Groups. These volume groups
# will be used by package applications via the vgchange -a e command.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
VOLUME_GROUP /dev/vg01
VOLUME_GROUP /dev/vg02
VOLUME_GROUP /dev/newvg
13. Update the pkg file with the new VG and LV info.
14. Distribute the VG map, package config, and cluster config files among the nodes:
#vgexport -p -s -m /tmp/newvg.map /dev/newvg
# rcp /tmp/newvg.map node2:/tmp/newvg.map
# rcp -pr /etc/cmcluster/pkg node2:/etc/cmcluster/
# rcp /tmp/newvg.map node3:/tmp/newvg.map
# rcp -pr /etc/cmcluster/pkg node3:/etc/cmcluster/
15. Create the volume group information on node 2 and node 3:
# mkdir /dev/newvg
# mknod /dev/newvg/group c 64 0xhh0000
16. vgimport the VG information from the map file:
# vgimport -s -m /tmp/newvg.map /dev/newvg
17. Check the configuration on both node 2 and node 3.
Make sure that you have deactivated the volume group on node 1. Then enable the volume group on node2 and the same to node 3:
# vgchange -a y /dev/newvg
Create a directory to mount the disk:
# mkdir /mnt1
Mount and verify the volume group on node2:
# mount /dev/newvg/lvol1 /mnt1
Unmount the volume group on node2:
# umount /mnt1
Deactivate the volume group on node2:
# vgchange -a n /dev/newvg
If satisfied, check the file:
# vgchange -a y (vg_lock)
# cmcheckconf -C
18. If this succeeds, proceed to create the cluster binary file:
# cmgetconf -v -c cluster_name filename
# vi filename
VG[#]=/dev/newvg
# cmcheckconf -v -C filename
# cmapplyconf -v -C filename
Make changes to the package startup script and add the volume group and the file system mountpoints.
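The step-18 cycle can be sketched as a dry run (cluster name 'mycluster' and the temp file name are placeholders; `run` echoes the commands instead of executing them):

```shell
#!/bin/sh
run() { echo "$@"; }   # dry run -- remove the wrapper on a real node

run cmgetconf -v -c mycluster /tmp/mycluster.ascii
# edit /tmp/mycluster.ascii here: add the VOLUME_GROUP /dev/newvg line
run cmcheckconf -v -C /tmp/mycluster.ascii
run cmapplyconf -v -C /tmp/mycluster.ascii
```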
Now you can manually activate the VG and mount the logical volumes.
Step 18 updates the VG information on all the nodes without restarting the cluster/package.
Hope these steps serve the purpose.
Regards
Vinod K