- Problem of adding a new VG to the running cluster
03-05-2008 08:22 PM
HP-UX 11.11, MC/SG A.11.15, Oracle 9.2
Problem: I successfully added a set of new HDDs and made a new VG for the running DB's space, but only node-1 can use this space. After copying the VG map to node-2, I failed to run vgchange -a n vgxx; the error message said {vgchange: Couldn't set the unique id for volume group "/dev/vgxx"}.
Please give me some suggestions. Appreciated.
03-05-2008 10:04 PM
Re: Problem of adding a new VG to the running cluster
Let me make this point clear: if any new disk or VG must be added in a cluster environment, the cluster has to be halted for this task, because the VG addition has to go into the cluster ASCII file, and the cluster is started again after executing cmcheckconf and cmapplyconf.
So ideally the steps will include the following:
1. Create the VG on one node.
2. Halt the cluster.
3. Change the cluster configuration file.
4. Copy the vg.map file to the other node and import it so /etc/lvmtab is populated.
5. Execute cmcheckconf and cmapplyconf.
6. If there is no error in the above steps, start the cluster.
The current picture is not clear from the information you have provided, so please share more details; maybe someone can help you out.
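For reference, the halt-based flow above would look roughly like this on HP-UX 11.11 with ServiceGuard. This is only a sketch: the VG name vgnew, the map-file path, and the minor number 0xNN0000 are illustrative placeholders, and the cluster ASCII file path should match your own configuration:

```shell
# On node-1: preview-export a map file carrying the VGIDs (-s) and copy it over
vgexport -v -p -s -m /tmp/vgnew.map /dev/vgnew
rcp -p /tmp/vgnew.map node-2:/tmp/vgnew.map

# Halt the cluster before touching the cluster configuration
cmhaltcl -f

# On node-2: create the device file and import, which populates /etc/lvmtab
mkdir /dev/vgnew
mknod /dev/vgnew/group c 64 0xNN0000
vgimport -v -s -m /tmp/vgnew.map /dev/vgnew

# Add the VG to the cluster ASCII file, then verify, apply, and restart
cmcheckconf -v -C /etc/cmcluster/cmclconf.ascii
cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
cmruncl -v
```

These commands only run on an HP-UX ServiceGuard node, so treat them as an outline to adapt rather than a script to paste.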
03-06-2008 03:48 AM
Re: Problem of adding a new VG to the running cluster
You need to first halt the cluster.
03-06-2008 08:12 AM
Re: Problem of adding a new VG to the running cluster
You don't need to halt the cluster. What you need to do is:
1. Create the VG and LVs on the node where the package is running (assuming it is an active/passive cluster).
2. Make the VG cluster-aware:
# vgchange -c y vgname
3. Update the package control file with the VG name, LVs, and mount points on both nodes manually.
4. Activate the VG manually in exclusive mode and mount the LVs manually, using the mount options specified in the control file:
# vgchange -a e vgname
# mount -F vxfs /dev/vgname/lvolname /mountpoint
5. Create a map file:
# vgexport -v -p -s -m /tmp/vgname.map vgname
6. Copy the map file to the second node.
7. Create the VG directory and the group file:
# mkdir /dev/vgname
# mknod /dev/vgname/group c 64 0xNN0000
8. Import the VG:
# vgimport -v -s -m /tmp/vgname.map vgname
9. If you want to confirm that the VG was imported properly, activate it in read-only mode and then deactivate it:
# vgchange -a r vgname
# vgdisplay vgname
# vgchange -a n vgname
03-06-2008 08:17 AM
Re: Problem of adding a new VG to the running cluster
Adding a new volume group to the cluster does not require halting it.
vgexport/vgimport to the second node is enough to prepare the disk for use in the cluster.
A new package can activate the volume group.
If you want to give the space to an existing package, you will need to modify the package control script to include the new volume group on one node, then fail the package over to that node. Then migrate the new script over to the second node.
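In a legacy ServiceGuard package control script, that modification is typically just extending the VG/LV/FS arrays. The entries below are a sketch; the names vgnew, lvol1, /u01, and /u02 are illustrative placeholders, not values from this thread:

```shell
# Excerpt from a legacy package control.sh (sketch; adjust to your package)
VG[0]="vg01"                 # volume group already in the package
VG[1]="vgnew"                # newly added volume group

LV[0]="/dev/vg01/lvol1";  FS[0]="/u01";  FS_MOUNT_OPT[0]="-o rw"
LV[1]="/dev/vgnew/lvol1"; FS[1]="/u02";  FS_MOUNT_OPT[1]="-o rw"
```

Keep the indices consecutive, and make the same edit to the control script on both nodes (or copy the edited script over), since each node runs its own local copy.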
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-06-2008 11:53 PM
Re: Problem of adding a new VG to the running cluster
I am very sorry for the delayed reply; I have had a busy schedule.
And I appreciate everyone who gave me suggestions and shared their experience and know-how. Really, thank you very much for your time.
Actually, this problem happened after someone else's operation last summer. I am sorry the topic looks confusing because I wrote "running system"; in fact the cluster can be shut down freely on weekends, so I don't mind whether the system is stopped or not while I can touch it; I just wish to fix the problem.
After HDS's engineer finished configuring the new disks on the 9570V as five RAID-5 sets using the 2P+1 method, that operator made a new VG, added the PVs from these five sets to it, and tried to add this resource to the cluster with the usual operating steps. When he switched to Node-2, he could not import the new VG and got that error message. After that he tried several times, so I think how and what he did with these PVs is not important to me now. Anyway, they have used this VG only on the Node-1 side since then, and the new VxFS on the new VG behaves normally.
Since taking over this case, my thinking is to keep the VxFS they are using, that is, the structure of the new VG, and just focus on how to import the VG information correctly on Node-2. One of HDS's people told me that I should install a critical MC/SG patch (PHSS_34505) for that error message, but my question is: if this patch were really necessary for the VG import, why was there no problem with the other PVs created in 2004? Back in 2004, HP had not yet announced PHSS_34505 as the fix for this error.
Does anybody know about this patch or the related LVM problem in cluster configuration work?
If so, please give me a hand.
03-07-2008 12:10 AM
Re: Problem of adding a new VG to the running cluster
What I understand is that the VG is already activated in normal mode on one of the nodes and is in use.
You need to do the following steps to make it usable in the cluster.
1. Unmount the mount points and deactivate the VG:
# umount /mountpoint
# vgchange -a n newvg
2. Make the VG cluster-aware and activate it in exclusive mode:
# vgchange -c y newvg
# vgchange -a e newvg
3. Mount the mount points.
4. Update the package control file with the new VG and its mount points on both nodes.
5. Create the map file and copy it to the adoptive node:
# vgexport -v -p -s -m /tmp/newvg.map newvg
On the adoptive node:
1. Create the VG directory and group file:
# mkdir /dev/newvg
# mknod /dev/newvg/group c 64 0xNN0000
2. Import the map file, activate the VG in read-only mode to confirm the import, and then deactivate it:
# vgimport -v -s -m /tmp/newvg.map newvg
# vgchange -a r newvg
# vgdisplay newvg
# vgchange -a n newvg
**** The steps above activate the VG without halting the cluster or the package.
03-07-2008 12:11 AM
Re: Problem of adding a new VG to the running cluster
http://www11.itrc.hp.com/service/patch/patchDetail.do?patchid=PHSS_34505&sel={hpux:11.11,}&BC=main|pdb|search|
This is an MC/SG patch, and its fix list does not mention this VG-related issue.
03-07-2008 05:33 AM
Re: Problem of adding a new VG to the running cluster
To check, run the following command on each node:
# ll /dev/*/group
... and compare all the minor numbers. If any pair is not unique on that node, correct it.
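Comparing minor numbers by eye is error-prone, so a small pipeline can flag duplicates. This is a sketch that assumes the classic long-listing layout for character devices, where field 5 is the major number (64 for LVM group files) and field 6 is the minor number; verify the field positions on your own system first:

```shell
# Flag LVM group files that share a minor number on this node.
# Field 6 ($6) is assumed to be the minor number, e.g. 0x010000;
# the last field ($NF) is the device file path.
ll /dev/*/group | awk '
    { count[$6]++; files[$6] = files[$6] " " $NF }
    END { for (m in count) if (count[m] > 1)
              print "duplicate minor " m ":" files[m] }'
```

If the pipeline prints nothing, every group file on that node has a unique minor number.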
03-09-2008 10:42 PM
Re: Problem of adding a new VG to the running cluster
Thanks for your kindness and suggestions, especially Mr. SIJU VADAKKANDAVIDA, who gave me the operations step by step.
I see the MC/SG patch the same way you do, and I have read about the error message that says: Couldn't set the unique id for volume group "/dev/vgxx". Following Stephen's advice, I want to rebuild the lvmtab file:
# rm /dev/slvmvg
# mv /etc/lvmtab /etc/lvmtab.old
# vgscan -v
But I am not sure whether it is appropriate and safe to execute these commands on a running system. I mean, if I delete the slvmvg file, will my vg00 be damaged? That is what I am afraid of.
Anyway, everything will be verified during this weekend's work, and I will let you know how things go after that.
Thank you again for your help, and have a nice day.
03-09-2008 11:47 PM
Re: Problem of adding a new VG to the running cluster
03-12-2008 01:37 AM
Re: Problem of adding a new VG to the running cluster
Hello, Mr.
I want to ask something about the steps you gave me; they were questioned by the vendor's SE.
They are:
On Node-1:
# vgdisplay /dev/vgnn
# vgchange -a n /dev/vgnn
Exclusive mode:
# vgchange -c y /dev/vgnn
# vgchange -a e /dev/vgnn
Edit Node-1's control.sh and mount the file system associated with the VG:
# mount -F vxfs /dev/vgnn/lvolxx /fsname
Make and deliver the map file:
# vgexport -v -p -s -m /tmp/vgnn.map /dev/vgnn
# rcp -p /tmp/vgnn.map host-2:/tmp/vgnn.map
# vgchange -a r /dev/vgnn
# vgdisplay /dev/vgnn
# vgchange -a n /dev/vgnn
Then on Node-2:
# mkdir /dev/vgnn
# mknod /dev/vgnn/group c 64 0xyy0000
# vgimport -v -s -m /tmp/vgnn.map /dev/vgnn
# vgchange -a r /dev/vgnn
# vgdisplay /dev/vgnn
# vgchange -a n /dev/vgnn
I added your steps to the document and submitted it for review, and the vendor's SE questioned it. I told him that vgchange -a r puts the VG in read-only mode to keep anything bad from happening, but I am not sure it works, or that it makes the VG effective on both nodes. Would you please tell me how you see this?
Sorry for bothering you again, and much appreciated.
03-15-2008 01:09 AM
Re: Problem of adding a new VG to the running cluster
vgchange -a r vgname just activates the VG in read-only mode so that you can confirm, with "vgdisplay -v vgname", that the VG is exactly the same as on the primary node.
And vgchange -a n vgname deactivates the VG, meaning you bring it back to the normal state.
---------
Ask your SE to contact the HP support center to validate the steps; they are sound. :)
03-17-2008 06:16 PM
Re: Problem of adding a new VG to the running cluster
I appreciate your passion and kindness.
I have finished my operations and will now share the results with you.
I wanted to prove that PHSS_34505 was not necessary for this system, but I had to follow my PM's directions, and the vendor's SE seemed more believable than me. Fine, that's OK; we just installed the patch. And after the reboot, the cluster could not come up at all.
Analyzing syslog, this MC/SG patch seems to be related to sendmail. Following the vendor support center's suggestion, we checked /etc/inetd.conf, removed the comment '#' in front of the line "ident stream tcp wait bin /usr/lbin/identd identd", and then ran inetd -c to restart inetd.
OK, the cluster came up normally. Then I stopped the cluster and executed the next steps to enable the newly added VG. BTW: when we installed this patch, it modified cmclconfig without warning, so anyone who thinks this patch is necessary had better back up /etc/cmcluster/cmclconfig before installing PHSS_34505. Okay, the LVM process was:
# vgchange -a n vgxx
# vgexport -p -v -s -m /tmp/vgxx.map /dev/vgxx
# rcp -p /tmp/vgxx.map node-2:/tmp/vgxx.map
root@node-2/# ll /dev/*/group    (to confirm the unique ids)
root@node-2/# vgexport /dev/vgxx
vgexport: Volume group "/dev/vgxx" is still active.
vgexport: Couldn't export volume group "/dev/vgxx".
root@node-2/# vgchange -c y /dev/vgxx
vgchange: The volume group "/dev/vgxx" is active on this system.
Cannot perform requested change.
root@node-2/# vgchange -a n /dev/vgxx
Volume group "/dev/vgxx" has been successfully changed.
root@node-2/# vgexport /dev/vgxx
root@node-2/# mkdir /dev/vgxx
root@node-2/# mknod /dev/vgxx/group c 64 0xNN0000
root@node-2/# vgimport -v -s -m /tmp/vgxx.map /dev/vgxx
Beginning the import process on Volume Group "/dev/vgxx".
Logical volume "/dev/vgxx/lvolyy" has been successfully created with lv number 1.
Volume group "/dev/vgxx" has been successfully created.
root@node-2/# vgchange -c n /dev/vgxx
Performed Configuration change.
Volume group "/dev/vgxx" has been successfully changed.
root@node-2/# vgchange -a y /dev/vgxx
Activated volume group
Volume group "/dev/vgxx" has been successfully changed.
root@node-2/# vgdisplay /dev/vgxx
Then vgdisplay, vgcfgbackup, mkfs, mount the fs... and vgxx worked normally. After deactivating vgxx, I modified control.sh, added the new VG/LV/FS/mount-point entries in it, and rcp -p'd it to Node-2.
After activating the lock VG, I ran cmcheckconf on cmclconf.ascii, and it told me: "Error: First cluster lock volume group /dev/vglock needs to be designated as a cluster aware volume group."
I could not understand why this happened after trying hard several times, so I copied cmclconf.ascii from Node-2 to Node-1 and ran cmcheckconf again; then it was done.
As I remembered it, adding a VG to a package should not require reconfiguring the cluster itself.
Okay, that's all; I am closing this case, which should have been a very easy, basic operation.
And thanks everyone again.
03-17-2008 07:36 PM