Operating System - HP-UX
Compelling reason to list cluster aware VGs in clusterconf.ascii?
04-20-2005 12:25 AM
Hi,
it's been a while since I last added a VG, which is why I've forgotten the procedure.
I tried to retrieve the settings from the clusterconf binary by running cmviewconf.
No VGs appear in cmviewconf's output,
which tempts me to believe that there is no need to configure any new cluster-aware VGs into the cluster,
as long as they already possess the "cluster bit",
which I have already set with vgchange -c y.
The PVs and device files involved were already distributed, via vgexport/vgimport, to all nodes that would host the package's VGs.
The VG can be activated exclusively, as well as read-only on the failover nodes, and the LVs with filesystems on them can be mounted on each node.
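For reference, the sequence I'm talking about looks roughly like this (VG name, map file and minor number are only examples):

On the primary node:
vgchange -a n /dev/vgnew                           # VG must be deactivated to set the cluster bit
vgchange -c y /dev/vgnew                           # mark the VG cluster aware (cluster must be running)
vgexport -p -s -v -m /tmp/vgnew.map /dev/vgnew     # preview export, write a sharable map file
rcp /tmp/vgnew.map failover-node:/tmp/vgnew.map

On each failover node:
mkdir /dev/vgnew
mknod /dev/vgnew/group c 64 0x0a0000               # pick an unused minor number on that node
vgimport -s -v -m /tmp/vgnew.map /dev/vgnew
vgchange -a e /dev/vgnew                           # test exclusive activation...
vgchange -a n /dev/vgnew                           # ...then deactivate again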
Looking at the relevant VOLUME_GROUP parameter in the cmquerycl manpage, it is not clear to me whether its use is mandatory or optional:
VOLUME_GROUP    Name of volume group to be marked cluster aware. The volume group will be used by clustered applications via the vgchange -a e command, which marks the volume group for exclusive access. Multiple VOLUME_GROUP keywords may be specified. By default, cmquerycl will specify each VOLUME_GROUP that is accessible by two or more nodes within the cluster. This volume group will be initialized to be part of the cluster such that the volume group can only be activated via the vgchange -a e option.
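In other words, the question is whether entries of the following form (VG names are only examples) have to appear in the cluster ASCII file and be applied with cmapplyconf:

VOLUME_GROUP    /dev/vg50
VOLUME_GROUP    /dev/vgnew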
The reason I'm asking is that if I were forced to place an entry for every cluster-used VG in the clusterconf binary, I would need to bring down the whole cluster.
If it were optional, on the other hand, it would suffice to extend the VG, LV, FS and FS_MOUNT_OPT arrays in the affected packages' control scripts, and perhaps only bring those packages down.
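For illustration, extending a control script means adding lines of this form (indices, names and mount options are only examples):

VG[1]="/dev/vgnew"
LV[5]="/dev/vgnew/lvol1"
FS[5]="/app/newdata"
FS_MOUNT_OPT[5]="-o rw"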
Toodleloo
Ralph
Madness, thy name is system administration
3 REPLIES
04-20-2005 01:21 AM
Solution
The "reconfiguring a cluster" section of the manual - http://docs.hp.com/en/B3936-90079/ch07s04.html#ciibdgcg
shows that a VG can be added or removed while the cluster is running (dependent package must be halted).
If a VOLUME_GROUP is de-listed from a cluster configuration file which matches the cluster binary file, and cmapplyconf is performed on the cluster configuration file, those VGs that are de-listed will have their cluster status revoked - even if the VG is currently active! (I just verified this with testing). This obviously causes a bit of confusion for cluster administrators. For this reason, ALWAYS list all VGs which you intend to remain owned/operated by the cluster, even if you use "vgchange -c y" to "cluster" a volume group.
cmviewconf does not report any VGs because the binary doesn't contain a list. But cmgetconf will recreate the cluster configuration file including the VOLUME_GROUP references because it looks at all of the disks to identify VGs owned by the cluster ID.
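To see this for yourself, something like the following should work (cluster name and output file are only examples):

cmgetconf -c mycluster /tmp/mycluster.ascii
grep VOLUME_GROUP /tmp/mycluster.ascii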
04-20-2005 02:12 AM
Re: Compelling reason to list cluster aware VGs in clusterconf.ascii?
Thanks, Steven, for shedding some light on this.
Meanwhile I was able to experiment on our system,
and I've verified for myself that my assumption was correct.
The cluster daemons (especially cmlvmd) don't seem to care about the current contents of the clusterconf binary; they only regard what was present when they were started.
I could even leave everything running (including the package that was to be extended by the new VG) and simply change the package's control script to provide entries for the new VG.
I then ran vgchange -a e on the VG on the node the package was running on and mounted the new LVs.
After that the package could be halted, started and switched over without any problem.
However, to be on the safe side I eventually reinitialized the whole cluster.
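In terms of commands, that amounted to something like this (package name, VG and mount point are only examples):

vgchange -a e /dev/vgnew                 # exclusive activation on the node running the package
mount /dev/vgnew/lvol1 /app/newdata
cmhaltpkg mypkg                          # halt the package
cmrunpkg -n failover-node mypkg          # start it on the failover node to test the switch
cmmodpkg -e mypkg                        # re-enable automatic switching afterwards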
Madness, thy name is system administration
04-20-2005 04:19 AM
Re: Compelling reason to list cluster aware VGs in clusterconf.ascii?
You can add VGs, LVs and filesystems on the fly.
The downside is that truly testing the control scripts on all nodes does require an outage.
Case in point: I once made a typo in a control script where I had two LV[26] entries, which caused one of the filesystems not to be mounted...
Interestingly, I would have expected the package to fail, and it didn't....
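A quick way to catch that kind of duplicate index before it bites (control script path is only an example):

sed -n 's/.*\(LV\[[0-9]*\]\)=.*/\1/p' /etc/cmcluster/PRDBCI/prdbci.cntl | sort | uniq -d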
Here's a process I use on a running cluster:
1. Build new sapdata12 on svr003:
vgextend /dev/vg50 with devs from SAN Admin
lvcreate -L 103536 -n sapdata12 /dev/vg50
mkdir /oracle/PR/sapdata12
mkfs -F vxfs -o largefiles /dev/vg50/rsapdata12
vgexport -s -v -p -m /tmp/vg50.map /dev/vg50
rcp /tmp/vg50.map svr004:/tmp/vg50.map
mount /dev/vg50/sapdata12 /oracle/PR/sapdata12
2. On svr004:
vgexport vg50
mkdir /dev/vg50
mknod /dev/vg50/group c 64 0x4a0000
vgimport -s -v -m /tmp/vg50.map /dev/vg50
3. Back to svr003:
add to /etc/cmcluster/PRDBCI/prdbci.cntl :
LV[28]="/dev/vg50/sapdata12"; FS[28]="/oracle/PR/sapdata12"
rcp /etc/cmcluster/PRDBCI/prdbci.cntl svr004:/etc/cmcluster/PRDBCI/prdbci.cntl
rcp /tmp/vg50.map svr005:/tmp/vg50.map
4. On backup server svr005:
vgexport vg50
mkdir /dev/vg50
mknod /dev/vg50/group c 64 0x4a0000
vgimport -s -v -m /tmp/vg50.map /dev/vg50
mkdir /oracle/PR/sapdata12
chown -R oraipr:dba /oracle/PR/sapdata12
add to /usr/openv/netbackup/scripts/PR/PR_fstab :
VOLUME = /dev/vg50/sapdata12 /oracle/PR/sapdata12
Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.