Operating System - HP-UX
Cluster OPS shared volume groups
03-20-2003 02:48 AM
Hi,
We have Oracle 9i RAC installed on an HP OPS cluster.
We have configured the shared volume groups for Oracle in this way:
Server node(wrapap):
vgchange -a n vg_ops
vgchange -c n vg_ops
Client node(dbsrvwrp):
vgchange -a n vg_ops
vgchange -c n vg_ops
Server node:
cmcheckconf -k -v -C /etc/cmcluster/cmclconf.ascii
vgchange -a y vg_ops
cmapplyconf -k -v -C /etc/cmcluster/cmclconf.ascii
vgchange -a n vg_ops
cmruncl
vgchange -S y -c y vg_ops
vgchange -a s vg_ops
Client node:
vgchange -a s vg_ops
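(For completeness, cluster membership can be confirmed after cmruncl and before the shared activations; this is only a sketch using the standard Serviceguard status command, not part of the original sequence:)
cmviewcl -v   # both nodes, wrapap and dbsrvwrp, should be reported as up before vgchange -a s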
If we do:
vgdisplay -v vg_ops
we see:
wrapap Server
dbsrvwrp Client
We would like to know what happens to the volume group if the Server or the Client crashes.
Also, we would like to know what happens to Oracle if one node of the cluster crashes.
thanks
3 REPLIES
03-20-2003 07:22 AM
Re: Cluster OPS shared volume groups
Giada,
Although I haven't had specific updates on 9i RAC, the base principles are the same. The server/client descriptions are really more a matter of semantics and don't have a great bearing on the volume management. Since raw lvols are required and the volume groups are active on both systems simultaneously, the first system to boot and activate the shared VG becomes the server; subsequent nodes 'sync' with the server as clients - essentially one node is identified as the master.
The only real consequence of this node designation is that it identifies which node has the cluster lock disk. In the event of a loss of heartbeat (on a 2 node cluster), the nodes identify who is the master of the lock disk; that system will remain booted, and the other system will shut down to reduce/eliminate the chance of corruption of the logical volume data files.
If the 'server' should happen to shut down or crash, it will deactivate the shared VGs during shutdown. The cluster will reform with the remaining node(s) (messages can be seen in syslog.log) and the remaining node will become the server.
If the client should shutdown/crash, you shouldn't really see any change on the server.
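If you want to watch this happen, the reformation and cluster state can be followed from the surviving node; just a sketch using the usual HP-UX syslog location and the standard Serviceguard status command:
tail -f /var/adm/syslog/syslog.log   # cluster reformation messages appear here
cmviewcl -v                          # shows which nodes are up once the cluster has reformed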
I hope this helps clear things up; if not, let me know what question(s) remain and I'll do my best to provide clarification.
Keith
03-20-2003 08:57 AM
Re: Cluster OPS shared volume groups
OPS allows simultaneous writes to the same VG. Upon failover the writes continue on the remaining node, so the only result is a slower system.
The MC/ServiceGuard II class has a lab for Oracle Parallel Server. Here is their testing procedure (a sketch for following the logs during these tests comes after the steps):
1) Start the cluster but not the packages.
Verify the cluster is running and both Oracle instances are running, on both nodes:
SVRMGR> connect internal
SVRMGR> select * from v$active_instances;
2) Now start the packages.
cmrunpkg -n node1 OPS1pkg
cmviewcl -v
cmrunpkg -n node2 OPS2pkg
cmviewcl -v
3) Test basic cluster reformation:
power off node1
How long did node2 take to reform with OPS2pkg?
Repeat with node2.
4) Test internal failure and cluster reformation.
Kill -9 cmcld on node1
Repeat on node2
Should see TOC and dump.
Kill -9 lmon process on node1.
Repeat on node2.
5) Simultaneously from both nodes run:
ins_rows_1 (* node 1 *)
ins_rows_2 (* node 2 *)
Kill lmon daemon on node1.
For both nodes:
tail -f syslog.log
tail -f /oracle.../alert_OPS##.ora
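(As mentioned above, here is a minimal sketch of the monitoring for steps 3-5; /var/adm/syslog/syslog.log is the usual HP-UX syslog location and ALERT_LOG is only a placeholder for your instance's alert_OPS<n>.ora path:)
ALERT_LOG=/oracle/admin/OPS1/bdump/alert_OPS1.ora   # placeholder, adjust to your layout
tail -f /var/adm/syslog/syslog.log &                # cluster reformation messages; timestamps show how long step 3 took
tail -f "$ALERT_LOG"                                # Oracle instance alert log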
Support Fatherhood - Stop Family Law
03-20-2003 10:20 AM
Solution
The server/client state for a VG that is activated as "shared" in an SLVM configuration does not really matter from an application point of view.
The first node that activates the VG becomes the "server"; all others become "clients". The server handles some administrative things, e.g. getting and propagating stale extent information, maintaining the MCR, etc. If the server fails, one of the clients takes over the role of the server and its responsibilities. That's it.
BTW, it does not have anything to do with the cluster lock disk handling.
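(For reference, the current role assignment can be checked on any node with vgdisplay; a sketch, with the grep filter only a convenience:)
vgdisplay -v vg_ops | grep -i -e Server -e Client   # lists each cluster node with its current role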
Best regards...
Dietmar.
"Logic is the beginning of wisdom; not the end." -- Spock (Star Trek VI: The Undiscovered Country)