<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Cluster OPS shared volume groups in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931786#M111735</link>
    <description>Forum thread on HP-UX: how shared (SLVM) volume groups behave in an MC/ServiceGuard OPS cluster running Oracle 9i RAC, what the Server/Client roles mean, and what happens to the volume group and to Oracle when a node crashes.</description>
    <pubDate>Thu, 20 Mar 2003 16:57:42 GMT</pubDate>
    <dc:creator>Michael Steele_2</dc:creator>
    <dc:date>2003-03-20T16:57:42Z</dc:date>
    <item>
      <title>Cluster OPS shared volume groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931784#M111733</link>
      <description>Hi,&lt;BR /&gt;We have Oracle 9i RAC installed on an HP OPS cluster.&lt;BR /&gt;We have configured the shared volume groups for Oracle in this way:&lt;BR /&gt;&lt;BR /&gt;Server node (wrapap):&lt;BR /&gt;vgchange -a n vg_ops&lt;BR /&gt;vgchange -c n vg_ops&lt;BR /&gt;&lt;BR /&gt;Client node (dbsrvwrp):&lt;BR /&gt;vgchange -a n vg_ops&lt;BR /&gt;vgchange -c n vg_ops&lt;BR /&gt;&lt;BR /&gt;Server node:&lt;BR /&gt;&lt;BR /&gt;cmcheckconf -k -v -C /etc/cmcluster/cmclconf.ascii&lt;BR /&gt;&lt;BR /&gt;vgchange -a y vg_ops&lt;BR /&gt;&lt;BR /&gt;cmapplyconf -k -v -C /etc/cmcluster/cmclconf.ascii&lt;BR /&gt;&lt;BR /&gt;vgchange -a n vg_ops&lt;BR /&gt;cmruncl&lt;BR /&gt;vgchange -S y -c y vg_ops&lt;BR /&gt;vgchange -a s vg_ops&lt;BR /&gt;&lt;BR /&gt;Client node:&lt;BR /&gt;&lt;BR /&gt;vgchange -a s vg_ops&lt;BR /&gt;&lt;BR /&gt;If we do:&lt;BR /&gt;&lt;BR /&gt;vgdisplay -v vg_ops&lt;BR /&gt;&lt;BR /&gt;we see:&lt;BR /&gt;&lt;BR /&gt;   wrapap       Server&lt;BR /&gt;   dbsrvwrp     Client&lt;BR /&gt;&lt;BR /&gt;We would like to know what happens to the volume group if the Server or the Client crashes.&lt;BR /&gt;We would also like to know what happens to Oracle if one node of the cluster crashes.
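&lt;BR /&gt;&lt;BR /&gt;For reference, this is a minimal set of checks we can run from either node while testing a crash (vg_ops is our volume group; /var/adm/syslog/syslog.log is the standard HP-UX syslog location, and output formats may differ by release):&lt;BR /&gt;&lt;BR /&gt;# activation mode and the current Server/Client roles&lt;BR /&gt;vgdisplay -v vg_ops&lt;BR /&gt;# cluster, node and package status&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;# watch cluster reformation messages while a node goes down&lt;BR /&gt;tail -f /var/adm/syslog/syslog.log&lt;BR /&gt;&lt;BR /&gt;thanks&lt;BR /&gt;&lt;BR /&gt;</description>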
      <pubDate>Thu, 20 Mar 2003 10:48:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931784#M111733</guid>
      <dc:creator>Giada Bonfà</dc:creator>
      <dc:date>2003-03-20T10:48:55Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster OPS shared volume groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931785#M111734</link>
      <description>Giada,&lt;BR /&gt;&lt;BR /&gt;Although I haven't had specific updates on 9i RAC, the base principles are the same. The server/client descriptions are really more semantics and don't have a great bearing on the volume management. Since raw lvols are required and the volume groups are active on both systems simultaneously, the first system to boot/activate the shared vg becomes the server; subsequent nodes 'sync' with the server as clients - essentially one node is identified as the master.&lt;BR /&gt;&lt;BR /&gt;The only real consequence of this node designation is that it identifies which node has the cluster lock disk. In the event of a loss of heartbeat (on a 2-node cluster), the nodes identify who is the master of the lock disk; that system will remain booted, and the other system will shut down to reduce/eliminate the chance of corruption to the logical volume data files.&lt;BR /&gt;&lt;BR /&gt;If the 'server' should happen to shut down or crash, it will deactivate the shared vgs during shutdown. The cluster will reform with the remaining node(s) (messages can be seen in syslog.log), and one of the remaining nodes will become the server.&lt;BR /&gt;&lt;BR /&gt;If the client should shut down or crash, you shouldn't really see any change on the server.
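&lt;BR /&gt;&lt;BR /&gt;A quick way to confirm this during a failover test - a minimal sketch, assuming the vg_ops name from your post and the standard HP-UX syslog location (output layout varies by release):&lt;BR /&gt;&lt;BR /&gt;# before the test: note which node vgdisplay reports as Server&lt;BR /&gt;vgdisplay -v vg_ops&lt;BR /&gt;# halt or crash the server node, then on the surviving node watch the reformation&lt;BR /&gt;tail -f /var/adm/syslog/syslog.log&lt;BR /&gt;# afterwards the surviving node should be reported as Server&lt;BR /&gt;vgdisplay -v vg_ops&lt;BR /&gt;&lt;BR /&gt;I hope this helps clear things up; if not, let me know what question(s) remain and I'll do my best to provide clarification.&lt;BR /&gt;&lt;BR /&gt;Keith</description>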
      <pubDate>Thu, 20 Mar 2003 15:22:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931785#M111734</guid>
      <dc:creator>keith persons</dc:creator>
      <dc:date>2003-03-20T15:22:35Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster OPS shared volume groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931786#M111735</link>
      <description>OPS allows simultaneous writes to the same vg. Upon failover the writes continue on the surviving node; the only result is a slower system.&lt;BR /&gt;&lt;BR /&gt;The MC/ServiceGuard II class has a lab for Oracle Parallel Server. Here is their testing procedure:&lt;BR /&gt;&lt;BR /&gt;1) Start the cluster but not the packages.&lt;BR /&gt;&lt;BR /&gt;2) Verify the cluster is running and both Oracle instances are running. Both nodes?&lt;BR /&gt;&lt;BR /&gt;SVRMGR&amp;gt; connect internal&lt;BR /&gt;SVRMGR&amp;gt; select * from v$active_instances;&lt;BR /&gt;&lt;BR /&gt;3) Now start the packages.&lt;BR /&gt;&lt;BR /&gt;cmrunpkg -n node1 OPS1pkg&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;cmrunpkg -n node2 OPS2pkg&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;4) Test basic cluster reformation:&lt;BR /&gt;&lt;BR /&gt;Power off node1.&lt;BR /&gt;How long did node2 take to reform with OPS2pkg?&lt;BR /&gt;Repeat with node2.&lt;BR /&gt;&lt;BR /&gt;5) Test internal failure and cluster reformation (see the sketch at the end of this post):&lt;BR /&gt;&lt;BR /&gt;kill -9 cmcld on node1.&lt;BR /&gt;Repeat on node2.&lt;BR /&gt;You should see a TOC and a dump.&lt;BR /&gt;&lt;BR /&gt;kill -9 the lmon process on node1.&lt;BR /&gt;Repeat on node2.&lt;BR /&gt;&lt;BR /&gt;6) Simultaneously from both nodes run:&lt;BR /&gt;&lt;BR /&gt;ins_rows_1 (* node 1 *)&lt;BR /&gt;ins_rows_2 (* node 2 *)&lt;BR /&gt;Kill the lmon daemon on node1.&lt;BR /&gt;&lt;BR /&gt;For both nodes:&lt;BR /&gt;&lt;BR /&gt;tail -f /var/adm/syslog/syslog.log&lt;BR /&gt;tail -f /oracle.../alert_OPS##.ora
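&lt;BR /&gt;&lt;BR /&gt;For step 5, the kill can be scripted - a minimal sketch, assuming a POSIX shell (the [c]mcld bracket trick keeps grep from matching itself; verify the PID by hand before killing, since kill -9 on cmcld deliberately TOCs the node):&lt;BR /&gt;&lt;BR /&gt;# locate the ServiceGuard cluster daemon and kill it to simulate an internal failure&lt;BR /&gt;pid=$(ps -ef | grep '[c]mcld' | awk '{print $2}')&lt;BR /&gt;echo "cmcld pid: $pid"&lt;BR /&gt;kill -9 $pid&lt;BR /&gt;&lt;BR /&gt;The same pattern works for the lmon test by substituting the instance's ora_lmon process name.&lt;BR /&gt;</description>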
      <pubDate>Thu, 20 Mar 2003 16:57:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931786#M111735</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-03-20T16:57:42Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster OPS shared volume groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931787#M111736</link>
      <description>The server/client state for a VG which is activated as "shared" in an SLVM configuration does not really matter from an application point of view.&lt;BR /&gt;&lt;BR /&gt;The first node that activates the VG becomes the "server"; all others become "clients". The server handles some administrative things, e.g. getting and propagating stale extent information, maintaining the MCR, etc. If the server fails, one of the clients takes over the role of the server and its responsibilities. That's it.&lt;BR /&gt;&lt;BR /&gt;BTW, it does not have anything to do with the cluster lock disk handling.
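&lt;BR /&gt;&lt;BR /&gt;The stale extent bookkeeping the server propagates can be watched with standard LVM commands - a minimal sketch, using the vg_ops name from this thread and assuming a logical volume named lvol1 (adjust to your layout):&lt;BR /&gt;&lt;BR /&gt;# LV Status shows available/syncd or available/stale per logical volume&lt;BR /&gt;vgdisplay -v vg_ops | grep -i -e 'LV Status' -e stale&lt;BR /&gt;# per-extent status (current/stale) for a single logical volume&lt;BR /&gt;lvdisplay -v /dev/vg_ops/lvol1 | grep -i stale&lt;BR /&gt;&lt;BR /&gt;Best regards...&lt;BR /&gt; Dietmar.</description>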
      <pubDate>Thu, 20 Mar 2003 18:20:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-ops-shared-volume-groups/m-p/2931787#M111736</guid>
      <dc:creator>Dietmar Konermann</dc:creator>
      <dc:date>2003-03-20T18:20:51Z</dc:date>
    </item>
  </channel>
</rss>