<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: cluster document in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461503#M673137</link>
    <description>Shalom himacs,&lt;BR /&gt;&lt;BR /&gt;This is my favorite. &lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/6033/HPServiceguardClusterConfig_WP.pdf" target="_blank"&gt;http://docs.hp.com/en/6033/HPServiceguardClusterConfig_WP.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;It works well on IA64/IPF servers.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Thu, 16 Jul 2009 19:58:31 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2009-07-16T19:58:31Z</dc:date>
    <item>
      <title>cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461500#M673134</link>
      <description>Hi gurus,&lt;BR /&gt;&lt;BR /&gt;Please provide a document on cluster configuration for Integrity servers.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;himacs</description>
      <pubDate>Thu, 16 Jul 2009 18:42:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461500#M673134</guid>
      <dc:creator>himacs</dc:creator>
      <dc:date>2009-07-16T18:42:34Z</dc:date>
    </item>
    <item>
      <title>Re: cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461501#M673135</link>
      <description>This is a good document to have on hand if you are going to do clustering:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/B3936-90135/B3936-90135.pdf" target="_blank"&gt;http://docs.hp.com/en/B3936-90135/B3936-90135.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;HTH</description>
      <pubDate>Thu, 16 Jul 2009 18:58:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461501#M673135</guid>
      <dc:creator>Mel Burslan</dc:creator>
      <dc:date>2009-07-16T18:58:35Z</dc:date>
    </item>
    <item>
      <title>Re: cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461502#M673136</link>
      <description>Check this doc:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/B3936-90143/B3936-90143.pdf" target="_blank"&gt;http://docs.hp.com/en/B3936-90143/B3936-90143.pdf&lt;/A&gt;</description>
      <pubDate>Thu, 16 Jul 2009 19:14:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461502#M673136</guid>
      <dc:creator>Roopesh Francis_1</dc:creator>
      <dc:date>2009-07-16T19:14:05Z</dc:date>
    </item>
    <item>
      <title>Re: cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461503#M673137</link>
      <description>Shalom himacs,&lt;BR /&gt;&lt;BR /&gt;This is my favorite. &lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/6033/HPServiceguardClusterConfig_WP.pdf" target="_blank"&gt;http://docs.hp.com/en/6033/HPServiceguardClusterConfig_WP.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;It works well on IA64/IPF servers.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 16 Jul 2009 19:58:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461503#M673137</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-07-16T19:58:31Z</dc:date>
    </item>
    <item>
      <title>Re: cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461504#M673138</link>
      <description>Since you were not that specific, you may want to go here:  &lt;A href="http://docs.hp.com/en/ha.html" target="_blank"&gt;http://docs.hp.com/en/ha.html&lt;/A&gt; and select the most appropriate doc.</description>
      <pubDate>Thu, 16 Jul 2009 23:09:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461504#M673138</guid>
      <dc:creator>Serviceguard for Linux</dc:creator>
      <dc:date>2009-07-16T23:09:54Z</dc:date>
    </item>
    <item>
      <title>Re: cluster document</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461505#M673139</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Here is the one I created.&lt;BR /&gt;&lt;BR /&gt;Both nodes&lt;BR /&gt;Root mirror&lt;BR /&gt;&lt;BR /&gt;# don't allow the OS to auto-activate the VGs&lt;BR /&gt;# (vg00 is auto-activated)&lt;BR /&gt;vi /etc/lvmrc&lt;BR /&gt;AUTO_VG_ACTIVATE=0&lt;BR /&gt;custom_vg_activation()&lt;BR /&gt;{&lt;BR /&gt;# if there are any LOCAL (non-cluster) VGs, activate them here&lt;BR /&gt;/usr/sbin/vgchange -a y /dev/vglocal-not-cluster-vg&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;# let the cluster auto-start after reboots&lt;BR /&gt;vi /etc/rc.config.d/cmcluster&lt;BR /&gt;AUTOSTART_CMCLD=1&lt;BR /&gt;&lt;BR /&gt;NTP config: /etc/ntp.conf&lt;BR /&gt;Primary and secondary time sources&lt;BR /&gt;ntpq -p&lt;BR /&gt;/sbin/init.d/xntpd start/stop/status&lt;BR /&gt;/etc/rc.config.d/netdaemons: TIME_SERVER=1&lt;BR /&gt;&lt;BR /&gt;.rhosts or cmclnodelist (/etc/cmcluster)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Node 1&lt;BR /&gt;-------&lt;BR /&gt;0) Create LUN, ioscan, insf -e&lt;BR /&gt;1) pvcreate -f /dev/rdsk/c1t1d0  (assume this is a SAN LUN/vdisk)&lt;BR /&gt;2) mkdir /dev/vg01&lt;BR /&gt;3) mknod /dev/vg01/group c 64 0x010000  (the 0x## minor number must match step 2 under Node 2)&lt;BR /&gt;4) vgcreate /dev/vg01 /dev/dsk/c1t1d0&lt;BR /&gt;5) lvcreate -L 2048 -n lvora /dev/vg01&lt;BR /&gt;6) newfs -F vxfs /dev/vg01/rlvora&lt;BR /&gt;7) mkdir /oracle&lt;BR /&gt;8) mount -F vxfs /dev/vg01/lvora /oracle&lt;BR /&gt;9) vgexport -v -s -p -m /tmp/vg01.map /dev/vg01&lt;BR /&gt;10) scp /tmp/vg01.map node2:/tmp&lt;BR /&gt;&lt;BR /&gt;Node 2&lt;BR /&gt;------&lt;BR /&gt;1) mkdir /dev/vg01&lt;BR /&gt;2) mknod /dev/vg01/group c 64 0x010000  (the 0x## minor number must be the same as step 3 of Node 1)&lt;BR /&gt;3) vgimport -v -s -m /tmp/vg01.map /dev/vg01&lt;BR /&gt;4) mkdir /oracle&lt;BR /&gt;5) vgchange -a r /dev/vg01&lt;BR /&gt;6) mount -F vxfs -o ro /dev/vg01/lvora /oracle&lt;BR /&gt;7) vgcfgbackup /dev/vg01&lt;BR /&gt;8) vgchange -a n 
/dev/vg01&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;node1&lt;BR /&gt;------&lt;BR /&gt;lan0 heartbeat 192.168.1.2&lt;BR /&gt;lan1 primary   A.B.C.101&lt;BR /&gt;lan2 standby&lt;BR /&gt;&lt;BR /&gt;node2&lt;BR /&gt;------&lt;BR /&gt;lan0 heartbeat 192.168.1.3&lt;BR /&gt;lan1 primary   A.B.C.102&lt;BR /&gt;lan2 standby&lt;BR /&gt;&lt;BR /&gt;Add lan0 and lan1 to /etc/hosts.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;On Node 1 and Node 2 (for ftp/rcp), cmclnodelist is identical on all nodes&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;# Cluster configuration steps&lt;BR /&gt;# 1)&lt;BR /&gt;vi /etc/cmcluster/cmclnodelist  (create on node1 and node2)&lt;BR /&gt;node1 root&lt;BR /&gt;node2 root&lt;BR /&gt;&lt;BR /&gt;chmod 444 cmclnodelist&lt;BR /&gt;rcp cmclnodelist node2:$PWD&lt;BR /&gt;&lt;BR /&gt;On Node 1&lt;BR /&gt;---------&lt;BR /&gt;# 2) Create the cluster config file&lt;BR /&gt;cd /etc/cmcluster&lt;BR /&gt;cmquerycl -v -C cmclconfig.ascii -n node1 -n node2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;vi cmclconfig.ascii  # at minimum, change these 2&lt;BR /&gt;CLUSTER_NAME psoft&lt;BR /&gt;NODE_TIMEOUT 2000000  # 2 seconds; 2-3 seconds is a good range&lt;BR /&gt;&lt;BR /&gt;# change other parameters as needed&lt;BR /&gt;# add the VGs&lt;BR /&gt;VOLUME_GROUP /dev/vg01&lt;BR /&gt;&lt;BR /&gt;# 3) check for errors&lt;BR /&gt;cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii&lt;BR /&gt;&lt;BR /&gt;# 4) compile cmclconfig.ascii to binary and distribute the binary file to all the other cluster nodes&lt;BR /&gt;cmapplyconf -v -C /etc/cmcluster/cmclconfig.ascii&lt;BR /&gt;&lt;BR /&gt;# 5) Start the cluster and view it&lt;BR /&gt;cmruncl -v  (first-time start; 100% node attendance is required.
To start the cluster later on one node: cmruncl -v -n node1)&lt;BR /&gt;cmviewcl&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now a basic cluster is up.&lt;BR /&gt;To halt it:&lt;BR /&gt;cmhaltcl -v&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now start creating the packages.&lt;BR /&gt;Each package needs 2 files: 1) a pkg conf file and 2) a pkg control script&lt;BR /&gt;&lt;BR /&gt;Create the pkg conf file:&lt;BR /&gt;mkdir /etc/cmcluster/jdeprod&lt;BR /&gt;cd /etc/cmcluster/jdeprod&lt;BR /&gt;cmmakepkg -v -p jdeprod.conf&lt;BR /&gt;&lt;BR /&gt;vi jdeprod.conf&lt;BR /&gt;PACKAGE_NAME   jdeprod&lt;BR /&gt;PACKAGE_TYPE   FAILOVER&lt;BR /&gt;FAILOVER_POLICY   CONFIGURED_NODE&lt;BR /&gt;FAILBACK_POLICY   MANUAL&lt;BR /&gt;NODE_NAME   node1&lt;BR /&gt;NODE_NAME   node2&lt;BR /&gt;AUTO_RUN   YES&lt;BR /&gt;LOCAL_LAN_FAILOVER_ALLOWED YES&lt;BR /&gt;RUN_SCRIPT   /etc/cmcluster/jdeprod/jdeprod.cntl&lt;BR /&gt;RUN_SCRIPT_TIMEOUT  NO_TIMEOUT&lt;BR /&gt;HALT_SCRIPT   /etc/cmcluster/jdeprod/jdeprod.cntl&lt;BR /&gt;HALT_SCRIPT_TIMEOUT  NO_TIMEOUT&lt;BR /&gt;SERVICE_NAME   orapromon&lt;BR /&gt;SERVICE_HALT_TIMEOUT  120&lt;BR /&gt;SERVICE_NAME   jdepromon&lt;BR /&gt;SERVICE_HALT_TIMEOUT  120&lt;BR /&gt;SUBNET    204.187.168.0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Create the pkg control script:&lt;BR /&gt;cd /etc/cmcluster/jdeprod&lt;BR /&gt;cmmakepkg -v -s jdeprod.cntl&lt;BR /&gt;&lt;BR /&gt;Add all the VGs, LVs, filesystems, and mount points used by the pkg:&lt;BR /&gt;VG[0]=vg01&lt;BR /&gt;VG[1]=vg02&lt;BR /&gt;&lt;BR /&gt;Still working on finishing the pkg conf and pkg control script.&lt;BR /&gt;</description>
      <pubDate>Sun, 19 Jul 2009 05:30:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-document/m-p/4461505#M673139</guid>
      <dc:creator>Basheer_2</dc:creator>
      <dc:date>2009-07-19T05:30:01Z</dc:date>
    </item>
  </channel>
</rss>

