<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Issues with starting cluster in Georgraphic Redundancy Configuration in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116426#M708809</link>
    <description>Ok, so why bother with the toremcluster if it is just the vancluster that is an issue?&lt;BR /&gt;Anyway, what version of SG are you using, and what patch level for SG do you have installed?&lt;BR /&gt;Run 'what /usr/lbin/cmcld' to get this.&lt;BR /&gt;&lt;BR /&gt;What happens when you do a cmquerycl on this node?&lt;BR /&gt;cmquerycl -n vanemsc&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Wed, 12 Nov 2003 03:13:47 GMT</pubDate>
    <dc:creator>melvyn burnard</dc:creator>
    <dc:date>2003-11-12T03:13:47Z</dc:date>
    <item>
      <title>Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116422#M708805</link>
      <description>Hi Folks,&lt;BR /&gt;&lt;BR /&gt;I am having issues starting the cluster in a single node cluster configuration, i.e. Geographic Redundancy.  Here is the output of cmviewcl -v on the node that is having issues.&lt;BR /&gt;&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;CLUSTER      STATUS&lt;BR /&gt;vancluster   down&lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE&lt;BR /&gt;  vanemsc     down         unknown&lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME&lt;BR /&gt;    PRIMARY      unknown      0/0/0/0      lan0&lt;BR /&gt;    STANDBY      unknown      0/6/0/0      lan2&lt;BR /&gt;&lt;BR /&gt;UNOWNED_PACKAGES&lt;BR /&gt;&lt;BR /&gt;    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE&lt;BR /&gt;    sncPkg       down                                   unowned&lt;BR /&gt;&lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        unknown&lt;BR /&gt;      Failback        unknown&lt;BR /&gt;&lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   NODE_NAME    NAME&lt;BR /&gt;      Subnet     unknown  vanemsc     135.93.27.0&lt;BR /&gt;&lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME&lt;BR /&gt;      Primary      down                      vanemsc&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Here is the output of the node that is active.&lt;BR /&gt;&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;CLUSTER      STATUS&lt;BR /&gt;toremcluster up&lt;BR /&gt;&lt;BR /&gt;  NODE         STATUS       STATE&lt;BR /&gt;  toremsc      up           running&lt;BR /&gt;&lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS       PATH         NAME&lt;BR /&gt;    PRIMARY      up           0/0/0/0      lan0&lt;BR /&gt;    STANDBY      up           0/2/0/0      lan2&lt;BR /&gt;&lt;BR /&gt;    PACKAGE      STATUS       STATE        PKG_SWITCH   NODE&lt;BR /&gt;    
sncPkg       up           running      enabled      toremsc&lt;BR /&gt;&lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        configured_node&lt;BR /&gt;      Failback        manual&lt;BR /&gt;&lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   MAX_RESTARTS  RESTARTS   NAME&lt;BR /&gt;      Service    up          Unlimited         0   sncMonitor&lt;BR /&gt;      Subnet     up                                135.92.27.0&lt;BR /&gt;&lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME&lt;BR /&gt;      Primary      up           enabled      toremsc      (current)&lt;BR /&gt;&lt;BR /&gt;The showtop command on the node that is having issues indicates STANDBY, and the one that is working is indicated as ACTIVE.&lt;BR /&gt;&lt;BR /&gt;Here is the output of the syslog.log file from the node that is having issues. Also, it dumps a core file in /var/adm/cmluster/.&lt;BR /&gt;&lt;BR /&gt;Nov 11 16:48:22 vanemsc : su : + 0 ems-root&lt;BR /&gt;Nov 11 17:59:15 vanemsc : su : + 1 ems-root&lt;BR /&gt;Nov 11 18:07:41 vanemsc CM-CMD[11496]: cmruncl&lt;BR /&gt;Nov 11 18:07:41 vanemsc cmclconfd[11502]: Executing "/usr/lbin/cmcld" for node vanemsc&lt;BR /&gt;Nov 11 18:07:41 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.&lt;BR /&gt;Nov 11 18:07:36 vanemsc : su : + 1 ems-root&lt;BR /&gt;Nov 11 18:07:41 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads&lt;BR /&gt;Nov 11 18:07:42 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.&lt;BR /&gt;Nov 11 18:07:42 vanemsc cmcld: Warning. 
No cluster lock is configured.&lt;BR /&gt;Nov 11 18:07:42 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmclconfd[11502]: The ServiceGuard daemon, /usr/lbin/cmcld[11503], died upon receiving the signal 6.&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmsrvassistd[11507]: Lost connection to the cluster daemon.&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmsrvassistd[11509]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmsrvassistd[11507]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connec&lt;BR /&gt;tion abort&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmclconfd[11512]: Unable to lookup any node information in CDB: Connection refused&lt;BR /&gt;Nov 11 18:07:44 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 11 19:12:51 vanemsc : su : + 3 ems-root&lt;BR /&gt;Nov 11 19:13:07 vanemsc CM-CMD[29065]: cmhaltcl -v&lt;BR /&gt;Nov 11 19:13:25 vanemsc CM-CMD[29070]: cmhaltcl -n vanemsc&lt;BR /&gt;Nov 11 19:13:42 vanemsc CM-CMD[29097]: cmhaltcl -f vanemsc&lt;BR /&gt;Nov 11 19:13:48 vanemsc CM-CMD[29098]: cmhaltcl&lt;BR /&gt;Nov 11 19:18:32 vanemsc : su : + 3 ems-ems&lt;BR /&gt;Nov 11 19:23:19 vanemsc : su : + 1 ems-root&lt;BR /&gt;Nov 11 19:23:49 vanemsc CM-CMD[2584]: cmhaltpkg -v -n vanemsc sncPkg&lt;BR /&gt;Nov 11 19:33:30 vanemsc CM-CMD[5174]: cmhaltnode&lt;BR /&gt;Nov 11 19:37:39 vanemsc CM-CMD[6399]: cmrunpkg -n vanemsc&lt;BR /&gt;Nov 11 19:37:51 vanemsc CM-CMD[6400]: cmrunpkg -v -n vanemsc sncPkg&lt;BR /&gt;Nov 11 19:37:57 vanemsc CM-CMD[6471]: cmruncl&lt;BR /&gt;Nov 11 19:37:58 vanemsc cmclconfd[6477]: Executing "/usr/lbin/cmcld" for node vanemsc&lt;BR /&gt;Nov 11 19:37:58 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.&lt;BR /&gt;Nov 11 19:37:58 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads&lt;BR 
/&gt;Nov 11 19:37:58 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.&lt;BR /&gt;Nov 11 19:37:58 vanemsc cmcld: Warning. No cluster lock is configured.&lt;BR /&gt;Nov 11 19:37:58 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146&lt;BR /&gt;Nov 11 19:38:01 vanemsc cmclconfd[6477]: The ServiceGuard daemon, /usr/lbin/cmcld[6478], died upon receiving the signal 6.&lt;BR /&gt;Nov 11 19:38:01 vanemsc cmsrvassistd[6484]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 11 19:38:01 vanemsc cmsrvassistd[6482]: Lost connection to the cluster daemon.&lt;BR /&gt;Nov 11 19:38:01 vanemsc cmsrvassistd[6482]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connect&lt;BR /&gt;ion abort&lt;BR /&gt;Nov 11 19:38:01 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 11 19:41:04 vanemsc : su : + 1 ems-ems&lt;BR /&gt;Nov 11 19:56:56 vanemsc : su : + 1 ems-root&lt;BR /&gt;&lt;BR /&gt;Please advise me as to what the cause of the problem could be.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;NBA</description>
      <pubDate>Tue, 11 Nov 2003 15:32:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116422#M708805</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-11T15:32:10Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116423#M708806</link>
      <description>Since your cluster isn't up the packages aren't yet an issue.  Check your cluster this way:&lt;BR /&gt;&lt;BR /&gt;cmcheckconf -v -C /etc/cmcluster/cluster.ascii&lt;BR /&gt; &lt;BR /&gt;Please attach this file:&lt;BR /&gt; &lt;BR /&gt;/etc/cmcluster/cluster.ascii</description>
      <pubDate>Tue, 11 Nov 2003 18:52:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116423#M708806</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-11T18:52:12Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116424#M708807</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;"single node cluster configuration, i.e. Geographic Redundancy"&lt;BR /&gt;&lt;BR /&gt;I'm sorry - to me, these are mutually exclusive terms.&lt;BR /&gt;&lt;BR /&gt;Please explain.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff</description>
      <pubDate>Tue, 11 Nov 2003 19:58:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116424#M708807</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2003-11-11T19:58:57Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116425#M708808</link>
      <description>Oouuu.  I like that.  Mutually Exclusive.</description>
      <pubDate>Tue, 11 Nov 2003 20:08:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116425#M708808</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-11T20:08:22Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116426#M708809</link>
      <description>Ok, so why bother with the toremcluster if it is just the vancluster that is an issue?&lt;BR /&gt;Anyway, what version of SG are you using, and what patch level for SG do you have installed?&lt;BR /&gt;Run 'what /usr/lbin/cmcld' to get this.&lt;BR /&gt;&lt;BR /&gt;What happens when you do a cmquerycl on this node?&lt;BR /&gt;cmquerycl -n vanemsc&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Nov 2003 03:13:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116426#M708809</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2003-11-12T03:13:47Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116427#M708810</link>
      <description>Focus all of your attention on the first error encountered.&lt;BR /&gt;The first error that was encountered in the syslog.log file was this:&lt;BR /&gt;cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146&lt;BR /&gt;&lt;BR /&gt;This issue has been addressed in patches for Serviceguard versions A.11.13 and A.11.14.&lt;BR /&gt;&lt;BR /&gt;Use 'what /usr/lbin/cmcld | grep Date' to determine the version and patch level of Serviceguard loaded.&lt;BR /&gt;If SG is not patched, consider loading one:&lt;BR /&gt;PHSS_28849 :A.11.13&lt;BR /&gt;PHSS_29915 :A.11.14&lt;BR /&gt;&lt;BR /&gt;-sd</description>
      <pubDate>Wed, 12 Nov 2003 09:04:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116427#M708810</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2003-11-12T09:04:23Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116428#M708811</link>
      <description>Guys, &lt;BR /&gt;&lt;BR /&gt;Thanks for the swift response.  Here is the output of cmcheckconf -v -C /etc/cmcluster/sncCluster.ascii:&lt;BR /&gt;&lt;BR /&gt;cmcheckconf -v -C /etc/cmcluster/sncCluster.ascii&lt;BR /&gt;&lt;BR /&gt;Checking cluster file: /etc/cmcluster/sncCluster.ascii&lt;BR /&gt;Checking nodes ... Done&lt;BR /&gt;Checking existing configuration ...&lt;BR /&gt;Done&lt;BR /&gt;Gathering configuration information ........... Done&lt;BR /&gt;Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.&lt;BR /&gt;Warning: Disks which do not have IDs cannot be included in the topology description.&lt;BR /&gt;Use pvcreate(1m) to initialize disks for use with LVM, or&lt;BR /&gt;use vxdiskadm(1m) to initalize disks for use with VxVM.&lt;BR /&gt;Cluster vancluster is an existing cluster&lt;BR /&gt;Checking for inconsistencies .. Done&lt;BR /&gt;Cluster vancluster is an existing cluster&lt;BR /&gt;Maximum configured packages parameter is 8.&lt;BR /&gt;Configuring 1 package(s).&lt;BR /&gt;7 package(s) can be added to this cluster.&lt;BR /&gt;Modifying configuration on node vanemsc&lt;BR /&gt;Modifying the cluster configuration for cluster vancluster.&lt;BR /&gt;Validating update for /cluster - value information is identical.&lt;BR /&gt;Modifying node vanemsc in cluster vancluster.&lt;BR /&gt;&lt;BR /&gt;Verification completed with no errors found.&lt;BR /&gt;Use the cmapplyconf command to apply the configuration.&lt;BR /&gt;&lt;BR /&gt;Here is the output of 'what /usr/lbin/cmcld | grep Date':&lt;BR /&gt;A.11.09 Date: 05/16/2001; PATCH: PHSS_24033.&lt;BR /&gt;&lt;BR /&gt;The other fact I would like to bring to your attention is that when I run the command er_status on both nodes, it indicates "DOWN".</description>
      <pubDate>Wed, 12 Nov 2003 09:45:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116428#M708811</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-12T09:45:43Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116429#M708812</link>
      <description>Regarding disk c0t1d0, the course of action is clearly indicated: it needs to be pvcreate'd or, if you're using VxVM, then use vxdiskadm.&lt;BR /&gt;&lt;BR /&gt;Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.&lt;BR /&gt;Warning: Disks which do not have IDs cannot be included in the topology description.&lt;BR /&gt;Use pvcreate(1m) to initialize disks for use with LVM, or&lt;BR /&gt;use vxdiskadm(1m) to initalize disks for use with VxVM.&lt;BR /&gt;&lt;BR /&gt;############################################&lt;BR /&gt;&lt;BR /&gt;LVM or VxVM has to be completed without error before any ServiceGuard work can be accomplished.  So go back to either and start your SG work afterwards.</description>
      <pubDate>Wed, 12 Nov 2003 10:27:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116429#M708812</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-12T10:27:47Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116430#M708813</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;The disk /dev/dsk/c0t1d0 is a HP DVD-ROM.&lt;BR /&gt;&lt;BR /&gt;regards,&lt;BR /&gt;NBA</description>
      <pubDate>Wed, 12 Nov 2003 10:46:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116430#M708813</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-12T10:46:26Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116431#M708814</link>
      <description>The disk message is just a warning; you do not have to use pvcreate on it.&lt;BR /&gt;If, as you say, you have 11.09, this is 7 patches out of date; you should obtain PHSS_27158, install that, and then try again.</description>
      <pubDate>Wed, 12 Nov 2003 10:55:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116431#M708814</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2003-11-12T10:55:28Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116432#M708815</link>
      <description>Ok then, the same LVM info. exists on both servers for each vg?  Right?&lt;BR /&gt;&lt;BR /&gt;If so, then apply the cluster bin file and start the cluster without starting the packages.&lt;BR /&gt;&lt;BR /&gt;######################################&lt;BR /&gt;&lt;BR /&gt;Here's how to sync up the alt. node with the pri. node's LVM info.&lt;BR /&gt;&lt;BR /&gt;pri node:&lt;BR /&gt;&lt;BR /&gt;copy down minor number of vg on pri. node&lt;BR /&gt;ll /dev/vg##/group&lt;BR /&gt;vgchange -a n /dev/vg##&lt;BR /&gt;vgexport -s -m /tmp/lvm_map /dev/vg##&lt;BR /&gt;ftp /tmp/lvm_map over to alt node&lt;BR /&gt;&lt;BR /&gt;alt node:&lt;BR /&gt;&lt;BR /&gt;mkdir /dev/vg##&lt;BR /&gt;mknod /dev/vg##/group c 64 0x0#0000&lt;BR /&gt;vgimport -s -m /tmp/lvm_map /dev/vg##&lt;BR /&gt;&lt;BR /&gt;##########################################&lt;BR /&gt;&lt;BR /&gt;Done - cmquerycl -n nodea -n nodeb -C cluster.ascii (* Right? *)&lt;BR /&gt;&lt;BR /&gt;Done - cmcheckconf -C /etc/cmcluster/cluster.ascii (* Right? *)&lt;BR /&gt;&lt;BR /&gt;##########################################&lt;BR /&gt;&lt;BR /&gt;Then it's just this:&lt;BR /&gt;&lt;BR /&gt;cmapplyconf -C /etc/cmcluster/cluster.ascii&lt;BR /&gt;cmruncl&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;Verify the cluster is up but not the packages.</description>
      <pubDate>Wed, 12 Nov 2003 10:56:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116432#M708815</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-12T10:56:00Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116433#M708816</link>
      <description>Melvyn,&lt;BR /&gt;&lt;BR /&gt;Do I really have to install the patch PHSS_27158?  The reason I am asking is that the working toremsc has the same version and patch level and doesn't report any problem with starting the cluster on it.</description>
      <pubDate>Wed, 12 Nov 2003 11:25:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116433#M708816</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-12T11:25:59Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116434#M708817</link>
      <description>NBA - it's critical to have the same patch levels and versions within the cluster.</description>
      <pubDate>Wed, 12 Nov 2003 11:34:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116434#M708817</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-12T11:34:09Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116435#M708818</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;Sorry for any misunderstanding.&lt;BR /&gt;&lt;BR /&gt;The toremsc reports the following information for "what /usr/lbin/cmcld | grep Date":&lt;BR /&gt;&lt;BR /&gt;A.11.09 Date: 05/16/2001; PATCH : PHSS_24033&lt;BR /&gt;&lt;BR /&gt;similar to the version and patch level on the vanemsc which is having cluster issues, i.e.&lt;BR /&gt;A.11.09 Date: 05/16/2001; PATCH : PHSS_24033.&lt;BR /&gt;&lt;BR /&gt;My question is: do I really have to install the patch PHSS_27158 when the toremsc is working fine without it?&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Nov 2003 12:28:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116435#M708818</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-12T12:28:47Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116436#M708819</link>
      <description>NBA:  Here is the patch description, but I don't see a reason for using it either.  HP has replaced PHSS_24033 with PHSS_27158 for OPS reasons.&lt;BR /&gt; &lt;BR /&gt;Are you using OPS?&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.pdb" target="_blank"&gt;http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.pdb&lt;/A&gt;|patch.breadcrumb.search|&amp;amp;patchid=PHSS_27158&amp;amp;context=hpux:800:11:00&lt;BR /&gt;&lt;BR /&gt;#########################################&lt;BR /&gt;&lt;BR /&gt;Were you able to bring the cluster up without the packages?</description>
      <pubDate>Wed, 12 Nov 2003 13:33:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116436#M708819</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-12T13:33:39Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116437#M708820</link>
      <description>These patches fix a lot of issues, some not listed in the patch text.&lt;BR /&gt;If you read the text file for this patch PHSS_27158 you will see:&lt;BR /&gt;&lt;BR /&gt;At cmcld start up, i.e. cmrunnode or cmruncl, syslog shows this message,&lt;BR /&gt; "cmcld: Assertion failed: pnet != NULL, file:comm_link.c, line: 140."&lt;BR /&gt; &lt;BR /&gt;cmcld immediately aborts and dumps core.&lt;BR /&gt;&lt;BR /&gt;I think this is your problem.&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Nov 2003 13:34:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116437#M708820</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2003-11-12T13:34:04Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116438#M708821</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;I am not using the OPS edition, because the system is based on an Informix database.&lt;BR /&gt;&lt;BR /&gt;Melvyn,&lt;BR /&gt;&lt;BR /&gt;I have to get clearance from the customer before I can install the patch PHSS_27158 on the system.  Meanwhile, I am open to any other bright ideas.&lt;BR /&gt;&lt;BR /&gt;regards,&lt;BR /&gt;NBA</description>
      <pubDate>Wed, 12 Nov 2003 14:06:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116438#M708821</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-12T14:06:31Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116439#M708822</link>
      <description>Folks,&lt;BR /&gt;&lt;BR /&gt;I installed the patch PHSS_27158 on the vanemsc server.  Somehow, the system reports the same error as I saw with the patch PHSS_24033.  &lt;BR /&gt;&lt;BR /&gt;Here is the output of the what /usr/lbin/cmcld command.&lt;BR /&gt;&lt;BR /&gt;vanemsc:what /usr/lbin/cmcld | grep Date&lt;BR /&gt;         A.11.09   Date: 01/15/2003; PATCH: PHSS_27158&lt;BR /&gt;&lt;BR /&gt;Here is the excerpt from the syslog.log file of the vanemsc server.&lt;BR /&gt;&lt;BR /&gt;Nov 13 16:33:57 vanemsc CM-CMD[23180]: cmruncl -v&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmclconfd[23205]: Executing "/usr/lbin/cmcld" for node vanemsc&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.&lt;BR /&gt;Nov 13 16:27:04 vanemsc : su : + 2 ems-ems&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmcld: Warning. 
No cluster lock is configured.&lt;BR /&gt;Nov 13 16:33:58 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmsrvassistd[23244]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmsrvassistd[23228]: Lost connection to the cluster daemon.&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmsrvassistd[23228]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmclconfd[23351]: Unable to lookup any node information in CDB: Connection refused&lt;BR /&gt;Nov 13 16:34:02 vanemsc cmclconfd[23205]: The ServiceGuard daemon, /usr/lbin/cmcld[23206], died upon receiving the signal 6.&lt;BR /&gt;Nov 13 16:34:29 vanemsc CM-CMD[25026]: cmrunnode vanemsc&lt;BR /&gt;Nov 13 16:34:29 vanemsc cmclconfd[25031]: Executing "/usr/lbin/cmcld" for node vanemsc&lt;BR /&gt;Nov 13 16:34:29 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.&lt;BR /&gt;Nov 13 16:34:29 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads&lt;BR /&gt;Nov 13 16:34:30 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.&lt;BR /&gt;Nov 13 16:34:30 vanemsc cmcld: Warning. 
No cluster lock is configured.&lt;BR /&gt;Nov 13 16:34:30 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146&lt;BR /&gt;Nov 13 16:34:32 vanemsc cmsrvassistd[25037]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 13 16:34:32 vanemsc cmsrvassistd[25038]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer&lt;BR /&gt;Nov 13 16:34:32 vanemsc cmsrvassistd[25036]: Lost connection to the cluster daemon.&lt;BR /&gt;Nov 13 16:34:32 vanemsc cmsrvassistd[25036]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort&lt;BR /&gt;Nov 13 16:34:32 vanemsc cmclconfd[25031]: The ServiceGuard daemon, /usr/lbin/cmcld[25032], died upon receiving the signal 6.&lt;BR /&gt;Nov 13 16:35:00 vanemsc CM-CMD[25090]: cmmodpkg -e sncPkg&lt;BR /&gt;Nov 13 16:35:00 vanemsc CM-CMD[25110]: cmmodpkg -e -n vanemsc sncPkg&lt;BR /&gt;&lt;BR /&gt;I will continue my investigation.  Meanwhile, if you have any ideas, please let me know.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Thu, 13 Nov 2003 11:56:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116439#M708822</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-13T11:56:16Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116440#M708823</link>
      <description>I would endeavor to mirror the same patches on both nodes before proceeding.&lt;BR /&gt;&lt;BR /&gt;I would also suspect corrupted cluster binaries; work to get one node up and copy the data over to the other.&lt;BR /&gt;&lt;BR /&gt;Try this command from:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0xb02a4b3ef09fd611abdb0090277a778c%2C00.html&amp;amp;admit=716493758+1068745972259+28353475" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0xb02a4b3ef09fd611abdb0090277a778c%2C00.html&amp;amp;admit=716493758+1068745972259+28353475&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;#########################################&lt;BR /&gt;&lt;BR /&gt;# cmquerycl -C config.ascii -n lr006b04 -n lr006b05&lt;BR /&gt;&lt;BR /&gt;This will NOT list any VGs that are already "clustered", but it will tell you if the nodes have concurrent versions of ServiceGuard, whether the nodes can communicate via the hacl ports (/etc/services), whether the security files (~/.rhosts or /etc/cmcluster/cmclnodelist) allow the communication, etc.&lt;BR /&gt;If this command does not work, at least one fundamental system configuration issue exists which prevents ServiceGuard from operating properly in the present state.</description>
      <pubDate>Thu, 13 Nov 2003 12:59:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116440#M708823</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-11-13T12:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: Issues with starting cluster in Georgraphic Redundancy Configuration</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116441#M708824</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;At the moment, toremsc is the only server that is monitoring the network without any problems. It will be difficult to convince the customer to bring down the package and cluster to install the patch PHSS_27158. Therefore, I will continue to troubleshoot vanemsc.&lt;BR /&gt;&lt;BR /&gt;Here is the output of the command from vanemsc:&lt;BR /&gt; cmquerycl -C config.ascii -n vanemsc -n toremsc&lt;BR /&gt;&lt;BR /&gt;Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.&lt;BR /&gt;Warning: The disk at /dev/dsk/c0t2d0 on node toremsc does not have an ID, or a disk label.&lt;BR /&gt;Warning: Disks which do not have IDs cannot be included in the topology description.&lt;BR /&gt;Use pvcreate(1m) to initialize disks for use with LVM, or&lt;BR /&gt;use vxdiskadm(1m) to initialize disks for use with VxVM.&lt;BR /&gt;Warning: Network interface lan3 on node toremsc couldn't talk to itself.&lt;BR /&gt;Warning: Network interface lan4 on node toremsc couldn't talk to itself.&lt;BR /&gt;&lt;BR /&gt;Node Names:    toremsc&lt;BR /&gt;               vanemsc&lt;BR /&gt;&lt;BR /&gt;Bridged networks:&lt;BR /&gt;&lt;BR /&gt;1       lan0           (toremsc)&lt;BR /&gt;        lan2           (toremsc)&lt;BR /&gt;&lt;BR /&gt;2       lan1           (toremsc)&lt;BR /&gt;&lt;BR /&gt;3       lan0           (vanemsc)&lt;BR /&gt;&lt;BR /&gt;4       lan1           (vanemsc)&lt;BR /&gt;        lan2           (vanemsc)&lt;BR /&gt;&lt;BR /&gt;IP subnets:&lt;BR /&gt;&lt;BR /&gt;135.93.27.0            lan0  (vanemsc)&lt;BR /&gt;&lt;BR /&gt;64.251.200.0           lan1  (vanemsc)&lt;BR /&gt;&lt;BR /&gt;135.92.27.0            lan0  (toremsc)&lt;BR /&gt;                       lan1  (toremsc)&lt;BR /&gt;&lt;BR /&gt;Possible Heartbeat IPs:&lt;BR /&gt;&lt;BR /&gt;Possible Cluster Lock Devices:&lt;BR /&gt;&lt;BR /&gt;LVM volume groups:&lt;BR /&gt;&lt;BR /&gt;/dev/vg00               vanemsc&lt;BR /&gt;&lt;BR
/&gt;/dev/vg01               vanemsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg02               vanemsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg03               vanemsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg00               toremsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg01               toremsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg02               toremsc&lt;BR /&gt;&lt;BR /&gt;/dev/vg03               toremsc&lt;BR /&gt;&lt;BR /&gt;Warning: No possible heartbeat networks found.&lt;BR /&gt;         All nodes must be connected to at least one common network.&lt;BR /&gt;         This may be due to DLPI not being installed.&lt;BR /&gt;Warning: Failed to find a configuration that satisfies the minimum network configuration requirements.&lt;BR /&gt;Minimum network configuration requirements are:&lt;BR /&gt; - 2 or more heartbeat networks OR&lt;BR /&gt; - 1 heartbeat network with local switch OR&lt;BR /&gt; - 1 heartbeat network with serial line.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;NOTE: Please ignore the warnings on disks c0t1d0 and c0t2d0, as these are DVD-ROM drives.&lt;BR /&gt;Also, since these servers are configured for Geographic Redundancy, they do not have a heartbeat LAN configured, so please also ignore the warning "Failed to find a configuration that satisfies the minimum network configuration requirements."</description>
      <pubDate>Thu, 13 Nov 2003 13:36:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/issues-with-starting-cluster-in-georgraphic-redundancy/m-p/3116441#M708824</guid>
      <dc:creator>NBA</dc:creator>
      <dc:date>2003-11-13T13:36:15Z</dc:date>
    </item>
  </channel>
</rss>