<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Mixed O/S Cluster - cmcheckconf Failing in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760927#M660679</link>
    <description>You also appear to have configuration issues with the shared VGs; you should sort those out.&lt;BR /&gt;As for the networking, do they all have the same network mask?&lt;BR /&gt;What happens if you change IP_MONITOR to ON?&lt;BR /&gt;Was anything logged in either node's syslog.log?&lt;BR /&gt;&lt;BR /&gt;You may need to enable some enhanced logging for this.&lt;BR /&gt;</description>
    <pubDate>Thu, 03 Mar 2011 17:07:08 GMT</pubDate>
    <dc:creator>melvyn burnard</dc:creator>
    <dc:date>2011-03-03T17:07:08Z</dc:date>
    <item>
      <title>Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760924#M660676</link>
      <description>Primary node is 11.23, second node is 11.31, both are running SG A.11.19.0.&lt;BR /&gt;&lt;BR /&gt;# cmcheckconf -v -C /tmp/junk.out&lt;BR /&gt;Begin cluster verification...&lt;BR /&gt;Checking cluster file: /tmp/junk.out&lt;BR /&gt;Defaulting MAX_CONFIGURED_PACKAGES to 300.&lt;BR /&gt;Checking nodes ... Done&lt;BR /&gt;Checking existing configuration ... Done&lt;BR /&gt;Defaulting MAX_CONFIGURED_PACKAGES to 300.&lt;BR /&gt;Gathering storage information&lt;BR /&gt;Found 21 devices on node a300sua4&lt;BR /&gt;Found 11 devices on node a300sua8&lt;BR /&gt;Analysis of 32 devices should take approximately 5 seconds&lt;BR /&gt;0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%&lt;BR /&gt;Found 8 volume groups on node a300sua4&lt;BR /&gt;Found 7 volume groups on node a300sua8&lt;BR /&gt;Analysis of 15 volume groups should take approximately 1 seconds&lt;BR /&gt;0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%&lt;BR /&gt;Volume group /dev/vg21 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg21 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg31 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg31 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg32 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg32 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg33 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg33 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Gathering network information&lt;BR /&gt;Beginning network probing (this may take a while)&lt;BR /&gt;Completed network probing&lt;BR /&gt;Failed to evaluate network&lt;BR /&gt;Gathering polling target 
information&lt;BR /&gt;cmcheckconf: Unable to reconcile configuration file /tmp/junk.out&lt;BR /&gt; with discovered configuration information.&lt;BR /&gt;&lt;BR /&gt;Here is the entire contents of junk.out (comments removed)&lt;BR /&gt;&lt;BR /&gt;CLUSTER_NAME            a300cu38&lt;BR /&gt;HOSTNAME_ADDRESS_FAMILY         IPV4&lt;BR /&gt;QS_HOST                 a0300qrmp6&lt;BR /&gt;QS_POLLING_INTERVAL     300000000&lt;BR /&gt;NODE_NAME               a300sua4&lt;BR /&gt;NETWORK_INTERFACE       lan4&lt;BR /&gt;STATIONARY_IP   10.20.209.224&lt;BR /&gt;NETWORK_INTERFACE       lan1&lt;BR /&gt;HEARTBEAT_IP    169.254.2.224&lt;BR /&gt;NETWORK_INTERFACE       lan3&lt;BR /&gt;HEARTBEAT_IP    169.254.1.224&lt;BR /&gt;NETWORK_INTERFACE       lan0&lt;BR /&gt;NODE_NAME               a300sua8&lt;BR /&gt;NETWORK_INTERFACE       lan4&lt;BR /&gt;STATIONARY_IP   10.20.209.228&lt;BR /&gt;NETWORK_INTERFACE       lan1&lt;BR /&gt;HEARTBEAT_IP    169.254.2.228&lt;BR /&gt;NETWORK_INTERFACE       lan3&lt;BR /&gt;HEARTBEAT_IP    169.254.1.228&lt;BR /&gt;NETWORK_INTERFACE       lan0&lt;BR /&gt;MEMBER_TIMEOUT          14000000&lt;BR /&gt;AUTO_START_TIMEOUT      600000000&lt;BR /&gt;NETWORK_POLLING_INTERVAL        2000000&lt;BR /&gt;NETWORK_FAILURE_DETECTION               INOUT&lt;BR /&gt;NETWORK_AUTO_FAILBACK           YES&lt;BR /&gt;SUBNET 169.254.2.0&lt;BR /&gt;IP_MONITOR OFF&lt;BR /&gt;SUBNET 169.254.1.0&lt;BR /&gt;IP_MONITOR OFF&lt;BR /&gt;SUBNET 10.20.209.0&lt;BR /&gt;IP_MONITOR OFF&lt;BR /&gt;MAX_CONFIGURED_PACKAGES         300&lt;BR /&gt;VOLUME_GROUP           /dev/vg21&lt;BR /&gt;VOLUME_GROUP           /dev/vg31&lt;BR /&gt;VOLUME_GROUP           /dev/vg32&lt;BR /&gt;VOLUME_GROUP           /dev/vg33&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Why would it be failing to evaluate the network setup?  I have triple-checked those IPs and made sure the entries are also in the hosts file and that all are pingable from each of the nodes.  
The junk.out file was created using cmquerycl, then edited slightly (cluster name, added quorum server, commented out disk lock, corrected heartbeat/stationary errors from cmquerycl, turned off IP level monitoring).  &lt;BR /&gt;&lt;BR /&gt;Am I missing something?  &lt;BR /&gt;</description>
      <pubDate>Thu, 03 Mar 2011 16:23:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760924#M660676</guid>
      <dc:creator>Craig Johnson_1</dc:creator>
      <dc:date>2011-03-03T16:23:26Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760925#M660677</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;I don't think a mixed OS environment for SG is supported.&lt;BR /&gt;&lt;BR /&gt;For example, if you want to run an Oracle server on both nodes, the binaries would be different and there is no guarantee the database would fail over without corrupting your data.&lt;BR /&gt;&lt;BR /&gt;Where did you get the idea this was supported?&lt;BR /&gt;&lt;BR /&gt;With the same OS, you can run mixed versions of SG.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 03 Mar 2011 16:47:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760925#M660677</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2011-03-03T16:47:11Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760926#M660678</link>
      <description>We do support mixed OS versions with Serviceguard, as follows:&lt;BR /&gt;   * Beginning with Serviceguard A.11.18 (and A.11.19), a Serviceguard cluster may contain a mix of nodes running HP-UX 11i v2 and 11i v3, with the following restrictions, requirements and recommendations:&lt;BR /&gt;          o It is strongly recommended that all nodes run equivalent Serviceguard patch levels. For example, for Serviceguard A.11.18, PHSS_38423 or later for 11i v2 and PHSS_38424 or later for 11i v3. If nodes in the cluster have different Serviceguard patch levels, then any new functionality introduced in the later patches may not be available in the cluster.&lt;BR /&gt;          o Some 11i v3 features cannot be used in a mixed OS cluster, such as LVM 2.0 volume groups and Agile I/O addressing. 11i v3 Native Multipathing is supported.&lt;BR /&gt;          o SGeRAC is not supported in a mixed OS cluster (as Oracle does not support that).&lt;BR /&gt;          o All the nodes on a given HP-UX version should be running the same Fusion release at the same patch level; that is, the 11i v2 nodes should all be running the same 11i v2 Fusion release at the same patch level, and the 11i v3 nodes should all be running the same 11i v3 Fusion release at the same patch level.&lt;BR /&gt;          o It is your responsibility to ensure that your other applications work properly in a mixed OS cluster.&lt;BR /&gt;          o Refer to the September 2008 (or later) revision of the Serviceguard A.11.18 Release Notes for additional information on mixed OS clusters. For A.11.19, refer to the Serviceguard A.11.19 Release Notes. &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Mar 2011 17:03:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760926#M660678</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-03-03T17:03:12Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760927#M660679</link>
      <description>You also appear to have configuration issues with the shared VGs; you should sort those out.&lt;BR /&gt;As for the networking, do they all have the same network mask?&lt;BR /&gt;What happens if you change IP_MONITOR to ON?&lt;BR /&gt;Was anything logged in either node's syslog.log?&lt;BR /&gt;&lt;BR /&gt;You may need to enable some enhanced logging for this.&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Mar 2011 17:07:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760927#M660679</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-03-03T17:07:08Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760928#M660680</link>
      <description>Another suggestion: if you create the ASCII file again and ONLY change the cluster name, test that configuration with cmcheckconf. It may be that there is an inadvertent typo in your config changes.</description>
      <pubDate>Thu, 03 Mar 2011 17:19:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760928#M660680</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-03-03T17:19:04Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760929#M660681</link>
      <description>I could try that, however, the networks are discovered incorrectly (heartbeat and stationary are reversed) and there is that pesky FIRST_CLUSTER_LOCK_VG entry and missing QS_HOST.  I have no choice but to edit the file, at least a little.</description>
      <pubDate>Thu, 03 Mar 2011 17:59:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760929#M660681</guid>
      <dc:creator>Craig Johnson_1</dc:creator>
      <dc:date>2011-03-03T17:59:30Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760930#M660682</link>
      <description>OK, I did try what you suggested, but I added "-c a300cu38 -q a0300qrmp6" to the query.  It failed unless I removed the "-c a300cu38".  So I had to edit the cluster name (only) and then tried to run a cmcheckconf.  Same error as before.&lt;BR /&gt;&lt;BR /&gt;Gathering network information&lt;BR /&gt;Beginning network probing (this may take a while)&lt;BR /&gt;Completed network probing&lt;BR /&gt;Failed to evaluate network&lt;BR /&gt;Gathering polling target information&lt;BR /&gt;cmcheckconf: Unable to reconcile configuration file /tmp/asciinew.out&lt;BR /&gt; with discovered configuration information.&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Mar 2011 18:16:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760930#M660682</guid>
      <dc:creator>Craig Johnson_1</dc:creator>
      <dc:date>2011-03-03T18:16:36Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760931#M660683</link>
      <description>Now I also tried correcting the heartbeat/stationary stuff and it still fails.&lt;BR /&gt;&lt;BR /&gt;This was our TEST cluster.  I have since managed to get this working on two other clusters, one DEV and one QA.  There is something fishy about this one.</description>
      <pubDate>Thu, 03 Mar 2011 18:21:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760931#M660683</guid>
      <dc:creator>Craig Johnson_1</dc:creator>
      <dc:date>2011-03-03T18:21:13Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760932#M660684</link>
      <description>Sounds weird. Verify that the network config is the same, especially things like subnet masks.&lt;BR /&gt;Do ifconfig on each LAN interface and compare the two servers' outputs.</description>
      <pubDate>Thu, 03 Mar 2011 19:04:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760932#M660684</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-03-03T19:04:40Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760933#M660685</link>
      <description>I remember something in the documentation saying that you should not make any changes to your cluster while it is a mixed cluster (either different OS versions or Serviceguard versions), and that the commands to start and stop packages must be run on the node which has the newest version of the OS and Serviceguard.</description>
      <pubDate>Fri, 04 Mar 2011 05:05:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760933#M660685</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2011-03-04T05:05:32Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760934#M660686</link>
      <description>Emil, this is true when you have different versions of Serviceguard during a rolling upgrade, but the commands will stop you making configuration changes in that situation. It is NOT true when there are mixed versions of HP-UX.&lt;BR /&gt;&lt;BR /&gt;Back to the original problem. Firstly, the LVM errors are fatal and should be fixed first. I do not think they have an impact on the network probing, but there is a small possibility they do. Therefore, fix this first before moving on to the network errors.&lt;BR /&gt;&lt;BR /&gt;Do the cmclconfd daemons log any errors in the syslog files on any of the nodes? This is what I would look at first.</description>
      <pubDate>Fri, 04 Mar 2011 08:55:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760934#M660686</guid>
      <dc:creator>John Bigg</dc:creator>
      <dc:date>2011-03-04T08:55:21Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760935#M660687</link>
      <description>During testing: &lt;BR /&gt;mv /etc/nsswitch.conf /etc/nsswitch.conf.ORIG&lt;BR /&gt;cp /etc/nsswitch.file /etc/nsswitch.conf &lt;BR /&gt;&lt;BR /&gt;Ensure -all- the fixed IPs that are assigned to NICs on both nodes are listed in /etc/hosts, and aliased to the simple hostname of the sponsoring host. This is crucial (and documented in the Managing Serviceguard manual).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The cmquerycl -c option causes SG to include cluster binary configuration information, which may not be accurate.&lt;BR /&gt;What does the following produce?&lt;BR /&gt;cmquerycl -v -w full -n a300sua4 -n a300sua8</description>
      <pubDate>Fri, 04 Mar 2011 13:53:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760935#M660687</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2011-03-04T13:53:17Z</dc:date>
    </item>
    <item>
      <title>Re: Mixed O/S Cluster - cmcheckconf Failing</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760936#M660688</link>
      <description>$ cmquerycl -v -w full -n a300sua4 -n a300sua8&lt;BR /&gt;Gathering storage information&lt;BR /&gt;Found 94 devices on node a300sua4&lt;BR /&gt;Found 99 devices on node a300sua8&lt;BR /&gt;Analysis of 193 devices should take approximately 11 seconds&lt;BR /&gt;0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%&lt;BR /&gt;Note: Disks were discovered which are not in use by either LVM or VxVM.&lt;BR /&gt;      Use pvcreate(1M) to initialize a disk for LVM or,&lt;BR /&gt;      use vxdiskadm(1M) to initialize a disk for VxVM.&lt;BR /&gt;Volume group /dev/vg21 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg21 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg31 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg31 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg32 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg32 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Volume group /dev/vg33 is configured differently on node a300sua4 than on node a300sua8&lt;BR /&gt;Volume group /dev/vg33 is configured differently on node a300sua8 than on node a300sua4&lt;BR /&gt;Gathering network information&lt;BR /&gt;Beginning network probing (this may take a while)&lt;BR /&gt;Completed network probing&lt;BR /&gt;Gathering polling target information&lt;BR /&gt;&lt;BR /&gt;Node Names:    a300sua4&lt;BR /&gt;               a300sua8&lt;BR /&gt;&lt;BR /&gt;Bridged networks (full probing performed):&lt;BR /&gt;&lt;BR /&gt;1       lan1           (a300sua4)&lt;BR /&gt;        lan1           (a300sua8)&lt;BR /&gt;&lt;BR /&gt;2       lan0           (a300sua4)&lt;BR /&gt;        lan3           (a300sua4)&lt;BR /&gt;        lan3           (a300sua8)&lt;BR /&gt;&lt;BR /&gt;3       lan2           
(a300sua4)&lt;BR /&gt;        lan2           (a300sua8)&lt;BR /&gt;&lt;BR /&gt;4       lan4           (a300sua4)&lt;BR /&gt;        lan4           (a300sua8)&lt;BR /&gt;        lan0           (a300sua8)&lt;BR /&gt;&lt;BR /&gt;IP subnets:&lt;BR /&gt;&lt;BR /&gt;IPv4:&lt;BR /&gt;&lt;BR /&gt;169.254.2.0        lan1      (a300sua4)&lt;BR /&gt;                   lan1      (a300sua8)&lt;BR /&gt;&lt;BR /&gt;10.20.37.0         lan2      (a300sua4)&lt;BR /&gt;                   lan2      (a300sua8)&lt;BR /&gt;&lt;BR /&gt;169.254.1.0        lan3      (a300sua4)&lt;BR /&gt;                   lan3      (a300sua8)&lt;BR /&gt;&lt;BR /&gt;10.20.209.0        lan4      (a300sua4)&lt;BR /&gt;                   lan4      (a300sua8)&lt;BR /&gt;&lt;BR /&gt;IPv6:&lt;BR /&gt;&lt;BR /&gt;Possible Heartbeat IPs:&lt;BR /&gt;&lt;BR /&gt;IPv4:&lt;BR /&gt;&lt;BR /&gt;169.254.2.0                       169.254.2.224       (a300sua4)&lt;BR /&gt;                                  169.254.2.228       (a300sua8)&lt;BR /&gt;&lt;BR /&gt;10.20.37.0                        10.20.37.124        (a300sua4)&lt;BR /&gt;                                  10.20.37.228        (a300sua8)&lt;BR /&gt;&lt;BR /&gt;169.254.1.0                       169.254.1.224       (a300sua4)&lt;BR /&gt;                                  169.254.1.228       (a300sua8)&lt;BR /&gt;&lt;BR /&gt;10.20.209.0                       10.20.209.224       (a300sua4)&lt;BR /&gt;                                  10.20.209.228       (a300sua8)&lt;BR /&gt;&lt;BR /&gt;IPv6:&lt;BR /&gt;&lt;BR /&gt;Route Connectivity (full probing performed):&lt;BR /&gt;&lt;BR /&gt;IPv4:&lt;BR /&gt;&lt;BR /&gt;1   169.254.2.0&lt;BR /&gt;&lt;BR /&gt;2   10.20.37.0&lt;BR /&gt;&lt;BR /&gt;3   169.254.1.0&lt;BR /&gt;&lt;BR /&gt;4   10.20.209.0&lt;BR /&gt;&lt;BR /&gt;Possible IP Monitor Subnets:&lt;BR /&gt;&lt;BR /&gt;IPv4:&lt;BR /&gt;&lt;BR /&gt;10.20.209.0        Polling Target 10.20.209.1&lt;BR /&gt;&lt;BR /&gt;IPv6:&lt;BR /&gt;&lt;BR /&gt;Possible Cluster Lock 
Devices:&lt;BR /&gt;&lt;BR /&gt;NO CLUSTER LOCK:                        28 seconds&lt;BR /&gt;&lt;BR /&gt;LVM volume groups:&lt;BR /&gt;&lt;BR /&gt;/dev/vg00               a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg01               a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg21               a300sua4&lt;BR /&gt;                        a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg09               a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg40               a300sua4&lt;BR /&gt;                        a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg31               a300sua4&lt;BR /&gt;                        a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg32               a300sua4&lt;BR /&gt;                        a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg33               a300sua4&lt;BR /&gt;                        a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg00               a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg01               a300sua8&lt;BR /&gt;&lt;BR /&gt;LVM physical volumes:&lt;BR /&gt;&lt;BR /&gt;/dev/vg00&lt;BR /&gt;/dev/dsk/c1t2d0s2  0/4/1/0.0.0.2.0               a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg01&lt;BR /&gt;/dev/dsk/c1t0d0    0/4/1/0.0.0.0.0               a300sua4&lt;BR /&gt;/dev/dsk/c1t1d0    0/4/1/0.0.0.1.0               a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg21&lt;BR /&gt;/dev/dsk/c29t11d5  0/7/1/0.98.78.19.2.11.5       a300sua4&lt;BR /&gt;/dev/dsk/c28t11d5  0/3/1/0.97.125.19.2.11.5      a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/disk/disk90   64000/0xfa00/0x2e             a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg09&lt;BR /&gt;/dev/dsk/c34t0d1   0/3/1/0.99.80.19.0.0.1        a300sua4&lt;BR /&gt;/dev/dsk/c36t0d1   0/7/1/0.100.80.19.0.0.1       a300sua4&lt;BR /&gt;/dev/dsk/c40t0d1   0/3/1/0.99.5.19.0.0.1         a300sua4&lt;BR /&gt;/dev/dsk/c41t0d1   0/7/1/0.100.87.19.0.0.1       a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/vg40&lt;BR /&gt;/dev/dsk/c41t0d0   0/7/1/0.100.87.19.0.0.0       a300sua4&lt;BR /&gt;/dev/dsk/c34t0d0   0/3/1/0.99.80.19.0.0.0        a300sua4&lt;BR /&gt;/dev/dsk/c36t0d0   
0/7/1/0.100.80.19.0.0.0       a300sua4&lt;BR /&gt;/dev/dsk/c40t0d0   0/3/1/0.99.5.19.0.0.0         a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c3t0d0    0/3/1/0.99.80.19.0.0.0        a300sua8&lt;BR /&gt;/dev/dsk/c1t0d0    0/7/1/0.100.80.19.0.0.0       a300sua8&lt;BR /&gt;/dev/dsk/c5t0d0    0/3/1/0.99.5.19.0.0.0         a300sua8&lt;BR /&gt;/dev/dsk/c7t0d0    0/7/1/0.100.87.19.0.0.0       a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg31&lt;BR /&gt;/dev/dsk/c28t13d4  0/3/1/0.97.125.19.2.13.4      a300sua4&lt;BR /&gt;/dev/dsk/c29t13d4  0/7/1/0.98.78.19.2.13.4       a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/disk/disk91   64000/0xfa00/0x2f             a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg32&lt;BR /&gt;/dev/dsk/c16t2d2   0/3/1/0.97.125.19.8.2.2       a300sua4&lt;BR /&gt;/dev/dsk/c11t8d0   0/7/1/0.98.78.19.8.8.0        a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/disk/disk98   64000/0xfa00/0x36             a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg33&lt;BR /&gt;/dev/dsk/c28t13d5  0/3/1/0.97.125.19.2.13.5      a300sua4&lt;BR /&gt;/dev/dsk/c15t6d5   0/3/1/0.97.125.19.6.6.5       a300sua4&lt;BR /&gt;/dev/dsk/c29t13d5  0/7/1/0.98.78.19.2.13.5       a300sua4&lt;BR /&gt;/dev/dsk/c10t2d0   0/7/1/0.98.78.19.5.2.0        a300sua4&lt;BR /&gt;&lt;BR /&gt;/dev/disk/disk94   64000/0xfa00/0x32             a300sua8&lt;BR /&gt;/dev/disk/disk92   64000/0xfa00/0x30             a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg00&lt;BR /&gt;/dev/disk/disk52_p264000/0xfa00/0x6              a300sua8&lt;BR /&gt;&lt;BR /&gt;/dev/vg01&lt;BR /&gt;/dev/disk/disk53   64000/0xfa00/0x7              a300sua8&lt;BR /&gt;&lt;BR /&gt;LVM logical volumes:&lt;BR /&gt;&lt;BR /&gt;Volume groups on a300sua4:&lt;BR /&gt;&lt;BR /&gt;Volume groups on a300sua8:&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Mar 2011 14:11:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mixed-o-s-cluster-cmcheckconf-failing/m-p/4760936#M660688</guid>
      <dc:creator>Craig Johnson_1</dc:creator>
      <dc:date>2011-03-04T14:11:10Z</dc:date>
    </item>
  </channel>
</rss>

