<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ServiceGuard Lan cards in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487750#M673471</link>
    <description>topic Re: ServiceGuard Lan cards in Operating System - HP-UX</description>
    <pubDate>Fri, 28 Aug 2009 08:51:48 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2009-08-28T08:51:48Z</dc:date>
    <item>
      <title>ServiceGuard Lan cards</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487748#M673469</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Our customer has 3 x rp3440 servers running HP-UX 11i v1 and ServiceGuard A.11.15.&lt;BR /&gt;The 3 nodes form a cluster managed by S.G.&lt;BR /&gt;I have configured the 3 NICs of each node as follows:&lt;BR /&gt;- lan0 is defined as HeartBeat_IP and connected to the first switch (hub) in an independent VLAN (10.10.1.1)&lt;BR /&gt;- lan1 is defined as HeartBeat_IP and connected to a second switch in another VLAN (192.168.1.100); this subnet is used in the package for data traffic&lt;BR /&gt;- lan2 is defined as a StandBy card connected to the second switch, in the same VLAN as lan1.&lt;BR /&gt;NB: there is no connection between the above 2 VLANs.&lt;BR /&gt;1- What will be the cluster situation in case of failure of the second switch (lan1/lan2)?&lt;BR /&gt;2- What is the best configuration of the 3 NICs through S.G. and the corresponding physical network connections?&lt;BR /&gt;&lt;BR /&gt;Thanks and Regards&lt;BR /&gt;Roger</description>
      <pubDate>Fri, 28 Aug 2009 07:52:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487748#M673469</guid>
      <dc:creator>Roro_2</dc:creator>
      <dc:date>2009-08-28T07:52:01Z</dc:date>
    </item>
    <item>
      <title>Re: ServiceGuard Lan cards</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487749#M673470</link>
      <description>1- As you have defined lan0 as a heartbeat, the cluster will continue to function normally. The package state will depend on your configuration: the package may go down if you monitor that subnet. Using one switch for the data network is a SPOF, so you may want to use redundant switches as well.&lt;BR /&gt;2- If you have 3 cards, I think the best approach is to use one card for the cluster interconnect and the other two for the data network.&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Aug 2009 08:31:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487749#M673470</guid>
      <dc:creator>Turgay Cavdar</dc:creator>
      <dc:date>2009-08-28T08:31:43Z</dc:date>
    </item>
    <item>
      <title>Re: ServiceGuard Lan cards</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487750#M673471</link>
      <description>1.) The lan1/lan2 switch is a Single Point of Failure in your current configuration. If it fails, the applications of the cluster will be inaccessible (isolated from the rest of your network).&lt;BR /&gt;&lt;BR /&gt;However, because the cluster heartbeat has an alternate route, the cluster will not start failing packages over or rebooting nodes, so it will remain ready to resume service as soon as the switch is fixed.&lt;BR /&gt;&lt;BR /&gt;If your switch has enough built-in fault tolerance (multiple switch port modules, controllers &amp;amp; power supplies) that a failure of the entire switch is unlikely enough for your purposes, this may be acceptable.&lt;BR /&gt;&lt;BR /&gt;2.) An independent heartbeat connection is always a good thing, so your lan0 configuration is good.&lt;BR /&gt;&lt;BR /&gt;Without getting more hardware, I don't think you can improve your lan1/lan2 configuration.&lt;BR /&gt;&lt;BR /&gt;Getting another switch for data traffic would improve fault tolerance: you would trunk the two data-traffic switches together, then connect the lan1 NICs of all nodes to one switch and the lan2 NICs to the other. With this configuration, the failure of one switch becomes survivable:&lt;BR /&gt;&lt;BR /&gt;- heartbeat switch failure: no problem, the heartbeat goes through the data subnet too.&lt;BR /&gt;&lt;BR /&gt;- lan1 switch failure: no problem, all nodes fail over to lan2 and keep serving clients; the heartbeat on lan1 fails over to lan2 too.&lt;BR /&gt;&lt;BR /&gt;- lan2 switch failure: just like a lan1 switch failure.&lt;BR /&gt;&lt;BR /&gt;NIC failures are no problem either:&lt;BR /&gt;&lt;BR /&gt;- lan0 NIC failure in any node: no problem, the heartbeat on the data subnet allows the system to keep running normally until the next scheduled maintenance break.&lt;BR /&gt;&lt;BR /&gt;- lan1 or lan2 NIC failure in any node: no problem, that node just fails over to the other NIC, and the trunk connection between the data switches allows the data to pass from one switch to the other.&lt;BR /&gt;&lt;BR /&gt;If your nodes allow On-Line Replacement of NICs, you could even replace them without stopping any of the nodes.&lt;BR /&gt;&lt;BR /&gt;NOTE: Serviceguard A.11.15 is obsolete, and HP-UX 11i v1 is approaching its end of life. To ensure a painless upgrade in the future, you should first upgrade to the latest version of Serviceguard available for 11i v1 as soon as convenient. The newer versions have supported upgrade paths to newer OS versions.&lt;BR /&gt;&lt;BR /&gt;You can upgrade Serviceguard as a rolling upgrade (one node at a time), but you cannot make any cluster configuration changes while the nodes are not all at the same Serviceguard version.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 28 Aug 2009 08:51:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487750#M673471</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2009-08-28T08:51:48Z</dc:date>
    </item>
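The NIC roles discussed in the thread (dedicated heartbeat, heartbeat-plus-data, standby) map onto the per-node section of the Serviceguard cluster ASCII configuration file. This is a sketch only: node names are assumed, the IP addresses are taken from Roger's post, and the exact layout should be checked against the file generated by cmquerycl on the A.11.15 system itself.

```text
NODE_NAME node1
  NETWORK_INTERFACE lan0        # dedicated heartbeat VLAN
    HEARTBEAT_IP 10.10.1.1
  NETWORK_INTERFACE lan1        # data subnet, also carries a heartbeat
    HEARTBEAT_IP 192.168.1.100
  NETWORK_INTERFACE lan2        # listed with no IP address = standby for lan1
```

After editing the ASCII file, cmcheckconf -C validates it and cmapplyconf -C distributes the binary cluster configuration to the nodes.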
    <item>
      <title>Re: ServiceGuard Lan cards</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487751#M673472</link>
      <description>Shalom Roger,&lt;BR /&gt;&lt;BR /&gt;You have asked for an opinion, so do not expect unanimity.&lt;BR /&gt;&lt;BR /&gt;If I had two nodes with 3 NICs each: two NICs on the corporate LAN bonded with Auto Port Aggregation (APA), and the third NIC on a private hub. Two heartbeats: one configured on the corporate LAN, one on the private LAN.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 28 Aug 2009 09:16:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-lan-cards/m-p/4487751#M673472</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-08-28T09:16:51Z</dc:date>
    </item>
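SEP's APA bonding would be configured outside Serviceguard, in the APA startup configuration files. The fragment below is only an illustration: the variable names and values are written from memory and may differ by APA release, so verify them against the hp_apaconf(4) and hp_apaportconf(4) manpages on the system before use.

```text
# /etc/rc.config.d/hp_apaconf -- define the link aggregate (names assumed)
HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC

# /etc/rc.config.d/hp_apaportconf -- physical NICs joining the aggregate
HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_CONFIG_MODE[0]=MANUAL
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_CONFIG_MODE[1]=MANUAL
```

The aggregate interface (lan900 here, hypothetical) would then be the one given an IP address and referenced in the cluster configuration, in place of the individual lan1/lan2 NICs.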
  </channel>
</rss>

