<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: 50% packet loss problem. in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640640#M50164</link>
    <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;which operation or command do you use that reports '50% packet loss'?&lt;BR /&gt;&lt;BR /&gt;TCPIP PING &lt;OTHER-NODE&gt; ?&lt;BR /&gt;&lt;BR /&gt;If so, what does your TCPIP interface config look like (TCPIP SHOW INT)? Did you configure both interfaces on both nodes into the same subnet? TCPIP might do round-robin on transmit and - if it can't reach the other node on one of the IP interfaces - this would account for 50% of the packets lost.&lt;BR /&gt;&lt;BR /&gt;Consider also checking your SCS channels with&lt;BR /&gt;&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&amp;gt; SHOW CHANNEL&lt;BR /&gt;&lt;BR /&gt;and make sure that each node has two channels open to the other node (via the cross-over cable and the network connection).&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
    <pubDate>Tue, 04 Oct 2005 02:20:52 GMT</pubDate>
    <dc:creator>Volker Halle</dc:creator>
    <dc:date>2005-10-04T02:20:52Z</dc:date>
    <item>
      <title>50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640634#M50158</link>
      <description>I clustered two ES47s (OpenVMS 7.3-2) together, using two PCI Dual 10/100 UTP Ethernet cards to connect the systems:  I used one port as the heartbeat and the other as the network connection.  My problem is that I consistently get 50% packet loss between the two systems.  Any suggestions?  Thanks. &lt;BR /&gt;&lt;BR /&gt;Thomas.</description>
      <pubDate>Mon, 03 Oct 2005 14:44:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640634#M50158</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-03T14:44:31Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640635#M50159</link>
      <description>If this is a new installation, check the network speed/duplex setting.  Use&lt;BR /&gt;&lt;BR /&gt;$ MCR LANCP SHOW DEVICE /CHAR&lt;BR /&gt;&lt;BR /&gt;You can set and define line speed and duplex here.  SET applies to the current boot; DEFINE is the permanent configuration for the next reboot.  For example:&lt;BR /&gt;&lt;BR /&gt;$ MC LANCP SET DEVICE EIA0 /SPEED=100/FULL_DUPLEX&lt;BR /&gt;&lt;BR /&gt;I've found that hard-setting both the host and the network equipment leads to fewer problems.&lt;BR /&gt;&lt;BR /&gt;What do you define as the heartbeat?  Are you using a crossover cable?  This should also be configured manually.  &lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;</description>
      <pubDate>Mon, 03 Oct 2005 15:03:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640635#M50159</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-10-03T15:03:33Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640636#M50160</link>
      <description>Hi Andy,&lt;BR /&gt;&lt;BR /&gt;Thank you very much for your quick response.  Yes, I am using the crossover cable to connect the two servers together.  This "heartbeat" connection is to determine if the primary is down; if so, the secondary is supposed to kick in.  Let me try the commands that I got from you and I'll get back with you.  Thanks again Andy.&lt;BR /&gt;&lt;BR /&gt;Thomas.</description>
      <pubDate>Mon, 03 Oct 2005 15:10:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640636#M50160</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-03T15:10:04Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640637#M50161</link>
      <description>I am using the cross-over cable to connect the two servers.  How do I manually configure this in OpenVMS?  Thanks.</description>
      <pubDate>Mon, 03 Oct 2005 17:38:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640637#M50161</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-03T17:38:04Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640638#M50162</link>
      <description>To manually configure the cross connection:  plug in the cable and make sure both ends are configured for the same network speed and duplex setting.  As above, but substitute DEFINE for SET to configure the permanent database.&lt;BR /&gt;&lt;BR /&gt;The default behavior of OpenVMS in a cluster configuration is to use all available paths for cluster traffic.  Having a crossover cable on the second NIC will spare you from having problems if some bright networking type decides to "quickly do some switch maintenance during lunch when no one will notice a brief outage."  Your systems will also pass cluster traffic on the primary interface.  You won't need to configure this behavior; it will happen automatically.  &lt;BR /&gt;&lt;BR /&gt;I've assumed that you have a DE-602 NIC based on the information in your question.  The interfaces will show up as EIA0 and EIB0.  For an even more robust configuration, add a second DE-602, connect the second interface with a crossover cable, and use LAN Failover or FailSAFE IP on the first two interfaces.  &lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;</description>
      <pubDate>Mon, 03 Oct 2005 22:15:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640638#M50162</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-10-03T22:15:14Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640639#M50163</link>
      <description>Hi Thomas,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; consistently got the 50% packet loss &lt;BR /&gt;&lt;BR /&gt;Does the packet loss occur when using IP? If so, it might be due to the setup of the interfaces and routing table. Please post your interfaces and routing table.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards,&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 04 Oct 2005 01:36:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640639#M50163</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2005-10-04T01:36:48Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640640#M50164</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;which operation or command do you use that reports '50% packet loss'?&lt;BR /&gt;&lt;BR /&gt;TCPIP PING &lt;OTHER-NODE&gt; ?&lt;BR /&gt;&lt;BR /&gt;If so, what does your TCPIP interface config look like (TCPIP SHOW INT)? Did you configure both interfaces on both nodes into the same subnet? TCPIP might do round-robin on transmit and - if it can't reach the other node on one of the IP interfaces - this would account for 50% of the packets lost.&lt;BR /&gt;&lt;BR /&gt;Consider also checking your SCS channels with&lt;BR /&gt;&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&amp;gt; SHOW CHANNEL&lt;BR /&gt;&lt;BR /&gt;and make sure that each node has two channels open to the other node (via the cross-over cable and the network connection).&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 04 Oct 2005 02:20:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640640#M50164</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-10-04T02:20:52Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640641#M50165</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;I tried the DEFINE, but that did not resolve the issue yet.  Here are the /CHAR characteristics for the interfaces on both servers:&lt;BR /&gt;&lt;BR /&gt;LANCP&amp;gt; show dev /char&lt;BR /&gt;&lt;BR /&gt;Device Characteristics EIA0:&lt;BR /&gt;                  Value  Characteristic&lt;BR /&gt;                  -----  --------------&lt;BR /&gt;                   1500  Device buffer size&lt;BR /&gt;                 Normal  Controller mode&lt;BR /&gt;               External  Internal loopback mode&lt;BR /&gt;      00-13-21-08-18-1A  Hardware LAN address&lt;BR /&gt;                         Multicast address list&lt;BR /&gt;                CSMA/CD  Communication medium&lt;BR /&gt;      FF-FF-FF-FF-FF-FF  Current LAN address&lt;BR /&gt;                    128  Minimum receive buffers&lt;BR /&gt;                    256  Maximum receive buffers&lt;BR /&gt;                    Yes  Full duplex enable&lt;BR /&gt;                    Yes  Full duplex operational&lt;BR /&gt;            TwistedPair  Line media type&lt;BR /&gt;                    100  Line speed (mbps)&lt;BR /&gt;    Disabled/No Failset  Logical LAN state&lt;BR /&gt;                      0  Failover priority&lt;BR /&gt;&lt;BR /&gt;Device Characteristics EIB0:&lt;BR /&gt;                  Value  Characteristic&lt;BR /&gt;                  -----  --------------&lt;BR /&gt;                   1500  Device buffer size&lt;BR /&gt;                 Normal  Controller mode&lt;BR /&gt;               External  Internal loopback mode&lt;BR /&gt;      00-13-21-08-18-1B  Hardware LAN address&lt;BR /&gt;                         Multicast address list&lt;BR /&gt;                CSMA/CD  Communication medium&lt;BR /&gt;      FF-FF-FF-FF-FF-FF  Current LAN address&lt;BR /&gt;                    128  Minimum receive buffers&lt;BR /&gt;                    256  Maximum receive buffers&lt;BR /&gt;                    Yes  Full duplex enable&lt;BR /&gt;                    Yes  Full duplex 
operational&lt;BR /&gt;            TwistedPair  Line media type&lt;BR /&gt;                    100  Line speed (mbps)&lt;BR /&gt;    Disabled/No Failset  Logical LAN state&lt;BR /&gt;                      0  Failover priority&lt;BR /&gt;&lt;BR /&gt;LANCP&amp;gt; show dev /char&lt;BR /&gt;&lt;BR /&gt;Device Characteristics EIA0:&lt;BR /&gt;                  Value  Characteristic&lt;BR /&gt;                  -----  --------------&lt;BR /&gt;                   1500  Device buffer size&lt;BR /&gt;                 Normal  Controller mode&lt;BR /&gt;               External  Internal loopback mode&lt;BR /&gt;      00-13-21-08-18-00  Hardware LAN address&lt;BR /&gt;                         Multicast address list&lt;BR /&gt;                CSMA/CD  Communication medium&lt;BR /&gt;      FF-FF-FF-FF-FF-FF  Current LAN address&lt;BR /&gt;                    128  Minimum receive buffers&lt;BR /&gt;                    256  Maximum receive buffers&lt;BR /&gt;                    Yes  Full duplex enable&lt;BR /&gt;                    Yes  Full duplex operational&lt;BR /&gt;            TwistedPair  Line media type&lt;BR /&gt;                    100  Line speed (mbps)&lt;BR /&gt;    Disabled/No Failset  Logical LAN state&lt;BR /&gt;                      0  Failover priority&lt;BR /&gt;&lt;BR /&gt;Device Characteristics EIB0:&lt;BR /&gt;                  Value  Characteristic&lt;BR /&gt;                  -----  --------------&lt;BR /&gt;                   1500  Device buffer size&lt;BR /&gt;                 Normal  Controller mode&lt;BR /&gt;               External  Internal loopback mode&lt;BR /&gt;      00-13-21-08-18-01  Hardware LAN address&lt;BR /&gt;                         Multicast address list&lt;BR /&gt;                CSMA/CD  Communication medium&lt;BR /&gt;      FF-FF-FF-FF-FF-FF  Current LAN address&lt;BR /&gt;                    128  Minimum receive buffers&lt;BR /&gt;                    256  Maximum receive buffers&lt;BR /&gt;                    Yes  Full duplex 
enable&lt;BR /&gt;                    Yes  Full duplex operational&lt;BR /&gt;            TwistedPair  Line media type&lt;BR /&gt;                    100  Line speed (mbps)&lt;BR /&gt;    Disabled/No Failset  Logical LAN state&lt;BR /&gt;                      0  Failover priority&lt;BR /&gt;Volker,&lt;BR /&gt;&lt;BR /&gt;Yes, I used the PING command and got the packet loss error.  Here is what my TCPIP interface config looks like on both servers:&lt;BR /&gt;&lt;BR /&gt;UNITY1$ tcpip&lt;BR /&gt;TCPIP&amp;gt; show int&lt;BR /&gt;                                                           Packets&lt;BR /&gt;Interface   IP_Addr         Network mask          Receive          Send     MTU&lt;BR /&gt;&lt;BR /&gt; IE0        10.10.10.10     255.0.0.0                  10             6    1500&lt;BR /&gt; IE1        10.1.1.1        255.0.0.0                   6             6    1500&lt;BR /&gt; LO0        127.0.0.1       255.0.0.0                   0             0    4096&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;TCPIP&amp;gt; show int&lt;BR /&gt;                                                           Packets&lt;BR /&gt;Interface   IP_Addr         Network mask          Receive          Send     MTU&lt;BR /&gt;&lt;BR /&gt; IE0        10.10.10.20     255.0.0.0                   5             9    1500&lt;BR /&gt; IE1        10.1.1.2        255.0.0.0                   5             6    1500&lt;BR /&gt; LO0        127.0.0.1       255.0.0.0                  16            16    4096&lt;BR /&gt;TCPIP&amp;gt; exit&lt;BR /&gt;  &lt;BR /&gt;Also, here are my SCS channels for Servers 1 and 2, respectively:&lt;BR /&gt;&lt;BR /&gt;UNITY1$ mc scacp&lt;BR /&gt;SCACP&amp;gt; show channel&lt;BR /&gt;&lt;BR /&gt;UNITY1 PEA0 Channel Summary  4-OCT-2005 10:08:12.66:&lt;BR /&gt;&lt;BR /&gt;  Remote  LAN Dev Channel  Total    ECS     Priority       Buffer   Delay   Load&lt;BR /&gt;    Total              ---- Most Recent ----&lt;BR /&gt;   Node   Loc Rmt  State  Errors   State    Cur  Mgt  Hops  Size    (uSec)  
Clas&lt;BR /&gt;s  Pkts(S+R)    CH Opened Time     CH Closed Time&lt;BR /&gt;   ----   --- ---  -----  ------   -----    ---  ---  ----  ----    ------  ----&lt;BR /&gt;-  ---------    --------------     --------------&lt;BR /&gt;  UNITY2  EIA EIA  Open        2  N(T,P,S)    0    0    2   1426   28605.4   100&lt;BR /&gt;      20666  04-OCT 09:41:56.66  (No time)&lt;BR /&gt;  UNITY2  EIB EIB  Open        2  Y(T,P,F)    0    0    2   1426   21534.0   100&lt;BR /&gt;      48916  04-OCT 09:41:52.41  (No time)&lt;BR /&gt;  UNITY1  LCL LCL  Open        1  Y(T,P,F)    0    0    2   1426     253.3     0&lt;BR /&gt;       2729  04-OCT 09:41:48.39  (No time)&lt;BR /&gt;&lt;BR /&gt;UNITY2 PEA0 Channel Summary  4-OCT-2005 10:07:27.51:&lt;BR /&gt;&lt;BR /&gt;  Remote  LAN Dev Channel  Total    ECS     Priority       Buffer   Delay   Load&lt;BR /&gt;    Total              ---- Most Recent ----&lt;BR /&gt;   Node   Loc Rmt  State  Errors   State    Cur  Mgt  Hops  Size    (uSec)  Clas&lt;BR /&gt;s  Pkts(S+R)    CH Opened Time     CH Closed Time&lt;BR /&gt;   ----   --- ---  -----  ------   -----    ---  ---  ----  ----    ------  ----&lt;BR /&gt;-  ---------    --------------     --------------&lt;BR /&gt;  UNITY1  EIA EIA  Open        5  Y(T,P,F)    0    0    2   1426    5119.2   100&lt;BR /&gt;      20520  04-OCT 09:41:56.52  (No time)&lt;BR /&gt;  UNITY1  EIB EIB  Open        4  Y(T,P,F)    0    0    2   1426    4394.5   100&lt;BR /&gt;      48704  04-OCT 09:41:52.28  (No time)&lt;BR /&gt;  UNITY2  LCL LCL  Open        1  Y(T,P,F)    0    0    2   1426     253.3     0&lt;BR /&gt;       2649  04-OCT 09:41:49.15  (No time)&lt;BR /&gt;&lt;BR /&gt;Michael,&lt;BR /&gt;&lt;BR /&gt;Here is my Route table:&lt;BR /&gt;&lt;BR /&gt;TCPIP&amp;gt; show route&lt;BR /&gt;&lt;BR /&gt;                             DYNAMIC&lt;BR /&gt;&lt;BR /&gt;Type           Destination                           Gateway&lt;BR /&gt;&lt;BR /&gt;AN    0.0.0.0                               10.10.10.254&lt;BR 
/&gt;AN    10.0.0.0/8                            10.10.10.10&lt;BR /&gt;AN    10.0.0.0/8                            10.1.1.1&lt;BR /&gt;AH    10.1.1.1                              10.1.1.1&lt;BR /&gt;AH    10.10.10.10                           10.10.10.10&lt;BR /&gt;AH    127.0.0.1                             127.0.0.1&lt;BR /&gt;TCPIP&amp;gt;&lt;BR /&gt;&lt;BR /&gt;For server 2:&lt;BR /&gt;&lt;BR /&gt;UNITY2$ tcpip&lt;BR /&gt;TCPIP&amp;gt; show route&lt;BR /&gt;&lt;BR /&gt;                             DYNAMIC&lt;BR /&gt;&lt;BR /&gt;Type           Destination                           Gateway&lt;BR /&gt;&lt;BR /&gt;AN    0.0.0.0                               10.10.10.254&lt;BR /&gt;AN    10.0.0.0/8                            10.10.10.20&lt;BR /&gt;AN    10.0.0.0/8                            10.1.1.2&lt;BR /&gt;AH    10.1.1.2                              10.1.1.2&lt;BR /&gt;AH    10.10.10.20                           10.10.10.20&lt;BR /&gt;AH    127.0.0.1                             127.0.0.1&lt;BR /&gt;TCPIP&amp;gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks all for your quick response.  I really appreciate all your help.  Hopefully, my issue will be resolved soon.&lt;BR /&gt;&lt;BR /&gt;Thomas.</description>
      <pubDate>Tue, 04 Oct 2005 10:12:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640641#M50165</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-04T10:12:42Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640642#M50166</link>
      <description>If the network is otherwise idle, a duplex mismatch will not result in ping losses.  The reason is that ping has only one packet in flight at a time, so there is no chance of both ends trying to talk at the same time, and thus no chance of the CSMA/CD failure that occurs in the half-duplex case.&lt;BR /&gt;&lt;BR /&gt;So, something else must be wrong.&lt;BR /&gt;&lt;BR /&gt;It might be good to make sure that the cable is truly good - if you can verify its operation somewhere else, do so.</description>
      <pubDate>Tue, 04 Oct 2005 11:44:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640642#M50166</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2005-10-04T11:44:40Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640643#M50167</link>
      <description>Here you go:&lt;BR /&gt;&lt;BR /&gt;IE0 10.10.10.10 255.0.0.0 10 6 1500&lt;BR /&gt;IE1 10.1.1.1 255.0.0.0 6 6 1500&lt;BR /&gt;&lt;BR /&gt;Both of these addresses are in the same subnet, but the second interface is only connected to the other ES-47 by a crossover cable.  You don't need TCPIP configured to support cluster traffic (SCS).&lt;BR /&gt;&lt;BR /&gt;$ TCPIP SET NOINTERFACE IE1&lt;BR /&gt;$ TCPIP SET CONFIG NOINTERFACE IE1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;</description>
      <pubDate>Tue, 04 Oct 2005 13:02:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640643#M50167</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-10-04T13:02:41Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640644#M50168</link>
      <description>ok, i'm not a VMS networking expert but...&lt;BR /&gt;&lt;BR /&gt;does VMS really "like" having multiple physical NICs configured into the same subnet? &lt;BR /&gt;&lt;BR /&gt;does VMS only accept traffic destined for a given IP address on the interface on which that address is configured?&lt;BR /&gt;&lt;BR /&gt;perhaps the ping responses (or requests) are getting round-robined, and hitting the "wrong" interface?&lt;BR /&gt;&lt;BR /&gt;if it doesn't mess-up anything else, you might try putting the interfaces in separate subnets (at both ends)</description>
      <pubDate>Tue, 04 Oct 2005 13:14:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640644#M50168</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2005-10-04T13:14:42Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640645#M50169</link>
      <description>&amp;gt;&amp;gt;does VMS really "like" having multiple physical NICs configured into the same subnet?&lt;BR /&gt;&lt;BR /&gt;You can have multiple NICs in the same subnet.  See &lt;A href="http://h71000.www7.hp.com/openvms/journal/v2/articles/tcpip.html" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v2/articles/tcpip.html&lt;/A&gt; for some FailSAFE IP configuration ideas.  &lt;BR /&gt;&lt;BR /&gt;In this case, the second NIC is connected by a crossover cable to the other ES-47.  Any IP traffic using this interface won't reach the network.  TCPIP seems to be dividing outbound traffic between the two interfaces configured in the same subnet.  &lt;BR /&gt;&lt;BR /&gt;Since the second interface is only there for cluster traffic, the easiest solution is to disable TCPIP on these interfaces.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 04 Oct 2005 13:28:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640645#M50169</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-10-04T13:28:16Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640646#M50170</link>
      <description>Andy,&lt;BR /&gt;&lt;BR /&gt;It worked!  My question is: what if I still want to use TCPIP on IE1 - how should I configure it so that I do not have any packet loss?  &lt;BR /&gt;&lt;BR /&gt;Thanks Andy; you guys are really helpful.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Thomas.</description>
      <pubDate>Tue, 04 Oct 2005 13:53:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640646#M50170</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-04T13:53:22Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640647#M50171</link>
      <description>&amp;gt;&amp;gt;&amp;gt; what if I still want to use TCPIP on the EI1&lt;BR /&gt;&lt;BR /&gt;What are you trying to accomplish here?  With a 2-node cluster, a crossover cable to carry cluster traffic is about as reliable an interface as you can get.  You also keep unnecessary TCPIP traffic off the LAN segment. &lt;BR /&gt;&lt;BR /&gt;If you want to connect these interfaces to the same network as your IE0 interface, you create a single point of failure, unless you also use redundant physical switches, cross-connected.  Note to network specialists: two VLANs on the same physical super switch do not count as redundant.  &lt;BR /&gt;&lt;BR /&gt;You could configure another subnet on this interface if you have business requirements - 192.168.1.1 &amp;amp; .2 with a /24 mask, for example.  You could also add additional networking equipment and configure FailSAFE IP among all four interfaces.  Cluster traffic will automatically find these paths.  Matt Muggeridge's article in the link above has nice tips on configuring FailSAFE IP in a clustered environment.  &lt;BR /&gt;&lt;BR /&gt;Your business requirements should drive the decision here.  &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Tue, 04 Oct 2005 14:17:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640647#M50171</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-10-04T14:17:45Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640648#M50172</link>
      <description>Thanks Andy!  I think your analysis resolved my issue.  I learned a lot from this forum.  Thanks guys!  Btw, how do I rate the points for the solution?  Thomas.</description>
      <pubDate>Tue, 04 Oct 2005 15:10:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640648#M50172</guid>
      <dc:creator>thomas_220</dc:creator>
      <dc:date>2005-10-04T15:10:49Z</dc:date>
    </item>
    <item>
      <title>Re: 50% packet loss problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640649#M50173</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;You can assign points by viewing your posting while logged in; next to each reply there is a box to fill in with the points for relevance.  Then there is a button somewhere above or below to submit the points assignment.  &lt;BR /&gt;&lt;BR /&gt;Robert</description>
      <pubDate>Tue, 04 Oct 2005 17:01:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/50-packet-loss-problem/m-p/3640649#M50173</guid>
      <dc:creator>Robert_Boyd</dc:creator>
      <dc:date>2005-10-04T17:01:10Z</dc:date>
    </item>
  </channel>
</rss>

