<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Very bad Performance over native DECnet in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000613#M53002</link>
    <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;If I do same test by using the scssystemid instead of the nodename the performance increases, but is still not good enough.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;What do you mean by this? What is the difference between using the SCSSYSTEMID or the node name for the DTSEND test? Is it selecting NSP instead of OSI transport? Do the node name and the SCSSYSTEMID differ?&lt;BR /&gt;&lt;BR /&gt;What do the numbers in your spreadsheet mean?&lt;BR /&gt;&lt;BR /&gt;Your DTSEND is sending large packets in one direction and small ones in the other. This may make a difference. You can specify /TYPE=ECHO to have DTR send back the whole packet.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
    <pubDate>Fri, 01 Sep 2006 01:29:18 GMT</pubDate>
    <dc:creator>Volker Halle</dc:creator>
    <dc:date>2006-09-01T01:29:18Z</dc:date>
    <item>
      <title>Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000605#M52994</link>
      <description>Hi community&lt;BR /&gt;&lt;BR /&gt;We found some very strange behaviour during a DECnet copy.&lt;BR /&gt;I attached an Excel sheet with the results of my measurements with DTSEND.&lt;BR /&gt;&lt;BR /&gt;We are generally using DECnet over IP.&lt;BR /&gt;The two nodes ABC001 and ABC002 are cluster members, so if we copy something from one of those nodes to the other, native DECnet is in use. In all other cases DECnet over IP is in use.&lt;BR /&gt;&lt;BR /&gt;If I use DTSEND from ABC001 to ABC002, the performance is very, very bad. If I do the same test using the SCSSYSTEMID instead of the node name, the performance increases, but is still not good enough.&lt;BR /&gt;If I do the same test from ABC002 to ABC001, the performance is much better.&lt;BR /&gt;&lt;BR /&gt;The nodes ABC001, ABC002 and DEF002 are in the same IP subnet (so all tests marked with N/A are not possible, because we do not have any DECnet routers).&lt;BR /&gt;&lt;BR /&gt;Does somebody have an idea where the problem is?&lt;BR /&gt;&lt;BR /&gt;We already checked the tower information with MC DECNET_REGISTER, and we flushed the cache (mc ncl flush sess contr nam cache entr "*").&lt;BR /&gt;&lt;BR /&gt;The measuring results are not really reproducible, because we have a lot of DECnet traffic during working hours, so if I do the same test multiple times, the results differ for each measurement.&lt;BR /&gt;I will try to measure again tonight, and I hope we get better and more reproducible figures.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Thu, 31 Aug 2006 09:56:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000605#M52994</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-08-31T09:56:18Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000606#M52995</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;I have had many situations where the underlying problem was a duplex mismatch somewhere in the network.&lt;BR /&gt;&lt;BR /&gt;What made it seem to appear randomly was the question of other traffic on the network. &lt;BR /&gt;&lt;BR /&gt;Another possibility is collisions somewhere in the network.&lt;BR /&gt;&lt;BR /&gt;As a start, I would review the error counters in the path that the DECnet traffic is taking (note that this may be different than the path that it is taking when it is routed as DECnet over IP).&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 31 Aug 2006 10:36:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000606#M52995</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-08-31T10:36:03Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000607#M52996</link>
      <description>&lt;BR /&gt;You can check the counters and the speed/duplex settings in LANCP:&lt;BR /&gt;&lt;BR /&gt;$ MC LANCP&lt;BR /&gt;&amp;gt; show dev /counter&lt;BR /&gt;&amp;gt; show dev /char&lt;BR /&gt;&lt;BR /&gt;Recent versions of VMS have reportedly improved autonegotiation; however, I still hard-set both the server and the switch.&lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;</description>
      <pubDate>Thu, 31 Aug 2006 10:50:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000607#M52996</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2006-08-31T10:50:34Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000608#M52997</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;my first guess is: lost packets.&lt;BR /&gt;&lt;BR /&gt;The DECnet re-transmission timing is very poor when compared with other network protocols. Once a DECnet packet is lost, it takes a looong time until that packet gets re-transmitted. For COPY or DTSEND, this looks like 'bad performance', but in reality, nothing is being transmitted during the timeout period.&lt;BR /&gt;&lt;BR /&gt;With DECnet Phase IV, you could easily do MC NCP SHOW NODE destination-node COUNTERS from the source node and look for 'Response Timeouts'.&lt;BR /&gt;&lt;BR /&gt;With DECnet-Plus, it gets a little tricky. The easiest way is to use MCR NET$MGMT (needs a DECwindows display), go to Tasks -&amp;gt; Show Known Node Counters and look for Retransmitted PDUs to the destination node of your DTSEND test.&lt;BR /&gt;&lt;BR /&gt;You can also drill down on the (NSP or OSI) transport -&amp;gt; local NSAP -&amp;gt; remote NSAP -&amp;gt; NSAP address of dest - then Actions -&amp;gt; Zoom will show you the counters. Look for 'Retransmitted PDUs' and/or 'Duplicate PDUs received'.&lt;BR /&gt;&lt;BR /&gt;Another way to verify whether you are losing packets would be to run MONI DECNET or MONI PROC/TOPBIO while DTSEND is running. If you do not see a constant rate of IOs, the chances are high that you're losing packets in the network path and have to wait for re-transmissions.&lt;BR /&gt;&lt;BR /&gt;Once you confirm that this is the reason for the perceived 'bad performance', then comes the interesting part of trying to find out where the packets get lost.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 31 Aug 2006 11:30:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000608#M52997</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-08-31T11:30:26Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000609#M52998</link>
      <description>Hi Robert and Andy&lt;BR /&gt;&lt;BR /&gt;The first thing we did was check the counters in LANCP. There is no problem like collisions, frame check errors or anything like that. All the counters are 0, except the counters for packets/bytes received/sent.&lt;BR /&gt;&lt;BR /&gt;We are using Cisco switches, and the ports our machines are connected to are set to 100M full duplex. The interfaces are also set to 100M full duplex, done in console mode.&lt;BR /&gt;&lt;BR /&gt;Each of the two problem machines has 2 dual NICs. We configured FailSafe IP, and all 4 lines are configured for DECnet.&lt;BR /&gt;&lt;BR /&gt;I think this is not a hardware issue, because as you can see in the Excel sheet, some connections from remote machines using DECnet over IP are as fast as expected. So we (me and the Swiss OpenVMS Ambassador) think that this is a problem with name resolution, lost packets, towers or something like that. (But what?)&lt;BR /&gt;&lt;BR /&gt;I think we will follow Volker's instructions.&lt;BR /&gt;&lt;BR /&gt;I compared the NCL scripts in sys$specific on the two machines. They are identical, except for the addresses.&lt;BR /&gt;&lt;BR /&gt;Our good luck is that these are two test machines (GS1280). But in our case, test machine means that there is a test team (40 people) and a development crew (80 people). For us in system management, those machines are like production machines, because we have to announce changes many days before we make them. Even during the night, we can't do something like a reboot there without prior announcement.&lt;BR /&gt;&lt;BR /&gt;This afternoon we started to look at the CDI caches; we tried to use sys$update_decnet_migrate (show path to local:.nodename), but we don't have any conclusive results yet.&lt;BR /&gt;I think we will start to follow Volker's instructions, but I can't do it before tomorrow.&lt;BR /&gt;&lt;BR /&gt;... but still, any input is very welcome.&lt;BR /&gt;&lt;BR /&gt;Heinz&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 31 Aug 2006 12:20:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000609#M52998</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-08-31T12:20:04Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000610#M52999</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;what you can do now is:&lt;BR /&gt;&lt;BR /&gt;NCL&amp;gt; SHOW NSP LOCAL NSAP *&lt;BR /&gt;&lt;BR /&gt;note the local NSAP address&lt;BR /&gt;&lt;BR /&gt;NCL&amp;gt; SHOW NSP LOCAL NSAP local_nsap REMOTE NSAP * retransmitted pdus, duplicate pdus received&lt;BR /&gt;&lt;BR /&gt;Repeat the same for OSI TRANSPORT ...&lt;BR /&gt;&lt;BR /&gt;If all those counters are 0, you can forget about my theory. If not, we'll see...&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 31 Aug 2006 12:40:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000610#M52999</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-08-31T12:40:37Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000611#M53000</link>
      <description>To test for packet loss, try using PING with a large packet size, say 10,000 bytes, and a count of 100 packets.&lt;BR /&gt;&lt;BR /&gt;With a duplex mismatch you can see anywhere from 7% to 20% packet loss.&lt;BR /&gt;&lt;BR /&gt;If you have both NSP and OSI transports enabled on a node, but only DECnet over IP works between the nodes, then you will get a 30-second delay at the beginning as DECnet tries NSP first, times out, and then tries DECnet over IP. We have a 6-node cluster with 3 nodes on one subnet and 3 on the other. Within a subnet NSP works, but between the subnets only DECnet over IP works. We had to remove the nodes on the other subnet from DECNET_REGISTER so that it would only try DECnet over IP.&lt;BR /&gt;&lt;BR /&gt;You may need to check DECNET_REGISTER to make sure the address data is correct.&lt;BR /&gt;</description>
      <pubDate>Thu, 31 Aug 2006 19:11:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000611#M53000</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2006-08-31T19:11:30Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000612#M53001</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We had similar problems, but with Tru64. Setting 100M full duplex at the console doesn't work for Tru64, and may still be an issue with OVMS.&lt;BR /&gt;&lt;BR /&gt;Try FTPing a very large file from each machine to NLA0:[000000]. If you have the network set up at 100M full duplex across the switches and hosts, then TCPIP will transfer at over 9MB/sec. However, if you find that only one system is getting that and the other is getting much, much less, then you know that one system is probably running in half duplex.&lt;BR /&gt;&lt;BR /&gt;In Tru64 you have to force it at the OS level, as the console setting of the duplex is ignored.&lt;BR /&gt;&lt;BR /&gt;Robert.</description>
      <pubDate>Thu, 31 Aug 2006 19:22:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000612#M53001</guid>
      <dc:creator>Robert Walker_8</dc:creator>
      <dc:date>2006-08-31T19:22:43Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000613#M53002</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;If I do same test by using the scssystemid instead of the nodename the performance increases, but is still not good enough.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;What do you mean by this? What is the difference between using the SCSSYSTEMID or the node name for the DTSEND test? Is it selecting NSP instead of OSI transport? Do the node name and the SCSSYSTEMID differ?&lt;BR /&gt;&lt;BR /&gt;What do the numbers in your spreadsheet mean?&lt;BR /&gt;&lt;BR /&gt;Your DTSEND is sending large packets in one direction and small ones in the other. This may make a difference. You can specify /TYPE=ECHO to have DTR send back the whole packet.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 01 Sep 2006 01:29:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000613#M53002</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-09-01T01:29:18Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000614#M53003</link>
      <description>There are a few differences between DECnet over IP and straight DECnet Phase V that can cause this:&lt;BR /&gt;&lt;BR /&gt;- By default DECnet uses larger packets, which can lead to packet loss on a flaky network.&lt;BR /&gt;- DECnet uses an OSI IS-IS router. This can be an old router (DECxyz) buried in the network somewhere that may only have a 10Mb link.&lt;BR /&gt;- The DNS name lookup is different. This can cause slow link establishment as the DNS lookup list times out.&lt;BR /&gt;&lt;BR /&gt;I found the easiest way out is to force DECnet over IP by fiddling with the address towers with DECNET_REGISTER, or just removing the node from DECNET_REGISTER. I think there are better ways using NCL if you have the time. Don't forget to do an NCL FLUSH CACHE... Also, the back translation for proxy access may change.&lt;BR /&gt;&lt;BR /&gt;   Tim&lt;BR /&gt;</description>
      <pubDate>Fri, 01 Sep 2006 02:26:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000614#M53003</guid>
      <dc:creator>Tim Hughes_3</dc:creator>
      <dc:date>2006-09-01T02:26:45Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000615#M53004</link>
      <description>Hello Heinz,&lt;BR /&gt;&lt;BR /&gt;It's worth working through all the layers involved. At the hardware layer do you have throughput issues with any other protocol on the same adapter (SCS, TCPIP etc.)? If not and if all the counters look OK (both in the switch and on the systems) then let's assume that the hardware layer is probably OK.&lt;BR /&gt;&lt;BR /&gt;Next layer would be the virtual interface layer if you're using LAN failover. LANCP should show you the information you need.&lt;BR /&gt;&lt;BR /&gt;I'm wondering if it's the DECnet transport layer. I generally set up DECnet-Plus to use either NSP (my preference) or OSI transport. I suggest that you modify the address tower information with DECNET_REGISTER and ensure that the target local (native DECnet) nodes only have a single address tower entry using NSP transport. Don't forget the infamous MCR NCL FLUSH SESSION CONTROL NAMING CACHE ENTRY "*" afterwards.&lt;BR /&gt;&lt;BR /&gt;You can try flipping over to the OSI transport layer later if you wish, but I have (a few years ago) seen some odd performance issues with OSI transport and using BACKUP over DECdfs - that was related to large packets. Back then I found using NSP would be fine and using OSI TRANSPORT wasn't.&lt;BR /&gt;&lt;BR /&gt;Setting up address tower entries for both transports will generally double up all the timeouts as the communications path will try one, then the other.&lt;BR /&gt;&lt;BR /&gt;I usually remove the IP naming entries and have the IP name resolution provided by the local hosts file by setting the DOMAIN name server to 127.0.0.1 in NET$CONFIGURE.&lt;BR /&gt;&lt;BR /&gt;Some of the stuff in here might help: &lt;A href="http://h71000.www7.hp.com/openvms/journal/v5/index.html#decnet" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v5/index.html#decnet&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;It's difficult to guess what to try without being there and seeing it! 
Good luck.&lt;BR /&gt;&lt;BR /&gt;Cheers, Colin.</description>
      <pubDate>Fri, 01 Sep 2006 03:15:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000615#M53004</guid>
      <dc:creator>Colin Butcher</dc:creator>
      <dc:date>2006-09-01T03:15:30Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000616#M53005</link>
      <description>I checked the following:&lt;BR /&gt;&lt;BR /&gt;As recommended by Volker I used NET$MGMT to look at 'Retransmitted PDUs' and 'Duplicate PDUs'&lt;BR /&gt;&lt;BR /&gt;Duplicate PDUs      31714&lt;BR /&gt;Retransmitted PDUs   2468&lt;BR /&gt;&lt;BR /&gt;The counters are increasing during a DTSEND test.&lt;BR /&gt;&lt;BR /&gt;Monitor PROCESS/TOPBIO shows values between 200 and 2000 and is never nearly constant.&lt;BR /&gt;Monitor DECnet displays values between 1 and 6000&lt;BR /&gt;&lt;BR /&gt;The output from &lt;BR /&gt;$ MC NCL show nsp local nsap 39756F11510031AA0004007EC520 -&lt;BR /&gt;  remote nsap 39756F11510031AA0004007AC520 all&lt;BR /&gt;looks as follows:&lt;BR /&gt;&lt;BR /&gt;Identifiers&lt;BR /&gt;&lt;BR /&gt;    Name                              = 39756F11510031AA0004007AC520&lt;BR /&gt;&lt;BR /&gt;Status&lt;BR /&gt;&lt;BR /&gt;    NSAP Address                      = 39:756:11-51-00-31:AA-00-04-00-7A-C5:20 (LOCAL:.GDC140)&lt;BR /&gt;    UID                               = 3C15766D-37A3-11DB-87F0-001321081234&lt;BR /&gt;&lt;BR /&gt;Counters&lt;BR /&gt;&lt;BR /&gt;    Creation Time                     = 2006-08-29-19:13:36.480+00:00I0.150&lt;BR /&gt;    Remote Protocol Errors            = 4&lt;BR /&gt;    Total Octets Received             = 2078857053&lt;BR /&gt;    Total Octets Sent                 = 1514367784&lt;BR /&gt;    PDUs Received                     = 2392972&lt;BR /&gt;    PDUs Sent                         = 2046628&lt;BR /&gt;    Duplicate PDUs Received           = 2650&lt;BR /&gt;    Retransmitted PDUs                = 271&lt;BR /&gt;    Connects Received                 = 38&lt;BR /&gt;    Connects Sent                     = 14&lt;BR /&gt;    Rejects Received                  = 0&lt;BR /&gt;    Rejects Sent                      = 0&lt;BR /&gt;    User PDUs Discarded               = 0&lt;BR /&gt;    User Octets Received              = 2054571743&lt;BR /&gt;    User Octets Sent                  = 1496421003&lt;BR /&gt;    
User PDUs Received                = 1589613&lt;BR /&gt;    User PDUs Sent                    = 1093773&lt;BR /&gt;&lt;BR /&gt;If I look at OSI transport, I can't find any NSAP which shows duplicate PDUs received or retransmitted PDUs.&lt;BR /&gt;&lt;BR /&gt;So far I think Volker is right; it seems that we are losing packets.&lt;BR /&gt;&lt;BR /&gt;I also tried to follow the instructions of Cass. I pinged the other node as follows:&lt;BR /&gt;$  tcpip ping gdc141/number=100/packet_size=10000 &lt;BR /&gt;&lt;BR /&gt;----gdc141 PING Statistics----&lt;BR /&gt;100 packets transmitted, 100 packets received, 0% packet loss&lt;BR /&gt;round-trip (ms)  min/avg/max = 2/2/3 ms&lt;BR /&gt;&lt;BR /&gt;It seems that we don't have packet loss over TCPIP.&lt;BR /&gt;&lt;BR /&gt;To Robert: I think we don't have the problem that the switch port settings do not correspond with the NIC settings.&lt;BR /&gt;We had that problem a long time ago. Anyway, I will let the network guys check the switch ports.&lt;BR /&gt;&lt;BR /&gt;To Volker: The numbers in my spreadsheet are the line utilization displayed by DTSEND.&lt;BR /&gt;&lt;BR /&gt;Any ideas how to continue?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Fri, 01 Sep 2006 03:58:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000616#M53005</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-09-01T03:58:05Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000617#M53006</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;you didn't answer my question about the meaning of using 'systemid' and 'nodename' for the DTSEND test.&lt;BR /&gt;&lt;BR /&gt;You said that both nodes have 4 LAN interfaces, all configured for DECnet. When using IP (PING test), you may be using only a subset of those interfaces. When using DECnet, they may all be used (round-robin?!).&lt;BR /&gt;&lt;BR /&gt;Are these 4 network interfaces connected to the same LAN segment or to different LAN segments? Do they all work, i.e. does a packet sent out via one of those interfaces to the other node really get received by the other node?&lt;BR /&gt;&lt;BR /&gt;Individual NCL LOOP tests via those 4 local LAN interfaces to each of the 4 remote LAN interfaces may tell you more.&lt;BR /&gt;&lt;BR /&gt;MC NCL LOOP MOP CIRC csmacd-n ADDRESS aa-bb-cc-dd-ee-ff&lt;BR /&gt;&lt;BR /&gt;Issue those tests for all CSMACD circuits n=0...3&lt;BR /&gt;to each of the 4 remote LAN interfaces.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 01 Sep 2006 04:38:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000617#M53006</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-09-01T04:38:23Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000618#M53007</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;Monitor PROCESS/TOPBIO shows values between 200 and 2000 and is never nearly constant.&lt;BR /&gt;Monitor DECnet displays values between 1 and 6000&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;When I run a DTSEND test (using the 30 sec default time) between 2 systems on our (more or less empty) LAN, I get a pretty constant packets/sec rate shown by MONI DECNET. You may want to verify this using a pair of nodes between which you're seeing acceptable performance (and no lost packets). MONI PROC/TOPBIO should be an even better indicator, as it will not count other DECnet traffic.&lt;BR /&gt;&lt;BR /&gt;If MONITOR DECnet shows 6000 packets/sec, this is the rate you should get during DTSEND tests. DTSEND sends the packets as fast as possible, but I'm sure it does not keep a number of QIOs outstanding (the buffered I/O count does not vary) - so it sends them one by one after receiving the response from the remote end.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 01 Sep 2006 04:56:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000618#M53007</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-09-01T04:56:50Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000619#M53008</link>
      <description>To Volker&lt;BR /&gt;&lt;BR /&gt;We were using the SCSSYSTEMID to be sure that NSP is in use, and we found a difference, as you can see in the Excel sheet.&lt;BR /&gt;&lt;BR /&gt;The interfaces are 2 dual DE600s. From each DE600, one cable is connected to switch1 and one cable is connected to switch2. I don't have knowledge of the network topology, because the network management is in the hands of a partner company. But all 4 interfaces of both machines are in the same subnet.&lt;BR /&gt;&lt;BR /&gt;We opened a call with the network management to have them check the switch ports for errors and verify that all ports are set to 100M full duplex.&lt;BR /&gt;&lt;BR /&gt;I will now start to do further tests:&lt;BR /&gt;&lt;BR /&gt;1. I will try the NCL LOOP commands to find out if only one interface has the problems.&lt;BR /&gt;2. I will ensure that only the NSP tower is defined in DECNET_REGISTER and do the tests.&lt;BR /&gt;3. I will ensure that only the OSI tower is defined in DECNET_REGISTER.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Fri, 01 Sep 2006 07:15:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000619#M53008</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-09-01T07:15:45Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000620#M53009</link>
      <description>I got the idea from Colin.&lt;BR /&gt;&lt;BR /&gt;I did tests with DTSEND again between the two cluster members where we have our problem. I documented my measurements in the attached Excel sheet.&lt;BR /&gt;&lt;BR /&gt;It seems that the problem is the OSI transport:&lt;BR /&gt;&lt;BR /&gt;If I define only the NSP tower, everything works well.&lt;BR /&gt;If I define only the TP4 tower, the results are bad ... see the Excel sheet.&lt;BR /&gt;If I define both towers, the results are bad.&lt;BR /&gt;If I use a node in another subnet, it will use DECnet over IP. The results are good.&lt;BR /&gt;&lt;BR /&gt;If I use FTP to copy a 10 MB file, I cannot see any speed difference on any node I tested. There is no difference whether the machines are in the same subnet or not (node xyz is located at a remote site, 15 km away and in another subnet).&lt;BR /&gt;&lt;BR /&gt;So TCPIP does not seem to be the problem, and the DECnet NSP transport also seems to be OK.&lt;BR /&gt;&lt;BR /&gt;OSI transport is the problem.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz&lt;BR /&gt;</description>
      <pubDate>Fri, 01 Sep 2006 10:57:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000620#M53009</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-09-01T10:57:13Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000621#M53010</link>
      <description>Heinz,&lt;BR /&gt;&lt;BR /&gt;I still see a couple of open questions:&lt;BR /&gt;&lt;BR /&gt;- You were seeing retransmitted PDUs with NSP between the 2 nodes, yet the DTSEND performance does not seem to be as bad as with OSI transport.&lt;BR /&gt;&lt;BR /&gt;- Would NSP transport use the 4 routing circuits equally? Or would only OSI transport do that?&lt;BR /&gt;&lt;BR /&gt;- Did you test the 4 DECnet circuits between the 2 nodes?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Sat, 02 Sep 2006 01:35:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000621#M53010</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-09-02T01:35:38Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000622#M53011</link>
      <description>Hi Volker&lt;BR /&gt;&lt;BR /&gt;you are right, the problem is not completely solved.&lt;BR /&gt;&lt;BR /&gt;NSP will not use the 4 routing circuits equally, but most of our network traffic&lt;BR /&gt;will use DECnet over IP, and that way we use all 4 interfaces.&lt;BR /&gt;I will continue analyzing why we have those retransmitted PDUs on Monday.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Sat, 02 Sep 2006 04:53:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000622#M53011</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-09-02T04:53:00Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000623#M53012</link>
      <description>Hello Heinz,&lt;BR /&gt;&lt;BR /&gt;That's useful progress. It confirms something I've seen before. I suggest that you log a support call to try and get it fixed. I'm assuming that you're running the 'latest' version with all the current ECOs, in which case the problem is still in there and it still shows up occasionally.&lt;BR /&gt;&lt;BR /&gt;Load balancing across all available adapters should happen with the NSP transport layer, but there are several things you need to consider. I'll assume that you're using Phase IV compatible addressing and that the address tower entries (yes, you can have more than one if you have both Phase IV and Phase V style addresses) for NSP transport refer to the Phase IV compatible address, not a Phase V address.&lt;BR /&gt;&lt;BR /&gt;Are the 4 LAN adapters connected to entirely separate LANs or VLANs where the only interconnection between them is by routing? If so, then you can enable a Phase IV style address on all of the adapters. Given similar adapter performance and identical 'path costs' (to borrow from Phase IV land), I'd expect to see the OSI end systems load balance across all available adapters.&lt;BR /&gt;&lt;BR /&gt;If however the adapters are connected to the same LAN or VLAN, then you can only have a Phase IV style address on one adapter in each LAN / VLAN, because of the risk of a duplicate MAC address. Depending on how the address tower entries are defined, that may implicitly limit the number of adapters across which the routing layer will load balance the traffic.&lt;BR /&gt;&lt;BR /&gt;It's all jolly good fun, isn't it?&lt;BR /&gt;&lt;BR /&gt;Cheers, Colin.</description>
      <pubDate>Sat, 02 Sep 2006 06:47:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000623#M53012</guid>
      <dc:creator>Colin Butcher</dc:creator>
      <dc:date>2006-09-02T06:47:10Z</dc:date>
    </item>
    <item>
      <title>Re: Very bad Performance over native DECnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000624#M53013</link>
      <description>Hi Colin &amp;amp; Volker&lt;BR /&gt;&lt;BR /&gt;The four NICs are all connected to different switches, but all on the same subnet.&lt;BR /&gt;&lt;BR /&gt;In the same subnet we have another 2-node ES45 cluster. We did the same tests on this cluster too, but we could not see the problem we have on our problem machines.&lt;BR /&gt;&lt;BR /&gt;The network guys have been involved for a week now, but they could not yet find the problem and are still working on it.&lt;BR /&gt;&lt;BR /&gt;We don't have any DECnet routers. All DECnet traffic goes over DECnet over IP, except DECnet traffic from one cluster member node to the other.&lt;BR /&gt;&lt;BR /&gt;This is just a short update. We are still working on this problem, but with less priority.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Sun, 10 Sep 2006 09:50:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/very-bad-performance-over-native-decnet/m-p/5000624#M53013</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2006-09-10T09:50:54Z</dc:date>
    </item>
  </channel>
</rss>

