<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: TCPIP performance problems in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209895#M62166</link>
    <description>Willem,&lt;BR /&gt;Can we rule out any hardware errors on your VMS machines? The processing looks intermittent... why don't we check the following and rule them out:&lt;BR /&gt;&lt;BR /&gt;1. Use DECevent and show errors&lt;BR /&gt;&lt;BR /&gt;2. Use ANALYZE/SYSTEM and some of the&lt;BR /&gt;   SHOW commands&lt;BR /&gt;&lt;BR /&gt;3. Review your operator log and look&lt;BR /&gt;   out for any errors.&lt;BR /&gt;&lt;BR /&gt;I expect you have already been observing your TCPIP packet movement using the MONITOR command.&lt;BR /&gt;&lt;BR /&gt;I suspect we may have some kind of intermittent issue on your Ethernet interface.&lt;BR /&gt;&lt;BR /&gt;I know this is not a solution, but it is all I could think of. I am sure our other colleagues will chip in with their thoughts.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
    <pubDate>Fri, 05 Mar 2004 01:08:06 GMT</pubDate>
    <dc:creator>Mobeen_1</dc:creator>
    <dc:date>2004-03-05T01:08:06Z</dc:date>
    <item>
      <title>TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209894#M62165</link>
      <description>My dear colleagues,&lt;BR /&gt;&lt;BR /&gt;Someone please give us a hint where to look...&lt;BR /&gt;&lt;BR /&gt;Environment: VMS and Tru64. I'm not certain about the versions, but please read on; I don't think they are relevant.&lt;BR /&gt;The systems are connected over a 100 Mb LAN.&lt;BR /&gt;&lt;BR /&gt;From the Tru64 machine, a string (800 to 8000 bytes in size) is sent over IP to the VMS machine, where a service reads it in chunks of arbitrary length, processes each part after all has been read, and finally sends an acknowledgement back. This whole processing must be done within 30 seconds.&lt;BR /&gt;An idea of this message is in the attachment.&lt;BR /&gt;&lt;BR /&gt;For years, this has worked without problems at a number of sites. The whole transaction can be done in a few seconds. This is independent of the VMS version (all 7.x+).&lt;BR /&gt;&lt;BR /&gt;A few weeks ago, two sites using this software had their LAN replaced by a WAN (5 Mb/s). One of these ran into serious problems after that.&lt;BR /&gt;The process _may_ run fine, but suddenly the whole transaction may take minutes to finish, for no apparent reason. The problem appears as suddenly as it disappears.&lt;BR /&gt;Because this is the only site having problems, and it ran flawlessly before, we suspect the network to be the cause.&lt;BR /&gt;However, the network was sniffed this afternoon, and on the IP level there seems to be no problem: between sending a stream of 1800 bytes and receipt of the IP acknowledgement lay less than a second. But processing took over 40 seconds.&lt;BR /&gt;So the trouble must lie in VMS's socket handling. But this hasn't changed either...&lt;BR /&gt;Indeed, I have found that reading could take tens of seconds in the VMS program (20 seconds to read just over 6100 bytes). Another time it took just 7...&lt;BR /&gt;&lt;BR /&gt;My most urgent questions:&lt;BR /&gt;HOW to observe the _live_ system to determine the cause of the delay, preferably without rebuilding the software. If explicitly required, including some extra logging IS a possibility, but I have already found out it may make things even worse.&lt;BR /&gt;WHAT could cause this behaviour - and in what way can we monitor it?&lt;BR /&gt;&lt;BR /&gt;BTW: It's quite possible we'll contact HP for support, but we'd like to do some measurements ourselves.&lt;BR /&gt;&lt;BR /&gt;(Why that limit? I don't know. Changing it is said to be no option at the moment, since other sites don't have this problem)</description>
      <pubDate>Thu, 04 Mar 2004 15:44:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209894#M62165</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-03-04T15:44:28Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209895#M62166</link>
      <description>Willem,&lt;BR /&gt;Can we rule out any hardware errors on your VMS machines? The processing looks intermittent... why don't we check the following and rule them out:&lt;BR /&gt;&lt;BR /&gt;1. Use DECevent and show errors&lt;BR /&gt;&lt;BR /&gt;2. Use ANALYZE/SYSTEM and some of the&lt;BR /&gt;   SHOW commands&lt;BR /&gt;&lt;BR /&gt;3. Review your operator log and look&lt;BR /&gt;   out for any errors.&lt;BR /&gt;&lt;BR /&gt;I expect you have already been observing your TCPIP packet movement using the MONITOR command.&lt;BR /&gt;&lt;BR /&gt;I suspect we may have some kind of intermittent issue on your Ethernet interface.&lt;BR /&gt;&lt;BR /&gt;I know this is not a solution, but it is all I could think of. I am sure our other colleagues will chip in with their thoughts.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
      <pubDate>Fri, 05 Mar 2004 01:08:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209895#M62166</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2004-03-05T01:08:06Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209896#M62167</link>
      <description>After that, check the IP and TCP counters.&lt;BR /&gt;Do sysconfig -zp tcp to zero the counters,&lt;BR /&gt;run the transaction and do sysconfig -p tcp afterwards. Post the counters. Repeat this for ip too.&lt;BR /&gt;&lt;BR /&gt;If possible, run tcptrace/prot=tcp/fu/pack=10000 for the connection and post that too.&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 02:10:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209896#M62167</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-05T02:10:37Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209897#M62168</link>
      <description>It sounds like the NIC slows down to 10 Mb/s and then returns to 100 Mb/s.&lt;BR /&gt;You said the site replaced the LAN with a WAN, and that may be more than coincidence; somehow the host then adapts its speed. Yes, there is no obvious logic in this, but I think the original cause is the WAN.&lt;BR /&gt;Check what updates that site made before changing any bit of software; this can help you.&lt;BR /&gt; &lt;BR /&gt;@Antoniov&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 03:17:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209897#M62168</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-03-05T03:17:36Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209898#M62169</link>
      <description>A friend of mine had a brute-force solution for this type of problem: take a crash when you see the problem; then you have plenty of time to analyze. Of course, maybe it is not possible to do that at your site :-)&lt;BR /&gt;&lt;BR /&gt;As you said the LAN has been replaced by a WAN, it would be interesting to do a traceroute on both sides (VMS -&amp;gt; Tru64, Tru64 -&amp;gt; VMS) to see the path, and to check that when you have the problem, you still use the "correct" path.&lt;BR /&gt;&lt;BR /&gt;ana/sys&lt;BR /&gt;tcpip sh dev bg /various_qualifiers may help&lt;BR /&gt;&lt;BR /&gt;I am afraid this type of problem is better solved with some expert on-site.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Gerard</description>
      <pubDate>Fri, 05 Mar 2004 03:20:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209898#M62169</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2004-03-05T03:20:23Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209899#M62170</link>
      <description>Thanks for your help so far.&lt;BR /&gt;I have been granted access to that VMS system to investigate.&lt;BR /&gt;&lt;BR /&gt;VMS 7.1-2&lt;BR /&gt;TCPIP 5.0A ECO 1&lt;BR /&gt;Member of a 2-node cluster.&lt;BR /&gt;&lt;BR /&gt;ANA/SYS: the problem is that the process must be spotted first, and the data retrieved within the minute it is active. So far, I have missed it. No other data found.&lt;BR /&gt;MONITOR: running (for analysis)&lt;BR /&gt;OPERATOR.LOG: no errors found&lt;BR /&gt;DIAGNOSE: there is no license for running /ANALYZE, and /TRANSLATE didn't show anything on the NIC.&lt;BR /&gt;TCPIP: see attachment and remarks at the end&lt;BR /&gt;Hardware/system parameters: I'll have to check that. It wouldn't surprise me if some system parameters were changed, but I don't see why that would introduce this intermittent problem.&lt;BR /&gt;&lt;BR /&gt;I've seen a number of weird things:&lt;BR /&gt;* The route for 0.0.0.0 is over two distinct gateways. Since the UNIX machine's network is specified in ROUTE, that should not be a problem. But I did have trouble with a similar configuration using FTP.&lt;BR /&gt;* Although the service is not active, there still is a BG device.&lt;BR /&gt;* Although the service has been activated several times, there is no entry in accounting...&lt;BR /&gt;* I tested (on a different port) with the same activity, and this finished within 10 seconds (and the activity showed up in accounting).&lt;BR /&gt;&lt;BR /&gt;Next: I'll check the output of MONITOR. Hopefully the service will be activated....&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 08:35:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209899#M62170</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-03-05T08:35:44Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209900#M62171</link>
      <description>You say&lt;BR /&gt;&lt;BR /&gt;Problems started end of 2002....&lt;BR /&gt;&lt;BR /&gt;Maybe the problem appears only when a certain load is reached?&lt;BR /&gt;&lt;BR /&gt;You should in any case apply the latest ECO for TCPIP 5.0A if you plan to call HP :-)&lt;BR /&gt;&lt;BR /&gt;Maybe not related to your problem, but&lt;BR /&gt;&lt;BR /&gt;tcp_recvspace = 32768&lt;BR /&gt;tcp_sendspace = 32768&lt;BR /&gt;&lt;BR /&gt;these may safely be raised.&lt;BR /&gt;&lt;BR /&gt;Good hunting&lt;BR /&gt;&lt;BR /&gt;Gerard&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 09:14:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209900#M62171</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2004-03-05T09:14:40Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209901#M62172</link>
      <description>tcp_mssdflt = 536&lt;BR /&gt;is the default maximum segment size. Seems small. As I understand it, it is only used when Path MTU Discovery [RFC-1191] is not supported somewhere on the route to your destination. &lt;BR /&gt;&lt;BR /&gt;To see what they changed in the config, consult the file sys$specific:[tcpip$etc]sysconfigtab.dat.</description>
      <pubDate>Fri, 05 Mar 2004 11:31:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209901#M62172</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-05T11:31:32Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209902#M62173</link>
      <description>When the LAN was replaced by the WAN the problems started, so one thing to check is that the connection speeds are all hard-coded rather than using autonegotiate.  Autonegotiation can be problematic, and if there's a mismatch you can experience problems like you are seeing.</description>
      <pubDate>Fri, 05 Mar 2004 13:39:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209902#M62173</guid>
      <dc:creator>Eric Dittman</dc:creator>
      <dc:date>2004-03-05T13:39:44Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209903#M62174</link>
      <description>I'm not sure if this is relevant, but when we had a SLOW connection in establishing a telnet session to a VMS system, we ultimately found that it was the "reverse telnet lookup" that was causing our slowdown. This was evidenced by a route in the routing table that was the same as the IP address of the system involved.&lt;BR /&gt;(example: AH  192.168.1.112   192.168.1.112)&lt;BR /&gt;When we blew away that route, the delay disappeared. Since this is a dynamic route added by the system, it can re-appear on its own if the system has network problems.&lt;BR /&gt;&lt;BR /&gt;Possibly changing from LAN to WAN made the extra routes appear in the routing table. Remove the extra routes and see if your problems go away.&lt;BR /&gt;&lt;BR /&gt;Mike Naime</description>
      <pubDate>Sat, 06 Mar 2004 00:22:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209903#M62174</guid>
      <dc:creator>Mike Naime</dc:creator>
      <dc:date>2004-03-06T00:22:43Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209904#M62175</link>
      <description>Someone suggested a cause to me - but I'm not convinced:&lt;BR /&gt;The message is built up and sent to the (TCP) socket in one call (that happens on Tru64). Of course, 8K is not transferred in one window - but in packets of 1460 bytes.&lt;BR /&gt;TCP guarantees that all packets will be delivered in the right order.&lt;BR /&gt;So it was suggested that, on the VMS side, the application would need to wait until all data is received before reading the socket.&lt;BR /&gt;First, I wouldn't know how to accomplish this, since TCP/IP isn't asynchronous (unless I could force INETACP to launch the service only when all data is received).&lt;BR /&gt;Second, I don't think this is true. I wouldn't see the delay in the application - but in TCP it would still exist, so it would NOT speed up the whole transfer (and it would be much harder to measure, if possible at all).&lt;BR /&gt;&lt;BR /&gt;Anything in that direction? &lt;BR /&gt;</description>
      <pubDate>Mon, 08 Mar 2004 06:13:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209904#M62175</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-03-08T06:13:08Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209905#M62176</link>
      <description>Willem,&lt;BR /&gt;Can you please confirm whether your Ethernet is set to 100Base-T instead of auto-negotiate?&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
      <pubDate>Mon, 08 Mar 2004 06:17:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209905#M62176</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2004-03-08T06:17:05Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209906#M62177</link>
      <description>Please post tcptrace/prot=tcp/fu/pack=10000 for the connection. And the counters.&lt;BR /&gt;This way we can see what is happening.</description>
      <pubDate>Mon, 08 Mar 2004 08:05:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209906#M62177</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-08T08:05:58Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209907#M62178</link>
      <description>Thanks to all.&lt;BR /&gt;As the investigation progressed, it soon turned out that the main problem for the application doesn't lie within TCPIP after all: it's I/O that causes the problem for the application.....&lt;BR /&gt;Nevertheless, I'm not comfortable at all. Sniffing the network revealed that it is not causing a problem: on the TCP level, a message of 4000 bytes was acknowledged within a second, and a full roundtrip (PC to PC (into the emulation program) and back) was done within seconds. We didn't do measurements on VMS or within the application, but from the logs so far it is calculated that it takes over 5 seconds to read about 2500 bytes into the application, which seems quite slow compared with the speed of TCP.&lt;BR /&gt;Alas, TCPTRACE didn't work properly yet (BUFFERFULL warnings, data not saved), but as it seems that the data has been received (as said: ACK within a second), I guess it's an application matter....&lt;BR /&gt;Will try to get more.</description>
      <pubDate>Wed, 10 Mar 2004 05:38:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209907#M62178</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-03-10T05:38:22Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209908#M62179</link>
      <description>Just as an exercise, could you post the (incomplete) output of&lt;BR /&gt;tcptrace/prot=tcp/fu/pack=10000 xxxx&lt;BR /&gt;where xxxx is the name of the remote machine.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 10 Mar 2004 07:02:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209908#M62179</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-10T07:02:30Z</dc:date>
    </item>
    <item>
      <title>Re: TCPIP performance problems</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209909#M62180</link>
      <description>Hi Willem&lt;BR /&gt;&lt;BR /&gt;if TCPTRACE warns BUFFERFULL, then add e.g. /BUFFERS=500&lt;BR /&gt;&lt;BR /&gt;Michael</description>
      <pubDate>Mon, 15 Mar 2004 06:02:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-performance-problems/m-p/3209909#M62180</guid>
      <dc:creator>Michael Stephan</dc:creator>
      <dc:date>2004-03-15T06:02:43Z</dc:date>
    </item>
  </channel>
</rss>

