<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: RPC &amp; Portmapper in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252545#M60285</link>
    <description>Thanks Jim - the developer is using svctcp_create &amp;amp; svc_register from the RPC library. He's not using the listen() function; I'm presuming the RPC library implements this. Is there a way to feed this in? If the QLIMIT is the backlog, can someone explain why, when I define TCPIP$SOCKET_TRACE to 1, I receive the following output showing a backlog of 2 while the QLIMIT is 4?&lt;BR /&gt;&lt;BR /&gt;17:15:26.73 +socket family: 2, type: 2, proto: 0&lt;BR /&gt;17:15:26.73 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0086914&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0206911&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0206911&lt;BR /&gt;17:15:26.73 *close sock: 0x130, st: 0x1&lt;BR /&gt;17:15:26.73 +socket family: 2, type: 2, proto: 17&lt;BR /&gt;17:15:26.73 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *bind sock: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0x8004667e&lt;BR /&gt;17:15:26.73 +sendto_64 sock: 0x130, len: 60, flags: 0x0&lt;BR /&gt;17:15:26.73 -sendto_64 st: 0x1, iosb: 0x1 60 60&lt;BR /&gt;17:15:26.73 +select nfds: 128, timeout.sec: 5, timeout.usec: 0&lt;BR /&gt;17:15:26.73     assigning initial select channel&lt;BR /&gt;        socket channels upon calling select:&lt;BR /&gt;             read: 0x130&lt;BR /&gt;        socket channels upon returning from select:&lt;BR /&gt;             read: 0x130&lt;BR /&gt;17:15:26.73 -select st: 0x1, iosb 0x1 1, nfds: 1&lt;BR /&gt;17:15:26.73 +recvfrom_64 sock: 0x130, len: 400, flags: 0x0&lt;BR /&gt;17:15:26.73 -recvfrom_64 st: 0x1, iosb: 0x1 28 28&lt;BR /&gt;17:15:26.73 *close sock: 0x130, st: 0x1&lt;BR /&gt;trying to unregister any previous service&lt;BR /&gt;&lt;BR /&gt; 17:15:26.73 +socket family: 2, type: 1, proto: 6&lt;BR /&gt;17:15:26.74 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.74 *bind sock: 0x130, st: 0x1, iosb: 0x94 48&lt;BR /&gt;17:15:26.74 *bind sock: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.74 *getsockname sock: 0x130&lt;BR /&gt;17:15:26.74 *listen sock: 0x130, backlog: 2&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 23 Aug 2010 15:22:12 GMT</pubDate>
    <dc:creator>Rob Houghton</dc:creator>
    <dc:date>2010-08-23T15:22:12Z</dc:date>
    <item>
      <title>RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252539#M60279</link>
      <description>Hi, does anyone have any experience using RPC &amp;amp; PORTMAPPER? We have an application that is using RPC &amp;amp; PORTMAPPER for communication and under high load we seem to be having queueing issues. This is possibly down to the actual PORTMAPPER process. Also, if the process runs with SYSPRV/BYPASS &amp;amp; OPER or a UIC &amp;lt; maxsysgroup(8), the system grinds to a halt and we experience processes going into MUTEX. &lt;BR /&gt;&lt;BR /&gt;What I'm really after is any additional portmapper configuration that can be set up for high performance &amp;amp; throughput.&lt;BR /&gt;&lt;BR /&gt;I'm new to these forums so please go easy....&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Rob. &lt;BR /&gt;</description>
      <pubDate>Fri, 20 Aug 2010 12:55:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252539#M60279</guid>
      <dc:creator>Rob Houghton</dc:creator>
      <dc:date>2010-08-20T12:55:13Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252540#M60280</link>
      <description>Probably best to post the version of TCPIP we're using, along with the version of OpenVMS too.&lt;BR /&gt;&lt;BR /&gt;TCPIP v5.6 ECO5&lt;BR /&gt;OpenVMS V8.3-1H1</description>
      <pubDate>Fri, 20 Aug 2010 13:03:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252540#M60280</guid>
      <dc:creator>Rob Houghton</dc:creator>
      <dc:date>2010-08-20T13:03:15Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252541#M60281</link>
      <description>Hi Rob&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;processes going into MUTEX.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;See &lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_7120.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_7120.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Dig into SDA and check if it is a bytlm issue, or tqelm...</description>
      <pubDate>Fri, 20 Aug 2010 13:03:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252541#M60281</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2010-08-20T13:03:27Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252542#M60282</link>
      <description>The mutex (and potential quota exhaustion) may well be secondary to the error.  The goal of the quota mechanisms was to prevent a run-away application from consuming all available (shared) resources.&lt;BR /&gt;&lt;BR /&gt;RPC is a communications mechanism, and (as mentioned elsewhere in the thread) RPC and networking in general can wedge for lack of quotas.&lt;BR /&gt;&lt;BR /&gt;Portmapper is a directory service mechanism for IP, and not usually something that is (in isolation) performance critical.&lt;BR /&gt;&lt;BR /&gt;High loads are absolutely the right conditions for exposing latent bugs in application (and system) code, too.  I've seen these bugs stay latent for decades.  Application routines run slightly slower than they did under debugging and test conditions, (problematic or missing) synchronization goes off the rails or volatile variables get corrupted, and run-away sequences can easily arise.  &lt;BR /&gt;&lt;BR /&gt;If this is your code or something you're supporting, you're going to be debugging.&lt;BR /&gt;&lt;BR /&gt;If this is a product you're using and supported by some entity, you'll want to contact the vendor.&lt;BR /&gt;&lt;BR /&gt;If this is your application code, here is some additional reading for you:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_1661.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_1661.html&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Aug 2010 13:50:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252542#M60282</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-08-20T13:50:51Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252543#M60283</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Many thanks for your responses. I'm not the developer who wrote the code, just someone who's trying to debug the performance issue. When the initial TCP socket is created there appears to be a QLIMIT which is set low, to 4. What I would like to know is: where is this limit set? I've looked at the RPC programmer's guide and I can't see anything in there. If I monitor this device during the operation I can see us hitting 4 on regular occasions, and I do appear to be getting delays from some clients. What does TCPIP do with the outstanding connections? Are they held in a queue/buffer somewhere? Is it possible to amend the QLIMIT? I presume these attributes can be set when the socket is created.&lt;BR /&gt;&lt;BR /&gt;Device_socket:  bg9923      Type: STREAM&lt;BR /&gt;                      LOCAL                          REMOTE&lt;BR /&gt;         Port:         1023                               0&lt;BR /&gt;         Host:  *                               *&lt;BR /&gt;      Service:&lt;BR /&gt;&lt;BR /&gt;                                                           RECEIVE       SEND&lt;BR /&gt;                                   Queued I/O                    0             0&lt;BR /&gt;       Q0LEN         0             Socket buffer bytes           0             0&lt;BR /&gt;       QLEN          0             Socket buffer quota       61440         61440&lt;BR /&gt;       QLIMIT        4             Total buffer alloc            0             0&lt;BR /&gt;       TIMEO         0             Total buffer limit       491520        491520&lt;BR /&gt;       ERROR         0             Buffer or I/O waits           0             0&lt;BR /&gt;       OOBMARK       0             Buffer or I/O drops           0             0&lt;BR /&gt;                                   I/O completed             14996             0&lt;BR /&gt;                                   Bytes transferred             0             0&lt;BR /&gt;&lt;BR /&gt;  Options:  ACCEPT&lt;BR /&gt;  State:    PRIV&lt;BR /&gt;  RCV Buff: SEL ASYNC&lt;BR /&gt;  SND Buff: ASYNC&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Aug 2010 09:19:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252543#M60283</guid>
      <dc:creator>Rob Houghton</dc:creator>
      <dc:date>2010-08-23T09:19:53Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252544#M60284</link>
      <description>That QLIMIT is also known as the backlog - and the backlog value is controlled by the second parameter of the listen() function, which I expect you would find in the source code for the application.&lt;BR /&gt;&lt;BR /&gt;When connections arrive more quickly than they can be processed (complete the three-way handshake and become established), and if the listener permits, those new connections will be held in a queue by the stack until the application's listener is ready to accommodate them. Once the listener's backlog queue is full, any subsequent connection requests will be rejected until an already queued request is processed to open up a new queue slot.</description>
      <pubDate>Mon, 23 Aug 2010 14:58:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252544#M60284</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2010-08-23T14:58:34Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252545#M60285</link>
      <description>Thanks Jim - the developer is using svctcp_create &amp;amp; svc_register from the RPC library. He's not using the listen() function; I'm presuming the RPC library implements this. Is there a way to feed this in? If the QLIMIT is the backlog, can someone explain why, when I define TCPIP$SOCKET_TRACE to 1, I receive the following output showing a backlog of 2 while the QLIMIT is 4?&lt;BR /&gt;&lt;BR /&gt;17:15:26.73 +socket family: 2, type: 2, proto: 0&lt;BR /&gt;17:15:26.73 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0086914&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0206911&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0xc0206911&lt;BR /&gt;17:15:26.73 *close sock: 0x130, st: 0x1&lt;BR /&gt;17:15:26.73 +socket family: 2, type: 2, proto: 17&lt;BR /&gt;17:15:26.73 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *bind sock: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.73 *ioctl sock: 0x130, req: 0x8004667e&lt;BR /&gt;17:15:26.73 +sendto_64 sock: 0x130, len: 60, flags: 0x0&lt;BR /&gt;17:15:26.73 -sendto_64 st: 0x1, iosb: 0x1 60 60&lt;BR /&gt;17:15:26.73 +select nfds: 128, timeout.sec: 5, timeout.usec: 0&lt;BR /&gt;17:15:26.73     assigning initial select channel&lt;BR /&gt;        socket channels upon calling select:&lt;BR /&gt;             read: 0x130&lt;BR /&gt;        socket channels upon returning from select:&lt;BR /&gt;             read: 0x130&lt;BR /&gt;17:15:26.73 -select st: 0x1, iosb 0x1 1, nfds: 1&lt;BR /&gt;17:15:26.73 +recvfrom_64 sock: 0x130, len: 400, flags: 0x0&lt;BR /&gt;17:15:26.73 -recvfrom_64 st: 0x1, iosb: 0x1 28 28&lt;BR /&gt;17:15:26.73 *close sock: 0x130, st: 0x1&lt;BR /&gt;trying to unregister any previous service&lt;BR /&gt;&lt;BR /&gt; 17:15:26.73 +socket family: 2, type: 1, proto: 6&lt;BR /&gt;17:15:26.74 -socket chan: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.74 *bind sock: 0x130, st: 0x1, iosb: 0x94 48&lt;BR /&gt;17:15:26.74 *bind sock: 0x130, st: 0x1, iosb: 0x1 0&lt;BR /&gt;17:15:26.74 *getsockname sock: 0x130&lt;BR /&gt;17:15:26.74 *listen sock: 0x130, backlog: 2&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Aug 2010 15:22:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252545#M60285</guid>
      <dc:creator>Rob Houghton</dc:creator>
      <dc:date>2010-08-23T15:22:12Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252546#M60286</link>
      <description>&amp;gt; I'm not the developer who wrote the code. Just someone who's trying to debug the performance issue. ...&lt;BR /&gt;&lt;BR /&gt;Don't start with a narrow view of the environment when performing application tuning.  In the inimitable words of R.J. Squirrel, "that trick never works." &lt;BR /&gt;&lt;BR /&gt;As a developer with source code access, start with DECset PCA or an analogous (and equivalently broad) coverage of the application activity, find the code that's sitting on the biggest pile of wall-clock, and work from there.&lt;BR /&gt;&lt;BR /&gt;If you don't have the application source code, then you're going to have (more) problems and (more) effort with the performance tuning, and (if it's a commercial package) you'll often want to chat with the vendor's support folks.&lt;BR /&gt;&lt;BR /&gt;As for external tuning and monitoring, you'll also want to utilize tools such as T4 (or MONITOR directly) and watch what (external) activities are involved with the application.  (T4 or MONITOR- and SDA-based performance-monitoring activities are nowhere near what DECset PCA or instrumented code can get you, though.)  I/O, window turns, memory usage, etc.&lt;BR /&gt;&lt;BR /&gt;This could be application load.&lt;BR /&gt;&lt;BR /&gt;This could be server load.&lt;BR /&gt;&lt;BR /&gt;This could be disk fragmentation.&lt;BR /&gt;&lt;BR /&gt;This could be a network error.&lt;BR /&gt;&lt;BR /&gt;This could actually be an RPC or Portmapper issue.&lt;BR /&gt;&lt;BR /&gt;Or this could be something completely different.&lt;BR /&gt;&lt;BR /&gt;Having tuned code written by myself and by others, it's common to find the performance limits are not where I thought they were lurking.  While DECset PCA can provide confirmation of a theory, it can also provide performance revelations.&lt;BR /&gt;&lt;BR /&gt;Or grind the box to a halt, force a crash, and analyze the system dump, too.  If you're getting (unrelated) application processes in MUTEX states, there's likely a shared resource here that's being depleted.  If this is causing MUTEX errors on parts of the application, it could be application bugs, loading, or insufficient quotas.  That could be how the application works at the current load, or it could be the scale of the application, or it could be insufficient hardware, or it could be indicative of a leak.  And it could be a hardware problem.&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Aug 2010 15:28:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252546#M60286</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-08-23T15:28:38Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252547#M60287</link>
      <description>Map out what the application network traffic is doing here, too.  RPC obviously isn't particularly cheap as procedure calling mechanisms go.  And it means you (also) need to figure out what's going on with the remote end of the connection; whether the turn-around delays are due to the network speeds and feeds, or due to the (potential lack of) speed on the processing of the remote end of the RPC call.&lt;BR /&gt;&lt;BR /&gt;FWIW...&lt;BR /&gt;&lt;BR /&gt;17:15:26.74 *bind sock: 0x130, st: 0x1, iosb: 0x94 48&lt;BR /&gt;&lt;BR /&gt;"%SYSTEM-F-DUPLNAM, duplicate name"&lt;BR /&gt;&lt;BR /&gt;If you're getting a combination of excessive RPC calls and sufficiently large numbers of process creations and problems with network connections and remote server sluggishness, you're approaching a mountain of slowness; performance can and usually will tank.</description>
      <pubDate>Mon, 23 Aug 2010 15:37:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252547#M60287</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2010-08-23T15:37:49Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252548#M60288</link>
      <description>Rob's on holiday now so I'm holding this baby at present.&lt;BR /&gt;&lt;BR /&gt;I think the duplicate name is just because we were previously running the server, so it's trying to unregister the previous one, which takes a fraction of a second to unwind itself from the various data structures in the system. &lt;BR /&gt;&lt;BR /&gt;We are now looking to make a small shrink-wrapped demonstrator for the problem and then raise it with HP - a server to register itself, plus a client and batch job which can make lots of calls to the portmapper to find the server, showing that as the number of people requesting the lookup grows beyond about 6-7, processes are forced into this 25s or so wait time along with excessive mutex delays (I understand the need for the mutex to coordinate access to the resource, just not the effect it exhibits/causes).</description>
      <pubDate>Thu, 26 Aug 2010 07:40:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252548#M60288</guid>
      <dc:creator>Martin D Platts</dc:creator>
      <dc:date>2010-08-26T07:40:15Z</dc:date>
    </item>
    <item>
      <title>Re: RPC &amp; Portmapper</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252549#M60289</link>
      <description>RPC wasn't suitable for the amount of load we were generating. Closing thread</description>
      <pubDate>Thu, 13 Jan 2011 22:39:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rpc-amp-portmapper/m-p/5252549#M60289</guid>
      <dc:creator>Rob Houghton</dc:creator>
      <dc:date>2011-01-13T22:39:48Z</dc:date>
    </item>
  </channel>
</rss>

