<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: show system  - I/O , PID in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843366#M78391</link>
    <description>David,&lt;BR /&gt;&lt;BR /&gt;I stand corrected.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
    <pubDate>Tue, 15 Aug 2006 07:05:14 GMT</pubDate>
    <dc:creator>Robert Gezelter</dc:creator>
    <dc:date>2006-08-15T07:05:14Z</dc:date>
    <item>
      <title>show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843357#M78382</link>
      <description>Hello, I'm a newbie to OpenVMS and would like to know more about "show system". Pardon me.&lt;BR /&gt;&lt;BR /&gt;1. I noticed that the I/O count keeps increasing. Is there a limit on this count?&lt;BR /&gt;&lt;BR /&gt;2. I have a process named TCPIP$SNMP_1 with a high count. How do I reset this, if it is a concern?&lt;BR /&gt;&lt;BR /&gt;3. Is there a maximum PID? When it reaches the maximum, what happens?&lt;BR /&gt;&lt;BR /&gt;Thanks,</description>
      <pubDate>Tue, 15 Aug 2006 02:54:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843357#M78382</guid>
      <dc:creator>dflm</dc:creator>
      <dc:date>2006-08-15T02:54:06Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843358#M78383</link>
      <description>dflm,&lt;BR /&gt;&lt;BR /&gt;the I/O count represents the number of I/Os issued by that process during its lifetime in the system. There is no limit to this count; if the numbers get really big, they may overflow the field width in the display and show up as '*********'.&lt;BR /&gt;&lt;BR /&gt;You cannot reset the I/O count of a process, except by stopping that process and starting a new process to do the work, e.g. in the case of SNMP, stopping and restarting the SNMP service.&lt;BR /&gt;&lt;BR /&gt;The PID is just a number, which is guaranteed to be unique on the local system. It consists of an index into the process vector and a sequence number. The process vector size is limited by the system parameter MAXPROCESSCNT and determines the maximum number of processes active at any time in the system. If all process entry slots are occupied, you cannot create another process and will get the error SS$_NOSLOT.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 15 Aug 2006 03:06:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843358#M78383</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-08-15T03:06:54Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843359#M78384</link>
      <description>Thanks, Volker.&lt;BR /&gt;&lt;BR /&gt;So how do I stop and restart the process (in the case of TCPIP$SNMP_1)?&lt;BR /&gt;&lt;BR /&gt;How do I determine if any of the processes are hung?&lt;BR /&gt;&lt;BR /&gt;Thx again ;)&lt;BR /&gt;</description>
      <pubDate>Tue, 15 Aug 2006 04:00:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843359#M78384</guid>
      <dc:creator>dflm</dc:creator>
      <dc:date>2006-08-15T04:00:12Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843360#M78385</link>
      <description>To restart a TCPIP subsystem, you should use its shutdown/startup procedures in SYS$MANAGER.&lt;BR /&gt;In the case of SNMP, these are TCPIP$SNMP_SHUTDOWN.COM and TCPIP$SNMP_STARTUP.COM.&lt;BR /&gt;&lt;BR /&gt;Hung processes often have 'strange' process states, e.g. RWxxx (resource wait...), for an extended period of time.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Tue, 15 Aug 2006 04:06:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843360#M78385</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2006-08-15T04:06:02Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843361#M78386</link>
      <description>Why would you want to stop TCPIP$SNMP_1?&lt;BR /&gt;Any problems with SNMP? You could use the SYS$STARTUP:TCPIP$&lt;SERVICE-NAME&gt;_SHUTDOWN.COM and ..._STARTUP.COM procedures or, even better, use @SYS$MANAGER:TCPIP$CONFIG.COM to stop and start the TCPIP services.&lt;BR /&gt;&lt;BR /&gt;Determining whether a process is hung is much more complicated. You can at least tell that it is not doing anything if none of its counters in SHOW SYSTEM/PROC=xxx increases.&lt;BR /&gt;&lt;BR /&gt;You would then need to execute some command which would normally be serviced by that process. If that command hangs or returns some kind of timeout error, you could conclude that the process is actually hung.&lt;BR /&gt;&lt;BR /&gt;There are also some process states (RWxxx) which indicate some kind of temporary or long-lasting resource wait problem for a process.&lt;BR /&gt;&lt;BR /&gt;Are these questions for your interest only, or are you trying to diagnose and solve a real problem?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 15 Aug 2006 04:08:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843361#M78386</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-08-15T04:08:17Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843362#M78387</link>
      <description>dflm,&lt;BR /&gt;&lt;BR /&gt;As someone new to OpenVMS, these are good questions.&lt;BR /&gt;&lt;BR /&gt;As Volker noted, the IO Count is an accounting of all the IO operations for a process. It will keep increasing, but you are unlikely to reach a limit (the count is stored as an unsigned 32-bit number -- aka a longword -- thus the maximum value is on the order of 2**32, about 4G).&lt;BR /&gt;&lt;BR /&gt;Put in perspective, if a process is executing a consistent average of 1,000 IO operations/second, it would take about 46 days of continuous operation before overflow was a serious concern (at lower IO rates, the duration is accordingly longer; at 100 IO operations/second, it is approximately 460 days).&lt;BR /&gt;&lt;BR /&gt;I would not rate this as a concern, although (tongue in cheek) as increasing hardware reliability increases the uptimes of individual OpenVMS instances, perhaps it might cause strange accounting log entries (e.g., total IO count &amp;lt;&amp;lt; either the Direct or Buffered IO count).&lt;BR /&gt;&lt;BR /&gt;The Process ID will, sooner or later, recycle. But that will take a VERY long time. I am actually not sure if anybody has observed a Process ID recycle occur in nature, even with the extended cluster uptimes that are common with OpenVMS.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Tue, 15 Aug 2006 04:29:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843362#M78387</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-08-15T04:29:47Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843363#M78388</link>
      <description>Bob Gezelter: "The Process ID will, sooner or later, recycle. But that will take a VERY long time. I am actually not sure if anybody has observed a Process ID recycle occur in nature, even with the extended cluster uptimes that are common with OpenVMS."&lt;BR /&gt;&lt;BR /&gt;In a cluster, PID recycles are common if you have a high process creation rate and only 100 or so free process slots. I've seen it happen, more so when kernel threads came along.&lt;BR /&gt;&lt;BR /&gt;The PID is an encoded value whose interpretation is reserved to the OS. Don't read anything into the magnitude of the PID; all you can count on is that two concurrently existing processes will never have the same PID.</description>
      <pubDate>Tue, 15 Aug 2006 05:16:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843363#M78388</guid>
      <dc:creator>David Jones_21</dc:creator>
      <dc:date>2006-08-15T05:16:08Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843364#M78389</link>
      <description>Bob, just to show how varied things are: we have a process that sometimes does 4-5K BIO/sec, so a "$ show system" quickly wanders into overflow (the I/O column is BIO+DIO combined), not that the display overflow matters to us.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;John.</description>
      <pubDate>Tue, 15 Aug 2006 05:39:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843364#M78389</guid>
      <dc:creator>John Abbott_2</dc:creator>
      <dc:date>2006-08-15T05:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843365#M78390</link>
      <description>Hi Volker,&lt;BR /&gt;&lt;BR /&gt;I was concerned that the I/O count keeps going up and didn't know if I should keep tabs on it. I was caught by the version 32767 problem before, so I just wanted to be sure.&lt;BR /&gt;&lt;BR /&gt;Thanks to all of you who took the time to answer my questions. Thx ;)</description>
      <pubDate>Tue, 15 Aug 2006 06:53:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843365#M78390</guid>
      <dc:creator>dflm</dc:creator>
      <dc:date>2006-08-15T06:53:17Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843366#M78391</link>
      <description>David,&lt;BR /&gt;&lt;BR /&gt;I stand corrected.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Tue, 15 Aug 2006 07:05:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843366#M78391</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-08-15T07:05:14Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843367#M78392</link>
      <description>My little addition on PIDs &amp;amp; exhausting them.&lt;BR /&gt;&lt;BR /&gt;In a cluster, PIDs start at 20000000 (hex).&lt;BR /&gt;Split this into the first 3 and last 5 digits.&lt;BR /&gt;The first digits identify the VMS instance; for each node that joins, this goes up by 2. The last 5 are per-instance process IDs. (We have observed the third digit becoming odd after long node uptime, so perhaps the 3 - 5 digit split mentioned above is better represented as 23 - 41 bits.)&lt;BR /&gt;&lt;BR /&gt;(btw: does anyone know if this is general, or whether it just happened because of some setting when the cluster formed? It certainly HAS been consistent since.)&lt;BR /&gt;&lt;BR /&gt;In (nearly) 10 years now, the PIDs of our most recently booted node start with 6E.&lt;BR /&gt;So we have nearly exhausted 2, 3, 4, 5, and 6.&lt;BR /&gt;Which is 5 out of 14. (0 is for non-clustered systems; no idea where 1 comes into play.)&lt;BR /&gt;A quick calculation shows that the cluster has by now had "a" node boot some 640 times, for ANY reason.&lt;BR /&gt;It will be some time still before we find out what happens if we run out... :-)&lt;BR /&gt;If the cluster is not "politics-ed away" before then, I will be long retired!&lt;BR /&gt;&lt;BR /&gt;fwiw&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Tue, 15 Aug 2006 15:12:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843367#M78392</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-08-15T15:12:13Z</dc:date>
    </item>
    <item>
      <title>Re: show system  - I/O , PID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843368#M78393</link>
      <description>To find high usage, you don't use SHOW SYSTEM but e.g. MONITOR PROCESSES with the /TOPCPU, /TOPDIO or /TOPBIO qualifiers. Or use VPA to analyze it afterwards.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 16 Aug 2006 00:56:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/show-system-i-o-pid/m-p/3843368#M78393</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-08-16T00:56:30Z</dc:date>
    </item>
  </channel>
</rss>

