Operating System - OpenVMS

show system - I/O , PID

 
SOLVED
dflm
Occasional Advisor

show system - I/O , PID

Hello, I'm a newbie to OpenVMS and would like to know more about "SHOW SYSTEM". Pardon me.

1. I noticed that the I/O count keeps increasing. Is there a limit on this count?

2. I have a process named TCPIP$SNMP_1 with a high count. How do I reset it, if it is a concern?

3. Is there a maximum PID? What happens when it reaches the maximum?

Thanks,
11 REPLIES
Volker Halle
Honored Contributor

Re: show system - I/O , PID

dflm,

the I/O count represents the number of I/Os issued by that process during its lifetime on the system. There is no limit to this count; if the numbers get really big, they may overflow the field width in the display and show up as '*********'.

You cannot reset the I/O count of a process, except by stopping that process and starting a new one to do the work (in the case of SNMP, by stopping and restarting the SNMP service).

The PID is just a number, which is guaranteed to be unique on the local system. It consists of an index into the process vector and a sequence number. The process vector size is limited by the system parameter MAXPROCESSCNT, which determines the maximum number of processes active at any one time on the system. If all process entry slots are occupied, you cannot create another process and get the error SS$_NOSLOT.
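
As a minimal DCL sketch of checking the limit Volker describes (any SYSGEN parameter name should work as an F$GETSYI item code, but verify on your own version):

```
$! Show the configured maximum number of concurrent processes
$ maxproc = F$GETSYI("MAXPROCESSCNT")
$ WRITE SYS$OUTPUT "MAXPROCESSCNT = ''maxproc'"
$! The same value can be inspected via SYSGEN:
$ MCR SYSGEN
SYSGEN> SHOW MAXPROCESSCNT
SYSGEN> EXIT
```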

Volker.
dflm
Occasional Advisor

Re: show system - I/O , PID

Thanks Volker.

So how do I stop and restart the process (in the case of TCPIP$SNMP_1)?

How do I determine whether any of the processes are hung?

Thx again ;)
Karl Rohwedder
Honored Contributor

Re: show system - I/O , PID

To restart a TCP/IP subsystem, you should use its shutdown and startup procedures in SYS$MANAGER.
In the case of SNMP these are TCPIP$SNMP_SHUTDOWN.COM and TCPIP$SNMP_STARTUP.COM.
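
A minimal sketch of the restart Karl describes (Karl mentions SYS$MANAGER; Volker's later reply uses SYS$STARTUP, so check which directory holds the procedures on your system):

```
$! Stop and restart the SNMP service using its own procedures
$ @SYS$STARTUP:TCPIP$SNMP_SHUTDOWN.COM
$ @SYS$STARTUP:TCPIP$SNMP_STARTUP.COM
$! Confirm a fresh TCPIP$SNMP process appears with a reset I/O count
$ SHOW SYSTEM
```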

Hung processes often show 'strange' process states, e.g. RWxxx (resource wait), for an extended period of time.

regards Kalle
Volker Halle
Honored Contributor
Solution

Re: show system - I/O , PID

Why would you want to stop TCPIP$SNMP_1?
Are there any problems with SNMP? You could use the SYS$STARTUP:TCPIP$_SHUTDOWN.COM and ..._STARTUP.COM procedures, or, even better, use @SYS$MANAGER:TCPIP$CONFIG.COM to stop and start the TCP/IP services.

Determining whether a process is hung is much more complicated. You can at least tell that it is not doing anything if none of its counters in SHOW SYSTEM/PROC=xxx increase.

You would then need to execute some command that would normally be serviced by that process. If that command hangs or returns some kind of timeout error, you can conclude that the process is actually hung.

There are also some process states (RWxxx), which indicate some kind of temporary or long-lasting resource wait problem for a process.
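
The counter check above can be sketched in DCL with F$GETJPI; the PID below is hypothetical, so substitute the one from your SHOW SYSTEM display:

```
$! Sample a process's I/O counters twice, 30 seconds apart.
$ pid = "2040011A"                            ! hypothetical PID
$ io1 = F$GETJPI(pid,"DIRIO") + F$GETJPI(pid,"BUFIO")
$ WAIT 00:00:30
$ io2 = F$GETJPI(pid,"DIRIO") + F$GETJPI(pid,"BUFIO")
$ IF io2 .EQ. io1 THEN WRITE SYS$OUTPUT "No I/O in the last 30 seconds"
$! The scheduling state (e.g. an RWxxx resource wait) is also available:
$ WRITE SYS$OUTPUT F$GETJPI(pid,"STATE")
```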

Are these questions for your interest only or are you trying to diagnose and solve a real problem ?

Volker.
Robert Gezelter
Honored Contributor

Re: show system - I/O , PID

dflm,

As someone new to OpenVMS, these are good questions.

As Volker noted, the I/O count is an accounting of all the I/O operations for a process. It will keep increasing, but you are unlikely to reach a limit (the count is stored as an unsigned 32-bit number, a.k.a. a longword, so the maximum value is on the order of 2**32, about 4G).

Put in perspective: if a process executes a consistent average of 1,000 I/O operations/second, it would take about 50 days of continuous operation before overflow became a serious concern (at lower I/O rates the duration is accordingly longer; at 100 I/O operations/second it is approximately 500 days).

I would not rate this as a concern, although (tongue in cheek) as increasing hardware reliability increases the uptimes of individual OpenVMS instances, perhaps it might cause strange accounting log entries (e.g., a total I/O count smaller than either the direct or buffered I/O count).

The Process ID will, sooner or later, recycle. But that will take a VERY long time. I am actually not sure if anybody has observed a Process ID recycle occur in nature, even with the extended cluster uptimes that are common with OpenVMS.

- Bob Gezelter, http://www.rlgsc.com
David Jones_21
Trusted Contributor

Re: show system - I/O , PID

Bob Gezelter: "The Process ID will, sooner or later, recycle. But that will take a VERY long time. I am actually not sure if anybody has observed a Process ID recycle occur in nature, even with the extended cluster uptimes that are common with OpenVMS."

In a cluster, PID recycles are common if you have a high process creation rate and only 100 or so free process slots. I've seen it happen, more so since kernel threads came along.

The PID is an encoded value whose interpretation is reserved to the OS. Don't read anything into the magnitude of the PID; all you can count on is that two concurrently existing processes will never have the same PID.
I'm looking for marbles all day long.
John Abbott_2
Esteemed Contributor

Re: show system - I/O , PID

Bob, just to show how varied things are: we have a process that sometimes does 4-5K BIO/sec, so a "$ SHOW SYSTEM" quickly wanders into overflow (the I/O column is BIO+DIO combined); not that the display overflow matters to us.

Regards
John.
Don't do what Donny Dont does
dflm
Occasional Advisor

Re: show system - I/O , PID

Hi Volker,

I was concerned about the I/O count going up and didn't know if I should keep tabs on it. As I was caught previously by the version 32767 problem, I just wanted to be sure.

Thanks to all of you who took time to answer my questions. Thx ;)
Robert Gezelter
Honored Contributor

Re: show system - I/O , PID

David,

I stand corrected.

- Bob Gezelter, http://www.rlgsc.com
Jan van den Ende
Honored Contributor

Re: show system - I/O , PID

My little addition to PIDs & exhausting them.

In a cluster, PIDs start from 20000000 (hex).
Separate this into the first 3 and the last 5 digits.
The first digits identify the VMS instance; they go up by 2 for each node that joins. The last 5 are per-instance process IDs. (We have observed the third digit becoming odd after long node uptime, so perhaps the 3 - 5 digit split mentioned above is better represented as 23 - 41 bits.)

(btw: does anyone know if this is general, or just happened so because of some setting when the cluster formed? It certainly HAS been consistent since)

In (nearly) 10 years now, the PIDs of our most recently booted node start with 6E.
So we have nearly exhausted 2, 3, 4, 5, and 6.
Which is 5 out of 14. (0 is for non-clustered systems; no idea where 1 comes into play.)
A quick calculation shows that the cluster has now had "a" node boot about 640 times, for ANY reason.
It will be some time still before we find out what happens if we run out... :-)
If the cluster is not "politics-ed away" before, then I will be long retired!

fwiw

Proost.

Have one on me.

jpe


Don't rust yours pelled jacker to fine doll missed aches.
Wim Van den Wyngaert
Honored Contributor

Re: show system - I/O , PID

To find high usage you don't use SHOW SYSTEM but e.g. MONITOR PROCESSES with the /TOPCPU, /TOPDIO or /TOPBIO options. Or use VPA to analyze it afterwards.
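
A quick sketch of the MONITOR commands Wim mentions (each runs as a live, refreshing display; press Ctrl/C or Ctrl/Z to exit):

```
$! Top CPU consumers
$ MONITOR PROCESSES/TOPCPU
$! Top direct-I/O issuers (disk etc.)
$ MONITOR PROCESSES/TOPDIO
$! Top buffered-I/O issuers (terminals, mailboxes, network)
$ MONITOR PROCESSES/TOPBIO
```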

Wim