<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: concerns Queue_manager and UCX$FTPD I/O counts in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689644#M103497</link>
    <description>Discussion of whether high I/O counts for long-running OpenVMS system processes (QUEUE_MANAGER, SECURITY_SERVER, UCX$FTPD) indicate a problem, in Operating System - OpenVMS.</description>
    <pubDate>Fri, 02 Jan 2015 06:50:35 GMT</pubDate>
    <dc:creator>MarkOfAus</dc:creator>
    <dc:date>2015-01-02T06:50:35Z</dc:date>
    <item>
      <title>concerns Queue_manager and UCX$FTPD  I/O  counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689605#M103495</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;I have pasted the SHOW SYSTEM output of my VAX 4000-50, an OpenVMS VAX V6.1 server running DEC TCP/IP Services for OpenVMS VAX Version V3.2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Many people use this VAX server for many print queues and many NFS mounts. I would like to know why the I/O and other counts of the QUEUE_MANAGER, SECURITY_SERVER, and UCX$FTPD processes are so high. Is that OK? If not, how can I monitor those activities, and when will those counts be reset (automatically or manually)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;  Pid    Process Name    State  Pri      I/O       CPU       Page flts  Pages
00000203 SWAPPER         HIB     16        0   0 00:03:34.34         0      0
00000201 CONFIGURE       HIB      9       36   0 00:00:00.06       121    183
00000205 IPCACP          HIB     10        7   0 00:00:00.03        95    154
00000108 ERRFMT          HIB      8     2630   0 00:00:05.35       142    234
00000109 OPCOM           HIB      9      797   0 00:00:00.92       328    176
0000030A AUDIT_SERVER    HIB     10      120   0 00:00:00.25       524    820
0000031B JOB_CONTROL     HIB      8    82990   0 00:04:56.93       268    398
0000010C QUEUE_MANAGER   HIB      9  1998392   0 01:17:05.66      1049   1286
0000020D SECURITY_SERVER HIB     10     7885   0 00:00:58.85      1980   1664
0000021E TP_SERVER       HIB     10    18968   0 00:00:05.30       201    308
00000223 NETACP          HIB     10     3491   0 00:00:00.33       180    405
00000204 EVL             HIB      6       39   0 00:00:00.10       293    402 N
00000208 REMACP          HIB      8        8   0 00:00:00.00        81     51
0000020A UCX$LPD_QUEUE   HIB      4       47   0 00:00:00.19       534    475
0000011F UCX$INET_ACP    HIB     10      159   0 00:00:00.25       351    388
00000021 UCX$INET_ROUTED LEF      6       30   0 00:00:00.08       280    458 S
00000226 UCX$FTPD        LEF      9   247946   0 00:02:47.49      1452    975 N&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Navipa&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jan 2015 01:26:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689605#M103495</guid>
      <dc:creator>Navipa</dc:creator>
      <dc:date>2015-01-02T01:26:57Z</dc:date>
    </item>
    <item>
      <title>Re: concerns Queue_manager and UCX$FTPD  I/O  counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689632#M103496</link>
      <description>&lt;P&gt;&amp;gt; [...] I like to know why queue_manager, security_sever,&amp;nbsp;&lt;SPAN&gt;UCX$FTPD&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;gt; processes I/O, othercounts are too much.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp; Define "too much".&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp; A process which runs from start-up to shut-down may eventually show&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;large numbers for CPU time, I/O, and so on.&amp;nbsp; What's your uptime?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;gt; Is it OK?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp; What, exactly, do you think is wrong?&amp;nbsp; (And why?)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;gt; [...] when those counts will be reset (automatically OR manually)?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp; Normally, when you reboot the system?&amp;nbsp; Why do you care?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jan 2015 04:57:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689632#M103496</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2015-01-02T04:57:17Z</dc:date>
    </item>
    <item>
      <title>Re: concerns Queue_manager and UCX$FTPD  I/O  counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689644#M103497</link>
      <description>&lt;PRE&gt;  Pid    Process Name    State  Pri      I/O       CPU       Page flts  Pages
2CE00401 SWAPPER         HIB     16        0   0 00:00:15.01         0      0
2CE00407 CLUSTER_SERVER  HIB     14    34184   0 00:01:01.33        93    108
2CE00408 SHADOW_SERVER   HIB      6  9787472   0 00:02:09.07        62     96
2CE00409 CONFIGURE       HIB     10       17   0 00:00:00.06        43     27
2CE0040A LANACP          HIB     12       93   0 00:00:00.11       112    138
2CE0040C FASTPATH_SERVER HIB     10        9   0 00:00:00.04        76     92
2CE0040D IPCACP          HIB     10       10   0 00:00:00.09        35     47
2CE0040E ERRFMT          HIB      8   398073   0 00:00:46.23       105    126
2CE0040F CACHE_SERVER    HIB     16       45   0 00:00:00.18        29     40
2CE00410 OPCOM           HIB      8   137951   0 00:00:11.33     13547     54
2CE00411 AUDIT_SERVER    HIB      9126460569   0 01:41:12.94       167    200
2CE00412 JOB_CONTROL     HIB      8   386796   0 00:00:47.01       173    205&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hey, we have an AUDIT_SERVER process with an I/O count exceeding the available column width. Is this a problem? Formatting-wise, sure. Is it impacting the system? No; it's been running for 142 days, so a high count is to be expected.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The point I am trying to make, as Steven has also pointed out, is that a high count does not necessarily mean anything other than that it's a high count.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If, on the other hand, you have a &lt;U&gt;known process&lt;/U&gt; with a high count of, say, I/O when it shouldn't have one, then you &lt;STRONG&gt;MAY&lt;/STRONG&gt; have a problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Details of this problem process and its trail of destruction need to be forthcoming from you; otherwise, your system is probably functioning OK.&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jan 2015 06:50:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689644#M103497</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2015-01-02T06:50:35Z</dc:date>
    </item>
    <item>
      <title>Re: concerns Queue_manager and UCX$FTPD  I/O  counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689750#M103498</link>
      <description>&lt;P&gt;I will echo the previous post.&amp;nbsp; High numbers in and of themselves are not necessarily a problem.&amp;nbsp; I have machines with very long uptimes that have many processes with high I/O counts.&amp;nbsp; This is normal expected behavior.&amp;nbsp; You have to know your environment to determine whether or not these counts are indicators of problems.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the processes listed, I don't see a problem.&amp;nbsp; But, I also don't know your environment.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;More information is needed about your site before anyone can comment specifically.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Dan&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jan 2015 14:55:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689750#M103498</guid>
      <dc:creator>abrsvc</dc:creator>
      <dc:date>2015-01-02T14:55:33Z</dc:date>
    </item>
    <item>
      <title>Re: concerns Queue_manager and UCX$FTPD  I/O  counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689797#M103499</link>
      <description>&lt;P&gt;You're using completely insecure, decades-old software, and you're focusing on the I/O counts? Um, OK.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OpenVMS VAX V6.1 is over twenty years old, TCP/IP Services has seen a very large number of stability and security updates and new features since V3.2, and the VAX hardware and storage here is probably at least that old.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;OK. Technically and arguably, any I/O via FTP is too much, as it exposes the users' server login credentials in cleartext to anyone on the network. That means there's effectively no security here, and no individual accountability.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'd suggest working toward replacing this configuration with newer hardware and particularly with newer OpenVMS software.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;While you're working toward replacement, use MONITOR, or acquire and install other tools, to record system performance and to determine the performance trends, usage spikes, and related patterns. This data collection will give you a baseline for determining whether what you're seeing is a problem or not.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A single snapshot of I/O activity unfortunately provides basically no usable information. If all that I/O activity happened in, say, ten minutes, the activities during those ten minutes were business-critical, and your system was grinding under the load, then you probably need performance work and more likely an upgrade. If that I/O activity has happened piecemeal over a year or two of uptime, then your system is probably pretty quiet on average.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you want to know about performance, the Performance Management manual in the OpenVMS documentation set will provide a high-level overview; OpenVMS VAX V6.1 is, unfortunately, older than the oldest versions supported by various tools.&lt;/P&gt;</description>
      <pubDate>Fri, 02 Jan 2015 19:43:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/concerns-queue-manager-and-ucx-ftpd-i-o-counts/m-p/6689797#M103499</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2015-01-02T19:43:05Z</dc:date>
    </item>
  </channel>
</rss>