<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: cpu &amp; disk usage in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552272#M840528</link>
    <description>Hi&lt;BR /&gt;&lt;BR /&gt;I agree your disks have excessive queues.  If I could explain....&lt;BR /&gt;&lt;BR /&gt;CPU: roughly speaking, when it is busy you have 30% usr &amp;amp; 50% sys, so 80% CPU in total.  This means that your system is spending nearly twice as much time in system calls as in user code.&lt;BR /&gt;&lt;BR /&gt;Disks: you have excessive queues (avque) of 13-14..  The disks are not actually too busy at 25% or so and do ~30 IO/s.  Extrapolating to 100% busy you would expect ~120 IO/s, which is typical for 10,000 rpm disks with an average service time of 8ms (125 IO/s).  BUT your avserv is 30ms.  This should be more like 8ms!!!  &lt;BR /&gt;&lt;BR /&gt;So my suspicion is that your disks are part of the problem.. BUT your %wio is really quite low...  I then took a look at your top results.. two hth processes seem to be fighting it out, each running really quite hot at ~70%.  There are other processes running, but much quieter....  &lt;BR /&gt;&lt;BR /&gt;My suspicions are (and these are really just guesses):&lt;BR /&gt; 1 - The hth processes are fighting each other for CPU time, causing each other to context switch off &amp;amp; on.  This could be responsible for the high %sys values.&lt;BR /&gt; 2 - The hth processes are fighting each other for the disks at the SAME time.  Even though they only use the disks infrequently (~25% of the time), they do it simultaneously, thus causing excessive disk queues.&lt;BR /&gt; 3 - One or both of the disks c2t1d0 and c2t0d0 are broken/behaving poorly (I assume they are a mirrored pair with c2t1d0 as primary).  Though I would have expected to see higher %wio if this were the case.&lt;BR /&gt; 4 - The SCSI bus that c2t1d0 &amp;amp; c2t0d0 are on may be overloaded or behaving poorly.  Again, I would have expected to see high %wio if this were the case.&lt;BR /&gt;&lt;BR /&gt;Number 1 could be checked by trying to run only one hth.&lt;BR /&gt;Number 2 could be checked by ... knowing how hth works ...&lt;BR /&gt;Numbers 3 &amp;amp; 4 - you really need to look in syslog.log &amp;amp; use mstm to check them out.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim&lt;BR /&gt;</description>
    <pubDate>Sat, 28 May 2005 06:30:17 GMT</pubDate>
    <dc:creator>Tim D Fulford</dc:creator>
    <dc:date>2005-05-28T06:30:17Z</dc:date>
    <item>
      <title>cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552267#M840523</link>
      <description>I found high cpu usage when the webserver is at peak time.&lt;BR /&gt;webserver: rx4640 1.3GHz 2cpu hpux 11.23&lt;BR /&gt;           two disks with mirrored ux&lt;BR /&gt;cpu %sys &amp;gt;= 50 &amp;amp; disk io is busy, but vmstat&lt;BR /&gt;shows no pi or po&lt;BR /&gt;I attach the cpu, disk and vmstat monitoring results!!&lt;BR /&gt;I wonder whether this is really a cpu problem&lt;BR /&gt;  or a disk io problem!!&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 26 May 2005 03:58:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552267#M840523</guid>
      <dc:creator>Jaieun Chu</dc:creator>
      <dc:date>2005-05-26T03:58:55Z</dc:date>
    </item>
    <item>
      <title>Re: cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552268#M840524</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;You have not posted your stats.&lt;BR /&gt;&lt;BR /&gt;1 - Are you saying %sys is &amp;gt; 50%?&lt;BR /&gt;OR&lt;BR /&gt;are you saying total cpu (%usr+%sys) is &amp;gt; 50%?&lt;BR /&gt;&lt;BR /&gt;I'm guessing the total CPU (%sys+%usr) is greater than 50% and your system has a spinning process, which will take up 1 whole CPU (50% of the total on your 2-CPU box!!).  The output of top would be sufficient to show this.&lt;BR /&gt;&lt;BR /&gt;If, however, %sys &amp;gt; 50%, this is most concerning, as it means you are doing LOADS of system calls; this may mean something more subtle is happening on your system.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Thu, 26 May 2005 06:47:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552268#M840524</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2005-05-26T06:47:48Z</dc:date>
    </item>
    <item>
      <title>Re: cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552269#M840525</link>
      <description>monitering file attach fail so see the following !!&lt;BR /&gt;&lt;BR /&gt;23:11:43     cpu    %usr    %sys    %wio   %idle&lt;BR /&gt;23:12:03       0      29      50       2      18&lt;BR /&gt;               1      29      52       1      18&lt;BR /&gt;          system      29      51       1      18&lt;BR /&gt;23:12:23       0      31      54       1      15&lt;BR /&gt;               1      30      54       1      16&lt;BR /&gt;          system      30      54       1      15&lt;BR /&gt;23:12:43       0      30      57       0      13&lt;BR /&gt;               1      29      53       1      17&lt;BR /&gt;          system      30      55       0      15&lt;BR /&gt;23:13:03       0      30      56       1      13&lt;BR /&gt;               1      30      55       0      15&lt;BR /&gt;          system      30      55       1      14&lt;BR /&gt;23:13:23       0      28      57       0      14&lt;BR /&gt;               1      28      55       0      17&lt;BR /&gt;          system      28      56       0      15&lt;BR /&gt;23:13:43       0      28      57       1      14&lt;BR /&gt;               1      31      55       0      13&lt;BR /&gt;          system      30      56       1      13&lt;BR /&gt;&lt;BR /&gt;23:11:43   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;23:12:03   c2t1d0   25.61   13.45      31     459   29.73   29.17&lt;BR /&gt;           c2t0d0   24.36   14.42      28     446   36.87   29.50&lt;BR /&gt;23:12:23   c2t1d0   16.90   17.39      46     681   57.61   31.38&lt;BR /&gt;           c2t0d0   15.05   18.46      42     658   63.13   30.60&lt;BR /&gt;23:12:43   c2t1d0    9.20    1.81      21     306   11.22   16.48&lt;BR /&gt;           c2t0d0    7.50    2.63      18     298   17.21   17.35&lt;BR /&gt;23:13:03   c2t1d0   13.65    8.74      35     507   17.56   16.63&lt;BR /&gt;           c2t0d0   11.65    9.42      31     492   19.41   16.21&lt;BR /&gt;23:13:23   c2t1d0    7.70    0.71      18     262    1.38   16.73&lt;BR /&gt;           c2t0d0    6.15    0.62      15     251    0.66   14.36&lt;BR /&gt;23:13:43   c2t1d0   10.75   11.75      31     468   22.91   15.78&lt;BR /&gt;           c2t0d0    9.35   12.97      28     456   27.03   15.43&lt;BR /&gt;&lt;BR /&gt;----------------&lt;MEMORY report=""&gt;--------------------&lt;BR /&gt;&lt;BR /&gt;        procs           memory                   page                              faults       cpu&lt;BR /&gt;    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id&lt;BR /&gt;    4     0     0   276784  184850   62   15     0    0     0    0     0   2789   5573   714  10 15 76&lt;BR /&gt;    3     0     0   284020  183390  118   39     0    0     0    0     0   5981  99045  1230  29 51 20&lt;BR /&gt;    2     0     0   267582  183081   76   12     0    0     0    0     0   6415  13894  1464  30 54 16&lt;BR /&gt;    3     0     0   279049  183360   44    4     0    0     0    0     0   7334  11614  1411  30 55 15&lt;BR /&gt;    3     0     0   277106  183230  209   24     0    0     0    0     0   7245  18590  3115  30 55 15&lt;BR /&gt;    4     0     0   277636  183654   17    0     0    0     0    0     0   6235  10209  1182  28 56 16&lt;BR /&gt;    4     0     0   279565  183639   61    8     0    0     0    0     0   6196  11353  1197  30 56 14&lt;BR /&gt;    3     0     0   282254  183168  180   25     0    0     0    0     0   7949  18700  4277  31 57 12&lt;BR /&gt;    6     0     0   278566  183652   39    5     0    0     0    0     0   
6759  10442  1192  29 53 18&lt;BR /&gt;    2     0     0   275299  184508   58   13     0    0     0    0     0   5373  10629  1081  27 51 22&lt;BR /&gt;    2     0     0   282760  183832  191   43     0    0     0    0     0   7226  17787  4081  29 51 20&lt;BR /&gt;    2     0     0   264394  184507   23    8     0    0     0    0     0   5944  10083  1206  28 50 22&lt;BR /&gt;    3     0     0   276668  184504   48   21     0    0     0    0     0   5548   9960  1090  27 49 24&lt;BR /&gt;    4     0     0   277510  183760  206   36     0    0     0    0     0   5909  16690  1923  27 47 26&lt;BR /&gt;    4     0     0   275008  184624   10    2     0    0     0    0     0   6317   9390  1136  28 49 22&lt;BR /&gt;    4     0     0   275394  184590   70   29     0    0     0    0     0   6181  11515  1252  28 52 20&lt;BR /&gt;    3     0     0   278067  184050  187   43     0    0     0    0     0   7205  17160  3688  27 52 21&lt;BR /&gt;    3     0     0   274890  184621   22    7     0    0     0    0     0   5606   9212  1111  28 51 21&lt;BR /&gt;    3     0     0   280553  184621   25   11     0    0     0    0     0   5121   9027  1037  27 51 22&lt;BR /&gt;    3     0     0   280723  184491  169   35     0    0     0    0     0   5858  15723  2336  27 53 20&lt;BR /&gt;         procs           memory                   page                              faults       cpu&lt;BR /&gt;    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id&lt;BR /&gt;    4     0     0   278506  184507   17    6     0    0     0    0     0   5705   9654  1171  29 53 18&lt;BR /&gt;    3     0     0   274828  184491   12    6     0    0     0    0     0   5249   8842  1043  26 52 22&lt;BR /&gt;    3     0     0   277771  183522  195   38     0    0     0    0     0   5625  15916  2769  27 49 24&lt;BR /&gt;    3     0     0   276054  184498   20    7     0    0     0    0     0   5604   9753  1156  25 51 23&lt;BR /&gt;    4     0     0   271095  184033   45   18     0    0     0    0     0   7589  11531  1208  28 56 15&lt;BR /&gt;    4     0     0   282485  184492  174   34     0    0     0    0     0   6702  16189  3309  28 54 18&lt;BR /&gt;    2     0     0   264410  184491   41   12     0    0     0    0     0   6323   9992  1168  27 51 22&lt;BR /&gt;    2     1     0   278308  184507    8    2     0    0     0    0     0   4970   8385  1020  27 49 24&lt;BR /&gt;    2     1     0   279441  183896  181   41     0    0     0    0     0   6184  15905  3425  25 48 27&lt;BR /&gt;    2     0     0   271617  184498   33   15     0    0     0    0     0   6013  10207  1202  27 50 24&lt;BR /&gt;    4     0     0   281079  184498   15    5     0    0     0    0     0   5572   9189  1252  25 49 26&lt;BR /&gt;    2     0     0   282098  183504  190   32     0    0     0    0     0   5820  15547  2746  25 47 28&lt;BR /&gt;    4     0     0   278415  184439   46   21     0    0     0    0     0   5343   9700  1074  26 49 25&lt;BR /&gt;    3     0     0   272389  184423   43   26     0    0     0    0     0   5474  10614  1122  26 49 25&lt;BR /&gt;    1     0     0   283077  183833  165   31     0    0     0    0     0   5700  15205  2875  26 50 25&lt;BR /&gt;    3     0     0   279893  184424   81   29     0    0     0    0     0   5599  11168  1189  24 47 29&lt;BR /&gt;    3     0     0   272857  184423   22    9     0    0     0    0     0   5545   9133  1115  27 45 28&lt;BR /&gt;    3     0     0   281344  183838  188   41     0    0     0    0     0   7172  18130  4148  28 
49 23&lt;BR /&gt;    2     0     0   271995  184401   62   29     0    0     0    0     0   6413  10728  1186  30 52 18&lt;BR /&gt;    3     0     0   269781  184423   26   12     0    0     0    0     0   5457   9688  1092  30 51 19&lt;BR /&gt;&lt;BR /&gt;&lt;/MEMORY&gt;</description>
      <pubDate>Thu, 26 May 2005 18:50:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552269#M840525</guid>
      <dc:creator>Jaieun Chu</dc:creator>
      <dc:date>2005-05-26T18:50:01Z</dc:date>
    </item>
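    <!--
      A hedged sketch (not part of the original thread): the listing in the post above looks
      like sar -d style output, and the queue/service-time problem described in the later
      replies can be spotted mechanically.  The thresholds, field layout, and helper name
      below are illustrative assumptions, written in Python.

      AVQUE_LIMIT = 2.0    # queue lengths well above 1 or 2 suggest an IO backlog
      AVSERV_LIMIT = 10.0  # ms; a 10,000 rpm disk should service IOs in roughly 8 ms

      def flag_slow_disks(sar_d_lines):
          """Yield (device, avque, avserv) for rows that look unhealthy."""
          for line in sar_d_lines:
              fields = line.split()
              # data rows look like: [timestamp] device %busy avque r+w/s blks/s avwait avserv
              if fields and ":" in fields[0]:
                  fields = fields[1:]           # drop the leading timestamp if present
              if len(fields) != 7:
                  continue                      # skip blank or malformed lines
              device = fields[0]
              try:
                  avque, avserv = float(fields[2]), float(fields[6])
              except ValueError:
                  continue                      # header row, not data
              if avque > AVQUE_LIMIT or avserv > AVSERV_LIMIT:
                  yield device, avque, avserv

      sample = [
          "23:12:03   c2t1d0   25.61   13.45      31     459   29.73   29.17",
          "           c2t0d0   24.36   14.42      28     446   36.87   29.50",
      ]
      for dev, q, s in flag_slow_disks(sample):
          print(f"{dev}: avque={q}, avserv={s} ms")
    -->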
    <item>
      <title>Re: cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552270#M840526</link>
      <description>this is top monintering result!!&lt;BR /&gt;System: ebsweb05 Sun May 22 23:21:43 2005&lt;BR /&gt;Load averages: 0.91, 0.95, 0.98&lt;BR /&gt;205 processes: 181 sleeping, 23 running, 1 zombie&lt;BR /&gt;Cpu states:&lt;BR /&gt;CPU   LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS&lt;BR /&gt; 0    0.92  28.0%   0.0%  52.3%  19.7%   0.0%   0.0%   0.0%   0.0% &lt;BR /&gt; 1    0.91  27.8%   0.0%  51.6%  20.6%   0.0%   0.0%   0.0%   0.0% &lt;BR /&gt;---   ----  -----  -----  -----  -----  -----  -----  -----  -----&lt;BR /&gt;avg   0.91  27.9%   0.0%  52.0%  20.2%   0.0%   0.0%   0.0%   0.0% &lt;BR /&gt;&lt;BR /&gt;Memory: 479436K (381024K) real, 1274588K (1112756K) virtual, 736128K free  Page# 1/13&lt;BR /&gt;&lt;BR /&gt;CPU TTY  PID USERNAME PRI NI   SIZE    RES STATE    TIME %WCPU  %CPU COMMAND&lt;BR /&gt; 1   ? 14645 webtob   241 20 57712K 44504K run    752:10 70.57 70.45 hth&lt;BR /&gt; 0   ? 14646 webtob   154 20 49792K 36584K sleep  746:09 69.52 69.40 hth&lt;BR /&gt; 1   ? 14639 webtob   154 20 16952K  1152K sleep   16:52  1.34  1.34 htl&lt;BR /&gt; 0   ? 13941 www      152 20 32288K  6624K run      0:03  0.54  0.54 httpd&lt;BR /&gt; 0   ? 18429 www      152 20 32160K  6608K run      0:03  0.53  0.53 httpd&lt;BR /&gt; 0   ? 24502 www      152 20 32032K  6496K run      0:00  0.52  0.52 httpd&lt;BR /&gt; 0   ? 14638 webtob   154 20 17376K  4420K sleep    6:34  0.49  0.49 wsm&lt;BR /&gt; 1   ?    51 root     152 20  3024K  2688K run     50:53  0.41  0.41 vxfsd&lt;BR /&gt; 0   ? 21448 webtob   168 20 17876K  4584K sleep    0:03  0.25  0.25 htmls&lt;BR /&gt; 0   ? 21454 webtob   154 20 17812K  4568K sleep    0:03  0.24  0.24 htmls&lt;BR /&gt; 1   ? 21453 webtob   154 20 17812K  4568K sleep    0:03  0.24  0.24 htmls&lt;BR /&gt; 1   ? 14652 webtob   154 20 17876K  4616K sleep    4:34  0.23  0.23 htmls&lt;BR /&gt; 1   ? 14659 webtob   154 20 17876K  4616K sleep    4:37  0.23  0.23 htmls&lt;BR /&gt; 0   ? 14660 webtob   154 20 17876K  4616K sleep    4:33  0.23  0.23 htmls&lt;BR /&gt; 1   ? 14649 webtob   154 20 17876K  4616K sleep    4:37  0.23  0.23 htmls&lt;BR /&gt; 1   ? 21451 webtob   154 20 17812K  4568K sleep    0:03  0.23  0.23 htmls&lt;BR /&gt;</description>
      <pubDate>Thu, 26 May 2005 18:56:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552270#M840526</guid>
      <dc:creator>Jaieun Chu</dc:creator>
      <dc:date>2005-05-26T18:56:43Z</dc:date>
    </item>
    <item>
      <title>Re: cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552271#M840527</link>
      <description>From your output I think the CPU is normal.&lt;BR /&gt;It seems that the disk I/O has high usage...&lt;BR /&gt;You can check with GlancePlus to make sure!&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hoang Chi Cong</description>
      <pubDate>Thu, 26 May 2005 20:18:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552271#M840527</guid>
      <dc:creator>Hoang Chi Cong_1</dc:creator>
      <dc:date>2005-05-26T20:18:48Z</dc:date>
    </item>
    <item>
      <title>Re: cpu &amp; disk usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552272#M840528</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I agree your disks have excessive queues.  If I could explain....&lt;BR /&gt;&lt;BR /&gt;CPU: roughly speaking, when it is busy you have 30% usr &amp;amp; 50% sys, so 80% CPU in total.  This means that your system is spending nearly twice as much time in system calls as in user code.&lt;BR /&gt;&lt;BR /&gt;Disks: you have excessive queues (avque) of 13-14..  The disks are not actually too busy at 25% or so and do ~30 IO/s.  Extrapolating to 100% busy you would expect ~120 IO/s, which is typical for 10,000 rpm disks with an average service time of 8ms (125 IO/s).  BUT your avserv is 30ms.  This should be more like 8ms!!!  &lt;BR /&gt;&lt;BR /&gt;So my suspicion is that your disks are part of the problem.. BUT your %wio is really quite low...  I then took a look at your top results.. two hth processes seem to be fighting it out, each running really quite hot at ~70%.  There are other processes running, but much quieter....  &lt;BR /&gt;&lt;BR /&gt;My suspicions are (and these are really just guesses):&lt;BR /&gt; 1 - The hth processes are fighting each other for CPU time, causing each other to context switch off &amp;amp; on.  This could be responsible for the high %sys values.&lt;BR /&gt; 2 - The hth processes are fighting each other for the disks at the SAME time.  Even though they only use the disks infrequently (~25% of the time), they do it simultaneously, thus causing excessive disk queues.&lt;BR /&gt; 3 - One or both of the disks c2t1d0 and c2t0d0 are broken/behaving poorly (I assume they are a mirrored pair with c2t1d0 as primary).  Though I would have expected to see higher %wio if this were the case.&lt;BR /&gt; 4 - The SCSI bus that c2t1d0 &amp;amp; c2t0d0 are on may be overloaded or behaving poorly.  Again, I would have expected to see high %wio if this were the case.&lt;BR /&gt;&lt;BR /&gt;Number 1 could be checked by trying to run only one hth.&lt;BR /&gt;Number 2 could be checked by ... knowing how hth works ...&lt;BR /&gt;Numbers 3 &amp;amp; 4 - you really need to look in syslog.log &amp;amp; use mstm to check them out.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim&lt;BR /&gt;</description>
      <pubDate>Sat, 28 May 2005 06:30:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cpu-amp-disk-usage/m-p/3552272#M840528</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2005-05-28T06:30:17Z</dc:date>
    </item>
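    <!--
      A minimal worked version (an editorial sketch, not from the thread) of the saturation
      arithmetic in the reply above: extrapolate the observed IO rate to 100% busy, and compare
      the throughput ceiling implied by the measured avserv with a healthy 8 ms service time.
      The input numbers are the rounded values from the sar output posted earlier; Python.

      observed_iops = 30        # r+w/s from sar -d
      busy_fraction = 0.25      # %busy of about 25
      avserv_ms = 30.0          # measured average service time
      healthy_serv_ms = 8.0     # typical for a 10,000 rpm disk

      iops_at_saturation = observed_iops / busy_fraction   # ~120 IO/s, as stated in the reply
      ceiling_healthy = 1000.0 / healthy_serv_ms           # 125 IO/s
      ceiling_measured = 1000.0 / avserv_ms                # ~33 IO/s with 30 ms avserv

      print(f"extrapolated rate at 100% busy : {iops_at_saturation:.0f} IO/s")
      print(f"ceiling at 8 ms service time   : {ceiling_healthy:.0f} IO/s")
      print(f"ceiling at measured avserv     : {ceiling_measured:.0f} IO/s")
    -->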
  </channel>
</rss>

