
cpu & disk usage

 
SOLVED
Jaieun Chu
Advisor

cpu & disk usage

I see high CPU usage when the webserver is at peak time.
webserver: rx4640, 1.3 GHz, 2 CPUs, HP-UX 11.23
two disks, mirrored with Mirror/UX
cpu %sys >= 50 and the disks are busy with I/O, but vmstat
shows no pi or po.
I attach my cpu, disk, and vmstat monitoring results!!
I wonder whether this is really a CPU problem
or a disk I/O problem!!

system engineer
5 REPLIES
Tim D Fulford
Honored Contributor

Re: cpu & disk usage

Hi

You have not posted your stats.

1 - Are you saying %sys is > 50%
OR
are you saying total cpu (%usr+%sys) > 50%

I'm guessing the total CPU (%sys+%usr) is greater than 50% and your system has a spinning process, which will take up one whole CPU (= 50% on a 2-CPU box). The output of top would be sufficient to show this.

If, however, %sys > 50%, this is more concerning, as it means you are doing LOADS of system calls; this may mean something more subtle is happening on your system.
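Tim's two cases can be told apart mechanically. Below is a small sketch (not from the thread; the sample lines and the field positions are assumptions mimicking `sar -u` output: time, %usr, %sys, %wio, %idle) that flags which condition each interval is in:

```shell
# Classify sar -u style intervals. Fields assumed: time %usr %sys %wio %idle
# (check your own sar output first; positions may differ).
awk '{
    usr = $2; sys = $3
    if (sys > 50)            verdict = "sys-heavy (concerning)"
    else if (usr + sys > 50) verdict = "busy (look for a spinning process)"
    else                     verdict = "ok"
    print $1, verdict
}' <<'EOF'
23:12:03 29 51 1 19
23:12:23 30 54 1 15
23:12:43 20 15 1 64
EOF
```

The first two sample intervals come out sys-heavy, the third ok.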

Regards

Tim
-
Jaieun Chu
Advisor

Re: cpu & disk usage

The monitoring file attachment failed, so please see the following!

23:11:43 cpu %usr %sys %wio %idle
23:12:03 0 29 50 2 18
1 29 52 1 18
system 29 51 1 18
23:12:23 0 31 54 1 15
1 30 54 1 16
system 30 54 1 15
23:12:43 0 30 57 0 13
1 29 53 1 17
system 30 55 0 15
23:13:03 0 30 56 1 13
1 30 55 0 15
system 30 55 1 14
23:13:23 0 28 57 0 14
1 28 55 0 17
system 28 56 0 15
23:13:43 0 28 57 1 14
1 31 55 0 13
system 30 56 1 13

23:11:43 device %busy avque r+w/s blks/s avwait avserv
23:12:03 c2t1d0 25.61 13.45 31 459 29.73 29.17
c2t0d0 24.36 14.42 28 446 36.87 29.50
23:12:23 c2t1d0 16.90 17.39 46 681 57.61 31.38
c2t0d0 15.05 18.46 42 658 63.13 30.60
23:12:43 c2t1d0 9.20 1.81 21 306 11.22 16.48
c2t0d0 7.50 2.63 18 298 17.21 17.35
23:13:03 c2t1d0 13.65 8.74 35 507 17.56 16.63
c2t0d0 11.65 9.42 31 492 19.41 16.21
23:13:23 c2t1d0 7.70 0.71 18 262 1.38 16.73
c2t0d0 6.15 0.62 15 251 0.66 14.36
23:13:43 c2t1d0 10.75 11.75 31 468 22.91 15.78
c2t0d0 9.35 12.97 28 456 27.03 15.43

------------------------------------

procs memory page faults cpu
r b w avm free re at pi po fr de sr in sy cs us sy id
4 0 0 276784 184850 62 15 0 0 0 0 0 2789 5573 714 10 15 76
3 0 0 284020 183390 118 39 0 0 0 0 0 5981 99045 1230 29 51 20
2 0 0 267582 183081 76 12 0 0 0 0 0 6415 13894 1464 30 54 16
3 0 0 279049 183360 44 4 0 0 0 0 0 7334 11614 1411 30 55 15
3 0 0 277106 183230 209 24 0 0 0 0 0 7245 18590 3115 30 55 15
4 0 0 277636 183654 17 0 0 0 0 0 0 6235 10209 1182 28 56 16
4 0 0 279565 183639 61 8 0 0 0 0 0 6196 11353 1197 30 56 14
3 0 0 282254 183168 180 25 0 0 0 0 0 7949 18700 4277 31 57 12
6 0 0 278566 183652 39 5 0 0 0 0 0 6759 10442 1192 29 53 18
2 0 0 275299 184508 58 13 0 0 0 0 0 5373 10629 1081 27 51 22
2 0 0 282760 183832 191 43 0 0 0 0 0 7226 17787 4081 29 51 20
2 0 0 264394 184507 23 8 0 0 0 0 0 5944 10083 1206 28 50 22
3 0 0 276668 184504 48 21 0 0 0 0 0 5548 9960 1090 27 49 24
4 0 0 277510 183760 206 36 0 0 0 0 0 5909 16690 1923 27 47 26
4 0 0 275008 184624 10 2 0 0 0 0 0 6317 9390 1136 28 49 22
4 0 0 275394 184590 70 29 0 0 0 0 0 6181 11515 1252 28 52 20
3 0 0 278067 184050 187 43 0 0 0 0 0 7205 17160 3688 27 52 21
3 0 0 274890 184621 22 7 0 0 0 0 0 5606 9212 1111 28 51 21
3 0 0 280553 184621 25 11 0 0 0 0 0 5121 9027 1037 27 51 22
3 0 0 280723 184491 169 35 0 0 0 0 0 5858 15723 2336 27 53 20
procs memory page faults cpu
r b w avm free re at pi po fr de sr in sy cs us sy id
4 0 0 278506 184507 17 6 0 0 0 0 0 5705 9654 1171 29 53 18
3 0 0 274828 184491 12 6 0 0 0 0 0 5249 8842 1043 26 52 22
3 0 0 277771 183522 195 38 0 0 0 0 0 5625 15916 2769 27 49 24
3 0 0 276054 184498 20 7 0 0 0 0 0 5604 9753 1156 25 51 23
4 0 0 271095 184033 45 18 0 0 0 0 0 7589 11531 1208 28 56 15
4 0 0 282485 184492 174 34 0 0 0 0 0 6702 16189 3309 28 54 18
2 0 0 264410 184491 41 12 0 0 0 0 0 6323 9992 1168 27 51 22
2 1 0 278308 184507 8 2 0 0 0 0 0 4970 8385 1020 27 49 24
2 1 0 279441 183896 181 41 0 0 0 0 0 6184 15905 3425 25 48 27
2 0 0 271617 184498 33 15 0 0 0 0 0 6013 10207 1202 27 50 24
4 0 0 281079 184498 15 5 0 0 0 0 0 5572 9189 1252 25 49 26
2 0 0 282098 183504 190 32 0 0 0 0 0 5820 15547 2746 25 47 28
4 0 0 278415 184439 46 21 0 0 0 0 0 5343 9700 1074 26 49 25
3 0 0 272389 184423 43 26 0 0 0 0 0 5474 10614 1122 26 49 25
1 0 0 283077 183833 165 31 0 0 0 0 0 5700 15205 2875 26 50 25
3 0 0 279893 184424 81 29 0 0 0 0 0 5599 11168 1189 24 47 29
3 0 0 272857 184423 22 9 0 0 0 0 0 5545 9133 1115 27 45 28
3 0 0 281344 183838 188 41 0 0 0 0 0 7172 18130 4148 28 49 23
2 0 0 271995 184401 62 29 0 0 0 0 0 6413 10728 1186 30 52 18
3 0 0 269781 184423 26 12 0 0 0 0 0 5457 9688 1092 30 51 19

system engineer
Jaieun Chu
Advisor

Re: cpu & disk usage

This is the top monitoring result!
System: ebsweb05 Sun May 22 23:21:43 2005
Load averages: 0.91, 0.95, 0.98
205 processes: 181 sleeping, 23 running, 1 zombie
Cpu states:
CPU LOAD USER NICE SYS IDLE BLOCK SWAIT INTR SSYS
0 0.92 28.0% 0.0% 52.3% 19.7% 0.0% 0.0% 0.0% 0.0%
1 0.91 27.8% 0.0% 51.6% 20.6% 0.0% 0.0% 0.0% 0.0%
--- ---- ----- ----- ----- ----- ----- ----- ----- -----
avg 0.91 27.9% 0.0% 52.0% 20.2% 0.0% 0.0% 0.0% 0.0%

Memory: 479436K (381024K) real, 1274588K (1112756K) virtual, 736128K free Page# 1/13

CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
1 ? 14645 webtob 241 20 57712K 44504K run 752:10 70.57 70.45 hth
0 ? 14646 webtob 154 20 49792K 36584K sleep 746:09 69.52 69.40 hth
1 ? 14639 webtob 154 20 16952K 1152K sleep 16:52 1.34 1.34 htl
0 ? 13941 www 152 20 32288K 6624K run 0:03 0.54 0.54 httpd
0 ? 18429 www 152 20 32160K 6608K run 0:03 0.53 0.53 httpd
0 ? 24502 www 152 20 32032K 6496K run 0:00 0.52 0.52 httpd
0 ? 14638 webtob 154 20 17376K 4420K sleep 6:34 0.49 0.49 wsm
1 ? 51 root 152 20 3024K 2688K run 50:53 0.41 0.41 vxfsd
0 ? 21448 webtob 168 20 17876K 4584K sleep 0:03 0.25 0.25 htmls
0 ? 21454 webtob 154 20 17812K 4568K sleep 0:03 0.24 0.24 htmls
1 ? 21453 webtob 154 20 17812K 4568K sleep 0:03 0.24 0.24 htmls
1 ? 14652 webtob 154 20 17876K 4616K sleep 4:34 0.23 0.23 htmls
1 ? 14659 webtob 154 20 17876K 4616K sleep 4:37 0.23 0.23 htmls
0 ? 14660 webtob 154 20 17876K 4616K sleep 4:33 0.23 0.23 htmls
1 ? 14649 webtob 154 20 17876K 4616K sleep 4:37 0.23 0.23 htmls
1 ? 21451 webtob 154 20 17812K 4568K sleep 0:03 0.23 0.23 htmls
system engineer
Hoang Chi Cong_1
Honored Contributor

Re: cpu & disk usage

From your output I think the CPU is normal.
It seems that the disk I/O usage is high...
You can check with GlancePlus to make sure!

Regards,
Hoang Chi Cong
Looking for a special chance.......
Tim D Fulford
Honored Contributor
Solution

Re: cpu & disk usage

Hi

I agree your disks have excessive queues. If I could explain....

CPU: roughly speaking, when it is busy you have 30% usr & 50% sys, so 80% total CPU. This means your system is spending nearly twice as much CPU in system calls as in user code.

Disks: you have excessive queues (avque) of 13-14. The disks are not actually too busy at 25% or so and do ~30 IO/s, implying that at 100% you would expect ~120 IO/s. That is typical for 10,000 rpm disks with an average service time of 8 ms (125 IO/s). BUT your avserv is 30 ms. It should be more like 8 ms!
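The arithmetic in that paragraph can be checked directly. A quick sketch (the 25% busy and 30 IO/s figures are read off the sar -d output above; the rest is simple scaling):

```shell
# Back-of-envelope disk math from the sar -d figures above.
busy=25      # %busy observed
iops=30      # r+w/s observed
# Scale the observed rate up to 100% busy:
awk -v b="$busy" -v i="$iops" 'BEGIN { printf "at 100%% busy: ~%.0f IO/s\n", i * 100 / b }'
# A disk with an 8 ms average service time tops out around:
awk 'BEGIN { printf "8 ms avserv: ~%.0f IO/s\n", 1000 / 8 }'
# ...whereas a 30 ms avserv would cap it near:
awk 'BEGIN { printf "30 ms avserv: ~%.0f IO/s\n", 1000 / 30 }'
```

So the extrapolated 120 IO/s matches a healthy 8 ms disk (~125 IO/s), while the observed 30 ms avserv would only support ~33 IO/s, which is why the queues build up.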

So my suspicion is that your disks are part of the problem... BUT your %wio is really quite low... I then took a look at your top results: two hth processes seem to be fighting it out, running really quite hot at 70% each. There are other processes running, but they are much quieter....

My suspicions are (and these are really just guesses):
1 - The hth processes are fighting each other for CPU time, causing each other to context-switch off & on. This could be responsible for the high %sys values.
2 - The hth processes are fighting each other for the disks at the SAME time. Even though they only use the disks infrequently (~25% of the time), they do it simultaneously, thus causing excessive disk queues.
3 - One or both of the disks c2t1d0 and c2t0d0 are broken/behaving poorly (I assume they are a mirrored pair with c2t1d0 as primary). Though I would have expected to see higher %wio if this were the case.
4 - The SCSI bus that c2t1d0 & c2t0d0 share may be overloaded or behaving poorly. Again, I would have expected to see high %wio if this were the case.

Number 1 could be checked by trying to run only one hth.
Number 2 could be checked by ... knowing how hth works ...
For numbers 3 & 4 you really need to look in syslog.log and use mstm to check the disks out.
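For points 3 & 4, one way to scan the log is sketched below. The log path is the HP-UX 11.x default and the here-doc lines are invented sample entries, not from the poster's system; on the real box you would grep /var/adm/syslog/syslog.log directly (mstm itself is interactive, so it is not shown).

```shell
# Hedged sketch: scan for disk/SCSI trouble. On a real HP-UX 11.x box:
#   grep -iE 'scsi|disk|i/o error' /var/adm/syslog/syslog.log
# The here-doc below is invented sample data so the filter can be demonstrated:
grep -iE 'scsi|disk|i/o error' <<'EOF'
vmunix: SCSI: Reset detected -- lbolt: 12345
vmunix: printer interface warning
vmunix: disk at 0/1/1/0.1.0: I/O error detected
EOF
```

Only the first and third sample lines match; repeated resets or I/O errors against c2t0d0/c2t1d0 would point at suspicion 3 or 4.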

Regards

Tim
-