
sar -M

 
Rushank
Super Advisor

sar -M

N-Class : HP-UX 11 : 8 CPUs : Oracle8i database

Running sar -Mu 5 5 gives the output below. I'm worried about the %wio value, which looks too high. Do I need to change any kernel parameters to correct it?
12:13:38     cpu  %usr  %sys  %wio  %idle
12:13:43       0     9     3    84      4
               1     5     4    71     19
               2     4     2    23     70
               3     4     4    52     40
               4     1     1    40     58
               5     2     4    63     31
               6    43     2    37     18
               7     2     2    19     77
          system     9     3    49     40
12:14:28       0    11     3    37     49
               1     7     4    84      5
               2    12     5    50     33
               3    19     4    48     29
               4    13     2    20     66
               5    15     5    36     44
               6    11     5    12     73
               7     2     2    20     76
          system    11     4    38     47

Average        0     8     3    46     44
Average        1     7     5    50     38
Average        2     8     4    27     61
Average        3     7     4    45     44
Average        4    10     2    52     36
Average        5     9     4    39     48
Average        6    20     4    43     33
Average        7     4     3    46     47
Average   system     9     3    43     44
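For readers chasing a similar problem: once output like the above is saved to a file, the per-CPU %wio column can be scanned mechanically. A minimal awk sketch — the filename sar.out and the 50% threshold are illustrative, and it assumes the column layout posted above:

```shell
# Flag sar -Mu samples whose %wio exceeds a threshold (here 50).
# Data lines have either 6 fields (leading timestamp) or 5 fields
# (continuation lines for the remaining CPUs).
awk -v limit=50 '
  NF == 6 { cpu = $2; wio = $5 }   # line with a leading timestamp
  NF == 5 { cpu = $1; wio = $4 }   # continuation line, no timestamp
  (NF == 5 || NF == 6) && wio + 0 > limit {
      printf "cpu %s: %%wio=%s\n", cpu, wio
  }
' sar.out
```

The header line passes through harmlessly because its %wio field is non-numeric and compares as 0.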
9 REPLIES
Sandip Ghosh
Honored Contributor

Re: sar -M

what is the output of swapinfo -tm?
What is the value of dbc_max_pct on the Server?

Please post the above data. Also post the sar -d 5 10.

Sandip
Good Luck!!!
Sanjay_6
Honored Contributor

Re: sar -M

Rushank
Super Advisor

Re: sar -M

dbc_max_pct is 8 and dbc_min_pct is 5.
swapinfo -tam

             Mb       Mb       Mb   PCT  START/       Mb
TYPE      AVAIL     USED     FREE  USED   LIMIT  RESERVE  PRI  NAME
dev        1024        0     1024    0%       0        -    1  /dev/vg00/lvol2
dev        8676        0     8676    0%       0        -    1  /dev/vg07/lvol2
reserve       -     2796    -2796
memory     6343     1057     5286   17%
total     16043     3853    12190   24%       -        0    -

Is having such a big value in %wio good or bad?
Sandip Ghosh
Honored Contributor

Re: sar -M

It's not good at all. Try to find out whether some disk is too busy.

Your swapinfo shows that you do not have any swapping.
dbc_max_pct is also okay.
Either the server is writing something to a tape drive, or some of your disks are too busy. If a disk is too busy, distribute the file system on that disk across other disks.
Also look at the condition of the file tables through glance: pressing "t" shows the utilisation of the tables. Look at that and increase the kernel parameters as required.

Sandip
Good Luck!!!
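To act on that advice, the busiest disks can be ranked once sar -d output is captured. A rough sketch — sar_d.out is a hypothetical capture file, and it assumes the usual sar -d layout where the device name is the first field on continuation lines, or the second on lines carrying a timestamp:

```shell
# Rank devices by %busy from captured sar -d output (highest first).
# Device names look like c0t6d0; %busy is the field right after the
# device name.
awk '
  $1 ~ /^c[0-9]/ { print $2, $1 }   # continuation line: device first
  $2 ~ /^c[0-9]/ { print $3, $2 }   # timestamped line: device second
' sar_d.out | sort -rn | head -5
```

Whatever floats to the top of that list is where to start spreading the I/O load.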
Jon Finley
Honored Contributor

Re: sar -M

This can also be caused if one of your disks is going bad and is queueing writes.

Are you getting any SCSI-related errors in syslog.log?


Jon
"Do or do not. There is no try!" - Yoda
Rushank
Super Advisor

Re: sar -M

I increased the nfile parameter to 40000 a couple of weeks back, since I was getting "file table full" errors. It looks OK now.
But I have some disks showing bottlenecks, and they are above 98% full. As per our Oracle DBA, these file systems are not going to grow, since the space is already allocated within Oracle. Most of these file systems are not spread across different disks; each disk is configured with a single file system and mirrored. There are no other errors in the syslog file.

I hope that's clear. Here is the glance output for the system tables:

System Table                Available      Used  Utilization  High(%)
--------------------------------------------------------------------
Proc Table (nproc)               5000       603           12       12
File Table (nfile)              40010     17745           44       44
Shared Mem Table (shmmni)         128        11            9        9
Message Table (msgmni)            128         2            2        2
Semaphore Table (semmni)          128        27           21       21
File Locks (nflocks)              900       390           43       43
Pseudo Terminals (npty)           512         6            1        1
Buffer Headers (nbuf)              na    131544           na       na
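As a sanity check on tables like this, the Utilization column is just Used divided by Available. A small awk sketch — glance_tables.out is a hypothetical saved copy of the table, and rows whose Available column is "na" are skipped:

```shell
# Recompute Utilization = 100 * Used / Available for each table row.
# Counting from the end, the last four fields are Available, Used,
# Utilization, and High(%); non-numeric Available (e.g. nbuf) is skipped.
awk '
  NF >= 5 && $(NF-3) ~ /^[0-9]+$/ {
      pct = 100 * $(NF-2) / $(NF-3)
      printf "%s %.0f%%\n", $(NF-4), pct   # $(NF-4) is the (param) token
  }
' glance_tables.out
```

Fields are counted from the end because the table names ("Proc Table", "Shared Mem Table") vary in word count.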
S.K. Chan
Honored Contributor

Re: sar -M

Did you run sar over a period of time, so you can determine whether the high %wio happens only during a specific window, for example during the day between 9am and 2pm? When disk utilization is high, I/O wait is bound to increase; the question is how much is too much. I've seen servers that run up to 30% on that value and yet don't seem to slow things down (no user complaints). Do you get any feedback from the users on that? If there isn't any hardware issue, it's just a matter of tweaking your I/O load balance, and that is a different area to tackle altogether. Start by pinpointing which FS/disk/LV are the heavy ones:
# glance -i
==> i = IO by FS, u = IO by Disk, v = IO by LV
Collect sar data over time to analyze the trend; with that you'll hopefully have a better picture.
Sandip Ghosh
Honored Contributor

Re: sar -M

The condition of your file tables looks okay.

Are you getting 70% %wio constantly, or only during a peak period? If it is most of the time, try to stripe those file systems across more disks. That way you can reduce some of the load on the disk drives.

Sandip
Good Luck!!!
Rushank
Super Advisor

Re: sar -M

This is the output of sar -M for the entire day, from 8:00 AM to 6:00 PM:
06/06/02

08:00:00 %usr %sys %wio %idle
08:20:00 12 5 44 39
08:40:00 14 4 49 33
09:00:00 10 4 37 49
09:20:01 10 4 43 43
09:40:00 8 4 42 46
10:00:00 15 4 49 32
10:20:00 17 6 47 30
10:40:00 10 4 35 52
11:00:00 11 5 40 44
11:20:00 12 4 38 46
11:40:01 13 4 46 37
12:00:00 8 4 47 41
12:20:00 9 3 41 46
12:40:00 10 2 21 67
13:00:00 9 3 28 60
13:20:00 10 3 22 65
13:40:01 8 3 33 56
14:00:00 9 3 27 61
14:20:00 13 3 24 61
14:40:00 15 4 40 41
15:00:00 23 6 46 25
15:20:01 26 7 51 16
15:40:00 16 6 55 23
16:00:00 21 4 42 33
16:20:00 15 4 32 48
16:40:00 20 4 35 41
17:00:01 6 2 20 72
17:20:00 7 3 32 58
17:40:00 6 3 26 65
18:00:00 6 2 17 75
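To reduce a day of samples like these to one number, the %wio column can be averaged from the saved output. A sketch — sar_u.out is a hypothetical file holding lines like the above; the header row is skipped because its fourth field is not numeric:

```shell
# Average the %wio column (4th field) across all sample lines.
# The header is skipped automatically: its 4th field is the literal
# string "%wio", which fails the numeric test.
awk '
  NF == 5 && $4 ~ /^[0-9]+$/ { sum += $4; n++ }
  END { if (n) printf "average %%wio over %d samples: %.1f\n", n, sum / n }
' sar_u.out
```

Running the same one-liner over several days of captures is a cheap way to see whether the afternoon peak (15:00-16:00 above) is a daily pattern.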