Operating System - Tru64 Unix


average load versus vmstat output

Hi,

how should the "load average" be interpreted for SMP Alpha servers (in my case an ES47 with 4 processors)?

Why, with a load average of ~8, do I still have ~20% idle?
Here is the output from the uptime and vmstat commands:

sza1:/# vmstat -w 5
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow pin pout in sy cs us sy id iowait
16 1K 181 718K 121K 173K 3G 114M 157M 3M 2K 29K 11K 12 10 77 1
13 1K 181 718K 121K 173K 9447 166 388 0 2K 34K 11K 77 6 17 0
17 1K 181 718K 121K 173K 18012 100 303 0 2K 31K 14K 64 8 29 0
17 1K 181 718K 121K 173K 18158 99 374 0 1K 47K 10K 74 6 20 0

sza1:/# w
16:40 up 16 days, 18 mins, 3 users, load average: 8.06, 8.27, 8.17

best regards,
Michal


2 REPLIES
Hein van den Heuvel
Honored Contributor

Re: average load versus vmstat output


You probably do not need to worry about this.
It can be an artifact of the specific application/scheduler interaction.

There could be some soft or hard processor binding happening: if you explicitly tell the system (runon) to execute a particular task (and its children) on a given CPU, then obviously you can have a high run queue on that CPU while the others are idle.
This can also happen behind your back, as the scheduler tries to avoid moving processes between CPUs, notably on a NUMA-ish system like the ES47.
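
For illustration, explicit binding looks roughly like the sketch below (check runon(1) for the exact syntax on your patch level; the job path and CPU number are just placeholders):

# Hypothetical example: pin a long-running job (and its children) to
# processor 2 of the ES47; the run queue on that CPU can then climb
# while the other three processors stay largely idle.
runon 2 /usr/local/bin/myjob &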

Some HP-UX notes suggest that processes in short waits (disk IO) are also counted as runnable (yet idle). I have not read that for Tru64, but maybe it does the same. Do you have significant disk IO?
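
A quick way to check (intervals and counts here are arbitrary) is to watch the iowait column in vmstat and the per-disk activity from iostat for a while:

# Ten 5-second samples each; a consistently non-zero iowait column or
# persistently busy disks would point at processes sitting in short IO waits.
vmstat 5 10
iostat 5 10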

Googling for: "load average" +idle +tru64 (+site:hp.com) gives some interesting articles.

hth,
Hein.


Ivan Ferreira
Honored Contributor

Re: average load versus vmstat output

There is no exact relation between the vmstat and uptime output. For example, on my system:

uptime
13:23 up 38 days, 4:55, 14 users, load average: 18.48, 22.81, 24.95

vmstat -w 5
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow pin pout in sy cs us sy id iowait
45 1K 189 1M 263K 289K 8G 1G 1G 21 6K 176K 63K 36 25 38 1
19 1K 223 1M 258K 289K 42613 6048 4563 0 9K 155K 54K 38 38 23 0
22 1K 219 1M 259K 289K 21078 2788 2187 0 6K 156K 85K 14 24 62 0
36 1K 187 1M 261K 289K 47061 5699 5161 0 8K 185K 71K 26 39 35 0
23 1K 189 1M 262K 289K 23360 3083 2739 0 5K 182K 60K 41 27 32 0
39 1K 189 1M 262K 289K 27560 3641 2721 0 5K 161K 89K 17 27 56 0
34 1K 188 1M 261K 289K 37163 5121 3991 0 5K 172K 79K 30 35 35 0
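
As a rough sanity check on numbers like these, you can divide the 1-minute load average by the number of processors; a value around 1 per CPU or below is usually nothing to worry about. A minimal sketch, assuming a POSIX awk and hard-coding the CPU count as a placeholder:

# Print the 1-minute load average per CPU (set NCPUS for your box).
NCPUS=4
uptime | awk -v n=$NCPUS '{ gsub(",", "", $(NF-2)); printf "1-min load per CPU: %.2f\n", $(NF-2)/n }'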

It all depends on the kind of work your system is doing and the kind of processes that are running.

You should record the load average as a baseline when your system is performing well, and compare it with the value you see when performance is poor. Then use vmstat to get an indication of where the problem could be.
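
A minimal sketch of such a baseline (the log path and the 5-minute interval are arbitrary choices):

# Append a timestamped uptime and vmstat snapshot every 5 minutes;
# later, compare the log from a "good" period against a "slow" one.
while true
do
    date >> /var/tmp/load_baseline.log
    uptime >> /var/tmp/load_baseline.log
    vmstat 5 3 >> /var/tmp/load_baseline.log
    sleep 300
done &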

If your system is working well, you don't have to worry about it.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?