Operating System - Tru64 Unix
Tru64 vmstat output

SOLVED


Hi,

I am supporting the following system:
System Type: AlphaServer ES45 Model 2
Number of CPUs: 2
Cache: 8.0 MB (8 MB)
Memory size: 2048 MB

Following is the output from the command vmstat 5 5. Would anyone comment on the number of page faults on this system? If you see a problem, please let me know what it is and what to do about it.

TIA,

Regards
Kafsat

Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 352 33 216K 8308 30K 837M 79M 297M 1930 104M 0 140 1M 482 26 25 49
4 350 34 216K 8373 30K 1017 175 213 0 243 0 48 11K 568 2 2 96
4 350 33 216K 8274 30K 1080 84 152 0 116 0 63 5K 614 2 2 97
4 350 33 216K 8268 30K 295 14 27 0 16 0 31 8K 286 1 1 98
4 350 33 216K 8262 30K 1025 88 108 0 106 0 48 28K 599 3 4 93





5 REPLIES
Hein van den Heuvel
Honored Contributor

Re: Tru64 vmstat output

> Would anyone make a comment on the number of page faults in the system.

Sure. It's high!
Your free memory is too low.
Roughly 1800 MB active plus 250 MB wired... your 2 GB is gone.


Time for a 'top' or 'ps aux' and/or 'ipcs -ma'
to find out the big spenders.

Here is a little Perl script to help some:

# cat sort-by-size
# Sort input lines by the first size field that looks like "4.5M" or "240K".
while (<>) {
    if (/ ([0-9.]+)([MK])/) {
        $x = $1 * (($2 eq "M") ? 1000 : 1);  # normalize to kilobytes
        push @{$lines{$x}}, $_;              # keep all lines sharing a size
    }
}
foreach $x (sort { $a <=> $b } keys %lines) {
    print @{$lines{$x}};
}
# ps -AOrssize | perl sort-by-size


hth,
Hein.

Re: Tru64 vmstat output

Hi hein

I would like to ask you something related to vmstat. I have a machine with 14 GB RAM, and I always see about 4 GB of memory free, so why is vmstat showing me page faults?

procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
82 2K 95 1M 527K 143K 2G 419M 935M 1279 498M 0 4K 38K 33K 71 26 2
65 2K 103 1M 526K 143K 8537 839 5325 0 1928 0 6K 38K 48K 69 26 5
37 2K 100 1M 528K 143K 9662 831 5670 0 1799 0 5K 37K 48K 63 26 11
61 2K 99 1M 525K 143K 11879 924 7092 0 1924 0 5K 37K 46K 63 23 14
54 2K 104 1M 520K 143K 11063 921 7109 0 1849 0 6K 40K 48K 70 26 4


thanks in advance
Hein van den Heuvel
Honored Contributor

Re: Tru64 vmstat output

Page faulting is a mechanism, not a problem.
Don't be worried by the word 'fault'. It is not intended to indicate a problem; it is a feature!

Let's say you run an image. It then activates some code, changes local variables, mallocs a bunch of memory and writes to it. Did you ever wonder how that memory becomes 'visible' to the process?
It is all faulted in! The code is loaded into memory from the image through a 'pin' page-in (fault). The static data first comes in the same way, as long as the process only reads it. When the process writes/modifies it, it needs a private copy, which is created through 'cow' = copy-on-write. The malloc'ed memory comes 'out of nowhere': a fresh page of physical memory is allocated and zeroed out, counted under 'zero'. So the faults you see are simply the system at work! To reduce them, figure out whether you can keep processes running longer, doing more work after each activation. For example, a script that fires up an awk for each iteration can often be taught to issue a single awk to get all the desired data.
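
As a sketch of that last suggestion (the file name and data here are hypothetical), compare spawning an awk per line with a single awk pass:

```shell
#!/bin/sh
# Hypothetical input: one "name value" pair per line.
cat > /tmp/data.$$ <<'EOF'
disk1 10
disk2 20
disk3 30
EOF

# Costly pattern: a fresh awk activation (and its page faults) per line.
while read name value; do
    echo "$name $value" | awk '{ t += $2 } END { print t }'
done < /tmp/data.$$

# Cheaper: one awk activation handles every line.
awk '{ t += $2 } END { print "total " t }' /tmp/data.$$   # prints "total 60"

rm -f /tmp/data.$$
```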

hth,
Hein.

Re: Tru64 vmstat output

Hein,

Thank you for your response and the script. I understand the system is running low on memory, but most of the active memory is actually used by the UBC; the output from vmstat -P shows that. The system is running the Unidata database, and Unidata lets Unix handle the buffer cache.

I have looked in the manual; it did not say much about page faults, though it did cover paging. But there is no page-out activity, only page-in.

I have included output from vmstat -P and ps -AOrssize | perl sort-by-size

I look forward to your comments.

Regards
Kafsat

vmstat -P
Total Physical Memory = 2048.00 M
= 262144 pages

Physical Memory Clusters:

start_pfn end_pfn type size_pages / size_bytes
0 371 pal 371 / 2.90M
371 262135 os 261764 / 2045.03M
262135 262144 pal 9 / 72.00k

Physical Memory Use:

start_pfn end_pfn type size_pages / size_bytes
371 544 scavenge 173 / 1.35M
544 1246 text 702 / 5.48M
1246 1379 data 133 / 1.04M
1379 1596 bss 217 / 1.70M
1596 1771 kdebug 175 / 1.37M
1771 1777 cfgmgmt 6 / 48.00k
1777 1779 locks 2 / 16.00k
1779 1793 pmap 14 / 112.00k
1793 2385 unixtable 592 / 4.62M
2385 2433 logs 48 / 384.00k
2433 6556 vmtables 4123 / 32.21M
6556 262135 managed 255579 / 1996.71M
============================
Total Physical Memory Use: 261764 / 2045.03M

Managed Pages Break Down:

free pages = 3133
active pages = 33597
inactive pages = 19405
wired pages = 29642
ubc pages = 169974
==================
Total = 255751

WIRED Pages Break Down:

vm wired pages = 4397
ubc wired pages = 0
meta data pages = 7810
malloc pages = 12214
contig pages = 2664
user ptepages = 2282
kernel ptepages = 265
free ptepages = 10
==================
Total = 29642






ps -AOrssize | perl sort-by-size
2164 0K < ?? 0:00.00
1 96K IL ?? 0:06.40 init
527 112K S ?? 0:00.06 nfsiod
2195 120K I ?? 0:00.00 inetd
2332 128K I + console 0:00.01 getty
2103 168K S ?? 0:01.45 svrMgt_mib
2094 176K S ?? 1:02.44 snmpd
2150 184K S ?? 0:01.61 cpqthresh_mib
388895 192K S ?? 0:00.02 udsrvhelpd
2196 200K I ?? 0:18.78 inetd
2112 208K S ?? 0:04.37 os_mibs
355923 232K I + pts/46 0:00.04 sh
436 240K S ?? 0:24.85 syslogd
2226 272K I ?? 0:01.80 lpd
424 280K I ?? 0:01.94 unirpcd
108310 288K I + pts/9 0:00.04 support.menu.sh
416552 304K S ?? 0:01.85 telnetd
520 320K I ?? 0:00.01 mountd
134836 328K I ?? 0:02.23 smbd
5 336K S ?? 0:00.76 hotswapd
534 344K I ?? 0:00.68 rpc.lockd
133369 352K S ?? 0:04.02 nmbd
345199 376K S pts/12 0:00.02 ksh
214 392K I ?? 0:15.41 evmlogger
352867 432K S + pts/12 0:00.01 sort.by.size.per
524 448K S ?? 0:00.76 proplistd
375110 456K I ?? 0:00.02 dtexec
369331 480K I ?? 0:00.02 dtexec
2085 488K S ?? 0:19.23 sendmail
440 496K I ?? 0:02.42 binlogd
390272 504K S N ?? 0:00.03 dtscreen
363293 616K I ?? 0:00.02 smbd
353775 632K I ?? 0:00.01 smbd
344253 640K I ?? 0:00.02 smbd
357860 648K I ?? 0:00.02 smbd
123521 664K I ?? 0:04.39 ttsession
280874 672K I ?? 0:00.14 smbd
382921 696K S ?? 0:00.04 smbd
236915 816K I ?? 0:00.07 smbd
3 832K I ?? 0:00.03 kloadsrv
323291 880K I ?? 0:01.66 smbd
296981 888K I ?? 0:02.20 smbd
409794 904K I ?? 0:03.17 rpc.ttdbserverd
351632 936K I ?? 0:00.24 smbd
369740 944K I ?? 0:00.06 dxconsole
2246 952K S ?? 0:11.62 sysman_hmmod
2185 960K S ?? 0:11.81 config_hmmod
2118 976K S ?? 0:02.04 cpq_mibs
293346 992K S ?? 0:01.43 smbd
355080 1.2M I ?? 0:02.54 smbd
323897 1.3M I ?? 0:00.11 dtterm
340213 1.4M S ?? 0:00.20 smbd
2182 1.5M S < ?? 0:01.75 advfsd
396539 1.8M R + pts/12 0:00.03 ps
340244 2.1M S ?? 0:02.08 dtwm
406 2.4M S < ?? 0:15.04 sbcs
300405 2.5M I + pts/20 0:00.62 udt
2328 2.6M S ?? 0:16.59 Xdec
348696 2.8M I + pts/88 0:00.66 udt
342689 3.4M I + pts/23 0:00.51 udt
203698 3.5M I ?? 0:00.13 udsrvd
366119 3.6M I ?? 0:00.19 udsrvd
331573 3.7M I + pts/59 0:01.09 udt
391710 3.8M I + pts/4 0:00.48 udt
329123 4.1M S + pts/71 0:03.94 udt
381447 4.2M I + pts/13 0:01.63 udt
336916 4.4M S + pts/38 0:04.97 udt
324813 4.5M I + pts/70 0:02.20 udt
358426 4.6M I + pts/46 0:03.49 udt
290621 4.7M I + pts/87 1:44.96 udt
334912 4.8M S + pts/68 0:02.18 udt
360491 4.9M I + pts/45 0:04.27 udt
268598 5.0M I + pts/78 0:02.28 udt
382710 5.1M I + pts/85 0:15.44 udt
377029 5.2M I + pts/79 0:06.39 udt
373773 5.3M I + pts/77 0:15.55 udt
347198 5.4M I + pts/63 0:08.76 udt
316104 5.5M I + pts/55 0:28.06 udt
375483 5.6M S + pts/84 0:20.20 udt
360945 5.7M I + pts/58 3:05.91 udt
342351 5.8M S + pts/67 0:18.09 udt
24333 5.9M I + pts/66 0:17.74 udt
245076 6.0M S + pts/74 0:30.81 udt
349785 6.1M S + pts/65 0:41.28 udt
384925 6.2M S + pts/32 0:38.52 udt
300990 6.5M S + pts/15 1:13.51 udt
251509 6.7M S + pts/30 1:33.31 udt
362586 6.8M I + pts/57 0:57.83 udt
274823 6.9M I + pts/31 1:30.28 udt
317233 7.0M I + pts/61 0:13.46 udt
217446 7.1M S + pts/21 0:34.39 udt
356367 7.6M S + pts/51 0:42.42 udt
250900 7.7M S + pts/36 0:30.82 udt
349588 7.8M I + pts/7 0:12.83 udt
286745 8.0M I + pts/34 1:52.66 udt
308456 9.8M I + pts/24 0:21.39 udt
2331 13M S ?? 0:18.17 smsd

Hein van den Heuvel
Honored Contributor
Solution

Re: Tru64 vmstat output


You seem to be on the right track quantifying and controlling this problem.
Indeed, the processes seem to take just 250 - 300 MB, but there is about 1400 MB in the UBC. With the page fault load you are seeing, I would have expected the UBC to start adjusting itself downwards... unless it was told it could use all that? What was it told?
Try 'sysconfig -q vm | grep ubc'. Specifically, what are min and max set to? The typical 10 / 100? Perhaps you want to try with ubc_maxpercent set to 50%?
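
A sketch of that check and tuning (Tru64 sysconfig syntax; run as root, and treat 50 as a starting point to experiment with, not a recommendation):

```shell
# Query the current UBC tuning of the vm subsystem.
sysconfig -q vm ubc_minpercent ubc_maxpercent ubc_borrowpercent

# Lower the ceiling at run time (takes effect immediately):
sysconfig -r vm ubc_maxpercent=50

# To persist across reboots, add a vm stanza to /etc/sysconfigtab:
#   vm:
#       ubc_maxpercent = 50
```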

Hein.