Operating System - HP-UX

Re: script performance with gzip, wait and background commands

 
Michael Resnick
Advisor


Glance shows

/dev/vg00/lvol2 device,4.0gb avail, 0mb used
pseudo-swap: memory, 9.2gb avail, 6.5gb used

I wrote a little program to display a few things, including dyn_buf.psd_free from pstat_getdynamic, and I have been capturing that info regularly.

On 12/30 it showed 2624 MB free, and the number has slowly decreased each day; today it shows 1569 MB. (We've had little activity over the last two weeks, as the plant is shut down for year-end.)

How can I see who's taking up the memory or if there's a leak?
Steven Schweda
Honored Contributor
Solution


> How can I see who's taking up the memory or if there's a leak?

Unbounded growth in the virtual memory (with no good excuse) would suggest a leak.

A command like:

   UNIX95=1 ps -e -o 'pid sz vsz args'

might reveal who's eating the memory. With a bit of effort, its output could be piped through an appropriate "sort" command, to make it easier to find the culprit(s).
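With the sort added, the pipeline might look like this (a sketch only; the -k3 sort key assumes the 'pid sz vsz args' column order above, and UNIX95 enables the XPG4 behavior of ps on HP-UX):

```shell
# List every process with pid, size, virtual size, and command line,
# sorted so the largest vsz (column 3) comes first; show the top 20.
UNIX95=1 ps -e -o 'pid sz vsz args' | sort -rn -k3 | head -20
```

Whatever sits at the top of that list on each sample is the first place to look for unbounded growth.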
Dennis Handly
Acclaimed Contributor


>ME: This may be HP's fault here.

Yes. PHCO_27007 fixes it.

>How can I see who's taking up the memory or if there's a leak?

I'm not sure psd_free will give what you want. You could be using that memory for the buffer cache or whatever the kernel needs. It seems that psd_avm would be better.

What you really need to do is look at the size for individual processes and see which is leaking, using ps(1) as mentioned by Steven.

Or you can write another program to look at pstat_getproc and these fields:
pst_vtsize; # virtual pages used for text
pst_vdsize; # virtual pages used for data
pst_vssize; # virtual pages used for stack
pst_vshmsize; # virtual pages used for shared memory
pst_vmmsize; # virtual pages used for mem-mapped files
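Note that these pstat counters are in pages, not bytes. Assuming the 4 KB HP-UX base page size, converting one of them is simple arithmetic (the 25600 page count below is just a made-up sample, not from a real process):

```shell
# Convert a page count (e.g. a pst_vdsize value) to megabytes,
# assuming the HP-UX base page size of 4 KB.
pages=25600                        # sample value for illustration
echo "$(( pages * 4 / 1024 )) MB"  # prints "100 MB"
```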
Fredrik.eriksson
Valued Contributor


Now, I haven't read the complete thread, but it seems like you could do this a lot more easily and efficiently.
A script like this might do the trick?

#!/bin/bash
num_cpu=12
incl_list="file_with_paths_whitespace_seperated.txt"
output_prefix="file"
binary="/usr/bin/gzip"
options="-c"

x=0
for i in $(cat "$incl_list"); do
    status=1
    output_file="${output_prefix}_${x}.gz"
    while [ $status -ne 0 ]; do
        # The [g] in the pattern keeps grep from counting itself.
        if [ $(ps aux | grep -c '[g]zip') -le $num_cpu ]; then
            # gzip -c writes to stdout, so redirect it into the output file.
            $binary $options "$i" > "$output_file"
            status=0
        else
            sleep 10
        fi
    done
    let x=x+1
done

This just makes sure that there are never more than $num_cpu concurrent gzips running.

Hope it gives you some clues :)
Best regards
Fredrik Eriksson
Fredrik.eriksson
Valued Contributor


Oh, sorry... I forgot to add the background ampersand :)
Michael Resnick
Advisor


Hi all -

Sorry for not getting back sooner, but I wanted to make sure things were running smoothly. Thanks for all of your answers.

Turns out it was a memory issue, but not a memory leak. We've got a lot of things running on the machine, and unknown to us, a couple of the database instances' SGA sizes had been increased.

We changed the dbc_max_pct kernel parameter from 50 (the default) to 40, and available memory now stays stable and is enough for our processing. We've been running for almost a month without any issues.

Thanks again,

mike
Bill Hassell
Honored Contributor


> dbc_max_pct kernel parm from 50 (default) to 40

Unless you have a very small amount of RAM (2 GB or less), 40% is far too large. Set the value of dbc_max_pct to use about 2-3 GB of RAM, perhaps 4-6 GB for 11.23 and later. An extremely large DBC is not an efficient use of RAM, especially prior to 11.23. Large SGAs that are set up efficiently will provide a bigger performance improvement than a large DBC. And if Oracle is running on raw partitions rather than files, a large DBC is wasted space for Oracle.
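As a rough sizing check (the 16 GB RAM figure below is only an assumed example, not from this thread), a given dbc_max_pct translates to an absolute cap like this:

```shell
# How much RAM a given dbc_max_pct lets the buffer cache claim.
# The 16384 MB (16 GB) of RAM is an assumed example value.
ram_mb=16384
dbc_max_pct=40
echo "max buffer cache: $(( ram_mb * dbc_max_pct / 100 )) MB"  # prints "max buffer cache: 6553 MB"
```

At 40% that allows about 6.5 GB of buffer cache on a 16 GB machine, well above the 2-3 GB suggested here.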


Bill Hassell, sysadmin