
Performance problems with IBM universe

albcantabria
Occasional Advisor

Performance problems with IBM universe

We have a serious performance problem with an IBM UniVerse database on an HP 9000 server.
We suspect an I/O problem.
Any ideas or experience with this?
Regards.
13 REPLIES
Steven E. Protter
Exalted Contributor

Re: Performance problems with IBM universe

Shalom,

Perhaps measure performance:

http://www.hpux.ws/system.perf.sh

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Andy Torres
Trusted Contributor

Re: Performance problems with IBM universe

Depends on versions, but at my previous job we saw some performance gains after several attempts: adding CPU, switching arrays from HP XP to EMC DMX, database tweaking, application tweaking, and so on. IBM also worked up a special version patch for us at the time. So many things were done to that Superdome that it's tough to pinpoint the actual improvement, but the greatest gains came when we upgraded from HP-UX 11.0 to 11.11.
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

uname -a:
B.11.11 U 9000/800
Is the patch still necessary? Which one is it?

The HP server has internal disks (120 GB). All volumes are in RAID 1 (mirror).
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

I attach the HP_perf_info report.

Here is an excerpt:

HP-UX dmsreae1 B.11.11 U 9000/800 04/04/06

21:36:46 device %busy avque r+w/s blks/s avwait avserv

21:40:15 c0t6d0 100.00 8.05 224 9200 37.61 33.28
c3t6d0 81.82 5.53 223 9426 21.13 22.66
21:40:16 c0t6d0 100.00 25.94 253 7748 101.63 30.86
c3t6d0 82.00 9.32 241 7604 27.89 22.25
21:40:17 c0t6d0 100.00 21.45 296 7780 75.04 24.55
c3t6d0 92.00 19.24 289 7760 64.55 23.01
21:40:18 c0t6d0 100.00 25.27 305 7328 81.12 23.54
c3t6d0 91.00 22.92 293 7152 65.73 22.35
21:40:19 c0t6d0 100.00 28.71 316 6076 97.83 23.84
c3t6d0 91.00 24.42 308 6080 77.40 21.59
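
The capture above can be screened mechanically. Below is a minimal shell/awk sketch (assuming the sar -d column layout shown: device, %busy, avque, r+w/s, blks/s, avwait, avserv) that flags samples where avwait exceeds avserv, i.e. where requests spend longer queued at the device than being serviced:

```shell
#!/bin/sh
# Editor's sketch, not from the thread: flag sar -d samples where
# avwait > avserv (requests queue longer than they take to service).
# Handles both timestamped lines (8 fields) and continuation lines
# (7 fields) by indexing from the end of the record.
screen_sar_d() {
  awk 'NF >= 7 && $NF ~ /^[0-9.]+$/ {
    dev = $(NF - 6); avwait = $(NF - 1); avserv = $NF
    if (avwait + 0 > avserv + 0)
      printf "%s: avwait %s > avserv %s\n", dev, avwait, avserv
  }'
}

# Demo on two lines from the capture above:
screen_sar_d <<'EOF'
21:40:15   c0t6d0   100.00   8.05   224   9200   37.61   33.28
           c3t6d0    81.82   5.53   223   9426   21.13   22.66
EOF
# -> c0t6d0: avwait 37.61 > avserv 33.28
```

On a live system this would be fed directly, e.g. `sar -d 1 10 | screen_sar_d`.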

albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

A new system capture, taken under heavy load.
Rodney Hills
Honored Contributor

Re: Performance problems with IBM universe

UniVerse databases can generate a lot of excessive I/O if the hash files are sized too small and UniVerse has to access the overflow pages.

Look for the file(s) that are badly sized. You can use the FILE.STAT and HASH.HELP commands to determine the proper sizing.

If you have "glance" you can monitor the process looking for files with high I/O activity.

If you have "lsof", then you can use the perl script I am attaching to monitor a process looking for changes in disk activity for each open file.
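
The attached script itself is not reproduced here; as a rough sketch of the same idea, two `lsof -o -p PID` offset snapshots can be diffed to spot the files a process is actively reading or writing. The simplified three-column `fd offset name` snapshot layout below is an assumption for illustration:

```shell
#!/bin/sh
# Editor's sketch of the idea Rod describes (not his attached perl
# script): compare two open-file offset snapshots for one process and
# report files whose offset moved, i.e. files with active I/O.
diff_offsets() {
  # $1, $2: snapshot files with "fd offset name" per line
  awk 'NR == FNR { before[$3] = $2; next }
       ($3 in before) && before[$3] != $2 {
         printf "%s: offset %s -> %s\n", $3, before[$3], $2
       }' "$1" "$2"
}

# Demo with two canned snapshots (file names are hypothetical):
cat > snap1.txt <<'EOF'
3 0t1024 /data/CUSTOMER
4 0t0    /data/ORDERS
EOF
cat > snap2.txt <<'EOF'
3 0t9216 /data/CUSTOMER
4 0t0    /data/ORDERS
EOF
diff_offsets snap1.txt snap2.txt
# -> /data/CUSTOMER: offset 0t1024 -> 0t9216
```

In practice the snapshots would come from `lsof -o -p PID` taken a few seconds apart, with the relevant columns extracted first.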

HTH

-- Rod Hills
There be dragons...
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

Thanks, Rod.
Could you send me the script (or give me a URL to download it)?
Rodney Hills
Honored Contributor

Re: Performance problems with IBM universe

Here is the script...

Rod Hills
There be dragons...
Bill Hassell
Honored Contributor

Re: Performance problems with IBM universe

UniVerse uses hundreds to thousands of files, but because it was originally written when a single program could only open a few dozen files at a time, a file pool is configured for each program. If the pool is small (fewer than 100), each program will be constantly opening and closing files (seen very dramatically with sar -a 1 10). These stats are directory-activity stats, and UniVerse can kill performance when it has to constantly open/close files. If you increase MFILE (I believe that is the variable name in the UniVerse config file) to about 500 or even 900, you should see a major improvement in performance.

NOTE: nfile is the kernel parameter limiting the number of open files for the entire system. If you add 400 to MFILE, you will have 400 additional files open at the same time for each UniVerse program. Make sure nfile is large enough to handle the largest number of UniVerse programs.
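
As a rough sizing sketch of that rule of thumb (the session counts below are hypothetical, and on 11.11 the current value could be queried with something like `kmtune -q nfile`, which is an assumption here):

```shell
#!/bin/sh
# Editor's sketch: back-of-envelope nfile headroom check along the
# lines Bill describes. All numbers are hypothetical illustrations.
MFILE=500            # per-program UniVerse rotating file pool
UV_SESSIONS=40       # expected concurrent UniVerse programs
OTHER_OPEN=2000      # allowance for everything else on the system

needed=$((MFILE * UV_SESSIONS + OTHER_OPEN))
echo "nfile should be at least $needed"
# -> nfile should be at least 22000
```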


Bill Hassell, sysadmin
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

MFILE is set to 936, but we have other systems, not HP, with similar capacity and workload, and their performance seems better.
Could vxfs, volume configuration, or kernel parameters be the problem?
Any idea is appreciated.
Regards.
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

The logical volumes have been defined as mirrors as follows:
Mirror allocation policy: strict
consistency: no
sched: parallel
write cache: yes
extent size: 32
mode: read/write
Is this a good configuration for UniVerse?

Also, is it usual for iostat to show different rates for identical disks in a mirror?

bps sps msps
c0t6d0 1739 1644 1
c3t6d0 2017 1773 1

Bill Hassell
Honored Contributor

Re: Performance problems with IBM universe

Check the sar -a stats. This is a measure of directory activity (open/close). Run the command when the system is busy:

sar -a 1 10

iostat is almost useless today since disks are no longer simple devices (hardware buffering, array controllers, stripes, etc). That's the reason that msps is always 1.

You can't compare different systems without defining the actual work being performed. I have regularly seen 5 to 50 times more I/O per task on "similar" systems. All the factors behind the actual I/O (data read/write, directory open/close, buffer cache, MFILE settings, missing or corrupt indexes, etc.) affect the number of I/Os being generated. If you do not see any errors in syslog, then the hardware is doing what it was told to do.
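
As a companion sketch to the sar -a advice above: on HP-UX, sar -a reports iget/s, namei/s, and dirbk/s per interval, and a high namei/s rate is consistent with the constant open/close traffic described earlier. The 500/s threshold below is an arbitrary illustration, not a tuning rule:

```shell
#!/bin/sh
# Editor's sketch: flag sar -a intervals with heavy directory activity.
# Assumed columns: timestamp, iget/s, namei/s, dirbk/s. The limit is
# an arbitrary illustration for the demo data.
flag_lookups() {
  awk -v limit=500 'NF >= 4 && $2 ~ /^[0-9]+$/ {
    if ($3 + 0 > limit)
      printf "%s: namei/s = %s (heavy open/close traffic?)\n", $1, $3
  }'
}

# Demo on two fabricated intervals:
flag_lookups <<'EOF'
10:00:01   120   1840   95
10:00:02    80    210   40
EOF
# -> 10:00:01: namei/s = 1840 (heavy open/close traffic?)
```

On a live system this would be fed directly, e.g. `sar -a 1 10 | flag_lookups`.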


Bill Hassell, sysadmin
albcantabria
Occasional Advisor

Re: Performance problems with IBM universe

Buffers, swap, and inodes seem to be OK (sar results).
My doubt is about the device times.

avserv (and avwait) is sometimes very high.
(avserv is the average time, in milliseconds, to service each transfer request for the device, including seek, rotational latency, and data transfer time.)


HP-UX dmsreae1 B.11.11 U 9000/800 04/05/06

12:49:45 device %busy avque r+w/s blks/s avwait avserv
12:50:15 c0t6d0 100.00 2.93 150 2742 26.12 26.99
c3t6d0 97.00 2.69 145 2952 25.26 24.49

I can also see it via sar -u: high %wio times.

12:49:45 cpu %usr %sys %wio %idle
12:50:15 0 14 15 29 42
1 15 10 63 12
2 9 4 51 36
3 9 12 24 55
system 12 10 42 36

Any ideas? Best regards.