sar -d; avserv times too high?

Dwane Ballard
New Member

sar -d; avserv times too high?

Hi all,
We have recently moved a batch application (Thoroughbred Basic!!) from UnixWare to HP-UX 11. The new platform is an N4000 (4 x 440 MHz CPUs, 4 GB RAM) connected via 4 Fibre Channel links to an XP512 with twenty 47 GB drives on a single ACP.

When we run this particular application (cash posting) we see no decrease in run-time, unlike all of the other applications that have been moved over. This application runs just as slowly as it did on the UnixWare system with a single Ultra SCSI RAID 5 array.

In Glance we see 25% CPU on a single CPU (it's a single-threaded app). The interesting number is that the app is blocked on IO 60% of the time: not disk IO or LAN IO, just IO.
sar -d shows avwait times around 4-5 ms, but avserv times of 70-100 ms. The current configuration is 20 Open9s striped with LVM using 64 KB stripes across the 4 FC controllers. The filesystem is VxFS.
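
For reference, here's roughly how we're collecting these numbers (the device name and figures below are illustrative, not our exact output):

    # sample disk activity every 5 seconds, 12 times
    sar -d 5 12

    device   %busy  avque  r+w/s  blks/s  avwait  avserv
    c4t0d0      85    1.2     95    1520     4.5    82.1

avwait is the time a request sits queued in the host; avserv is the time the device takes to complete the request once it's issued, which is the number that looks wrong to us.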

We feel the avserv times are too high. This app is entirely random-access, but still, 70-100 ms? Any ideas on how we can reduce the avserv time? Any thoughts on what the IO category in Glance is and how we can get that down (if that's possible)?

We tried eliminating the LVM striping, thinking we might be spending too much time servicing LVM, but that had no effect on the avserv times. It did change our Glance view to about 10% CPU, 60% blocked on INODE, and 25% blocked on IO.

Any help or insight would be greatly appreciated.

Dwane
8 REPLIES
Philip Chan_1
Respected Contributor

Re: sar -d; avserv times too high?

Hi,

The avserv time of 70-100 ms is definitely too high. On a healthy system, avserv should be below 20 ms.

Avoid RAID 5 and any disk striping if you can; these will only slow down disk I/O performance. Use RAID 1 (mirroring) instead.

Also, your system may just need more inodes to work with in order to run smoothly. Try increasing the ninode kernel parameter, then re-test your application.
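
A minimal sketch of the tuning step on 11.x (kmtune syntax from memory, so verify on your box; the 4096 below is only an example value):

    # query the current inode table size
    kmtune -q ninode

    # stage a larger value, then rebuild the kernel and reboot
    kmtune -s ninode=4096
    mk_kernel -o /stand/vmunix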

Rgds,
Philip
Ravi_8
Honored Contributor

Re: sar -d; avserv times too high?

Hi,
Check whether any process related to this app is hung; if so, kill it. If that doesn't solve the problem, try increasing the RAM and swap.

good luck
never give up
Paula J Frazer-Campbell
Honored Contributor

Re: sar -d; avserv times too high?

Hi
Does your batch process handle data extracted from large files?

On a multiuser system, RAID 5 will slow down a server. See the Windoze Word doc attached.

Paula

If you can spell SysAdmin then you is one - anon
Dwane Ballard
New Member

Re: sar -d; avserv times too high?

Good responses so far. One bit of information I forgot to include is that the app is 70%+ read-intensive. Given this, and the fact that the XP512 has 4 GB of cache, I was hoping that the RAID 5 in the array would not be causing a problem.

We did change from LVM striping to concatenation across the 4 controllers. vxdump/vxrestore times decreased considerably (3 hours to restore with LVM striping vs. 1 hour with concatenation). We have yet to test the app, though.
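
For anyone curious, the two layouts look roughly like this (the VG/LV names and sizes are made up, and the syntax is from memory):

    # what we had: 4-way LVM striping with 64 KB stripes
    lvcreate -i 4 -I 64 -L 40960 -n lvdata /dev/vgxp

    # what we have now: a plain (concatenated) LV; extents fill one
    # PV, then the next, walking across the PVs on the 4 controllers
    lvcreate -L 40960 -n lvdata /dev/vgxp

(I believe distributed allocation, lvcreate -D y -s g with PV groups defined in /etc/lvmpvg, can also round-robin extents across controllers, but we haven't tried that.)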

Dwane
Martha Mueller
Super Advisor

Re: sar -d; avserv times too high?

I recently attended an HP class on the internals of the HP-UX operating system. Something the instructor pointed out may apply here: if the kernel parameter dbc_max_pct is left at its default of 50% (that's the dynamic buffer cache as a percentage of system RAM), it can cause longer lookup times, because the system spends more time searching the cache than it would take to go to disk and retrieve the data. You may want to reduce that percentage.
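
A sketch of the check and the change (kmtune syntax from memory; the 10% below is just a starting point to test with, not a figure from the class):

    # see the current dynamic buffer cache bounds
    kmtune -q dbc_min_pct
    kmtune -q dbc_max_pct

    # cap the cache lower, then rebuild the kernel and reboot
    kmtune -s dbc_max_pct=10
    mk_kernel -o /stand/vmunix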
Vincent Stedema
Esteemed Contributor

Re: sar -d; avserv times too high?

Dwane,

I'd like to add the following to Philip's and Martha's suggestions: use Glance (tables option) or sar (-v option) to check your inode usage. If the number of inodes in use is close to the maximum, increase the ninode kernel parameter.
Concerning the dynamic buffer cache: you might want to give dbc_max_pct and dbc_min_pct the same value, because there can be some overhead in the shrinking and growing of the cache.
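
To illustrate reading sar -v (the numbers here are invented): the inod-sz column shows used/configured, so output like the line below would mean you're near the ceiling and ninode should go up.

    $ sar -v 5 3
    ... text-sz  ov  proc-sz    ov  inod-sz     ov  file-sz   ov
    ...     N/A   0  134/2048    0  2010/2048    0  988/4096   0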

Hope this helps.

Vincent
Les Schuettpelz
Frequent Advisor

Re: sar -d; avserv times too high?

Dave Fargo using Les's ITRC login as a 'group' ID...

We need more information here. I understood the original post to state that the OLD system had RAID 5 and the NEW one does not use RAID 5, but probably RAID 1; is that correct?

How are you connected to the XP512? Are you going FC direct, FC switched, or loop/quickloop?

There are 2 FC kernel parms you might want to take a look at (not sure if you need to): num_tachyon_adapters (should be OK at the default of 5) and max_fcp_reqs, which defaults to 512 and has a maximum of 1024. Anyone know where to find metrics to guide tuning max_fcp_reqs?
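
Checking them is straightforward (parameter names as above; verify the defaults on your release):

    kmtune -q num_tachyon_adapters
    kmtune -q max_fcp_reqs

    # if you decide to raise it (1024 being the ceiling), rebuild and reboot
    kmtune -s max_fcp_reqs=1024
    mk_kernel -o /stand/vmunix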

I don't know the XP512 that well; I will assume that an Open9 is like a 9 GB EMC hypervolume, i.e. a piece of a 47 GB HDA with a given block and track size.

You need to make sure your stripe size really works with the physical layout of your Open9s and with the cache allocation strategy. Also, how are the Open9s for this application laid out on the 47s? Do some of them share a physical 47 GB drive? That could cause inefficient stripe performance.
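
One way to start checking that from the host side (this assumes the optional xpinfo utility from HP's XP toolkit is installed, and the LV path below is hypothetical):

    # map each device file back to its CU:LDEV on the array, then check
    # on the array side which LDEVs share a physical 47 GB HDA
    xpinfo

    # see how the striped LV's extents are actually spread across PVs
    lvdisplay -v /dev/vgxp/lvdata | more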

Other than that, I think a number of installations are having a mysteriously hard time getting some of the I/O stats to look good with 'exotic' FC/array configurations. Let us know what you find out, please.
Michael Steele_2
Honored Contributor

Re: sar -d; avserv times too high?

Can you elaborate on "...concatenation across all 4 controllers..."?

It's been suggested that I use LVM striping on our soon-to-be-installed XP128, and I'm weighing its advantages.

Are you no longer a proponent of LVM striping?
Support Fatherhood - Stop Family Law