System Administration

hpux11.3 and EMC Vnx array do X or performance stinks. X is?

Steve Post
Trusted Contributor

hpux11.3 and EMC Vnx array do X or performance stinks. X is?

Dear HP forum folks,
I have two rx2800 i2 Integrity servers hooked up via Fibre Channel to an EMC VNX5100 array.
It seems to run fast, then really slow.

We run reads and writes to a big Sybase table.  It runs 5 times faster than our old system.  Then it runs 20 times slower.  Then fast, fast, fast, fast, slow, fast, slow, slow, slow, fast.  It is random.

Is there some bonehead thing I failed to do?  I need to make it work more consistently.

I am not looking for massive performance analysis here.  I just wanted to ask if anyone stumbled on a performance problem like this in the past.  Like you discovered you merely needed to set the RunSlowly UNIX parameter to NO.

I mean, in your experience using a VNX array with Itanium for a database, you had to do X or your database performance would stink.  And X would be?

If you cannot think of anything, perhaps it will help if I bring up some details.  Maybe one of them will make you point and say, "OH, that is not a good idea."

It does NOT have PowerPath.  We are using HP-UX 11i v3 native multipathing with persistent device special files instead.

I have the scsimgr load_bal_policy set to round_robin.
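For reference, checking and setting that policy looks roughly like this on HP-UX 11i v3 (a sketch; /dev/rdisk/disk4 is a placeholder for your own LUN, which you would find with ioscan -m dsf):

```shell
# Placeholder device; substitute your own LUN path from "ioscan -m dsf"
DISK=/dev/rdisk/disk4

# Show the current load-balancing policy for this LUN
scsimgr get_attr -D $DISK -a load_bal_policy

# Switch to round-robin across all active paths
scsimgr set_attr -D $DISK -a load_bal_policy=round_robin

# Make the setting persistent across reboots
scsimgr save_attr -D $DISK -a load_bal_policy=round_robin
```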

The fibre switch seems to be idle for the scope of the work I am doing (i.e., it is not in the mix as a suspect).

We are running Sybase with asynchronous I/O.  Perhaps the async I/O setup differs between 11i v2 and 11i v3?  I see the NFS commands are different.
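A rough sanity check for the async I/O plumbing Sybase uses on HP-UX might look like this (a sketch, not a definitive procedure; the "sybase" group name is an assumption for your environment):

```shell
# Sybase async I/O on HP-UX goes through the asyncdsk driver device file
ls -l /dev/async

# If it is missing, create it (character major 101 is the asyncdsk driver)
/sbin/mknod /dev/async c 101 0x0

# The Sybase account's group needs the MLOCK privilege for async I/O
# ("sybase" is a hypothetical group name here)
setprivgrp sybase MLOCK

# Inspect the async-related kernel tunables
kctune max_async_ports aio_max_ops
```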

We used to use a SCSI SSD disk on PA-RISC boxes.  But now we will use SSDs in the VNX array itself.

Where is that RunReallyFast flag anyway?

Steve Post
Trusted Contributor

Re: hpux11.3 and EMC Vnx array do X or performance stinks. X is?

pvtimeout set to 90

lvtimeout left alone

bad block relocation set to NONE
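The LVM settings above would be applied with something like the following (a sketch; the PV and LV paths are placeholders for your own volume group):

```shell
# Placeholder paths; substitute your own PV and LV
PV=/dev/disk/disk4
LV=/dev/vg01/lvol1

# Physical volume I/O timeout of 90 seconds
pvchange -t 90 $PV

# Verify: look for "IO Timeout (Seconds)" in the output
pvdisplay $PV | grep -i timeout

# Turn off LVM bad block relocation (the array relocates bad blocks itself)
lvchange -r n $LV
```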

Async I/O enabled on HP-UX, but maybe not set up right.

I enabled MLOCK, but I also see something about MPCTL in one obscure white paper.

The VNX can pump through 6 gigs per second.  But it is instead running at 40k per second.  GEE.  A bit slow!

The element size of the LUN in the array is 128 blocks, which is 64k.

Sybase's page size is 2k?  4k?  I'm not the Sybase guy, but I know it is very hard to change Sybase's page size.  And now it looks like the LUNs on the EMC VNX array are hardwired to an element size of 128 blocks.
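For what it is worth, the size relationship above is just arithmetic; a 64k element holds many small Sybase pages, so a single 2k write never spans an element boundary as long as the LUN is aligned:

```shell
# One VNX stripe element = 128 blocks of 512 bytes each
element_bytes=$((128 * 512))
echo "array element size: ${element_bytes} bytes"    # prints 65536 (64k)

# Number of 2k Sybase pages that fit in one element
pages_per_element=$((element_bytes / 2048))
echo "2k pages per element: ${pages_per_element}"    # prints 32
```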

ak.   Cheeze-n-Rice.  Nuts.


Arockiasamy K
Frequent Advisor

Re: hpux11.3 and EMC Vnx array do X or performance stinks. X is?

Dear Steve,


   We faced a similar performance issue in our SAP environment.


The reason may be any one of the following:


SSDs give good performance at the beginning.  After some months, they may not.

In HP-UX 11i v3 there is a product called tuneserver; tune the kernel parameters with it.  (This one solved our issues.)

Check how many front-end ports you have.

Is the storage connected through a SAN switch or directly?

Check the average wait time of the disks (sar -d).

Also check for other issues: CPU usage, memory/swap usage, etc.
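The checks above translate roughly into these commands (HP-UX syntax; the interval and count values are arbitrary examples):

```shell
# Disk wait and service times, 5-second samples, 12 intervals.
# High "avwait" relative to "avserv" suggests queuing on the host side.
sar -d 5 12

# CPU utilization over the same window
sar -u 5 12

# Memory and paging activity
vmstat 5 12

# Swap usage summary (HP-UX)
swapinfo -tam
```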

Arockiasamy K
Steve Post
Trusted Contributor

Re: hpux11.3 and EMC Vnx array do X or performance stinks. X is?

Thanks. We use a fibre switch with 4 paths to the two EMC storage processors. sar -d shows the disks are responding pretty well. Now I am wondering if the problem is specific to the Sybase database. I also used tuneserver. It had no effect.


I bet my question sat around for a while because it was too vague.

Steve Post
Trusted Contributor

Re: hpux11.3 and EMC Vnx array do X or performance stinks. X is?

THE QUESTION IS:  What is X?  What is that thing throttling the disk performance?

THE SYMPTOM IS:  HP-UX tells Sybase that it is too busy to work and makes Sybase wait on those disk requests, while HP-UX tells the EMC disk array that there is nothing to do.

THE ANSWER IS:  Patch PHKL_41700 and set kernel parameter hires_timeout_enable=1


At least that is what it appears to be.  You see, the day I put in the patch alone, it had no effect.  But the day I put in the patch and set hires_timeout_enable=1, performance became normal.  And it has not failed since.
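Applying the fix above would look roughly like this (a sketch, assuming HP-UX 11i v3's swlist and kctune; check whether the tunable requires a reboot on your system):

```shell
# Confirm the patch is installed
swlist -l patch PHKL_41700

# Show the current value of the tunable
kctune hires_timeout_enable

# Enable high-resolution timeouts
kctune hires_timeout_enable=1
```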


So I am fairly certain this was the culprit.  And I got the answer from the forums.  How?  I gave up and asked if anyone on EARTH uses HP-UX, Sybase, and EMC together.  I didn't get a yes-or-no answer, but I got the solution to THIS question.


Close enough.

Go figure.