Disk Arrays

va7400 stripe size

Mike Dunphy_1
Occasional Contributor

va7400 stripe size

Is there any way to change the stripe size on a VA7400?
We are using one as raw devices for an Informix database. Informix uses 2k pages for all
reads and writes, and I think this is causing the poor performance:
a dd with a bs of 2048 on a rlvol on the VA7400
takes 4 times as long as a dd to a single
standalone 36GB fibre disk on the same server.

We have the VA set up in "auto" everything.
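For anyone wanting to reproduce this kind of comparison, a simple loop over block sizes makes the effect visible. This sketch uses /dev/zero and /dev/null as safe stand-ins; substitute the raw logical volume path on the VA (e.g. a hypothetical /dev/vgva/rlvol1) to test the array itself:

```shell
# Time dd reads at a small (2k, like an Informix page) and a large
# block size; the per-I/O overhead dominates at the small size.
for bs in 2k 1024k; do
    echo "block size: $bs"
    dd if=/dev/zero of=/dev/null bs=$bs count=2048 2>&1 | tail -1
done
```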
Peter Mattei
Honored Contributor

Re: va7400 stripe size

Hi Mike

There are several things to consider.
1. When you create a LUN, only block maps are created in the controller. When you then write to the LUN, the corresponding blocks on the disks have to be formatted and then written to, so there is an initial overhead (but you can instantaneously access the LUN). Subsequent writes to the same blocks are faster.
2. How many disks are in the array? The VA7400 is designed to perform best with lots of disks; performance increases with the number of spindles.
3. What RAID level are you accessing? If you assign >50% of the raw capacity, part of the data has to be converted to RAID 5DP, which by design has lower write performance than RAID 0/1. Use armperf -u to see how much of your LUN is in which RAID level.
4. Are you using the performance path? Read page 36 of the service guide (see link below).
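For reference, the two commands mentioned in this thread belong to the Command View SDM CLI and need the array hardware to run against; a typical invocation looks like this (the array alias "va7400_1" is hypothetical, and flags may vary by CLI version):

```shell
# Show the array configuration: firmware, disks, hot spare, LUNs.
armdsp -a va7400_1

# Show per-LUN usage, including how much data currently sits in
# RAID 0/1 versus RAID 5DP.
armperf -u va7400_1
```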


I love storage
Roger Buckthal_2
Frequent Advisor

Re: va7400 stripe size


No, you cannot change the stripe size on the VA. But this is unlikely to be the cause of your performance issue. HP successfully uses the Virtual Array in the TPC-C transaction benchmark with the current stripe size.

First, dd may not be a good representative of an Informix workload. So be careful about judging based on dd, or tuning your system to maximize dd performance and then running a database workload. Beyond that, it's going to take some investigation to propose a solution.

What version of firmware are you running? How many LUNs? How many disks, and what kind? Do you have hot spare enabled? How do you have the resiliency mode set? These questions can be answered with the armdsp -a command.

Is this a transaction processing or decision support workload (random vs. sequential)? Is the database using raw I/O or going through the file system? Do you have the workload distributed across both controllers and the LUNs? Have you changed the default queue depth on HP-UX? How much memory is allocated for caching in the database/system?
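On HP-UX, the per-device queue depth asked about here can be inspected and changed with scsictl. A sketch, assuming a hypothetical device file (substitute the actual raw device for your VA LUN; the change does not persist across reboots):

```shell
# Display the current device control parameters, including queue depth.
scsictl -a /dev/rdsk/c4t0d1

# Raise the queue depth for that device.
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1
```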

Load the latest version of Command View SDM (version 1.04); it's on hp.com. Then familiarize yourself with the performance logs. I like to import the data into Excel; use armperf with the COMMA option. Does the transfer distribution (a graph is a great analysis tool) look right based on your knowledge of the application (use the ARRAY metrics)? How is the workload distributed to the two controllers (use the OPAQUE or LUN metrics)?

Sorry for the 64 questions, but a quick guess doesn't always work.

Mike Dunphy_1
Occasional Contributor

Re: va7400 stripe size

Thanks for all the notes.
Perhaps dd is not a comparable test; however, it seems to reflect
the same results as an Informix sequential scan. The Informix
database is 2x as slow on the VA as it is on an old K420 with
some LVM fast-wide SCSI JBOD that is LVM striped/mirrored.

The database is OLTP, but the current test we are doing results
in a query that does a sequential scan on a large table, and it
is MUCH slower on the VA ?!?! It is the same table in both cases.

A dd with a block size of >1MB reaches the maximum speed of
the VA7400 @ 170MB/s.
The dd's I am doing are just reads, outputting to /dev/null.

The dd is much faster on the VA with a large block size and, of
course, much slower on the JBOD LVM. As the block size
increases, the VA performance goes up and the JBOD
goes down... and the opposite is also true: as the
block size decreases, the VA performance goes down.

Yes, I am using the performance path.

This VA has 45 73GB disks in 2 groups; one group has 22 disks
and the other has 23.

It is HP14 firmware

It is raw I/O, and it is going through 1 controller since that is
the primary performance path.

We alternate LUNs between the controllers to balance the load.
Are you saying that instead of creating 1 LUN for the database I
should create 20 LUNs, alternate the primary controllers,
and then LVM stripe? Currently this DB is built on one LUN.
This database is part of an MCSG cluster; we have 14 different
LUNs configured on the VA, some of which are not even being
used, for an unused capacity of more than 40%, so I would expect
this thing to still be running RAID 0/1.
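The multi-LUN layout being asked about is usually built with LVM striping across the LUNs. A minimal HP-UX sketch, assuming a volume group vgdb already made from several VA LUNs with alternating primary controllers (the names, stripe count, and sizes are hypothetical, and this needs a real volume group to run):

```shell
# Create a 4 GB logical volume striped across 4 LUNs (physical
# volumes) in 64 KB stripes; Informix can then be pointed at the
# raw device /dev/vgdb/rlvdata.
lvcreate -i 4 -I 64 -L 4096 -n lvdata /dev/vgdb
```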
Vincent Fleming
Honored Contributor

Re: va7400 stripe size

dd is not really a good performance metric... here's why:

dd issues a read or write, then waits until that data returns before issuing the next read/write.

Normally, databases don't do this. They issue multiple I/Os (async I/O) that can be handled simultaneously by the disk array.

A single disk can outpace a disk array in this arrangement, as disks themselves perform read-ahead automatically into their onboard cache. The disk array is much more complicated... the I/O must pass through the controller first, be passed to a drive, and the response sent back through the controller...

This adds up to a higher latency to the host when doing things like dd's.
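The effect of keeping only one I/O outstanding can be roughed out with Little's law: throughput scales with outstanding I/Os divided by round-trip latency. A back-of-envelope sketch (the 5 ms latency and 2 KB I/O size are assumed, illustrative numbers, not measurements from this array):

```shell
# Compare a dd-style workload (1 outstanding I/O) with a database
# issuing 8 concurrent I/Os at the same per-I/O latency.
awk 'BEGIN {
    lat = 0.005                     # seconds per I/O round trip (assumed)
    io  = 2048                      # bytes per I/O (one Informix page)
    printf "sync  (1 outstanding): %4.0f KB/s\n", (1 / lat) * io / 1024
    printf "async (8 outstanding): %4.0f KB/s\n", (8 / lat) * io / 1024
}'
```

With the same latency, eight outstanding I/Os give eight times the throughput, which is why a single-threaded dd understates what the array can do under a real database load.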

Small block sizes are also a bit of a performance problem... the protocol overhead on SCSI and Fibre Channel is fixed per I/O... so smaller I/Os have a greater percentage of protocol overhead than larger I/Os. This normally translates into lower transfer rates. Don't forget that Fibre Channel has a double protocol overhead compared to straight SCSI - the VA implements the SCSI protocol *over* Fibre Channel. Again, small I/Os would generally suffer from this.
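The fixed-cost argument above is easy to put numbers on: the share of wire time eaten by per-I/O overhead shrinks as the transfer grows. A sketch with assumed, round figures (200 us fixed cost per I/O, ~100 MB/s link; real values depend on the HBA, switch, and array):

```shell
# Print the overhead percentage for block sizes from 2 KB to 1 MB.
awk 'BEGIN {
    oh   = 0.0002                   # fixed seconds per I/O (assumed)
    rate = 100 * 1024 * 1024        # link speed in bytes/s (assumed)
    for (bs = 2048; bs <= 1048576; bs *= 8)
        printf "bs=%7d  overhead=%.0f%%\n", bs, 100 * oh / (oh + bs / rate)
}'
```

Under these assumptions a 2 KB transfer spends roughly 90% of its time on fixed overhead while a 1 MB transfer spends only a few percent, which matches the dd behavior reported earlier in the thread.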

I'm not overly surprised by your experience... but you should try something much more accurate - load your database, and see how it performs. I would bet that your overall performance will be significantly better with the VA compared to JBOD.

Good luck!
No matter where you go, there you are.