Disk Arrays

VA7400 Performance on SAN

Seth Kaplan

VA7400 Performance on SAN

I'd like to take advantage of the two paths I have to my VA7400. My plan is to create two separate LUNs, one on each redundancy group, create two separate PVs from them, and add both to one VG. When I create my LVs I'll stripe them across each PV. Is this feasible? Crazy? What are the drawbacks? Is there something I'm not thinking about? My intention is to put an Informix DB on it as well as some file systems.
Thanks for your replies.
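For reference, the setup being proposed might look like this in HP-UX LVM. The device files, VG name, and sizes below are placeholders, not taken from the poster's actual configuration:

```shell
# Two LUNs, one per redundancy group, set up as two PVs
# (device files are hypothetical -- find yours with: ioscan -fnC disk)
pvcreate /dev/rdsk/c4t0d1      # LUN in RG1, primary path via controller 1
pvcreate /dev/rdsk/c6t0d2      # LUN in RG2, primary path via controller 2

# One VG holding both PVs
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000
vgcreate /dev/vgdata /dev/dsk/c4t0d1 /dev/dsk/c6t0d2

# LV striped across both PVs: 2 stripes, 64KB stripe size, 4GB
lvcreate -i 2 -I 64 -L 4096 -n lvol_db /dev/vgdata
```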
Eugeny Brychkov
Honored Contributor

Re: VA7400 Performance on SAN

The idea is good. The performance path (and thus primary path) for a LUN created within RG1 should be the VA's controller C1, and for a LUN created within RG2 it should be C2.
When creating this config you'll have LUN 0 as one big LUN in one of the RGs. Remember that for a heterogeneous environment it's recommended to keep LUN 0 at 10MB, with permissions set to 'WC' and no user data on it (because all the management/VA commands go through the LUN 0 device). But if you'll use this VA exclusively for HP-UX, I believe you can proceed with your plan.
Ian Hillier
Frequent Advisor

Re: VA7400 Performance on SAN

I think that striping data across 2 LUNs on a VA7400 would be redundantly redundant. Is there a way you can break your filesystem into 2 parts? Then put a part on each and use PV-links to do failover on each. With the way the VA works, I don't think you need to stripe at the OS level because the array takes care of the striping and data protection at a lower level. Using PV-links will give you redundancy at the GBIC and controller level.
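A rough sketch of the PV-links approach described above. The device files are hypothetical; the key point is that both refer to the same LUN, seen through two different controllers:

```shell
# Primary path: through the controller that owns the LUN
vgcreate /dev/vgdata /dev/dsk/c4t0d1

# Alternate path: the same LUN via the other controller; LVM sees the
# matching PVID and records this device file as an alternate PV-link
vgextend /dev/vgdata /dev/dsk/c6t0d1
```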

Just my 2 cents


Re: VA7400 Performance on SAN


We do this in our environment. It's not in any "official" HP document, but we did see it recommended in an internal e-mail on "best practices" for configuring LUNs on the VA.

Kevin Tsang_1
Occasional Visitor

Re: VA7400 Performance on SAN

[From: Ken Metrunec, Sys Analyst, UNIX Support Group, TELUS Enterprise Solutions ]

We have a VA7400 (1Gb/sec with 512MB on each dual controller) and use LVM striping for the 100+ GB on-line databases (in file systems).

The VA does not load balance across controllers (RG's) without s/w like Autopath.

My crude tests before and after LVM striping showed a throughput increase of about 25%. Glance and sar showed equal amounts of data & I/O on each "PRIMARY" PV path. The biggest gain came from backing up the several-100GB file systems: duration was reduced by a third, until the 1Gbit/sec SAN network bandwidth was maxed out.

It's a lot easier to balance two LUNs & controllers than 100 disks and 10 controllers.

(BTW - our philosophy is: our customers paid for the high performance systems, so we are expected to get the most out of it.)

The "gotcha" is that you can seriously degrade performance by "cross linking" your primary and alternate PV paths.

Use caution in defining your alternate PV paths. Make sure the primary path is the Array controller RG that contains the LUN.

Each RG "owns" half the disks and expects data I/O requests only for LUNs in its RG of disks. When a controller gets data I/O requests for LUNs owned by the other controller, the first controller will query the second for its health, then pass the data I/O request to the (now checked OK) second RG.

In a VA controller failure condition:
The healthy RG controller will automatically accept ALL data I/O requests once it determines the other RG controller has failed. The array will reconfigure itself to remove the failed device.

In a VA controller non-failure condition:
If you cross-link the primary & alternate PV paths, the dual controllers will thrash querying each other to determine if the "owner RG" controller is alive. This "serial 2-wire communication" occurs with EVERY I/O request to the VA, and can cause the array to hang a bit.

I have seen this cause SCSI read/write errors, and LVM VG and LUN power fail errors in the system logs.

A properly configured PV mapping will show NO I/O on the alternate PV paths: neither in Glance nor in sar -d.
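One way to check this (the commands are standard HP-UX; the VG name is hypothetical):

```shell
# List the PVs and their alternate PV-links for the VG
vgdisplay -v /dev/vgdata

# Watch per-device I/O: 6 samples at 5-second intervals. The
# alternate-path device files should stay at zero while the
# primary paths carry all the traffic.
sar -d 5 6
```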


Re: VA7400 Performance on SAN


We were told by our local HP Reps that Autopath didn't actually work as advertised under HP-UX. Have you ever heard anything like that?

They were so sure of it, that they recommended we return the software to them for a refund (which we did).

Kevin Tsang_1
Occasional Visitor

Re: VA7400 Performance on SAN

I may not have been clear earlier.

I never installed AutoPath; I use LVM to manually load balance across controllers and create alternate PV links for failover, even though our customer was also originally sold AutoPath for HP-UX.

BTW - I tested the LVM failover to alternate PV paths rigorously before going to production by asking our CE to pop out a controller while servers & VA7400 were in use.

We got the VA7400 software at version 1.00 and I was very reluctant to install AutoPath when I did not get comforting responses from HP to my questions. Some responses were for the wrong array type, while others couldn't tell me if LVM would fight with AutoPath, or if AutoPath would thrash when the FC hubs we were using became busy.

To date, my reading on HP-UX AutoPath leads me to think there's no gain for the extra cost & admin work.
Vincent Fleming
Honored Contributor

Re: VA7400 Performance on SAN

With the VA, AutoPath does not load balance.

This is because it is faster if you do not load balance. Why? Read above about the "performance path" - the controllers own LUNs in their redundancy group, and the fastest way to access a LUN is through the controller that owns it.

So, AP acts active/passive with the VA.

With an XP, however, it does load balance. The XP's design allows for equal speed access to any LUN through any port, so load balancing really helps a lot.


Your idea is very sound, as striping over the two LUNs will use all the drives in the array, and provide the best performance. Read Eugeny's posting carefully... be sure you have the primary LVM PV-Link going to the owning controller, and you'll get the best performance.

Good luck!

No matter where you go, there you are.