
CSP_ALGERIA
Frequent Advisor

Number of logical disks in a volume group (performance issue)

Hello,

We are using EVA storage on HP-UX. Suppose we are creating a volume group VG01 of, for example, 100 GB.

Is it better to create one 100 GB LUN for that volume group, or two LUNs of 50 GB each, or even more?

What is the impact on performance when running Oracle?

Best regards

Omar
Nothing in the world can take the place of persistence.
6 REPLIES
Steven Clementi
Honored Contributor

Re: Number of logical disks in a volume group (performance issue)

What is the configuration on the EVA? Do you have multiple disk groups that you can present smaller vdisks from?

I would think that if you had one vdisk from a single disk group vs. multiple vdisks from the same disk group, your performance would be the same, since you are getting the same amount of I/O. If the vdisks were from separate disk groups, then I might expect to see an increase in performance.

This is just my personal opinion; I am HP-UX illiterate.


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Mark Poeschl_2
Honored Contributor

Re: Number of logical disks in a volume group (performance issue)

From a storage-controller point of view, Steven is correct. However, HP-UX is fairly sensitive to internal (to the OS) I/O queue contention. There is a separate queue for each disk presented to the OS, so in your case two LUNs are likely better than one. You can also tune the depth of each individual queue in HP-UX.
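
As a minimal sketch of what that tuning looks like on HP-UX 11i (the device file name below is only an example; check the scsictl(1M) man page before changing anything on a production box):

    # display the current queue depth for one presented LUN
    scsictl -m queue_depth /dev/rdsk/c4t0d1

    # raise the queue depth for that device to 16 (not persistent across reboots)
    scsictl -m queue_depth=16 /dev/rdsk/c4t0d1

    # the system-wide default is the kernel tunable scsi_max_qdepth
    # (kmtune on older 11i releases, kctune on 11i v2/v3)
    kmtune -q scsi_max_qdepth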
Alzhy
Honored Contributor

Re: Number of logical disks in a volume group (performance issue)

With an EVA, or any "controller-centric" array, it does not matter whether your VG or LVs are defined on one large LUN or several smaller LUNs.

All the performance tweaks are done inside the EVA.

However, if you have several EVAs -- say 4 -- it will certainly matter, for example for Oracle/DB storage: I would certainly distribute or stripe my volumes across those 4 EVAs.

Hakuna Matata.
Pat Obrien_1
Regular Advisor

Re: Number of logical disks in a volume group (performance issue)

Present 2 smaller LUNs to the HP-UX host, preferably one to each controller, and then host-based stripe them together. This will load-balance both controllers, instead of one working hard while the other just waits for a failure in order to fail the LUNs over.
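
A rough sketch of that host-based striping with HP-UX LVM (disk device names, group file minor number, and the VG/LV names are only examples):

    # initialize the two presented LUNs as physical volumes
    pvcreate /dev/rdsk/c4t0d1
    pvcreate /dev/rdsk/c6t0d1

    # create the volume group from both PVs
    mkdir /dev/vg01
    mknod /dev/vg01/group c 64 0x010000
    vgcreate /dev/vg01 /dev/dsk/c4t0d1 /dev/dsk/c6t0d1

    # logical volume striped across the two PVs, 64 KB stripe size, size in MB
    lvcreate -i 2 -I 64 -L 99000 -n lvoracle /dev/vg01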


Never stripe LUNs across multiple EVAs: under severe conditions an EVA may shut down or reboot, and that will completely destroy your volume. Yes, that kind of striping will be faster, but go with DR first unless you are mirroring the EVAs.
Mark Poeschl_2
Honored Contributor

Re: Number of logical disks in a volume group (performance issue)

Even with a "controller-centric" array, the way an OS treats what it sees as separate devices can definitely affect your performance. For example, Windows using the SCSIPort driver maintained one I/O queue per HBA, while HP-UX uses one I/O queue per PV (or possibly per LV; I'm not sure about that). In any case, if I/O is concentrated on one queue vs. another, your application's ability to access the busiest queue can be impacted. Remember, all this is going on in the server, before we ever get to the storage controller.

Once we get to the EVA, Nelson's correct: there's only one queue that all I/O to all Vdisks gets dumped into, so the Vdisk count is pretty irrelevant. Pat's idea of host-based striping might work, but you'd need to be sure where in the I/O stack the striping took place: before or after the queues used by applications to get I/O to/from the underlying device(s).
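
For what it's worth, whether the striping is happening at the LVM layer on the host can be verified directly; a small illustration (the LV name is hypothetical):

    # the summary section reports Stripes and Stripe Size (Kbytes)
    lvdisplay /dev/vg01/lvoracle

    # -v additionally shows how extents are distributed across the underlying PVs
    lvdisplay -v /dev/vg01/lvoracle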
CSP_ALGERIA
Frequent Advisor

Re: Number of logical disks in a volume group (performance issue)


Hello,

First, many thanks for your contributions,
Note that we are using one EVA, with one default disk group of 13 x 146 GB disks. On HP-UX 11i we are using the Secure Path software, so each Vdisk is presented as a single device and the load balancing across the physical paths to that disk is done by Secure Path.
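
In that setup, before adding more Vdisks it may be worth checking whether the single LUN's queue is actually the bottleneck on the host; a quick sketch (interval and count are arbitrary, and the Secure Path command assumes the usual spmgr CLI is installed):

    # per-device activity: avque (average queue length), avwait and avserv (ms)
    sar -d 5 12

    # path and load-balancing status as seen by Secure Path
    spmgr display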

best regards

Omar
Nothing in the world can take the place of persistence.