HPE EVA Storage

EVA 5000 disk performance question

 
SOLVED
Mike Smith_33
Super Advisor

EVA 5000 disk performance question

During the course of migrating from our HSG80 technology to the EVA 5000 a few years back, I worked with the DBA on testing methods to get the best throughput for copying the data over.

The EVA 5000 had all the disks in one group. I carved out a 100 GB target LUN for the DBA. The copies to this LUN were extremely time-consuming.

I then tested by creating four 25 GB target LUNs, and he was able to copy the same amount of data to the four LUNs much faster than he could to the single LUN.

The new DBA is wondering why this would be the case. It seems that since the EVA had one disk group consisting of 40 or so drives, the I/O was already spread out anyway. I agreed, particularly since on the HSG80 you had to lay this sort of thing out manually to make sure your LUNs were optimally placed. The EVA should have virtualized the entire thing, making it a moot point, yet we saw a marked increase in throughput by creating multiple LUNs even though they were in the same disk group.
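
For illustration, here is a rough Python sketch (mine, not part of the original test) of that comparison: stream the same total amount of data to one target versus four targets in parallel and time both runs. The mount points and sizes are placeholders, not paths from the actual EVA setup.

import threading
import time

CHUNK = 1024 * 1024          # 1 MiB writes
TOTAL = 512 * 1024 * 1024    # 512 MiB per run, scaled down for illustration

def fill(path, nbytes):
    # Stream zero-filled chunks to one target until nbytes have been written.
    buf = b"\0" * CHUNK
    written = 0
    with open(path, "wb") as f:
        while written < nbytes:
            f.write(buf)
            written += CHUNK

def timed(label, paths, nbytes_each):
    # One writer thread per target, so each device queue is driven independently.
    start = time.perf_counter()
    threads = [threading.Thread(target=fill, args=(p, nbytes_each)) for p in paths]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{label}: {time.perf_counter() - start:.1f} s")

# One large 'LUN' takes all the data; four smaller 'LUNs' each take a quarter of it.
timed("single target", ["/mnt/lun0/test.dat"], TOTAL)
timed("four targets ", [f"/mnt/lun{i}/test.dat" for i in range(4)], TOTAL // 4)

When each target sits behind its own per-device queue, the four writers keep more I/Os outstanding at once, which is the effect the replies below describe.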

We are reviewing a high-I/O performance issue on a single disk being reported by the OS, which is OpenVMS 7.3-2. My question to you all: have you noticed this same difference between single- and multiple-LUN throughput, and do you know why the multiple LUNs are faster?

5 REPLIES
Mark Poeschl_2
Honored Contributor
Solution

Re: EVA 5000 disk performance question

I don't know about VMS, but I've seen similar phenomena with Unix. On Tru64, each device presented to the OS has its own queue, and there seem to be efficiencies gained in having the OS operate on multiple I/O queues at once. Also, when we went to an EVA, we increased the maximum allowable depth of each disk's queue from the default of 25 to 100.
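
As a back-of-the-envelope illustration (the I/O size and service time below are assumed numbers, not figures from this thread), Little's Law shows why a deeper per-device queue helps: sustained IOPS is roughly the number of outstanding I/Os divided by the average service time, at least until the array saturates.

IO_SIZE_KB = 8       # assumed I/O size
SERVICE_MS = 5.0     # assumed average service time per I/O

for queue_depth in (1, 25, 100):
    iops = queue_depth / (SERVICE_MS / 1000.0)
    mb_s = iops * IO_SIZE_KB / 1024.0
    print(f"queue depth {queue_depth:>3}: ~{iops:,.0f} IOPS, ~{mb_s:,.0f} MB/s")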
Uwe Zessin
Honored Contributor

Re: EVA 5000 disk performance question

Yes, that is one thing. On some operating systems you can tune the queue depth.

The other is that the EVA does an implicit erase after a virtual disk has been created. During this time the write-back cache is disabled, and the erase consumes resources, too.
Amar_Joshi
Honored Contributor

Re: EVA 5000 disk performance question

I agree 100% with Mark and Uwe that this is a question of how the OS/HBA handles the queue and not so much of the EVA architecture.

In the Windows world, the default setting applies the Queue_Depth on a per-target basis (i.e., if the queue depth is set to 32, it applies to the HBA as a whole and all the LUNs share the same 32 queue entries). HBAs do allow you to change this setting to per-LUN, so that each individual LUN is assigned a separate queue.

With this kind of mechanism, a configuration with a single 100 GB disk will show different performance numbers from one with four 25 GB disks.

In my personal opinion, having multiple disks (at least two) will give you better performance if you have dual-HBA connectivity configured. Better results still can be achieved if the queue depth is set according to the application's requirements/recommendations.
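
To put rough numbers on the target-based vs. LUN-based distinction (the queue depth of 32 and the four LUNs are assumed values, not taken from a real configuration):

HBA_QUEUE_DEPTH = 32
NUM_LUNS = 4

# Target-based: all LUNs share one pool of queue entries on the HBA.
shared_total = HBA_QUEUE_DEPTH
# LUN-based: each LUN gets its own queue of the same depth.
per_lun_total = HBA_QUEUE_DEPTH * NUM_LUNS

print(f"target-based: {shared_total} outstanding I/Os across all LUNs")
print(f"LUN-based:    {per_lun_total} outstanding I/Os ({HBA_QUEUE_DEPTH} per LUN)")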
Mike Smith_33
Super Advisor

Re: EVA 5000 disk performance question

Thanks for all the responses to my questions; this forum is great.
Tom O'Toole
Respected Contributor

Re: EVA 5000 disk performance question


I think the main effect you are seeing is that the database application is limiting the number of I/Os it queues to a single disk unit, and that is something the OS can't do anything about.

Another effect is that with VMS, each LUN only uses one path, so by creating multiple LUNs you can balance over host FC adapters, controllers, etc., but I think the first effect is probably much more significant.

The EVA is great at processing many outstanding I/Os. At this site I was testing for a (VMS) data migration from HSG80 to EVA. With this database (Caché), it was the number of database file migration jobs that determined how many I/Os were queued to the EVA. Even when many database files were on the same LUN, throughput went WAY up as more database files were migrated simultaneously. The number of LUNs was not as much of a bottleneck as the number of outstanding I/Os queued to the storage array.
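
A toy model of that observation (every number below is assumed, not measured at this site): aggregate throughput tracks how many I/Os the migration jobs keep outstanding, until the array's own limit takes over.

PER_JOB_OUTSTANDING = 4     # I/Os each migration job keeps in flight (assumed)
SERVICE_MS = 5.0            # assumed average service time per I/O
IO_SIZE_KB = 64             # assumed I/O size for the copies
ARRAY_LIMIT_MB_S = 300.0    # assumed ceiling for the array back end

for jobs in (1, 2, 4, 8, 16):
    outstanding = jobs * PER_JOB_OUTSTANDING
    mb_s = outstanding / (SERVICE_MS / 1000.0) * IO_SIZE_KB / 1024.0
    print(f"{jobs:>2} jobs: {outstanding:>2} I/Os outstanding, "
          f"~{min(mb_s, ARRAY_LIMIT_MB_S):.0f} MB/s")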
Can you imagine if we used PCs to manage our enterprise systems? ... oops.