EVA 5000 disk performance question
10-10-2007 07:17 AM
The EVA 5000 had all of its disks in one disk group. I carved out a 100 GB target LUN for the DBA, and copies to that LUN were extremely time consuming.
I then tested by creating four 25 GB target LUNs, and he was able to copy the same amount of data to the four disks at a much faster speed than he could to the one disk.
The new DBA is wondering why this would be the case. It seems that since the EVA had one disk group consisting of 40 or so drives, the I/O was already spread out anyway. I agreed, particularly since on the HSG80 you had to lay this sort of thing out manually to make sure your LUNs were optimally placed. The EVA should have virtualized the whole thing, making it a moot point, yet we saw a marked increase in throughput by creating multiple LUNs, even though they were in the same disk group.
We are reviewing a high-I/O performance issue on a single disk reported by the OS, which is OpenVMS 7.3-2. My question to you all: have you noticed this same single-LUN vs. multiple-LUN throughput difference, and do you know why the multiple-LUN case is faster?
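For reference, the comparison can be reproduced with a small script along these lines. This is only a sketch: the mount points are hypothetical (one filesystem per virtual disk), it is not the exact copy job described above, and it assumes a Unix-style host rather than OpenVMS.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1024 * 1024      # 1 MiB per write call
TOTAL_MB = 1024          # total data written in each test run

def fill(path, megabytes):
    """Write `megabytes` MiB of data sequentially to `path`."""
    buf = os.urandom(CHUNK)
    with open(path, "wb") as f:
        for _ in range(megabytes):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())

def run_test(label, paths):
    """Split TOTAL_MB evenly across `paths`, one writer thread per path."""
    share = TOTAL_MB // len(paths)
    start = time.time()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        futures = [pool.submit(fill, p, share) for p in paths]
        for fut in futures:
            fut.result()
    print(f"{label}: {TOTAL_MB / (time.time() - start):.1f} MB/s")

# Hypothetical mount points, one per virtual disk presented by the array.
run_test("one 100 GB LUN ", ["/mnt/lun0/test.dat"])
run_test("four 25 GB LUNs", [f"/mnt/lun{i}/test.dat" for i in range(4)])
```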
10-10-2007 07:41 AM
Solution
10-10-2007 06:20 PM
Re: EVA 5000 disk performance question
Another factor is that the EVA performs an implicit erase after a virtual disk has been created. During that time the write-back cache is disabled, and the erase itself consumes resources.
10-11-2007 04:38 AM
Re: EVA 5000 disk performance question
In the Windows world, the default setting applies the queue depth per target (i.e., if the queue depth is set to 32, that limit applies to the HBA as a whole, and all LUNs share the same 32 queue entries). HBAs do allow this setting to be changed to per-LUN, so that each individual LUN is assigned its own queue.
With that kind of mechanism, a configuration with a single 100 GB disk and one with four 25 GB disks will show different performance numbers.
In my opinion, having multiple disks (at least two) will give you better performance if you have dual-HBA connectivity configured. Better results still can be achieved if the queue depth is set according to the application's requirements and recommendations.
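To put rough numbers on the queue-depth effect, here is a back-of-the-envelope Little's Law calculation. The queue depths, I/O size, and service time are assumed values for illustration only, and the model only captures the concurrency cap; it ignores where the array or the FC links actually saturate.

```python
def throughput_mb_s(outstanding_ios, service_time_ms, io_size_kb=64):
    """Little's Law: IOPS ~= outstanding I/Os / average service time."""
    iops = outstanding_ios / (service_time_ms / 1000.0)
    return iops * io_size_kb / 1024.0

TARGET_QD = 32       # queue shared by every LUN behind the target/HBA
PER_LUN_QD = 32      # queue granted to each LUN when set per-LUN
SERVICE_MS = 5.0     # assumed average I/O service time on the array

print("1 LUN, shared target queue :",
      round(throughput_mb_s(TARGET_QD, SERVICE_MS)), "MB/s")
print("4 LUNs, shared target queue:",
      round(throughput_mb_s(TARGET_QD, SERVICE_MS)), "MB/s (still capped at 32)")
print("4 LUNs, per-LUN queues     :",
      round(throughput_mb_s(4 * PER_LUN_QD, SERVICE_MS)), "MB/s")
```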
10-11-2007 08:40 AM
Re: EVA 5000 disk performance question
10-11-2007 08:53 AM
Re: EVA 5000 disk performance question
I think the main effect you are seeing is that the database application limits the number of I/Os it queues to a single disk unit, and that is something the OS can't do anything about.
Another effect is that with VMS each LUN only uses one path, so creating multiple LUNs lets you balance the load across host FC adapters, controllers, and so on, but I think the first effect is probably much more significant.
The EVA is great at processing many outstanding I/Os. At this site I was testing for a (VMS) data migration from HSG80 to EVA. With this database (Cache) it was the number of database file migration jobs that determined how many I/Os were queued to the EVA. Even when many database files were on the same LUN, throughput went WAY up as more database files were migrated simultaneously. The number of LUNs was not as much of a bottleneck as the number of outstanding I/Os queued to the storage array.
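As an illustration of that last point, the migration can be driven with several concurrent copy jobs so that more I/Os stay outstanding, even against a single LUN. The file names below are hypothetical, and a plain file copy stands in for the real migration job.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source files and destination directory; stand-ins for the real
# database files and migration procedure.
database_files = [f"/eva_old/db/file{i:02d}.dat" for i in range(16)]
destination = "/eva_new/db/"

def migrate(path):
    shutil.copy(path, destination)   # placeholder for the actual migration job
    return path

# max_workers controls how many migration jobs run at once, and therefore how
# many I/Os stay queued to the array; raising it is what drives throughput up.
with ThreadPoolExecutor(max_workers=8) as pool:
    for finished in pool.map(migrate, database_files):
        print("migrated", finished)
```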