06-08-2005 12:27 AM
Design of Disk Systems with EVA-3000 and SecurePath
A year or so ago we implemented a pair of EVA-3000s and SecurePath at our HP-UX site, and I could not find very much information about the "one big disk" approach proposed by the vendor (250GB on one disk, with SecurePath between the OS and the array).
I have since been troubleshooting a disk performance monitoring problem in MeasureWare and OpenView and have discovered the following issues. I hope they contribute to the knowledge pool about using large disks and intermediary performance software.
I created a 112GB disk on an EVA-3000 and carved out 108GB of database space in 16 lvols of varying sizes (ranging from 2.6GB to 15GB – basically a partial copy of our production Oracle database). I then backed it up using 1, 2 and 3 LTO-2 drives over a 2Gb SAN to simulate load, measuring the load at varying intervals using "glance". I then deleted and recreated the space as 8 x 14GB disks, altered them in SecurePath to use the same fibre card (no load balancing or round robin configured in either case), and re-ran the backups.
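For reference, the test loop looked roughly like the following. This is only a sketch: device and path names are made up, and sar is shown in place of interactive glance for capturing the %busy and queue figures.

```shell
# Rough sketch of the test, with hypothetical device/path names.
# One of the 16 lvols carved out of the single EVA vdisk:
lvcreate -L 2662 -n lvol_data01 /dev/vg_eva    # ~2.6GB

# Generate the load: back the database filesystems up to an LTO-2:
fbackup -f /dev/rmt/0m -i /oradata &

# Sample per-disk utilisation and average queue while the backup runs
# (the original test used glance interactively for the same figures):
sar -d 5 60
```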
Results:
Backing up 1 physical disk drive to 3 LTO-2s incurred 180-200K blks/sec average, 100% disk utilisation and disk queues of 14-35.
Backing up 8 physical disk drives to 3 LTO-2s incurred 180-200K blks/sec average, 40-90% disk utilisation and disk queues of 0.5-1.0.
In short, using fewer, larger disks on SecurePath will render your local disk performance statistics meaningless. If you want accurate disk performance information, you are better off looking at the array-specific tools. We have basically had to turn off our MeasureWare alerts for our production database disks in OpenView, because they are almost always at 100%.
An interesting point to note: backup time was no different with one logical disk or eight, so performance does not appear to be an issue when choosing how to group disks.
Building a dumber user
2 REPLIES
07-01-2005 12:59 AM
Re: Design of Disk Systems with EVA-3000 and SecurePath
In heavy-load environments with more than just one server being busy, you can gain some performance by creating smaller Vdisks and combining them at the OS level, because you can then load-balance across the FC HBAs and also the EVA host ports. If you have just one "big" drive, all the traffic to that drive goes through only one EVA host port...
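A sketch of the approach described above, assuming HP-UX LVM and hypothetical device paths: present several small EVA vdisks down different paths, then stripe one logical volume across them so I/O fans out over multiple FC HBAs and EVA host ports.

```shell
# Two small vdisks, each reached via a different HBA/host port
# (device paths are hypothetical):
pvcreate /dev/rdsk/c10t0d1
pvcreate /dev/rdsk/c12t0d1

# Create the volume group (HP-UX needs the group device file first):
mkdir /dev/vg_ora
mknod /dev/vg_ora/group c 64 0x010000
vgcreate /dev/vg_ora /dev/dsk/c10t0d1 /dev/dsk/c12t0d1

# Stripe the lvol across both vdisks:
# -i 2 = two stripes, -I 64 = 64KB stripe size
lvcreate -i 2 -I 64 -L 28672 -n lvol_ora /dev/vg_ora
```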
B
Nothing is impossible for those that don't have to do it themselves!
07-01-2005 04:01 AM
Re: Design of Disk Systems with EVA-3000 and SecurePath
A single host port can handle up to 2048 outstanding I/Os, but the default queue depth on HP-UX is 8 I/Os per LUN. Even so, it makes sense to use at least one more virtual disk so that you can assign it to the second controller.
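As a rough sketch of the arithmetic in the reply above (the 2048 and 8 figures are taken from it; the device path in the comment is hypothetical):

```shell
# At the HP-UX default depth of 8, one LUN can keep only 8 I/Os
# outstanding against a port that accepts 2048 in flight:
port_capacity=2048
depth_per_lun=8
echo $((port_capacity / depth_per_lun))    # prints 256 (LUNs to fill the port)

# The per-LUN depth can be raised on HP-UX, e.g.:
# scsictl -m queue_depth=16 /dev/rdsk/c10t0d1
```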