Operating System - HP-UX
Re: Maximum read performance using 2x EVA3000
02-12-2004 10:21 PM
Hello,
I'm cross-posting this from the storage forum, since it also has an LVM part.
Here's the situation:
1x HP-UX 11.00 box with 2x HBAs, connected to a SAN with 2x EVA3000. On both EVA3000s I've configured two LUNs of the same size and presented them through different preferred paths, so that SecurePath's load balancing utilizes all four controllers.
On the host side, all LUNs are located in the same VG, and the logical volumes are striped across all four LUNs.
My application is very read-intensive:
usually 4-7 simultaneous large (50 GB) sequential reads are performed. The application reads files of ~20-50 MB in 256-byte chunks into a 4 MB buffer.
I see two possible RAID configurations on the EVAs - vRAID1 or vRAID5. Which one should I go for?
The second question is the LVM configuration on the host.
The VG is configured with 8 MB PEs, and the lvols are created with "lvcreate -i 4 -I 1024 -r N".
The FS is VxFS.
I assume the VxFS block size could also be tuned, as could the buffer cache - currently I've set it to dynamic, 5-20% of 12 GB. I don't think that's the best setting, but that's how it is at the moment.
So what are your thoughts on the best configuration for such a case?
Br,
Darius
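For reference, the striped setup described above would be built with commands roughly along these lines. This is a sketch, not the poster's exact procedure: the device file names, VG name, and lvol size are placeholders, and the VxFS block size shown is just one of the tunable values mentioned.

```shell
# Initialize the four EVA LUNs as physical volumes
# (device paths below are placeholders for the actual LUN device files)
pvcreate /dev/rdsk/c10t0d1
pvcreate /dev/rdsk/c10t0d2
pvcreate /dev/rdsk/c12t0d1
pvcreate /dev/rdsk/c12t0d2

# Create the VG with 8 MB physical extents (-s 8), as in the post
mkdir /dev/vgeva
mknod /dev/vgeva/group c 64 0x010000
vgcreate -s 8 /dev/vgeva /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 \
    /dev/dsk/c12t0d1 /dev/dsk/c12t0d2

# Stripe the lvol across all 4 LUNs (-i 4) with a 1024 KB stripe
# size (-I 1024) and bad-block relocation off (-r N); -L is in MB
lvcreate -i 4 -I 1024 -r N -L 200000 -n lvdata /dev/vgeva

# Create the VxFS filesystem; bsize selects the block size being
# discussed (8192 is the largest VxFS block size)
newfs -F vxfs -o bsize=8192 /dev/vgeva/rlvdata
```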
"In pure practice - everything works, but nothing is clear. In theory - everything is clear, but nothing works. In the most favorable case, when theory meets practice - nothing works and nothing is clear"
Solved! Go to Solution.
1 REPLY
05-28-2004 05:06 AM
Solution
A vRAID1 configuration is going to give you better read I/O performance. That said, with SecurePath installed and the relatively small size of your data set, I don't think you will see much of a difference between vRAID1 and vRAID5 LUNs. The best practice is to match your FS block size to your application's block size, and so on down to the array level.
You could decrease dbc_max_pct a bit, but 20% is okay unless you want to free more memory for other OS processes rather than buffer cache.
You could use the sar -d command to collect stats under a vRAID1 and a vRAID5 configuration and compare them, though you will likely see very little difference in the numbers sar reports with this small data set. Are you actually having a problem with I/O performance?
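A measurement run along the lines suggested might look like the following. The sampling interval, count, and output file names are illustrative, and the tunable-query commands assume HP-UX 11.00's kmtune (later releases replaced it with kctune):

```shell
# Collect disk stats every 5 seconds for 5 minutes while the
# read workload runs against the vRAID1 LUNs; repeat on vRAID5
# and diff the avserv/avwait columns
sar -d 5 60 > /tmp/sar_vraid1.out

# Inspect the current dynamic buffer cache bounds
kmtune -q dbc_min_pct
kmtune -q dbc_max_pct

# Example: cap the buffer cache at 10% of RAM instead of 20%
# (on 11.00 this takes effect after a kernel rebuild and reboot)
kmtune -s dbc_max_pct=10
```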
Regards,
Curtis M. Wheatley
Skilled workers are always in need
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2025 Hewlett Packard Enterprise Development LP