10-18-2005 05:05 AM
MA8000 with Veritas Volume Manager in RAID-50 on WS03
I am using an HP MA8000 system (2 HSG80s, 84 x 146 GB drives) with ACS 8.6F-10 to create (vertical) 6-disk RAID-5 arrays across the six I/O ports. Each of these arrays is a separate LUN.
I then take two LUNs and stripe them in Veritas Volume Manager 4.0 to create a two-column RAID-50 array.
I am doing this to get around the MA8000's maximum volume size (1.024 TB) and to get the performance benefit of striping across LUNs that are online to alternate controllers.
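For reference, here is my rough back-of-the-envelope check of the sizes involved (treating each 146 GB drive as roughly 136 GiB formatted, which is my own approximation; a quick Python sketch, not anything read from the controller):
# Rough capacity arithmetic for this RAID-50 layout (all figures approximate).
DRIVE_GIB = 136                              # assumed usable capacity of a "146 GB" drive
RAID5_MEMBERS = 6                            # 6-disk RAID-5 = 5 data + 1 parity
lun_gib = (RAID5_MEMBERS - 1) * DRIVE_GIB    # ~680 GiB per RAID-5 LUN, under the 1.024 TB unit limit
volume_gib = 2 * lun_gib                     # ~1,360 GiB for the two-column stripe
print(lun_gib, volume_gib)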
I have read a lot about how to do this and I know that it is very important to match chunksize, stripe unit width, and NTFS allocation unit size so that you don't create a thrashing monster.
Currently my arrays are using the following parameters:
MA8000 RAID-5 array: CHUNKSIZE = 128 (64 KB)
VManager RAID-0 array: Stripe Unit Width (sectors) = 128
NTFS allocation unit size = 65,536
LUN1 is presented to Controller A
LUN2 is presented to Controller B
The non-clustered server has two HBAs:
HBA1 can see LUN1 only
HBA2 can see LUN2 only
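For completeness, here is the quick arithmetic I used to convince myself that the three layers above line up (assuming both CHUNKSIZE and the stripe unit width are expressed in 512-byte blocks/sectors; a small Python sketch):
# Verify that controller chunk, VxVM stripe unit and NTFS cluster are all 64 KB.
BLOCK = 512                              # bytes per block/sector
hsg_chunk  = 128 * BLOCK                 # CHUNKSIZE = 128 blocks -> 65,536 bytes
vxvm_unit  = 128 * BLOCK                 # stripe unit width = 128 sectors -> 65,536 bytes
ntfs_alloc = 65536                       # NTFS allocation unit size in bytes
assert hsg_chunk == vxvm_unit == ntfs_alloc == 64 * 1024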
When I monitor the disks, I can see that the reads and writes are split evenly across the LUNs as expected.
With this configuration, I am getting sustained reads in the awful range of 7-25 MB/s instead of approaching the 1 Gbit/sec FC practical max of 60-80 MB/s.
Do you see anything weird with this configuration, or do you have any practical experience with this kind of setup?
Thanks very much in advance,
Brit Davis
Senior Network Engineer
Generation IX Technologies
brit@generationix.com
(310) 477-4441
"If you think an expert is expensive, wait till you hire an amateur..."
10-18-2005 05:38 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
> MAX_READ_CACHED_TRANSFER_SIZE = 32
> MAX_WRITE_CACHED_TRANSFER_SIZE = 32
Those are the names and defaults on ACS V8.7, but I think they are already present in V8.6, too.
(Great slogan! May I borrow it when I feel like an amateur?)
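For what it's worth, here is what that default looks like next to your 64 KB stripe unit, assuming the value is counted in 512-byte blocks and that reads larger than it bypass the read cache (that is my understanding of the parameter; a quick Python sketch):
# Compare the default cached-transfer limit with the host's 64 KB stripe unit.
BLOCK = 512
max_cached_read = 32 * BLOCK             # 32 blocks -> 16,384 bytes = 16 KB
stripe_unit     = 128 * BLOCK            # 128 sectors -> 65,536 bytes = 64 KB
print(max_cached_read < stripe_unit)     # True: 64 KB reads exceed the default limit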
10-18-2005 06:04 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
You are right about the cache settings being in 8.6 as well. This volume is serving files that average 1-4 GB each (oil/gas exploration). I had initially thought that raising the cache values to something above 32 KB would be beneficial until I realized the requests were so much bigger. Now that you bring it up, would it make sense to make it 64 KB to match the chunk/stripe/AllocUnit size, or something else?
Also, you say that the HSGs are not optimized for bandwidth; do you mean generally or in this configuration? And do you know if I am correct in assuming that I will get better performance splitting the LUNs between two controllers vs. putting both LUNs on the same controller? It sure seems like two LUNs/controllers/switches/HBAs would be the right thing to do (?).
-Brit
(I love the slogan too...I stumbled across it somewhere a while back so I think it's fair to say it's available for borrowing a bit more :) )
10-18-2005 06:28 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
I first met the HSG80 in late 1998 (yes, ninety-eight), so it's not the latest and greatest technology today, but I don't think that surprises you ;-)
You can easily raise the unit's parameters and see if it has any effect. I'd have to dig out some old papers, but if I recall correctly the maximum throughput is about 55 megabytes/sec (with special benchmark programs, I'd assume). I think it absolutely makes sense to use both controllers (or rather, their CPU and memory capacity).
The HSG has a feature called 'adaptive RAID3/5'. Well, it is not a true RAID3, but on large sequential transfers it will try to collect all chunks in the writeback cache, calculate the parity once and write the whole stripe to disks. It's possible that your I/O sizes are too small for this to happen - especially if you have another striping layer on the host.
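To put a rough number on that (assuming your 6-member RAID-5 with 64 KB chunks, i.e. 5 data chunks per stripe; a quick Python sketch, not HSG internals):
# Size of one full RAID-5 stripe the controller would want to gather in cache.
CHUNK_BYTES  = 64 * 1024                     # CHUNKSIZE = 128 blocks = 64 KB
DATA_MEMBERS = 5                             # 6-disk RAID-5 = 5 data + 1 parity
full_stripe  = DATA_MEMBERS * CHUNK_BYTES    # 327,680 bytes = 320 KB
# With VxVM striping at 64 KB on top, each LUN tends to see 64 KB pieces,
# well short of a 320 KB full stripe, so full-stripe writes become unlikely.
print(full_stripe // 1024, "KB")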
Can VVM create a 'volume set' (two units just concatenated) instead of a stripeset? If your 6-member RAID-5 is smaller than 512 GB, you could also create a 12-member RAID-5, or create a so-called CONCATSET on the HSG, which links two RAID-5 storagesets to form one large unit (still <= 1 TB).
10-18-2005 06:54 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
It's comforting to know that you've known this controller for so long, and yes, while not the latest and greatest, it is still chugging along.
I have already raised the cache values to 64 KB and retested the transfer rate. It basically had no effect. What is interesting is that for the first 5 or so seconds, I get 40-50 MB/s, then it drops off quickly to 5-15 MB/s for the remainder of the transfer.
The HSGs are using multibus failover and are maxed as far as the cache is concerned (mirrored 256 MB). Is there a CPU upgrade?
Thanks for explaining the 'adaptive RAID3/5' concept. I remember this but never got it. Now it makes sense. For now, though, write performance is on the back burner until I understand why the read performance is so poor.
I have a VVM concatset online with the same data. Its performance is not much better, but it is using chunksize=256 and AllocUnit=4KB, so it is not apples-to-apples. I have several variations on the volumes; see below for the relative performance differences. I considered doing an HSG concatset, but I can only do it once. I need to be able to continue to add to the volume beyond the 1.3 TB VVM volumes I currently have, so the HSG concatset is not as desirable, especially since I can't get the benefit of a stripe with it.
Here's the raw data I've collected so far, with various configurations:
*******
- From DP-int (2 column stripe)
chunksize = 128
stripe unit width = 64 SECTORS
NTFS Alloc Unit Size = 65,536
path = 1A1 and 2B2
2 GB file to D: takes ~60 seconds or average 35 MB/s.
2 GB file to D: takes ~90 seconds or average 22 MB/s. Peaks at 55 MB/s.
2 GB file to D: takes ~90 seconds or average 22 MB/s. Peaks at 55 MB/s.
- From DP-Seismic (single volume)
chunksize = 128
NTFS Alloc Unit Size = 4,096
path = 1A1
2 GB file to D: takes ~90 seconds or average 22 MB/s. Peaks at 35 MB/s.
- From DP-Shared (single volume)
chunksize = 128
NTFS Alloc Unit Size = 4,096
path = 2B2
2 GB file to D: takes ~90 seconds or average 22 MB/s. Peaks at 35 MB/s.
- From DO-Seismic (LUNs 8 and 9, 2 column stripe)
stripe unit width = 128 SECTORS
NTFS Alloc Unit Size = 65,536
slack = 50 MB
path = 1B1 and 2A2
2 GB file to E: takes ~130 seconds or average 15 MB/s. Peaks at ?
2 GB file to E: takes ~240 seconds or average 7 MB/s. Peaks at 40 MB/s.
- From DO-Seismic-Old to D: (LUNs 12 and 13, 2 volume concat set)
chunksize = 256
NTFS Alloc Unit Size = 4,096
slack = 1.8 MB
path = 1B1 (LUN 12) and 2B2 (LUN 13)
2 GB file to E: takes ~250 seconds or average 7 MB/s Peaks at 40 MB/s.
*******
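(The averages above are simply file size divided by elapsed time; a quick Python sketch of that arithmetic, assuming "2 GB" means 2,048 MB here, lands within a couple of MB/s of the figures quoted:)
# Recompute the average transfer rates from the raw timings above.
FILE_MB = 2048                           # assumed size of the 2 GB test file in MB
for secs in (60, 90, 130, 240, 250):
    print(secs, "s ->", round(FILE_MB / secs), "MB/s")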
Please do let me know if any of this data helps create any "ah-ha!" moments.
-Brit
10-19-2005 07:43 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
Have you monitored the HSG80s using vtdpy? It is accessible from the controller console serial port. You need a terminal emulator capable of displaying 132 columns by 48 lines.
Check the individual disks and their queues and also the cache usage.
Another thought: how are your server and the FC adapters configured? I hope they are located on different PCI buses. Which adapters are you using? Do they have the latest firmware? How about Secure Path? Is it a recent version with the latest service packs?
Regards,
Kari
10-20-2005 08:01 AM
Re: MA8000 with Veritas Volume Manager in RAID-50 on WS03
I started using the utility today. So far, I am seeing very good cache hit ratios but still poor overall KB/s. I/O is balanced nicely across all 6 SCSI ports. Do you know what range of numbers I might see for various throughput values while using DILX on a 6-disk RAID-5 volume? I am using six 4314 shelves.
As far as the HBAs go, they are in an HP ML370 G2 and are the KGPSA-CB model. The firmware is recent enough to be qualified with Veritas Volume Manager 4.0 (I'm not using Secure Path; I use the DMP driver in VxVM).
Bottom line is that I would really like to know what is reasonable to expect, and especially whether my approach using striped LUNs with matching chunksize/stripe unit width/allocation unit size is the right way to go.
Please let me know if you can point out any other bits to look for. I really appreciate your help!
-Brit