Arsenio B. Talingdan
Occasional Contributor


We need to configure an XP128 for optimum application I/O performance. Six to ten applications will access the disk array at the same time, against different files ranging in size from 20 GB to several hundred GB. The cumulative I/O is several terabytes. The basic characteristic of each application is long sequential reads in parallel with long sequential writes, followed by large random reads (4 MB each) in parallel with long sequential writes. All I/O requests are several MB each. The XP128 is linked with 6 FCs to a 16-way RP8400.

Suggestions on LUNs, stripe size, RAID level? Any help in avoiding a bad configuration will be greatly appreciated.
Steve Lewis
Honored Contributor

Re: XP128

In my opinion, avoid RAID 5 where you have large sequential operations. Even with modern fast spindle speeds, RAID 5 just doesn't provide the performance, because of the large latency you incur on writes (every write also has to update parity). Even if most of the ops are reads, you still have to repopulate the data every so often, and those operations are writes.

I recommend RAID 1/0 and spending more on disks.

I find that a good rule of thumb for stripe size is half the buffer RAM of each spindle.

Be careful when positioning data over different LUNs. I have found from experience that giving a separate LUN to each application is actually a bad move, since it tends to saturate one controller at a time with all the I/O while other controllers sit idle. Better to have more LUNs and spread all the data over all of them. Then add the LUNs to your storage (VGs or raw) evenly across all controllers. Having said that, it does pay to separate data within each application onto different LUNs, e.g. where you move data from one area to another, different tablespaces and so on.
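To illustrate the advice above, here is a minimal HP-UX LVM sketch: one volume group built from LUNs presented behind different controller ports, with each logical volume striped across all of them. The device file names and sizes below are made up for illustration; substitute the paths your ioscan actually shows.

```shell
# Hypothetical device files - one LUN per controller port.
pvcreate /dev/rdsk/c4t0d0
pvcreate /dev/rdsk/c6t0d0
pvcreate /dev/rdsk/c8t0d0
pvcreate /dev/rdsk/c10t0d0

# One VG spanning all four LUNs, so no single controller is a hot spot.
vgcreate /dev/vgdata /dev/dsk/c4t0d0 /dev/dsk/c6t0d0 \
                     /dev/dsk/c8t0d0 /dev/dsk/c10t0d0

# Stripe the LV across all 4 LUNs (-i 4) with a 64 KB stripe (-I 64);
# -L 20480 requests a 20 GB logical volume.
lvcreate -i 4 -I 64 -L 20480 -n lvapp1 /dev/vgdata
```

Per-application separation then happens at the LV level (one LV per tablespace or data area), while every LV still touches all controllers.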

If you have Serviceguard, then keep a LUN private to each package.

Lastly have you read the Oracle document on SANE (stripe and mirror everything)? It holds some useful pointers and can be applied to disk arrays as well, but it can be taken with a pinch of salt, as it has caused some debate amongst DBAs.

Steve Lewis
Honored Contributor

Re: XP128

Sorry that should have been SAME, not SANE.
Plus I forgot to mention the cache RAM on the controllers. If you hammer the array with too many writes for too long, the cache will eventually fill up and writes will slow to back-end disk speed. You will have to check the stats regularly.

Vincent Fleming
Honored Contributor

Re: XP128

You give your total capacity needs, but what is far more important is an estimate of your read and write rates. How much do you expect this to write per second? How much do you expect it to read per second?

If you haven't ordered your XP yet, note that the XP128 can have two basic configurations: one or two ACP pairs.

For your application, back-end performance would be key, so I would suggest using 2 ACP pairs.

As for CHIP pairs, I would suggest using 2 CHIP pairs (16 ports) rather than 1, because 2 will double your bandwidth to the cache.

6 FCs sound fine (6 x 200 MB/s = 1.2 GB/s), but be careful with the LVM configuration. With PV-Links, be sure that you have primary LUNs on every port. Optionally, use AutoPath. If you can, use more FCs - more links will always give more performance.
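One way to read the PV-Links advice: in HP-UX LVM, the first path added to a volume group for a given LUN becomes its primary link and any later path for the same LUN becomes the alternate. So alternating which port you name first spreads primaries across all FC ports. A hypothetical sketch (made-up device files; each LUN is visible on ports c4 and c6):

```shell
# LUN 0: primary path on port c4, alternate on c6.
vgcreate /dev/vgdata /dev/dsk/c4t0d0
vgextend /dev/vgdata /dev/dsk/c6t0d0

# LUN 1: primary path on port c6, alternate on c4 -
# now both ports carry primary traffic.
vgextend /dev/vgdata /dev/dsk/c6t0d1
vgextend /dev/vgdata /dev/dsk/c4t0d1
```

With more ports and more LUNs, continue the rotation so every FC link owns a share of the primaries; `vgdisplay -v` will show which path is primary and which is the alternate link.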

Cache - you will need a lot of cache. Don't forget that your writes are mirrored on an XP. Your maximum usable write cache is approximately total_cache/4; i.e. 32 GB of cache will net you about 8 GB of write cache. Size your cache to be a minimum of 4x your maximum write rate.
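The cache arithmetic above is simple enough to sketch. The 400 MB/s peak write rate below is an illustrative number, not something from this thread:

```shell
# Writes are mirrored in cache, and roughly half the cache serves reads,
# so usable write cache is about total_cache / 4.
total_cache_gb=32
write_cache_gb=$((total_cache_gb / 4))
echo "usable write cache: ${write_cache_gb} GB"

# The sizing rule the other way around: for an assumed peak write rate
# of 400 MB/s, provision at least 4x that in cache.
peak_write_mb_s=400
min_cache_mb=$((4 * peak_write_mb_s))
echo "minimum cache for this write rate: ${min_cache_mb} MB"
```

If sustained writes exceed what the ACPs can destage, this headroom only buys time before the cache fills, which is why the write rate estimate matters more than the raw capacity.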

Speaking of write rate, you will need enough drives to sustain your write rate, and enough ACPs as well.

A well-configured XP128 can sustain about 430MB/s sequential (64K) write rate, with 100+ drives. An XP1024 can do twice that. Of course, it can burst way higher.

Now drives... the 36 GB/15K drives are faster than the 73 GB drives (of course), but limit your capacity. You will probably need a minimum of about 10 array groups, but this will vary based on your actual write rate.

Good luck!

No matter where you go, there you are.
Dave Wherry
Esteemed Contributor

Re: XP128

On an XP I have seen no problems with using Raid-5 for either reads or writes. All of your writes go directly to cache, not to disk, so you really have no latency there, provided you have configured the array with enough cache. I suggest a lot of cache.
As for Raid-5 and sequential reads, the read-ahead functionality of the XP gives you great read performance. Much of the read activity is also serviced from cache.
A couple of years ago I converted all of our Raid-1 to Raid-5 on our XP256. At the time I had obtained a white paper from an engineer at Hitachi. Along with the white paper he said: "The RAID-5+ has very good performance characteristics depending on your workload. The problem with a RAID-5+ configuration is going to be small block random writes. Sequential reads and writes, and multi-threaded random reads are the performance strengths of our RAID-5+."

The two areas I think you should focus on are, as I said earlier: first, lots of cache; second, spreading your I/O across your interfaces. Stripe the logical volumes across those interfaces. If you balance across the interfaces, the XP will do a good job of balancing on the back end.
Arsenio B. Talingdan
Occasional Contributor

Re: XP128

Thanks for the replies. Vincent and Dave, thanks for the comments on cache, ACP pairs, and CHIP pairs. I am unfamiliar with CHIP pairs, though. I am still not sure about the number of LUNs and the stripe size. There are 128 disks and the read/write rate is continuous. Can the stripe size be set larger than 64 KB?