06-27-2002 02:55 PM
Suggestions on LUNs, stripe size, RAID level? Any help in avoiding a bad configuration will be greatly appreciated.
06-28-2002 02:44 AM
I recommend RAID 1/0 and spending more on disks.
I find that a good rule of thumb for stripe size is half the buffer RAM of each spindle.
Be careful when positioning data over different LUNs. I have found from experience that giving a separate LUN to each application is actually a bad move, since it tends to saturate a single controller at a time with all the I/O while the other controllers sit idle. Better to have more LUNs and spread all data over all of them. Then add the LUNs to your storage (VGs or raw) across all controllers evenly. Having said that, it does pay to separate data within each application onto different LUNs, e.g. where you move data from one area to another, different tablespaces and so on.
If you have Serviceguard, then keep a LUN private to each package.
Lastly, have you read the Oracle document on SAME (Stripe And Mirror Everything)? It holds some useful pointers and can be applied to disk arrays as well, but take it with a pinch of salt, as it has caused some debate amongst DBAs.
06-28-2002 04:14 AM
Plus I forgot to mention the cache RAM on the controllers. If you hammer the array with too many writes for too long, then this will eventually fill up. You will have to check the stats regularly.
06-28-2002 05:15 AM (Solution)
If you haven't ordered your XP yet, note that the XP128 can have two basic configurations: one or two ACP pairs.
For your application, back-end performance would be key, so I would suggest using 2 ACP pairs.
As for CHIP pairs, I would suggest using 2 CHIP pairs (16 ports) rather than 1, because two pairs will double your bandwidth to the cache.
Six FCs sound fine (6 x 200MB/s = 1.2GB/s), but be careful with the LVM configuration. With PV-Links, be sure that you have primary LUNs on every port. Optionally, use AutoPath. If you can, use more FCs: more ports will always give more headroom.
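The link arithmetic above can be sketched as a quick check. This is a hypothetical helper, assuming the post's ~200MB/s-per-FC-port figure; actual throughput per port depends on link speed and protocol overhead:

```python
# Assumed per-port figure from the post: a 2Gb FC port ~ 200 MB/s usable.
FC_PORT_MB_S = 200

def aggregate_bandwidth_mb_s(num_ports: int, per_port_mb_s: int = FC_PORT_MB_S) -> int:
    """Best-case aggregate host bandwidth across all FC ports (theoretical)."""
    return num_ports * per_port_mb_s

print(aggregate_bandwidth_mb_s(6))  # 1200 MB/s, i.e. the 1.2GB/s in the post
```

Remember this is an upper bound: you only get near it if primary paths are spread over every port, as the PV-Links note above says.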
Cache: you will need a lot of it. Don't forget that your writes are mirrored on an XP, so your maximum usable write cache is approximately total_cache/4; i.e., 32GB of cache nets you about 8GB of write cache. Size your cache to be a minimum of 4x your maximum write rate.
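The cache rule of thumb above reduces to two divisions. A minimal sketch, assuming the post's total_cache/4 ratio and treating the "maximum write rate" as a GB figure over whatever interval you size for (the original leaves the units loose):

```python
def usable_write_cache_gb(total_cache_gb: float) -> float:
    """Writes are mirrored and cache is shared, so usable write cache ~ total/4."""
    return total_cache_gb / 4

def min_total_cache_gb(max_write_gb: float) -> float:
    """Minimum total cache: 4x the maximum write figure, per the post."""
    return 4 * max_write_gb

print(usable_write_cache_gb(32))  # 8.0 -- the 32GB -> 8GB example above
```

If the workload can outrun this for long, the earlier warning applies: write cache fills and the array falls back to disk speed, so watch the stats.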
Speaking of write rate, you will need enough drives to sustain your write rate, and enough ACPs as well.
A well-configured XP128 can sustain about 430MB/s sequential (64K) write rate, with 100+ drives. An XP1024 can do twice that. Of course, it can burst way higher.
Now drives... the 36GB/15K drives are faster than the 73s (of course), but they limit your capacity. You will need a minimum of about 10 array groups, but this will vary based on your actual write rate.
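The drive-count estimate above can be roughed out from the 430MB/s-with-100-drives figure. This is a hypothetical back-of-the-envelope that assumes linear scaling, which real arrays only approximate; the per-drive number is derived from the post, not a spec:

```python
import math

XP128_SUSTAINED_MB_S = 430  # sequential 64K write rate from the post
XP128_DRIVES = 100          # achieved with 100+ drives

def drives_needed(target_write_mb_s: float) -> int:
    """Scale the post's sustained-write figure linearly (an assumption)."""
    return math.ceil(target_write_mb_s * XP128_DRIVES / XP128_SUSTAINED_MB_S)

print(drives_needed(215))  # 50 drives for half the sustained rate
```

Whatever this yields, cross-check it against the ACP-pair advice earlier: drives only help if the back end has the controllers to feed them.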
06-28-2002 06:12 AM
As for RAID-5 and sequential reads: with the read-ahead functionality of the XP you get great read performance, and much of the read activity is also serviced from cache.
A couple of years ago I converted all of our RAID-1 to RAID-5 on our XP256. At the time I had obtained a white paper from an engineer at Hitachi. Along with the white paper he said: "The RAID-5+ has very good performance characteristics depending on your workload. The problem with a RAID-5+ configuration is going to be small block random writes. Sequential reads and writes, and multi-threaded random reads are the performance strengths of our RAID-5+."
The two areas I think you should focus on are, as I said earlier: first, lots of cache; second, spreading your I/O across your interfaces. Stripe the logical volumes across those interfaces. If you balance across the interfaces, the XP will do a good job of balancing on the back end.