xp1024 4D+4D raid 0/1 performance
03-26-2007 01:56 AM
I heard that 4D+4D RAID 0/1 gives only about a 50% performance gain over 2D+2D RAID 0/1. Is that true?
Suppose I have 128 disks; there are two options (a rough LVM sketch of what I mean follows below):
1) Create 2D+2D RAID 0/1 array groups (32 groups in total), create one LDEV/LUN in each array group, build a VG from all 32 LUNs, and create logical volumes striped across the 32 LUNs.
2) Create 4D+4D RAID 0/1 array groups (16 groups in total), create one LDEV/LUN in each array group, build a VG from all 16 LUNs, and create logical volumes striped across the 16 LUNs.
Both options spread the I/O across all 128 disks. Is there any performance difference between the two?
The application is Oracle 8i (OLTP).
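For concreteness, option 1 on HP-UX LVM would look roughly like this (device files, VG/LV names, and sizes are placeholders, not our real config):
# Option 1: 32 x 2+2 array groups, one LDEV/LUN each.
mkdir /dev/vgora
mknod /dev/vgora/group c 64 0x010000   # VG group file; minor number must be unique
pvcreate -f /dev/rdsk/c10t0d0          # repeat for each of the 32 LUNs
vgcreate /dev/vgora /dev/dsk/c10t0d0 /dev/dsk/c10t0d1   # ...plus the remaining 30 LUNs
lvcreate -i 32 -I 64 -L 32768 -n lvol_data /dev/vgora   # stripe the LV over all 32 LUNs
# Option 2 is identical except the VG holds 16 LUNs (one per 4+4 group)
# and the LV is created with "-i 16"; all 128 disks are busy either way.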
Thanks in advance.
-Xiang
03-26-2007 10:01 PM
Re: xp1024 4D+4D raid 0/1 performance
03-27-2007 01:46 AM
Solution: So both of your examples will have the same performance.
The main difference is the number of LUNs you have to manage and the granularity of your VG!
Cheers
XP-Pete
03-27-2007 03:28 AM
Re: xp1024 4D+4D raid 0/1 performance
The customer has a CA environment: two XP1024 arrays, one primary and one backup. The customer just doubled the number of disks to try to improve performance; the current system uses only 2D+2D RAID 0/1. If 4D+4D performance is okay, I'd like to configure the backup XP from scratch using 4D+4D, so the number of LUNs would stay the same as at the primary site. Then, after configuring CA, we can easily switch the Oracle DB to the backup site, reconfigure the primary XP the same way, and eventually switch the DB back to the primary site. Hopefully, Oracle I/O performance will improve a lot after the XP upgrade.
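With RAID Manager/CCI, the switchover I have in mind would be roughly the following (the device group name ORA_CA is made up, and I'm leaving out the pairsplit and cleanup steps in between):
horctakeover -g ORA_CA    # on the standby host: promote the 4+4 array to primary
# ...rebuild the old primary as 4+4, then re-create the pair and fail back:
paircreate -g ORA_CA -vl  # re-establish the CA pair from the new primary side
horctakeover -g ORA_CA    # final takeover back to the original site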
If we stick with 2D+2D, then to really balance the I/O we would have to do a painful manual data migration, perhaps using the dd command to copy each datafile/LV one by one. (With 4D+4D, we just leverage the CA mechanism; no manual data migration is needed.)
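Per logical volume, that manual copy would be something like the line below, with the database shut down first (the raw device paths are placeholders):
dd if=/dev/vgora_old/rlvol_data of=/dev/vgora_new/rlvol_data bs=1024k   # raw devices, large block size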
Does 4D+4D make sense in my scenario?
-Xiang
03-27-2007 06:48 AM
Re: xp1024 4D+4D raid 0/1 performance
You say that you would keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2.
So I would stay with 2+2 and deploy the DR XP the same way!
In your scenario you will not gain performance.
You have to make sure that your VG really stripes across all available 2+2 groups; then you are fine! (A quick check is shown below.)
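For example (VG/LV names are placeholders for yours):
vgdisplay -v /dev/vgora | grep "PV Name"   # every 2+2 LUN should be listed
lvdisplay -v /dev/vgora/lvol_data          # shows the extent distribution per PV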
Also see the attached XP / SAP config guide.
It was actually written for the XP12000, but most of it also applies to the XP1024.
Cheers
XP-Pete
03-27-2007 06:52 AM
Re: xp1024 4D+4D raid 0/1 performance
XP-Pete
03-28-2007 03:36 AM
Re: xp1024 4D+4D raid 0/1 performance
However, I'm a little more confused now than before. :-)
As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then shouldn't 2x 4+4 have double the performance of 2x 2+2?
If I keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2, why won't 4+4 give me any performance gain? After all, the I/O now spreads over more disks.
Maybe the reason is that a LUN of a given size in a 4+4 group actually uses only four disks, since your document describes a 4+4 group as just a "concatenation" of two 2+2 groups?
It also seems the guide you provided contradicts a whitepaper I read (attached): according to your guide, 4+4 has the same performance as 2+2, but according to the whitepaper, 4+4 gives a 50% performance gain over 2+2.
And back to my original question, there are two options:
1) 32 2+2 array groups, one LUN in each array group, with the logical volume striped over the 32 LUNs.
2) 16 4+4 array groups, one LUN in each array group, with the logical volume striped over the 16 LUNs.
(The LUN size is the same in both options; in our case it is OPEN-L, 36.4 GB.)
One of the drawbacks of option 1) is that we would have to do the data migration manually.
Which option will give me higher performance?
-Xiang
03-28-2007 04:02 AM
Re: xp1024 4D+4D raid 0/1 performance
>> As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then shouldn't 2x 4+4 have double the performance of 2x 2+2?
A: Yes, you are right!
>> If I keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2, why won't 4+4 give me any performance gain? After all, the I/O now spreads over more disks.
A: Yes, you are right again! I made a mistake when I said the same number!
In the 2+2 config you will end up with double the number of LUNs compared to 4+4:
performance of (N LDEVs on 4+4) = performance of (2N LDEVs on 2+2).
>> It also seems the guide you provided contradicts a whitepaper I read (attached): according to your guide, 4+4 has the same performance as 2+2, but according to the whitepaper, 4+4 gives a 50% performance gain over 2+2.
A: I do not agree. In the paper, on page 9, figure 4, you will see that all the values simply double for 4+4. As an example, take the purple line: for 2+2 it goes vertical at around 425 IOPS, and for 4+4 at around 850 IOPS.
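To put numbers on it (taking those figure-4 values purely for illustration):
echo $((32 * 425))   # 32 LUNs on 2+2 groups -> 13600 IOPS total
echo $((16 * 850))   # 16 LUNs on 4+4 groups -> 13600 IOPS total
The aggregate is the same either way.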
>> And back to my original question, there are two options:
1) 32 2+2 array groups, one LUN in each array group, with the logical volume striped over the 32 LUNs.
2) 16 4+4 array groups, one LUN in each array group, with the logical volume striped over the 16 LUNs.
(The LUN size is the same in both options; in our case it is OPEN-L, 36.4 GB.)
One of the drawbacks of option 1) is that we would have to do the data migration manually.
Which option will give me higher performance?
A: Both options will give exactly the same performance. You stripe over a total of 128 disks in either case.
Hope that helps
Cheers
XP-Pete
03-28-2007 04:45 AM
Re: xp1024 4D+4D raid 0/1 performance
Yes, "4+4 is a 50% performance gain over 2+2" was my mistake; in the whitepaper I posted, 4+4's performance actually doubles.
So, to conclude: since both options have nearly the same performance, I'll choose 4+4, because with 4+4 there is no need for manual data migration; we can just leverage CA. Manual data migration would require taking the production database offline for quite a while and copying the data files one by one, which is time-consuming and error-prone.
-Xiang