
SOLVED

## xp1024 4D+4D raid 0/1 performance

Hi,

I heard that 4D+4D RAID 0/1 offers only about a 50% performance gain over 2D+2D RAID 0/1. Is that true?

Suppose I have 128 disks; there are two options:
1) Create 2D+2D RAID 0/1 array groups (32 array groups in total), create one LDEV/LUN in each array group, build a VG from all 32 LUNs, and create a logical volume striped over all 32 LUNs.
2) Create 4D+4D RAID 0/1 array groups (16 array groups in total), create one LDEV/LUN in each array group, build a VG from all 16 LUNs, and create a logical volume striped over all 16 LUNs.
Both options spread I/O over all 128 disks. Is there any performance difference between them?
The application is Oracle 8i (OLTP).
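For concreteness, option 1) on HP-UX LVM might look roughly like the sketch below. This is illustrative only: the device paths, volume group name, LV name, and sizes are hypothetical, not taken from the thread.

```shell
# Hypothetical sketch of option 1): 32 LUNs, one per 2D+2D array group.
# Initialize each LUN as a physical volume (repeat for all 32 raw device paths):
pvcreate /dev/rdsk/c10t0d0

# Build one VG from all 32 block device paths (only two shown here):
vgcreate /dev/vgora /dev/dsk/c10t0d0 /dev/dsk/c10t0d1   # ... all 32 LUNs

# Create the logical volume striped across all 32 LUNs (-i 32),
# with e.g. a 64 KB stripe size (-I 64) and a 100 GB size (-L in MB):
lvcreate -i 32 -I 64 -L 102400 -n lvdata /dev/vgora
```

Option 2) would be identical except with 16 LUN paths and `-i 16`.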

-Xiang
8 REPLIES

## Re: xp1024 4D+4D raid 0/1 performance

Can anyone help me?
Solution

## Re: xp1024 4D+4D raid 0/1 performance

Two 2+2 groups and one 4+4 group have exactly the same performance numbers!

So both of your examples will have the same performance.

The main difference is the number of LUNs you have to manage and the granularity of your VG!
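This equivalence comes down to spindle counts: a 4D+4D group has twice the disks of a 2D+2D group, so with half as many groups the total spindle count, and hence the raw throughput ceiling, is identical. A back-of-the-envelope check (purely illustrative):

```python
# Sanity check: both layouts put I/O on the same number of spindles.
DISKS = 128

# 2D+2D RAID 0/1 -> 4 disks per array group
groups_2p2 = DISKS // 4   # 32 array groups, i.e. 32 LUNs to manage
# 4D+4D RAID 0/1 -> 8 disks per array group
groups_4p4 = DISKS // 8   # 16 array groups, i.e. 16 LUNs to manage

# Same total spindles either way; only the LUN count differs.
assert groups_2p2 * 4 == groups_4p4 * 8 == DISKS
print(groups_2p2, groups_4p4)
```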

Cheers
XP-Pete
I love storage

## Re: xp1024 4D+4D raid 0/1 performance

Thanks, XP-Pete.
The customer has a CA (Continuous Access) environment: two XP1024 arrays, one primary and one backup. The customer just doubled the disks to try to improve performance; the current system uses only 2D+2D RAID 0/1.

If 4D+4D performance is okay, I'd like to configure the backup XP from scratch using 4D+4D, so the number of LUNs stays the same as at the primary site. Then, after configuring CA, we can easily switch the Oracle DB to the backup site, reconfigure the primary XP the same way, and eventually switch the DB back to the primary site. Hopefully Oracle I/O performance will improve a lot after the XP upgrade.

If we stick with 2D+2D, then to really balance the I/O we would have to do a painful manual data migration, perhaps using the dd command to copy datafiles/LVs one by one. (With 4D+4D, we can simply leverage the CA mechanism; no manual data migration is needed.)
Does 4D+4D make sense in my scenario?

-Xiang

## Re: xp1024 4D+4D raid 0/1 performance

Well, I believe it is not worth the effort.
You say that you would keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2.
So I would stay with 2+2 and also deploy the DR XP the same way!
In your scenario you will not gain performance.
You have to make sure that your VG really stripes across all available 2+2 groups; then you are fine!
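One way to confirm that the VG really stripes across all groups is to check the stripe count on HP-UX (a sketch; the LV path is hypothetical):

```shell
# lvdisplay reports "Stripes" and "Stripe Size (Kbytes)" for a striped LV.
# The stripe count should equal the number of LUNs (array groups) in the VG.
lvdisplay /dev/vgora/lvdata | grep -i stripe
```

If the LV was created without `-i`, it is concatenated rather than striped, and I/O will hit the array groups one at a time instead of in parallel.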

Also see the attached XA / SAP config guide.
It is actually written for the XP12000, but most of it also applies to the XP1024.

Cheers
XP-Pete
I love storage

## Re: xp1024 4D+4D raid 0/1 performance

Oops; here is the doc!

XP-Pete
I love storage

## Re: xp1024 4D+4D raid 0/1 performance

Hi Peter, thanks; the XP guide for SAP you provided is very useful.
However, I'm a little more confused now than before. :-)
As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then wouldn't 2x 4+4 have double the performance of 2x 2+2?

If I keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2, why won't 4+4 give me any performance gain? After all, the I/O now spreads over more disks.

Maybe the reason is that a LUN of the same size in a 4+4 group will actually only use 4 disks, since in your doc a 4+4 is just a "concatenation" of two 2+2s?

It also seems that the guide you provided contradicts a whitepaper I read (attached): in your guide, 4+4 has the same performance as 2+2; in the whitepaper, however, 4+4 shows a 50% performance gain over 2+2.

And back to my original question, two options:
1) 32 2+2 array groups, one LUN in each array group, the logical volume striped over 32 LUNs.
2) 16 4+4 array groups, one LUN in each array group, the logical volume striped over 16 LUNs.
(The LUN size is the same in both options; in our case, it's OPEN-L, 36.4 GB.)
One of the drawbacks of option 1) is that we would have to do the data migration manually.
Which option will give me higher performance?

-Xiang


## Re: xp1024 4D+4D raid 0/1 performance

>> However, I'm a little more confused now than before. :-)
As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then wouldn't 2x 4+4 have double the performance of 2x 2+2?

A: Yes, you are right!

>> If I keep the size and number of LDEVs/LUNs for 4+4 the same as in 2+2, why won't 4+4 give me any performance gain? After all, the I/O now spreads over more disks.

A: Yes, you are right again! I made a mistake when I said the numbers would be the same!
In the 2+2 config you will end up with double the number of LUNs compared to 4+4.
Performance: N LDEVs × (4+4) = 2N LDEVs × (2+2)

>> It also seems that the guide you provided contradicts a whitepaper I read (attached): in your guide, 4+4 has the same performance as 2+2; in the whitepaper, however, 4+4 shows a 50% performance gain over 2+2.

A: I do not agree. In the paper, on page 9, figure 4, you will find that all values simply double for 4+4. Take the purple line as an example: for 2+2 it goes vertical at around 425 IOPS, for 4+4 at around 850 IOPS.
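Using those approximate figure-4 numbers, a quick sanity check shows why the two documents are consistent: per-group performance doubles for 4+4, but the group count halves, so the aggregate is unchanged (illustrative arithmetic only):

```python
# Approximate per-group IOPS as read off figure 4 of the whitepaper:
iops_per_2p2 = 425   # one 2+2 array group
iops_per_4p4 = 850   # one 4+4 array group (double the spindles, double the IOPS)

# With 128 disks total:
total_2p2 = 32 * iops_per_2p2   # 32 groups of 2+2
total_4p4 = 16 * iops_per_4p4   # 16 groups of 4+4

# The 50% "gain" in the whitepaper is per group, not per array:
assert total_2p2 == total_4p4
```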

>> And back to my original question, two options:
1) 32 2+2 array groups, one LUN in each array group, the logical volume striped over 32 LUNs.
2) 16 4+4 array groups, one LUN in each array group, the logical volume striped over 16 LUNs.
(The LUN size is the same in both options; in our case, it's OPEN-L, 36.4 GB.)
One of the drawbacks of option 1) is that we would have to do the data migration manually.
Which option will give me higher performance?

A: Both options will give exactly the same performance. You stripe over a total of 128 disks in both cases.

Hope that helps
Cheers
XP-Pete
I love storage