
Proliant ML350 Gen6 with P410i controller - cannot migrate to RAID 1+0

 
KubaL
Frequent Advisor

Proliant ML350 Gen6 with P410i controller - cannot migrate to RAID 1+0

Hello!

 

Recently we planned a RAID migration on the storage volume of our storage (and AD) server to get better performance.

 

In the past it was running 8 SATA drives (1 TB each) in a RAID 5+0 configuration, but we wanted better write performance from the file server. The plan was to try RAID 1+0 on it, with up to an 8x read and 4x write speed gain.
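(Those 8x / 4x figures are just the usual rule-of-thumb multipliers for mirrored striping; here is a rough back-of-the-envelope sketch of that reasoning, purely hypothetical Python assuming 8 identical drives and an ideal workload, not anything measured on this controller:)

```python
# Rule-of-thumb throughput multipliers relative to a single drive.
# Hypothetical sketch: assumes 8 identical drives and ideal striping;
# real results depend heavily on workload and controller cache.
n = 8

raid10_read  = n        # RAID 1+0: reads can be served from every drive
raid10_write = n // 2   # RAID 1+0: each write must land on a mirrored pair
raid5_read   = n - 1    # RAID 5: data striped across n-1 drives' worth of space
raid5_write  = 1        # RAID 5: parity read-modify-write largely serializes small writes

print(f"RAID 1+0: ~{raid10_read}x read, ~{raid10_write}x write")
print(f"RAID 5  : ~{raid5_read}x read, ~{raid5_write}x write")
```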

Since HP ACU was not giving an option for a direct RAID 1+0 migration, I figured (and actually found some evidence of it on these boards) that the array must first be migrated to something else, so I chose RAID 5 (terrible 1x write performance, quite fast up to 7x read performance). Now the problem is that, despite everything migrating correctly and initializing properly, there is still no option in ACU to migrate to anything else except RAID 0 or RAID 5... Why is that? How do I make it migrate to RAID 1+0, or back to 5+0 for that matter?

 

This machine uses an expansion bay for additional drives. Box 1 (the stock one) is full, packed with 2 SAS2 drives (the RAID 1 OS volume) and 6 SATA drives for the storage array; the other 2 storage drives are in the expansion bay (identified in ACU as Box 2), bringing the array in question to a total of 8 drives.

 

The machine is running Windows Server 2008 R2 with HP ACU 9.40.12.0, all up to date as far as I know. I can double-check if needed.

 

Am I doing something wrong? Any ideas, please?

 

Thanks!

 

 

P.S. This thread has been moved from Disk Array to ProLiant Servers (ML,DL,SL). - HP Forum Moderator

Solution

Re: Proliant ML350 Gen6 with P410i controller - cannot migrate to RAID 1+0

I'm experimenting with an array migration myself, so I had just read through the documentation and two things came to mind.

 

The documentation says: "Only RAID levels that are possible for this configuration are shown. For example, RAID 5 is not listed if the array has only two physical drives."

 

It sounds like you have a compatible number of drives, so I don't think that this is the reason.

 

The documentation also says: "For some combinations of initial and final settings of stripe size and RAID level, the array must contain unused drive space."

 

Is it possible that you don't have enough free space on the array to perform this migration? You would end up with less usable drive space in a RAID 1+0 scenario than in a RAID 5 scenario, and I don't know how you would account for that limitation without some sort of model of the space needed.
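For what it's worth, a quick back-of-the-envelope capacity model makes the difference obvious. This is just a rough Python sketch assuming 8 equal 1 TB drives and ignoring controller metadata overhead, not anything ACU-specific:

```python
# Rough usable-capacity model for an 8 x 1 TB array.
# Hypothetical sketch: assumes identical drives, ignores metadata overhead.
DRIVE_TB = 1.0
N_DRIVES = 8

def usable_raid5(n, size):
    return (n - 1) * size          # one drive's worth of capacity lost to parity

def usable_raid50(n, size, groups=2):
    return (n - groups) * size     # one parity drive per RAID 5 group

def usable_raid10(n, size):
    return n / 2 * size            # everything mirrored, so half the raw space

print(f"RAID 5  : {usable_raid5(N_DRIVES, DRIVE_TB):.1f} TB")   # 7.0 TB
print(f"RAID 5+0: {usable_raid50(N_DRIVES, DRIVE_TB):.1f} TB")  # 6.0 TB
print(f"RAID 1+0: {usable_raid10(N_DRIVES, DRIVE_TB):.1f} TB")  # 4.0 TB
```

So if the logical drive already occupies more than roughly 4 TB after the RAID 5 migration, there is simply no layout in which a RAID 1+0 array could hold it.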

 

KubaL
Frequent Advisor

Re: Proliant ML350 Gen6 with P410i controller - cannot migrate to RAID 1+0

Accidentally started another thread instead of getting back to this one.

Yeah, it seems like when ACU migrated to RAID 5 it consumed all the available space on the logical array, so now the logical array is too big to migrate back to RAID 5+0 ... sigh ...

Oh, and I gave up on 1+0 (too small a size in the long run after all).

I think this must be it: even though the actual Windows volume (partition) was not expanded after the RAID 5 migration (I still have unused space in Disk Management), the logical array itself grew to consume all the space ...

Is there any way to shrink the logical RAID volume down to the size of the partition located on it?

KubaL
Frequent Advisor

Re: Proliant ML350 Gen6 with P410i controller - cannot migrate to RAID 1+0

OK, never mind. I bit the bullet; this server will stay on RAID 5. I have just extended the volume in Windows Disk Management, so we can enjoy the extra space: 6.4 TB.

Did some googling, and it seems that shrinking logical arrays is not supported in ACU. This volume is so big (5.5 TB before migration) that I have no way to fully back it up along with the NTFS ACLs, and there is no way I am recreating all of the ACLs from NAS file backups (that would probably take a week)...
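(Those sizes actually line up with the usual decimal-vs-binary conversion, assuming Windows is reporting TiB; a quick sanity check, again just a rough sketch:)

```python
# Quick sanity check on the reported volume sizes.
# Assumption: each "1 TB" drive is 10**12 bytes, and Windows reports TiB (2**40 bytes).
TB_BYTES  = 10**12
TIB_BYTES = 2**40

def usable_tib(data_drives, drive_tb=1):
    return data_drives * drive_tb * TB_BYTES / TIB_BYTES

print(f"RAID 5   (7 data drives): {usable_tib(7):.1f} TiB")  # ~6.4, as seen now
print(f"RAID 5+0 (6 data drives): {usable_tib(6):.1f} TiB")  # ~5.5, as before migration
```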

 

I guess I must have speed-clicked through some of the ACU dialogs when starting the migration and accepted the expansion to the full size offered by the RAID 5 configuration. Most likely my bad.