06-06-2019 12:08 PM
MSA 2050-RAID 6, Block-level data striping with double distributed parity, over multiple enclosures
Good afternoon HPE Community,
The MSA 2050 Best Practices document states the following:
“Create disk groups across disk enclosures....HPE recommends striping disk groups across shelf enclosures to enable data integrity in the event of an enclosure failure. A disk group created with RAID 1, 10, 5, or 6 can sustain one or more disk enclosure failures without loss of data depending on RAID type. Disk group configuration should consider MSA drive sparing methods such as global and dynamic sparing.”
What I can't find anywhere (CLI guide, SMU guide, Seismic, etc.) are the specifics. The client needs to know how many enclosures they can span a RAID 6 disk group across, and I can't find any limits/maximums regarding enclosures. Could one put the maximum RAID 6 disk group of 10 drives across 5 enclosures (2 drives in each)?
Thanks in advance for any guidance, Tim.
P.S. This is an opportunity (pre-sales), not a currently installed system.
06-06-2019 05:29 PM
Re: MSA 2050-RAID 6, Block-level data striping with double distributed parity, over multiple enclosures
I hope the following MSA 2050 QuickSpecs document will help you:
https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00008276enw
An MSA 2050 can scale to a total of 8 enclosures, including the array head itself. That means attaching up to seven LFF or SFF drive enclosures, or a mix of the two, for a total of either 96 LFF drives (8 x 12) or 192 SFF drives (8 x 24).
Note that because the MSA 2050 SFF enclosure has 24 drive slots rather than the 25 of the D2700, the total drive count for a new configuration is 192 rather than 199. However, if you are upgrading an existing configuration that uses D2700 enclosures, 199 drives (24 in the SFF array head plus 7 x 25 in the D2700s) are still supported.
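Once the array is installed, the enclosure and drive inventory is easy to confirm from the CLI; a minimal sketch (both are standard MSA CLI commands, output not shown here):
# list the array head and each attached drive enclosure
show enclosures
# list every installed disk with its enclosure.slot ID
show disks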
Hope this helps!
Regards
Subhajit
I am an HPE employee
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-10-2021 04:24 PM
Re: MSA 2050-RAID 6, Block-level data striping with double distributed parity, over multiple enclosures
Hello,
I have a similar scenario with 1 array enclosure + 3 drive enclosures and 8 SSD + 48 FC drives, and I want to follow the best practice of distributing the disks among all the enclosures, as well as creating the disk groups with R1 = SSD / R6 = FC.
For example, distribute E1 = 8 SSD + 12 FC / E2 = 12 FC / E3 = 12 FC / E4 = 12 FC, and create the disk groups as follows (a matching CLI sketch is after the list):
- dgA_SSD-01: Performance (RAID 5 = 4 HD), E1 slots 1-4
- dgA_10k-01: Standard (RAID 6 = 12 HD [10+2]), E1 slots 9-20
- dgA_10k-02: Standard (RAID 6 = 12 HD [10+2]), E2 slots 1-12
- dgB_SSD-01: Performance (RAID 5 = 4 HD), E1 slots 5-8
- dgB_10k-01: Standard (RAID 6 = 12 HD [10+2]), E3 slots 1-12
- dgB_10k-02: Standard (RAID 6 = 12 HD [10+2]), E4 slots 1-12
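If it helps, here is a minimal CLI sketch of how that layout could be created with the MSA `add disk-group` command (disks are addressed as enclosure.slot; treat the exact parameter spelling and the disk-group names as assumptions to verify against your firmware's CLI reference):
# Pool A: one SSD Performance-tier group, two 10k Standard-tier RAID 6 groups
add disk-group type virtual disks 1.1-4 level raid5 pool a dgA_SSD-01
add disk-group type virtual disks 1.9-20 level raid6 pool a dgA_10k-01
add disk-group type virtual disks 2.1-12 level raid6 pool a dgA_10k-02
# Pool B: same structure on the remaining slots/enclosures
add disk-group type virtual disks 1.5-8 level raid5 pool b dgB_SSD-01
add disk-group type virtual disks 3.1-12 level raid6 pool b dgB_10k-01
add disk-group type virtual disks 4.1-12 level raid6 pool b dgB_10k-02
One caveat with this layout: each RAID 6 group sits entirely inside a single enclosure, so losing that enclosure takes the whole group (and its pool) offline. Spreading each RAID 6 group across enclosures, as the Best Practices doc suggests, avoids that.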
08-11-2021 12:28 PM
Re: MSA 2050-RAID 6, Block-level data striping with double distributed parity, over multiple enclosures
Hi Tim,
There are a few moving parts with respect to the Best Practices suggestion.
1. You have multiple enclosures
2. You are using Fault-Tolerant or Reverse Cabling
3. You create each RAID set with no more than the fault-tolerant number of drives in any one enclosure
4. You make sure that any disk-group reconstruction happens in the same enclosure
The goal is to be able to sustain an entire enclosure going offline and your data staying online.
For example, if you have 2 enclosures and use RAID 1 or RAID 10 with the mirror halves on different enclosures, then if enclosure 2 goes completely away your data is still intact and accessible. (If enclosure 1 disappears, host connectivity goes away with it.)
Another example is a RAID 6 disk group where each enclosure contributes only 1 or 2 drives to the RAID set. With DG1 => Disks 1.1, 1.2, 2.1, 2.2, 3.1, 3.2, 4.1, 4.2, 5.1, 5.2, if enclosure 3 loses power, DG1 goes CRITICAL with 2 missing disks but stays online and the data is accessible. But with DG1 => Disks 1.1, 1.2, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 5.1, 5.2, if enclosure 3 loses power, DG1 is quarantined due to 4 missing disks, and the pool also goes offline because DG1 is inaccessible.
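To make the surviving layout concrete, a group like DG1 can be requested with an explicit disk list, two disks from each of five enclosures (a sketch using the MSA CLI `add disk-group` command; the name DG1 and pool a are placeholders):
# 10-drive RAID 6 group: 2 disks from each of enclosures 1-5
add disk-group type virtual disks 1.1-2,2.1-2,3.1-2,4.1-2,5.1-2 level raid6 pool a DG1
Losing any one enclosure then removes only 2 of the 10 disks, which RAID 6 tolerates: the group runs degraded (CRITICAL) but stays online.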
The last part is making sure that if a drive fault does occur, the reconstruction happens within the same enclosure: e.g., Disk 3.2 fails and Disk 3.12 is used for the rebuild.
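One way to encourage that is to designate a global spare in every enclosure so a local rebuild target is always available (a sketch; `add spares` is the MSA CLI command for global spares, while the dynamic-spares setting name below is an assumption to check in the CLI reference, and whether the controller actually prefers a same-enclosure spare is worth confirming for your firmware):
# one global spare per enclosure, e.g. slot 12 in each
add spares 1.12,2.12,3.12,4.12,5.12
# optionally allow any compatible available disk to be used as a spare automatically
set advanced-settings dynamic-spares enabled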