MSA Storage



Best way to setup an MSA 2042/2052

Hi there,

I am thinking about replacing my MSA with a brand new one, probably the new 2052. Does anyone have any negative experiences with AutoTiering or Virtual Storage as a whole? Currently I am using a G4 model with just linear storage. That has never failed on me. But I think a lot of the code from the bigger EVA platform has made it to the MSA now; the techniques look very familiar.

That being said, I can use the SSDs for read cache or for performance tiering. Somehow I feel some hesitation towards performance tiering. This is my primary storage system and it must be a set-it-and-forget-it system; I can't afford any downtime on this building block. Can anyone say something about the stability of the platform, especially comparing read cache vs performance tiering? And can you even have two pools (one per controller) each set up as a performance tier? I thought this was a limitation on the 2042, but I couldn't find anything for the 2052.

Then, supposing performance tiering is the way to go, how would you set up three enclosures if you do not want to spread disk groups over different enclosures? Or is it fine to do so? The first enclosure would have 4 x SSD (performance tier setup; the SSD disk group needs at least RAID1, since it contains data). That leaves 20 unused bays in the first enclosure. Following the power-of-2 rule, I can add at most 9 HDDs in a disk group (8 data and 1 parity, RAID5). This is per pool, so 18 HDDs in total, which leaves me with 2 free bays. I could add two spares, one per pool?

The next enclosures would have a mix of SAS and MDL. Following the power of 2 here again leaves me with 6 free bays in the enclosure, because 18 (2 x (8+1)) will be used by disks. So now what? Again 2 spares? That leaves me with 4 bays unused.
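To make that bay arithmetic concrete, here is a quick sketch (the helper is purely illustrative; it just assumes a 24-bay enclosure and sums disks per group plus spares):

```python
# Hypothetical helper to tally bay usage in a 24-bay MSA enclosure,
# following the power-of-2 rule (data drives per group = power of two).

ENCLOSURE_BAYS = 24

def bays_used(groups, spares=0):
    """groups: list of (data_drives, parity_drives) per disk group."""
    used = sum(d + p for d, p in groups) + spares
    return used, ENCLOSURE_BAYS - used

# First enclosure: 2 x RAID1 SSD pairs plus two 8+1 RAID5 HDD groups
# (one per pool) and 2 spares.
print(bays_used([(1, 1), (1, 1), (8, 1), (8, 1)], spares=2))  # (24, 0)

# Expansion enclosure: two 8+1 groups plus 2 spares -> 4 bays left over.
print(bays_used([(8, 1), (8, 1)], spares=2))  # (20, 4)
```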

To summarize, I find it rather difficult to find a setup that I can expand per enclosure, like I do now with linear storage.

Am I missing something? I will be hosting a lot of VMs served by three DL380s on this storage array.

Any advice is welcome.






Re: Best way to setup an MSA 2042/2052

Hello Collector,

First, let me say that if you are going to be purchasing a new MSA like this, then you may want to consider engaging the solution architecture resources of the HPE partner that you'll be purchasing from. I commend you for wanting to plan this out before purchasing, and that is exactly what the SA resources should be able to help you do, with much more attention and focus than you'll be able to get from a forum. I'll address as much as I can here:

1) Yes, my customers have had very good experiences, and little trouble, with the virtualization features on MSA. Performance tiering and caching are pretty much set-it-and-forget-it type things on an MSA. However, as your dataset grows and workload changes, it never hurts to take a look once in a while and see if adding additional resources will help improve your setup. There are no daily knobs and dials to turn, but it is definitely good to do some occasional assessments.

2) You mentioned not being able to afford any downtime. If that is truly the case, then there may be better platforms for you in the HPE line. I know entire organizations that run on MSAs perfectly fine but, as you move up through the HPE product line, you will find additional features and enhancements that help reduce the exposure to downtime.

3) On an MSA with virtualized storage, you have a Pool A and a Pool B, each assigned ownership to a separate controller. You create virtual disk groups and assign them to a pool. Once a pool has disk groups of different drive types assigned, you are "tiering". If you wish to have SSDs in a virtual disk group, you need the performance tiering license (included on the 20x2). A pool can be configured to use SSD as a cache or as a performance tier, but not both. While you could have one pool caching and one tiering, it would not be recommended, as it starts to get into that fine-grained, knob-and-dial type of management that I think you are looking to avoid.

4) While not 100% true in all cases, think of SSD caching on the MSA as sort of a poor man's tiering. It works well for people who don't want to make a substantial investment in SSD but still want some benefit from it. SSD caching affects only random reads and requires 1 or 2 SSDs per pool. Data is only copied to SSD and never permanently lives there, so RAID protection on a cache drive is not needed. So, out of the box, the 20x2 model can have an SSD cache on each pool. All writes follow the normal onboard cache/drive process, and sequential reads come directly from disk.

5) Tiering allows us to introduce different performance levels of drives into the same pool. Typically we might see 10-15% SSD, 35-45% 10/15k, and the remainder as nearline. We would build a config based on initial capacity and theoretically only ever expand our lowest tier of drives as more capacity is needed. Writes and reads are automatically optimized, with the hottest data living (not copied) on SSD, the coldest on NL, and the rest in the middle. You do not need all three drive tiers, but it is not generally considered best practice to have an SSD/NL setup; SSD/10k would generally be recommended in 2-tier setups. However, there may be very specific use cases where SSD/NL is appropriate.

6) Spares are assigned globally and are available to replace a qualifying disk in any disk group or pool. In a tiering config, you can probably get away with 1 x SSD spare, maybe 1 spare for every 12-24 10/15k drives, and 1 spare for every 12 or so NL drives. This is all personal preference and comfort. Always use RAID6 on nearline, and consider using RAID6 on your 10/15k as well. If using RAID5 on 10/15k, I would bump up the spare count.
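As a rough sketch of those spare ratios (the ratios come from the guidance above; the helper itself is just illustrative, not an HPE tool):

```python
# Rough spare-count estimate: one spare per N drives of a tier,
# rounded up, with a minimum of one spare per drive type.
import math

def spares_needed(drive_count, drives_per_spare):
    return max(1, math.ceil(drive_count / drives_per_spare))

print(spares_needed(22, 24))  # 22 x 10/15k drives at 1 per 24 -> 1 spare
print(spares_needed(14, 12))  # 14 x NL drives at 1 per 12     -> 2 spares
```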

7) Finally, regarding your layout. Best practice is to design a growth method and stick with it by adding disk groups that are as close as possible to your original disk groups' capacity and RAID layout. This ensures that disk groups stay properly balanced and perform at their most efficient levels. You didn't mention capacity points, so it's hard to really design anything out for you. If you choose tiering, I would start with a minimum of 7 x SSDs (2 x 2+1 in RAID5, plus a spare). You could also use fewer, higher-capacity drives in RAID1. Again, this all depends on the specifics of your requirements and budget. For your spinning tier I would go with a minimum of 13 drives (2 x 4+2 in RAID6, plus 1 spare). Depending on what you need for capacity/performance, this may be enough.
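To put rough numbers on that minimum starting point (the drive sizes here, 0.8TB SSD and 1.8TB 10k, are just example values, not a recommendation):

```python
# Usable capacity of the suggested minimum layout; only data drives
# contribute, parity drives and spares do not.

def usable_tb(groups, drive_tb):
    """groups: list of (data, parity) tuples for one tier."""
    return sum(d for d, _ in groups) * drive_tb

ssd_tb = usable_tb([(2, 1), (2, 1)], 0.8)   # 2 x 2+1 RAID5 SSD groups
hdd_tb = usable_tb([(4, 2), (4, 2)], 1.8)   # 2 x 4+2 RAID6 10k groups
print(ssd_tb, hdd_tb)  # 3.2 14.4
```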

It sounds like you might have already referenced it but, if not, the MSA best practices guide (and the 2050-specific version) is a valuable resource.



I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.



Re: Best way to setup an MSA 2042/2052

Hi Mike,

Thanks for your extensive answer, and thank you for taking the time on this.

I know there are more products from HPE that might offer better uptime than an entry-level SAN, which the MSA in fact is. On the other hand, I have run a dozen MSAs with little to no problems at all. Straightforward, plug and play, and good value for money. I currently run the MSA as my primary platform for some customers, and even as tertiary storage for others. I did the day-to-day operation of several EVAs, but for this customer value for money is key, and for me as a provider, uptime is key. HPE has added many features to the MSA platform, and though the new features are compelling, they might also introduce some risk. Hence my question about the stability when using all these new goodies. Seems alright then.

I was indeed planning for two equally balanced tiers, so each controller hosting a performance tier of equal size. I assume that if one controller panics, the surviving one takes over both pools, at the expense of some performance loss (am I right here?). On the 2042 there was a limitation of 1 performance tier per MSA. Has this limitation been removed in the 2052?

The next question depends of course on the workload, but what would be a reasonable choice of SSD type: mixed use, read intensive, or write intensive? The ones included with the box are mixed use, I thought.

I now have the following 3 configs in mind:
2 enclosures, so 48 slots (48TB):
7 x SSDs (0.8TB) ((2x RAID5 (2 data, 1 parity)), 1 hot spare) = 3.2TB ( 7%)
22 x HDDs (1.8TB) ((2x RAID6 (8 data, 2 parity)), 2 hot spares) = 28.8TB (60%)
14 x NLs (2.0TB) ((2x RAID6 (4 data, 2 parity)), 2 hot spares) = 16.0TB (33%)

5 unused =(

2 enclosures, so 48 slots (49.6TB):
7 x SSDs (0.8TB) ((2x RAID5 (2 data, 1 parity)), 1 hot spare) = 3.2TB ( 7%)
22 x NLs (2.0TB) ((2x RAID6 (8 data, 2 parity)), 2 hot spares) = 32.0TB (64%)
14 x HDDs (1.8TB) ((2x RAID6 (4 data, 2 parity)), 2 hot spares) = 14.4TB (29%)

5 unused =(

4 enclosures (2 x SFF, 2 x LFF), so 72 slots (64TB)
7 x SSDs (0.8TB) ((2x RAID5 (2 data, 1 parity)), 1 hot spare) = 3.2TB ( 5%)
22 x HDDs (1.8TB) ((2x RAID6 (8 data, 2 parity)), 2 hot spares) = 28.8TB (45%)
14 x LFFs (4.0TB) ((2x RAID6 (4 data, 2 parity)), 2 hot spares) = 32.0TB (50%)

2 LFF unused, 19 SFF unused =(
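To sanity-check the arithmetic above, a quick sketch (usable TB per tier = data drives x number of groups x drive size; the config labels are mine):

```python
# Usable capacity per tier for the three candidate layouts;
# parity drives and spares are excluded, only data drives count.

def tier_tb(data_drives, groups, drive_tb):
    return data_drives * groups * drive_tb

configs = {
    "2 encl SSD/10k/NL": [tier_tb(2, 2, 0.8), tier_tb(8, 2, 1.8), tier_tb(4, 2, 2.0)],
    "2 encl NL-heavy":   [tier_tb(2, 2, 0.8), tier_tb(8, 2, 2.0), tier_tb(4, 2, 1.8)],
    "4 encl with LFF":   [tier_tb(2, 2, 0.8), tier_tb(8, 2, 1.8), tier_tb(4, 2, 4.0)],
}
for name, tiers in configs.items():
    total = sum(tiers)
    mix = [round(100 * t / total) for t in tiers]
    print(name, round(total, 1), mix)  # totals: 48.0, 49.6, 64.0 TB
```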

Some random thoughts:
I might of course increase the number of SSDs, but this is going to be a rather expensive config altogether. The unused slots I could use for larger disk groups, but then I would leave the power-of-2 paradigm. Does it really impact performance so badly that you should strictly stick to the rule?

What is the wisdom when adding new disks: expanding an existing disk group with data disks, or adding a complete new disk group (at the expense of extra parity disks)? When a disk group is expanding, what happens to the protection level? And what happens to the disk group if a disk fails while it is also expanding?

If one disk group dies, the whole tier it belongs to dies with it, hence RAID6 and enough spares.
Should I not care about disk groups spanning an enclosure? My current linear setup has 12 disks per RAID set, so 2 per enclosure; if an enclosure goes bad, the other survives. In the tiered setup, if one dies, it all dies. At least one SAS channel might go down.

Looking forward to your reply.