MSA Storage

MSA 2052 config question

 
teofrast
New Member

MSA 2052 config question

Hello.

We have an MSA 2052 with the Performance Tier license and the following current config:

LFF enclosure + expansion = 4 × 800 GB SSD + 20 × 10 TB 7.2K SAS

Within a month we will receive an SFF expansion = 2 × 800 GB SSD + 22 × 1.2 TB 10K SAS

 

What is the best way to configure the disk groups and pools?

The array will be used for VMware with various VMs (applications, SQL, file servers, etc.).

I am considering this layout:

Pool A:

READ-CACHE 2 SSD + RAID10 20 × 10K SAS + RAID1 2 SSD + 2 global spares (10K SAS)

Pool B:

READ-CACHE 2 SSD + RAID6 10 × 7.2K SAS + RAID6 8 × 7.2K SAS + 2 global spares (7.2K SAS)

Or are there better options?
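As a quick sanity check on the proposed layout, here is a rough usable-capacity sketch (an illustration only: nominal drive sizes are taken at face value and MSA metadata/formatting overhead is ignored):

```python
# Rough usable-capacity estimate for the proposed pools.
# Nominal drive sizes; MSA formatting/metadata overhead is ignored.

def usable_tb(raid: str, drives: int, size_tb: float) -> float:
    """Usable capacity in TB of one disk group, by RAID level."""
    if raid in ("RAID1", "RAID10"):       # mirrored: half the drives hold copies
        return size_tb * drives / 2
    if raid == "RAID5":                   # one drive's worth of parity
        return size_tb * (drives - 1)
    if raid == "RAID6":                   # two drives' worth of parity
        return size_tb * (drives - 2)
    raise ValueError(f"unsupported RAID level: {raid}")

# Pool A: RAID10 of 20 x 1.2 TB 10K SAS, plus RAID1 of 2 x 800 GB SSD
pool_a = usable_tb("RAID10", 20, 1.2) + usable_tb("RAID1", 2, 0.8)

# Pool B: RAID6 of 10 x 10 TB, plus RAID6 of 8 x 10 TB 7.2K SAS
pool_b = usable_tb("RAID6", 10, 10) + usable_tb("RAID6", 8, 10)

print(f"Pool A: {pool_a:.1f} TB usable")
print(f"Pool B: {pool_b:.1f} TB usable")
```

This works out to roughly 12.8 TB usable in Pool A and 140 TB in Pool B, before overhead.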

2 REPLIES
PrashantS
HPE Pro

Re: MSA 2052 config question

Hi,

Refer to this document, pages 24-27:

https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00008277enw#

It compares the performance output of different RAID levels and types of IOPS.

 

Prashant S.


I am an HPE employee

Accept or Kudo

Re: MSA 2052 config question

@teofrast 

For Pool A, it seems you have specified both a read cache and an SSD performance tier. This is not allowed: you can't have both in the same pool.

I would recommend going with the performance tier instead of the read cache, unless your workload is heavily read-intensive.

The performance tier helps with both read and write operations, while the read cache helps only with reads.

I would also recommend creating virtual disk groups (VDGs) with the same number of drives and the same RAID level for each drive type; this helps performance as well.

For optimal sequential-write performance, parity-based disk groups (RAID 5 and RAID 6) should be created with the "power of 2" method: the number of data (non-parity) drives in a disk group should be a power of 2.
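The "power of 2" rule above can be checked mechanically. A minimal sketch (the helper names are my own, not MSA tooling), applied to the RAID 6 groups proposed for Pool B:

```python
# "Power of 2" check for parity-based disk groups (RAID 5 / RAID 6):
# the number of data (non-parity) drives should be a power of 2.

PARITY_DRIVES = {"RAID5": 1, "RAID6": 2}

def is_power_of_two(n: int) -> bool:
    # A positive integer is a power of 2 iff it has exactly one set bit.
    return n > 0 and n & (n - 1) == 0

def follows_power_of_two(raid: str, drives: int) -> bool:
    data_drives = drives - PARITY_DRIVES[raid]
    return is_power_of_two(data_drives)

# Disk groups proposed for Pool B:
print(follows_power_of_two("RAID6", 10))  # 8 data drives -> True
print(follows_power_of_two("RAID6", 8))   # 6 data drives -> False
```

So a 10-drive RAID 6 group (8 data drives) follows the rule, while an 8-drive RAID 6 group (6 data drives) does not.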

You can also check the best practices technical paper:

https://h20195.www2.hpe.com/v2/getpdf.aspx/A00015961ENW.pdf

 

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!
