MSA Storage

MSA 2040 Auto Tiering, Terrible performance ...

 
Probs
Occasional Visitor


Hi all,

Hoping someone might be able to help. We have an HP MSA 2040 with auto tiering; the disk groups, pools and volumes are configured like so:

4x 1TB SSD RAID5

9x 1.8TB 10k SAS RAID5

9x 1.8TB 10k SAS RAID5

All in a single virtual pool. In the pool I have two volumes configured, Vol1 and Vol2, at 10TB each (or thereabouts), assigned as Cluster Shared Volumes (CSVs). The volumes are set to "No Affinity" tier affinity as per the best practices.

I am using Windows Server 2016 with Hyper-V and failover clustering. We currently have two nodes.

Our hosts are directly connected via two 10GbE NICs (one to each controller) on the same IP subnet. For testing purposes I have disabled one NIC and set the MPIO policy to failover only. Jumbo frames are not configured, but even when they are the performance difference is negligible.

Performance-wise, from a Hyper-V VM I use IOMeter to load a 20GB disk with a 4KB, 100% sequential write access profile and get a pitiful 8k IOPS.

From the Hyper-V host I do the same and get better, but not by much: 18k IOPS. I know 4KB 100% sequential write is a lousy real-world test, but it should be one the SAN can easily fulfil up to around 80,000 IOPS from what I read.
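For a repeatable comparison between host and guest, the same access profile can also be driven with Microsoft's DiskSpd tool. This is a sketch; the file path, duration, thread count and queue depth below are illustrative assumptions, not values from the thread:

```shell
# Sketch: 4KB, 100% sequential write, roughly matching the IOMeter profile above.
# -b4K  block size        -w100 write percentage   -si  sequential (interlocked across threads)
# -o32  outstanding IOs   -t4   worker threads     -d60 duration in seconds
# -c20G creates a 20GB test file; the path is a placeholder for your CSV.
diskspd.exe -b4K -w100 -si -o32 -t4 -d60 -c20G C:\ClusterStorage\Vol1\iotest.dat
```

Worth noting that IOPS figures are very sensitive to queue depth: a test run with a single outstanding IO per thread will look far slower than the array's ceiling, so raising -o and -t is usually what exposes the real limit.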

Can't readily see any errors on the SAN or the host.

My question is, what the hell have I done wrong :)

 

3 REPLIES
Probs
Occasional Visitor

Re: MSA 2040 Auto Tiering, Terrible performance ...

I may have semi-fixed the issue. The MPIO policy was set to round robin, which forced the use of unoptimised paths. After turning it to failover only, performance hit 47k IOPS per physical host. I can run all three at nearly 100k IOPS now.
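In case anyone hits the same thing, the MPIO policy can be checked and changed from an elevated PowerShell prompt. A sketch using the built-in Windows Server 2016 MPIO tooling; the disk number is an assumption for illustration:

```shell
# Show the current default MSDSM load-balance policy (RR = round robin, FOO = failover only)
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Set the default policy for newly claimed LUNs to Failover Only
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# Or set it per disk with mpclaim: disk number 0 is a placeholder, policy 1 = Fail Over Only
mpclaim -l -d 0 1
```

The global default only applies to LUNs claimed after the change, so existing disks may still need the per-disk mpclaim form (or a reclaim) to pick up the new policy.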

I still only see a fraction of this in the guest, though (13k IOPS). Any ideas why? Also, is it possible to get Windows Server 2016 to drive the IOPS any further so that one server can drive the full 90k?

Thanks, 

Probs

Rint
Occasional Visitor

Re: MSA 2040 Auto Tiering, Terrible performance ...

Ideally, you should put a switch in between the host iSCSI ports and the SAN.  Even though the MSA is advertised as Active/Active, a volume can only be owned by a single controller at a time.  By connecting your two host ports to A1 and B1, you're only ever going to get the throughput of a single port.  

As a test, plug the DAC/LC Fibre/CAT6 cable you're using to get to B1 into A2 and configure the host networking accordingly.  This will give you better throughput but at the expense of controller redundancy.

I normally run two 10GbE NICs to a 10GbE modular switch (cabled to different modules for redundancy), then distribute A1, A2, B1 and B2 over the two 10GbE modules as well. Example networking for this would be:

Host 10GbE 1 - 172.16.0.11/24 
Host 10GbE 2 - 172.16.1.11/24
SAN A1 - 172.16.0.1/24
SAN A2 - 172.16.1.1/24
SAN B1 - 172.16.0.2/24
SAN B2 - 172.16.1.2/24

Jumbo frames should be enabled so the switch can carry larger storage packets, and I would typically use the following iSCSI initiator/target mappings:

Host 10GbE 1 - Connects to A1 and B1 (172.16.0.0/24 network)
Host 10GbE 2 - Connects to A2 and B2 (172.16.1.0/24 network)

This will give you two active/optimized paths (controller A on the SAN, or whichever controller owns the volume) and two active/unoptimized paths (controller B on the SAN, or whichever controller doesn't own the volume).
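On the Windows side, the per-NIC mappings above can be made explicit with the built-in iSCSI initiator cmdlets, so each session is pinned to the right subnet. A sketch using the addressing from the example; the target IQN is a placeholder, not the real MSA identifier:

```shell
# Register one target portal per host NIC, binding each to its own initiator address
New-IscsiTargetPortal -TargetPortalAddress 172.16.0.1 -InitiatorPortalAddress 172.16.0.11
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.1 -InitiatorPortalAddress 172.16.1.11

# Connect with MPIO enabled, keeping each session on its own subnet.
# The IQN below is a placeholder - use the one reported by your array.
Connect-IscsiTarget -NodeAddress "iqn.1986-03.com.hp:storage.msa2040.example" `
    -TargetPortalAddress 172.16.0.1 -InitiatorPortalAddress 172.16.0.11 `
    -IsMultipathEnabled $true -IsPersistent $true
```

Repeating the Connect-IscsiTarget call once per portal/initiator pair gives MPIO the four sessions described above, and -IsPersistent keeps them across reboots.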

Hope this helps.

Dave

Probs
Occasional Visitor

Re: MSA 2040 Auto Tiering, Terrible performance ...

Hi Dave, 

Appreciate that. Unfortunately we can't install a 10GbE switch in this instance due to financial constraints; however, switching the failover mode has helped. Jumbo frames are enabled on the storage, but when we turn them on on the server it actually makes performance worse ...

I've left it off for now, which gives acceptable performance.

Cheers for your input tho. 

Don't suppose you have any thoughts around virtual disk groups ...