
p4500 storage cluster; bad performance, bad config or bad choice

 
mwitsup
Occasional Contributor

p4500 storage cluster; bad performance, bad config or bad choice

Hi

I am experiencing poor performance in my environment.

 

First, I will describe the environment:

Storage:

4 node P4500G2 storage cluster 

2x 1 Gb iSCSI NICs per node

12x 600 GB 15K LFF drives per cluster node

RAID 5 within each node, Network RAID-10 across the cluster

SAN/iQ version 10.5

 

Servers:
10x HP BL460c G7 blade servers in a C7000 chassis
Every server has an HP NC382m Dual Port 1GbE Multifunction BL-c Adapter mezzanine card for iSCSI
Windows Server 2008 R2 with the Hyper-V role enabled

 

Storage networking:
2x ProCurve 6120G switches in the blade chassis
No extra/external switches
8 external ports on the ProCurve 6120G switches are connected to the storage nodes

 

Configuration:
Hyper-V Failover cluster

A total of 70 VMs are hosted
iSCSI connections use MPIO with the HP DSM, configured according to http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA4-0956ENW.pdf

 

Problem:


Poor throughput from the VMs to storage.
I have tried both fixed and dynamic VHDs (only a slight difference).

I tested throughput with CrystalDiskMark inside a VM (I tried different VMs on different hosts; all show the same poor performance).

See screenshot for measurements.
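If you want a test that is easy to repeat and compare, a command-line tool such as Microsoft's diskspd (if it is available to you) can generate a similar 4K random workload; this is only a sketch, and the file path, size, thread and queue-depth values are placeholders to adjust:

rem 60-second 4K random test, 70% read / 30% write, 4 threads x 32 outstanding I/Os,
rem caching disabled, latency statistics collected, against a 10 GB test file
diskspd -c10G -b4K -d60 -r -w30 -t4 -o32 -Sh -L E:\iotest.dat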

 

I have looked at the performance counters on the storage cluster; I can read them, but I cannot interpret them because I do not know what the maximum values for this hardware should be.
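For a rough sense of what the hardware should be capable of (generic rule-of-thumb figures, not measured values for this particular cluster):

- Spindles: 4 nodes x 12 drives = 48 x 15K spindles; at roughly 175 random IOPS per drive that is about 8,400 raw random read IOPS for the cluster.
- Writes cost more: RAID 5 inside a node takes roughly 4 disk I/Os per small random write, and Network RAID-10 writes every block to two nodes, so usable random write IOPS is only a fraction of the raw figure.
- Sequential throughput is capped by the network: with 2x 1 GbE per host, roughly 200-230 MB/s is the practical ceiling for a single host regardless of what the cluster itself can deliver.
- 4K at queue depth 1 is latency-bound: at around 0.3-0.5 ms per I/O round trip over 1 GbE iSCSI, a single outstanding I/O tops out at a few thousand IOPS (roughly 8-15 MB/s), so that line in CrystalDiskMark always looks low on this kind of setup.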

 

Does anyone have some tips and tricks for me?

 

----------------------------------------------------------------------------------------------------------------------------------------

PS - This thread has been moved from SAN(Enterprise) to HP StoreVirtual Storage / LeftHand - Forum Moderator

oikjn
Honored Contributor

Re: p4500 storage cluster; bad performance, bad config or bad choice

Not sure, but your total sequential throughput looks fine. Not sure why the 4k result is so low; my guess would be a NIC configuration problem causing latency.

 

Can you post a screenshot of the iSCSI connection page? Or did you verify that the DSM connections are all set up and that you switched everything from "Vendor Specific" to Round Robin?
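For reference, a quick way to check (and if necessary change) the load-balance policy from an elevated command prompt is mpclaim; the disk number below is only an example:

rem list every MPIO disk with its current load-balance policy and path count
mpclaim -s -d

rem show the paths of a single disk in detail (replace 5 with your disk number)
mpclaim -s -d 5

rem set that disk's policy to Round Robin (policy 2 = RR)
mpclaim -l -d 5 2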

mwitsup
Occasional Contributor

Re: p4500 storage cluster; bad performance, bad config or bad choice

Please see this dump from mpclaim -v

 



MPIO Storage Snapshot on Friday, 23 May 2014, at 08:43:49.878

Registered DSMs: 2
================
+--------------------------+----------------------+------+------+------+-----+-------+
| DSM Name                 | Version              | PRP  | RC   | RI   | PVP | PVE   |
+--------------------------+----------------------+------+------+------+-----+-------+
| Microsoft DSM            | 006.0001.07601.17514 | 0030 | 0003 | 0001 | 030 | False |
| HP Lefthand DSM for MPIO | 010.0000.00000.1463  | 0030 | 0003 | 0001 | 030 | False |
+--------------------------+----------------------+------+------+------+-----+-------+


Microsoft DSM
=============
No devices controlled by this DSM at this time!

 

HP Lefthand DSM for MPIO
========================
MPIO Disk5: 10 Paths, Round Robin, ALUA Not Supported
SN: 600EB3715C56CA400000002B
Supported Load Balance Policies: FOO RR VS

Path ID State SCSI Address Weight
---------------------------------------------------------------------------
fffffa80322e6b80 Active/Unoptimized 003|000|029|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8030810170 Active/Unoptimized 003|000|028|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa80303c50f0 Active/Unoptimized 003|000|027|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa80301b16e0 Active/Unoptimized 003|000|026|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa80306df540 Active/Unoptimized 003|000|025|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806e11e6d0 Active/Unoptimized 003|000|016|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa802e38eac0 Active/Unoptimized 003|000|015|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa80301a3330 Active/Unoptimized 003|000|014|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8033552010 Active/Unoptimized 003|000|013|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8030f4e010 Active/Unoptimized 003|000|012|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

MPIO Disk4: 10 Paths, Round Robin, ALUA Not Supported
SN: 600EB3715C56CA4000000027
Supported Load Balance Policies: FOO RR VS

Path ID State SCSI Address Weight
---------------------------------------------------------------------------
fffffa8066e111f0 Active/Unoptimized 003|000|024|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806e04f010 Active/Unoptimized 003|000|023|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806d666010 Active/Unoptimized 003|000|022|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806d2f9010 Active/Unoptimized 003|000|021|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8030c28b80 Active/Unoptimized 003|000|020|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806e12d370 Active/Unoptimized 003|000|019|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806d679270 Active/Unoptimized 003|000|018|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806dfeb4e0 Active/Unoptimized 003|000|017|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8031a43ab0 Active/Unoptimized 003|000|011|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa803029a200 Active/Unoptimized 003|000|002|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

MPIO Disk3: 10 Paths, Round Robin, ALUA Not Supported
SN: 600EB3715C56CA4000000066
Supported Load Balance Policies: FOO RR VS

Path ID State SCSI Address Weight
---------------------------------------------------------------------------
fffffa80335fc010 Active/Unoptimized 003|000|010|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa803224bb10 Active/Unoptimized 003|000|009|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8030e21500 Active/Unoptimized 003|000|008|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8031333010 Active/Unoptimized 003|000|007|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa80315a5ab0 Active/Unoptimized 003|000|006|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa802ff722c0 Active/Unoptimized 003|000|005|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa806cca41c0 Active/Unoptimized 003|000|004|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa802ff82010 Active/Unoptimized 003|000|003|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa803278e750 Active/Unoptimized 003|000|001|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

fffffa8031275010 Active/Unoptimized 003|000|000|000 0
Adapter: Microsoft iSCSI Initiator... (B|D|F: 000|000|000)
Controller: 1F3A012A00000000 (State: No Controller)

MSDSM-wide default load balance policy: N\A

No target-level default load balance policies have been set.

================================================================================

 

David_Tocker
Regular Advisor

Re: p4500 storage cluster; bad performance, bad config or bad choice

The 6120G has only 2 MB of buffer shared across all the ports, so you need to make sure it is not getting swamped.

First things to check:

Flow control enabled on -everything-:

Node NICs, Server NICs, switch ports.

If it is not enabled, do this first.

Second:

Jumbo frames off on everything. (The 2 MB buffer will hold far fewer packets with jumbo frames, even if the switch is happy to pass them.)

Third:

Make sure the two blade switches have connectivity to each other on the storage VLAN - I suggest an LACP trunk between them. (There are cross-connect ports in the blade enclosure that will link them.)
With 4 nodes I would be tempted to disable flow control on those ports, to avoid the link being shut out by nodes sending pause (flow control) frames.
In theory the NICs of the four nodes could swamp a 2 Gb link between the switches; you definitely don't want it shut down by pause frames. A rough sketch of the switch side is below.
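Something like the following on the ProCurve CLI; the port numbers (17-18) and VLAN ID (20) are placeholders for your own cross-connect ports and storage VLAN, so check the 6120G manual before applying anything:

; bundle the enclosure cross-connect ports into an LACP trunk
trunk 17-18 trk1 lacp
; carry the storage VLAN across the trunk
vlan 20 tagged trk1
; stop pause frames from stalling the inter-switch link
interface 17-18
   no flow-control
   exit

Depending on the firmware you may need to apply the flow-control setting to the trunk (Trk1) rather than to the member ports.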

 

Advanced:

Try disabling TCP/UDP offload, TOE, RSS etc on the storage NICs on a node and see if this helps.
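On the Windows hosts the global parts of this (TCP Chimney/TOE and RSS) can be toggled with netsh; the per-adapter checksum and Large Send Offload settings still live in Device Manager under the NIC's Advanced tab. A quick sketch:

rem show the current global TCP offload settings
netsh int tcp show global

rem disable TCP Chimney Offload (TOE) and Receive-Side Scaling globally
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled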

If you are using multipathing, you should have no teaming enabled on the storage NICs, and each storage NIC should have its own IP address in the storage VLAN. Connect both paths manually with the iSCSI initiator to make sure both are active, then try the test again.
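To confirm that both paths are actually in use, you can list the active sessions and MPIO paths from the host, for example:

rem list every active iSCSI session with its initiator and target portals
iscsicli sessionlist

rem summarise MPIO disks, load-balance policy and path counts
mpclaim -s -d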

 

Regards.

David Tocker
mwitsup
Occasional Contributor

Re: p4500 storage cluster; bad performance, bad config or bad choice

Flow control;

CMC -> Node -> Network -> TCP Status -> Flow Control AUTO, Receive On, Transmit On  (V) - Check!

Device Manager -> NIC -> Advanced -> Flow Control -> Tx Enable, Rx Enable (V) - Check!

Switch -> Configuration -> Port Configuration -> Flow Control = Enabled (V) - Check!
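If you prefer the CLI on the 6120G, the per-port flow control state should also show up in the Flow Ctrl column of:

show interfaces brief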

 

Jumbo Frames;

Not used, all packet sizes are default

 

Storage Switch Connectivity;

The switches are connected through an ISL (inter-switch link) - 10 Gb, no LACP.

Flow control is enabled on this port (no LACP).

The uplink is 10 Gb, so swamping is nearly impossible.

 

Offload;

So many options to disable/enable;

IP Checksum Offload / TCP Checksum Offload / UDP Checksum Offload / Large Send Offload v1 / Large Send Offload v2 / TCP Connection Offload. Is there a way to determine whether disabling offload is good for me?

 

Multipath;

Please review the mpclaim -v dump

Every connection is multipath and both paths are active (per HP best practice)

 

Edit: Typo

 

David_Tocker
Regular Advisor

Re: p4500 storage cluster; bad performance, bad config or bad choice

That all looks good to me. The MPIO looks perfect to me, but it's not my forte, so if anyone else has a contribution, sing out.

 

I would suggest playing with the offload settings on one of the nodes:

Turn them all off and try it on one node. (It's not going to break anything, but it will take the link down while the changes take effect, so it is probably a good idea to drain the node first.)

I have mostly seen issues with offloading on Broadcom-based network cards, which the NC382m is. (Also make sure the drivers are up to date.)

 

If your switch can -generate- (send) flow control frames, I would turn this off. If you have two switches generating flow control frames, they can and will completely stall the 10 Gb ISL if one of the P4000s sends a pause frame in response to heavy traffic coming across the ISL from another P4000 or from a heavily sending host. You really want that ISL open all the time...

 

Here's a short article if you want more information:

http://virtualthreads.blogspot.co.nz/2006/02/beware-ethernet-flow-control.html

 

Cisco switches have this disabled by default (I don't think they can generate them at all these days), but I don't know about the 6120G/XG.

 

Otherwise, I would look at the loading of the cluster - a good rule of thumb is a queue depth of 1 per spindle. (For example, if you have 32 spindles, a queue depth of 32 is probably not terrible; you can watch this from a host with the counters sketched below.)

Anything more and you may be losing significant performance to contention.
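You can watch the queue depth and latency from a host with the standard PhysicalDisk counters, for example with typeperf (the sample interval and count are arbitrary):

rem sample queue length and latency of every physical disk every 5 seconds, 12 times
typeperf "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\Avg. Disk sec/Transfer" -si 5 -sc 12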

 

Your throughput is actually pretty good for a P4000 - the 4k results are where things are worse than I would expect... it looks like what you would see on a very heavily loaded cluster.

 

My personal view is that 3 nodes is optimal for P4000 deployments on 1 GbE - anything more and I think things start slowing down due to the amount of traffic going between the nodes. To really crank up the performance you need to go to 10 GbE, and if that is too expensive I normally recommend shared SAS or FC SANs.

These days the 3PAR 7000 series is pretty similar in price to a 4-node P4000 cluster, and it will absolutely spank the P4000.

 

 

Regards.

David Tocker