StoreVirtual Storage

Re: High latency, low IO's, MBps

 
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

After setting nRAID10 and restriping finished:
Writing 8 KB random IOs
Throughput metrics:
IOs/sec: 4818.04
MBs/sec: 37.64

That is a drop from
IOs/sec: 13450.80
MBs/sec: 105.08
with no network RAID.
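As a sanity check, the MB/s figures above follow directly from IOPS times block size; a quick sketch (the `mbps` helper is mine, not from the thread):

```python
def mbps(iops, block_kb):
    """Convert an IOPS figure to MB/s for a given block size in KB."""
    return iops * block_kb / 1024.0

# 8 KB random writes from the post above
nraid10 = mbps(4818.04, 8)    # ~37.6 MB/s with Network RAID 10
nraid0 = mbps(13450.80, 8)    # ~105.1 MB/s with no network RAID
print(f"drop: {(1 - nraid10 / nraid0) * 100:.0f}%")  # ~64% fewer MB/s
```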
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

I had a look at the port counters and found that the error counters are mostly zero. There are Tx drops, but way below HP's rule of thumb of 1 in 5000.

Switch 1
Port 5: server ALB slave NIC connected. No errors.
Port 9: SAN node 1 ALB slave NIC connected.
Bytes Tx 1,706,663,918, Unicast Tx 132,150,964, Bcast Tx 267,65
Drops Tx 183
Port 15: SAN node 2 ALB slave NIC connected.
Bytes Tx 31,882,815, Bcast Tx 132,883,178, Unicast Tx 268,263
Drops Tx 11

Strangely, the trk1 ports show flow control off, while it is enabled in the Config menu. According to the manual this happens when the port on the other side is not configured for flow control. Guess what: the connected Trk1 ports on switch 2 all show flow control on! Contradictory.

Switch 2 has no drops on the SAN nodes.
Server NIC port:
Bytes Tx 1,290,285,957, Bcast Tx 316,230,605, Unicast Tx 201,778
Drops Tx 3245
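The 1-in-5000 rule of thumb can be checked against the posted counters; a rough sketch, with the assumption (mine, not HP's) that the unicast Tx frame count is a fair denominator:

```python
def drop_rate(drops, frames):
    """Fraction of transmitted frames that were dropped."""
    return drops / frames

THRESHOLD = 1 / 5000  # HP's rule of thumb from the thread

# Switch 1, port 9 (SAN node 1): 183 drops over ~132M unicast frames
rate = drop_rate(183, 132_150_964)
print(f"{rate:.2e}  below threshold: {rate < THRESHOLD}")
```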

Should I conclude that the overhead of Network RAID 10 is the reason for the complaints about the P4300's performance?
teledata
Respected Contributor

Re: High latency, low IO's, MBps

Volume  Access specification  Total IOps  Read IOps  Write IOps  MBps
NR0 8K; 55% Read; 80% random 842 462 380 6.6
NR-10 8K; 55% Read; 80% random 513 282 230 4.0

NR0 16K; 66% Read; 100% random 923 619 305 14.4
NR-10 16K; 66% Read; 100% random 485 325 160 7.6

NR0 64K; 66% Read; 100% random 470 315 155 29.4
NR-10 64K; 66% Read; 100% random 304 204 100 19.0

NR0 4K; 75% Read; 80% random 829 621 207 3.2
NR-10 4K; 75% Read; 80% random 606 455 151 2.4

NR0 32K; 55% Read; 80% random 541 297 244 16.9
NR-10 32K; 55% Read; 80% random 377 207 170 11.8

I ran a quick test... All I had handy though was a pair of VSAs (on ESXi 3.5, each VSA has 16 500GB SATA drives) so there is a lot more network overhead than a physical node, but even here you can see that the drop in performance isn't as large as you are seeing in your test...
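The NR0 to NR-10 penalty in the table above works out to roughly 27-47% depending on the workload; a quick sketch over the posted totals:

```python
# (access spec, NR0 total IOps, NR-10 total IOps) from the table above
results = [
    ("8K 55%R 80%rnd",   842, 513),
    ("16K 66%R 100%rnd", 923, 485),
    ("64K 66%R 100%rnd", 470, 304),
    ("4K 75%R 80%rnd",   829, 606),
    ("32K 55%R 80%rnd",  541, 377),
]

for spec, nr0, nr10 in results:
    drop = (1 - nr10 / nr0) * 100
    print(f"{spec}: {drop:.0f}% fewer IOps with Network RAID 10")
```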
http://www.tdonline.com
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

Thanks for the effort. IMHO the IOMeter access specification with 55% reads masks the outcome. Still, a significant drop is already noticeable. With 100% 8K random writes (SQL/Exchange server) the network RAID overhead will be more apparent. Our SAN is intended as the target for our Hyper-V SQL 2008 server, and mixed reads/writes approach reality better.

The "no flow control" issue on switch 1 is now gone, as I exchanged the dual-personality ports for 10/100/1000 ports on the 2910al.

I have attached the results of my IOSql tests so far on a P4300 7.2TB two-node system: ALB, no jumbo frames, no flow control, no trunk versus ALB, jumbo frames, trunk, flow control and RSTP; HD RAID 5 versus RAID 10; Network RAID 0 versus Network RAID 10.

Would there be an improvement in sequential reads and writes when adding a third node? If so, of what order?

TIA.
AuZZZie
Frequent Advisor

Re: High latency, low IO's, MBps

Did you ever get anywhere on this?

I'm currently looking into the P4300 solution, but all I'm finding are people complaining about the performance of the network RAID (the whole reason to purchase the SAN).
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

@Auzzie

We are currently using HP's High Availability Bundle Midrange Rack HA. The P4300 is configured RAID 10, nRAID10 with ALB. I have compared my test results with those provided by HP and seen comparable results. The bottleneck is the P4300's two nodes for load balancing. Adding a third or fourth node will lead to a 50% and 100% performance increase respectively, as per HP's information. Basically, two nodes is a poor man's solution.
I am currently running a W2008R2 fail-over cluster with a W2008R2 Hyper-V server running a Progress database server. Performance is acceptable. We are going to add more nodes before going live with further Hyper-V servers (SQL Server, RDS, SBS server).

The SAN capabilities in combination with W2008R2 Hyper-V are a definite plus. Two nodes is not the recommended config and a big question mark for performance-critical database servers. In such instances multiple P4500s with 10 GbE ports, or a server with solid-state disks, may be a better solution.
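Taking HP's scaling figures quoted above at face value (+50% with a third node, +100% with a fourth), a back-of-the-envelope projection looks like this; the baseline number is the 8K random-write result from earlier in the thread, and the linear-scaling assumption is HP's claim, not a measurement:

```python
def projected_iops(two_node_iops, nodes):
    """Linear scaling relative to a 2-node baseline, per the HP
    figures quoted above (+50% at 3 nodes, +100% at 4)."""
    return two_node_iops * nodes / 2

baseline = 4818  # 8K random-write IOPS measured earlier in the thread
for n in (2, 3, 4):
    print(f"{n} nodes: ~{projected_iops(baseline, n):.0f} IOPS")
```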
Thomas Halwax
Advisor

Re: High latency, low IO's, MBps

We have a two node P4300 cluster with ALB on redundant HP 2910al switches. Clients are two DL380 G7 with ESXi 4.1.

Attached is a screenshot of a robocopy job and of the HP SAN Performance Monitor. As you can see we are able to reach 122 MByte/s (max. for a 1 Gbit/s link is 125 MByte/s).

Source of the robocopy job is a Win2003 server using the MS iSCSI initiator, target is Win2003 server on VMFS.

All volumes are Network RAID 10 (volumes mirrored).

Flow control, jumbo frames and Rapid Spanning Tree are enabled.

Of course this is no IO test but it shows that a P4300 cluster can operate at the max. throughput limit of a 1 Gbit/s link.
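The 125 MByte/s ceiling quoted above is just the line rate divided by 8; a quick check (this ignores Ethernet/IP/iSCSI framing overhead, which lowers the achievable ceiling somewhat):

```python
LINK_BITS_PER_SEC = 1_000_000_000  # 1 Gbit/s

max_mbytes = LINK_BITS_PER_SEC / 8 / 1_000_000  # 125.0 MByte/s raw ceiling
print(f"ceiling: {max_mbytes} MByte/s")
print(f"utilisation at 122 MByte/s: {122 / max_mbytes:.1%}")
```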

Thomas
Thomas Halwax
Advisor

Re: High latency, low IO's, MBps

@teledata

Comparing your results with a 2 x P4300 G2 cluster; each node has 8 x 450 GB SAS, node RAID 5, Network RAID 10 (mirror).

Using IOMeter on a 5GB raw disk via the MS iSCSI initiator (IOPS total, IOPS read, IOPS write, MBps):

4k, 75% Read, 80% Random: 2244,1684,559,8
8k, 55% Read, 80% Random: 1886,1038,848,14
16k, 66% Read, 100% Random: 2193,1443,750,34
32k, 55% Read, 80% Random: 1456,801,654,45
64k, 66% Read, 100% Random: 1192,786,405,74
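The MBps column lines up with total IOPS times block size; a rough cross-check (IOMeter's MB/s can differ slightly from this depending on decimal vs binary units, so expect small discrepancies against the rounded figures above):

```python
def expected_mbps(total_iops, block_kib):
    """Throughput implied by an IOPS figure at a given block size (KiB -> MiB/s)."""
    return total_iops * block_kib / 1024

# (block KiB, total IOPS, reported MBps) from the post above
for kib, iops, reported in [(4, 2244, 8), (8, 1886, 14),
                            (16, 2193, 34), (32, 1456, 45),
                            (64, 1192, 74)]:
    print(f"{kib}K: computed {expected_mbps(iops, kib):.1f}, reported {reported}")
```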

Thomas
mggates
New Member

Re: High latency, low IO's, MBps

Thomas,
what does your server-to-SAN connection look like? I see the G2 cluster has two internal 10/100 cards. Hard to believe you're seeing that performance out of those cards.
AuZZZie
Frequent Advisor

Re: High latency, low IO's, MBps

I think you're mistaken. The G2 has 2 x 10/100/1000 NICs per node.