StoreVirtual Storage

High latency, low IO's, MBps

 
Fred Blum
Valued Contributor



I have tested with IOMeter and SQLIO against the server's local HD and the P4300 2-node SAN, HD RAID 5, Network RAID 10.

I tested first against the local HD, then against the SAN without jumbo frames/flow control/static trunk TRK1/LACP/RSTP, and then with jumbo frames, flow control, static trunk TRK1, LACP and RSTP enabled. In both cases the NICs were teamed with TLB (=ALB). The first time, the SAN disk was formatted NTFS with the default allocation unit size.
Random 32, 64, 128 and 256KB write IOs were all better against the hard disk; the exception was 8KB random writes, 47% worse. Sequential write IOs were all around 22% worse. Small random read IOs were better (8KB: 8875, 344% better), but 128KB and over were worse, and sequential read IOs were all worse.

I then tried improving things with jumbo frames, flow control, static trunk, LACP and RSTP, with the disk now formatted with a 64KB allocation unit size. Small random writes improved slightly, but random writes of 32KB and over got worse. I'm seeing worse performance with small random reads, improving at 128KB and over; the same picture with sequential reads. See the Excel sheet.

I had expected to see an improvement across the board. Was I wrong to assume that?

What performance are you achieving? The SQLIO test definition is also in the Excel sheet.

Is there a way to monitor the HP 2910al switch's performance?

TIA,
Fred


29 REPLIES
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

Sending excel sheet again.
mggates
New Member

Re: High latency, low IO's, MBps

I am seeing similar disappointing results. My client has a P4300 SAS 7.2 SAN. Both units are using 802.3ad link aggregation into a dedicated VLAN on a pair of Cisco 3750s, so I don't think the network is a limiting factor, unless it has to do with jumbo frames. A simple run of the ATTO disk benchmark on a server with an attached SAN volume shows performance maxing out around 120 MB/s. The same server running the benchmark on its local RAID array approaches 400 MB/s. I am struggling in my search for tuning documents and for just what my expectation of performance should be.
mggates
New Member

Re: High latency, low IO's, MBps

Maybe I answered my own question: regarding bits and bytes. My server in question only has a single 1Gb NIC into the storage VLAN. If my understanding is correct, that should top out at 125 MB/s? If I add a NIC and bundle them, should I expect to see disk speeds approaching 250 MB/s?
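As a back-of-the-envelope check on the bits-vs-bytes question: a gigabit link is 125 MB/s raw, and Ethernet/IP/TCP framing shaves a few percent off that before iSCSI payload. A rough sketch (the overhead figures are textbook header sizes, not measurements from this setup):

```python
# Rough usable-throughput ceiling for a single gigabit NIC.
# Header sizes are standard; real results also depend on TCP options etc.

LINK_BPS = 1_000_000_000          # 1 Gb/s line rate

def payload_ratio(mtu: int) -> float:
    """Fraction of each Ethernet frame left for payload above TCP."""
    eth_overhead = 14 + 4 + 8 + 12   # header + FCS + preamble + interframe gap
    ip_tcp = 20 + 20                 # IPv4 + TCP headers, no options
    return (mtu - ip_tcp) / (mtu + eth_overhead)

for mtu in (1500, 9000):             # standard vs jumbo frames
    mbps = LINK_BPS / 8 * payload_ratio(mtu) / 1e6
    print(f"MTU {mtu}: ~{mbps:.0f} MB/s usable")
```

So ~125 MB/s is the right order of magnitude for one NIC, and jumbo frames only buy back a few MB/s of framing overhead. Whether a second bonded NIC doubles that depends on the bonding mode actually spreading a single host's traffic across both links.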
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

@mggates

In earlier threads I've read that around 125 MB/s is the max, but I am not achieving that with Adaptive Load Balancing.

Have a look at this link: Bonding versus MPIO performance http://blog.open-e.com/bonding-versus-mpio-explained/
Damon Rapp
Advisor

Re: High latency, low IO's, MBps

With 802.3ad you can really only get 125 MB/s per host. Each NIC on the LH box can only talk to one NIC in the server. So on the LH node you could get 250 MB/s of throughput, but you would need 2 clients to test that out (125 MB/s per client).

This of course assumes that you have enough disks in the right RAID configuration to be able to generate 250 MB/s of throughput.

To get more throughput to the clients, you could bond interfaces on the clients and then have them access multiple LH nodes via network RAID.

In my SAN setup, all LH nodes and servers are using 802.3ad and have at least 2 bonded NICs.

Thanks,

Damon
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

Hi Damon,

I have a 2-node 7.2TB Starter SAN, which has 2x8 HDs. Still in HD RAID 5 (thinking about changing that to RAID 10 for performance) and Network RAID 10. As a rule of thumb I've read that that should be able to produce 16x150 = 2400 IOPS.

If I follow this calculation, (IOPS per disk * number of disks * segment size) / 1024, I should be able to reach 150 MB/s.
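Plugging the thread's numbers into that rule of thumb (a sketch of the same arithmetic; 150 IOPS per spindle and a 64KB segment size are the assumed figures):

```python
# Rule-of-thumb throughput from spindle count, per the formula above.
IOPS_PER_DISK = 150     # assumed figure per 15K SAS spindle
DISKS = 16              # 2 nodes x 8 drives
SEGMENT_KB = 64         # assumed segment size

total_iops = IOPS_PER_DISK * DISKS
mbps = total_iops * SEGMENT_KB / 1024

print(total_iops)   # 2400
print(mbps)         # 150.0
```

Note this is a spindle-only read estimate: it ignores the RAID write penalty, Network RAID mirroring, and controller cache, so write numbers will land well below (or, with cache, sometimes above) it.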

Did you see my SQLIO results? The max throughput was 110.01 MB/s / 1760.16 IOps, at 64KB random read IOs. That was with 64KB allocation unit size, jumbo frames, flow control and RSTP.
With the default W2008 R2 allocation unit size, no jumbo frames, no flow control and no RSTP it was 112.96 MB/s / 1807.4 IOps; both cases ALB. So it fell.

I had expected to see an overall improvement after following the Networking Best Practices Guide. The improvement shows only with 8K and 32K random write IOs and with sequential reads, probably due to the 64KB allocation unit size. But 64KB random write IOs fall. That is not what I had expected, and it is why I am questioning my configuration. Was my assumption of an overall improvement with jumbo frames/flow control/RSTP/static LACP trunk wrong?

I am thinking of testing again without jumbo frames, and testing with HD RAID 10, before deciding on the production setup.

Pointers appreciated.






Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps

When reconfiguring one node I removed it from the cluster and management group, so the remaining node had to be changed to Network RAID 0.
I copied the 25GB SQLIO test file back over and noticed that the transfer speed doubled from 75MB/s to 150MB/s.
So I have half the spindles, 8 instead of 16, but without network RAID the speed still doubles. Is there such a high price in performance for Network RAID?
teledata
Respected Contributor

Re: High latency, low IO's, MBps

Hmmm,

That doesn't sound correct...

I'd start by enabling SNMP on your switch, then collect interface statistics:

Packets in/out
Errors in/out
Dropped packets in/out
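Once SNMP is on, those counters only become useful as deltas between polls. A minimal sketch of turning two samples into per-second rates (the counter values below are placeholders; in practice they would come from an SNMP poll of the switch's IF-MIB ifInOctets/ifOutOctets/ifInErrors):

```python
# Convert two samples of interface counters into per-second rates.
# Sample values are made up for illustration; a real poll would
# read them from the switch via SNMP (IF-MIB).

def iface_rates(prev: dict, curr: dict, interval_s: float) -> dict:
    """Per-second deltas between two counter samples."""
    return {k: (curr[k] - prev[k]) / interval_s for k in prev}

prev = {"in_octets": 1_000_000,   "out_octets": 2_000_000, "in_errors": 0}
curr = {"in_octets": 601_000_000, "out_octets": 2_500_000, "in_errors": 3}

rates = iface_rates(prev, curr, interval_s=60)
print(f"in: {rates['in_octets'] * 8 / 1e6:.1f} Mb/s, "
      f"errors/s: {rates['in_errors']:.2f}")
# -> in: 80.0 Mb/s, errors/s: 0.05
```

A sustained error or drop rate above zero on the SAN-facing ports would point at the network rather than the array.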


http://www.tdonline.com
Fred Blum
Valued Contributor

Re: High latency, low IO's, MBps


I have just changed the HD RAID 5 to RAID 10 and did a SQLIO test with Network RAID 0. The 8KB random write improved from 2471.28 IOs/sec / 19.30 MB/s to 13450.80 IOs/sec / 105.08 MB/s.
The volume is currently restriping; I will also test with Network RAID 10. Expecting to see a drop again to around 20 MB/s.
Will try to find out how to monitor the 2910al switch.
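The jump in 8KB random writes after moving from RAID 5 to RAID 10 is consistent with the usual write-penalty rule of thumb: 4 back-end IOs per random write on RAID 5, 2 on RAID 10. A sketch with the same assumed 150 IOPS/spindle as earlier in the thread:

```python
# Spindle-only random-write estimate under different RAID write penalties.
IOPS_PER_DISK = 150                     # assumed per-spindle figure
DISKS = 16
PENALTIES = {"RAID 5": 4, "RAID 10": 2}  # back-end IOs per host write

for level, penalty in PENALTIES.items():
    writes = IOPS_PER_DISK * DISKS // penalty
    print(f"{level}: ~{writes} random write IOPS")

# Network RAID 10 then mirrors each write to both nodes, roughly
# halving the host-visible write throughput again.
```

The measured numbers above sit well beyond these spindle-only estimates, presumably thanks to controller write cache, but the direction of the change (RAID 10 roughly doubling random-write capacity over RAID 5, Network RAID costing it back) is what the rule of thumb predicts.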