<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: High latency, low IO's, MBps in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/5299541#M3553</link>
    <description>I'm afraid it currently isn't possible to get above ~120 MB/s, because the current DSM only uses one NIC at a time. To achieve higher throughput you need 10 GbE NICs. Single-threaded IOs such as a file copy are also limited in throughput by SAN/iQ. Using VMware's round-robin MPIO plugin could help to actively use both NICs.</description>
    <pubDate>Sat, 13 Aug 2011 22:08:54 GMT</pubDate>
    <dc:creator>M.Braak</dc:creator>
    <dc:date>2011-08-13T22:08:54Z</dc:date>
    <item>
      <title>High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709450#M1749</link>
      <description>I have tested with IOMeter and SQLIO against the server's local HD and against the P4300 2-node SAN (HD RAID 5, nRAID 10).&lt;BR /&gt;&lt;BR /&gt;I tested first against the local HD, then against the SAN without jumbo frames/flow control/static trunk TRK1/LACP/RSTP, and then with jumbo frames, flow control, static trunk TRK1, LACP and RSTP. In both cases the NICs were teamed with TLB (=ALB). The first time, the SAN disk was formatted NTFS with the default allocation unit size.&lt;BR /&gt;32, 64, 128 and 256 KB random write IOs were all better against the hard disk; the exception was 8 KB random writes, which were 47% worse. Sequential write IOs were all around 22% worse. Small random read IOs were better (8 KB: 8875 IOPS, 344% better), over 128 KB worse, and sequential read IOs were all worse.&lt;BR /&gt;&lt;BR /&gt;I tried improving this with jumbo frames, flow control, static trunk, LACP and RSTP. The disk was now formatted with a 64 KB allocation unit size. Small random writes improved slightly, while random writes of 32 KB and above were worse. I am seeing worse performance with small random reads, improving at 128 KB and above. Same picture with sequential reads. See the Excel sheet.&lt;BR /&gt;&lt;BR /&gt;I had expected to see an improvement across the board. Was I wrong to assume that?&lt;BR /&gt;&lt;BR /&gt;What performance are you achieving? The SQLIO test definition is also in the Excel sheet.&lt;BR /&gt;&lt;BR /&gt;Is there a way to monitor the HP 2910al switch performance?&lt;BR /&gt;&lt;BR /&gt;TIA,&lt;BR /&gt;Fred</description>
      <pubDate>Thu, 04 Nov 2010 14:44:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709450#M1749</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-04T14:44:36Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709451#M1750</link>
      <description>Sending excel sheet again.</description>
      <pubDate>Thu, 04 Nov 2010 14:48:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709451#M1750</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-04T14:48:41Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709452#M1751</link>
      <description>I am seeing similar disappointing results. My client has a P4300 SAS 7.2 SAN. Both units are using 802.3ad link aggregation into a dedicated VLAN on a pair of Cisco 3750s. I don't think the network is a limiting factor, unless it has to do with jumbo frames. A simple run of the ATTO disk benchmark on a server with an attached SAN volume shows performance maxing out around 120 MB/s. The same server running the benchmark on a local RAID array approaches 400 MB/s. I am struggling in my search for tuning documents and for just what my expectation of performance should be.</description>
      <pubDate>Thu, 04 Nov 2010 16:20:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709452#M1751</guid>
      <dc:creator>mggates</dc:creator>
      <dc:date>2010-11-04T16:20:09Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709453#M1752</link>
      <description>Maybe I answered my own question. Regarding bits and bytes: my server in question only has a single 1 Gb NIC into the storage VLAN. If my understanding is correct, that should top out at 125 MB/s? If I add a NIC and bundle them, should I expect to see disk speeds approaching 250 MB/s?</description>
      <pubDate>Thu, 04 Nov 2010 16:35:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709453#M1752</guid>
      <dc:creator>mggates</dc:creator>
      <dc:date>2010-11-04T16:35:32Z</dc:date>
    </item>
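The bits-to-bytes arithmetic in the post above can be sketched as follows (a hypothetical helper, not part of the thread; the `efficiency` factor is an assumption to account for TCP/IP and iSCSI framing overhead):

```python
def link_throughput_mbytes(link_gbits: float, efficiency: float = 1.0) -> float:
    """Theoretical payload ceiling of an Ethernet link in MB/s.

    1 Gbit/s = 1000 Mbit/s; divide by 8 bits per byte -> 125 MB/s.
    Real iSCSI traffic loses a few percent to TCP/IP and iSCSI
    headers, which is why ~110-122 MB/s is the typical observed max.
    """
    return link_gbits * 1000 / 8 * efficiency

print(link_throughput_mbytes(1))   # 125.0 -> single 1 GbE NIC
print(link_throughput_mbytes(2))   # 250.0 -> two bonded 1 GbE NICs, best case
```

As the later replies note, the second number is only reachable when traffic actually spreads across both links, which 802.3ad does per flow, not per packet.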
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709454#M1753</link>
      <description>@mggates&lt;BR /&gt;&lt;BR /&gt;In earlier threads I've read that on average 125 MB/s was the max, but I am not achieving that with Advanced Load Balancing.&lt;BR /&gt;&lt;BR /&gt;Have a look at this link: Bonding versus MPIO performance &lt;A href="http://blog.open-e.com/bonding-versus-mpio-explained/" target="_blank"&gt;http://blog.open-e.com/bonding-versus-mpio-explained/&lt;/A&gt;</description>
      <pubDate>Thu, 04 Nov 2010 18:40:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709454#M1753</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-04T18:40:57Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709455#M1754</link>
      <description>With 802.3ad you can really only get 125 MB/s per host. Each NIC on the LH box can only talk to one NIC in the server, so on the LH node you could get 250 MB/s of throughput, but you would need 2 clients to test that out (125 MB/s per client).&lt;BR /&gt;&lt;BR /&gt;This of course assumes that you have enough disks in the right RAID configuration to be able to generate 250 MB/s of throughput.&lt;BR /&gt;&lt;BR /&gt;To get more throughput to the clients, you could bond interfaces on the clients and then have them access multiple LH nodes via network RAID.&lt;BR /&gt;&lt;BR /&gt;In my SAN setup, all LH nodes and servers are using 802.3ad and have at least 2 bonded NICs.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Damon</description>
      <pubDate>Tue, 09 Nov 2010 00:38:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709455#M1754</guid>
      <dc:creator>Damon Rapp</dc:creator>
      <dc:date>2010-11-09T00:38:59Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709456#M1755</link>
      <description>Hi Damon,&lt;BR /&gt;&lt;BR /&gt;I have a 2-node 7.2 TB Starter SAN, which has 2x8 HDs, still in HD RAID 5 (thinking about changing that to RAID 10 for performance) and nRAID 10. As a rule of thumb I've read that that should be able to produce 16x150 = 2400 IOPS.&lt;BR /&gt;&lt;BR /&gt;If I follow this calculation, (IOPS * number of disks * segment size) / 1024, I should be able to reach 150 MBps.&lt;BR /&gt;&lt;BR /&gt;Did you see my SQLIO results? The max was 110.01 MBps / 1760.16 IOPS at 64 KB random read IOs. This was with 64 KB allocation unit size, jumbo frames, flow control and RSTP.&lt;BR /&gt;With the default W2008 R2 allocation unit size, no jumbo frames, no flow control and no RSTP it was 112.96 MBps / 1807.4 IOPS; both cases ALB. So it fell.&lt;BR /&gt;&lt;BR /&gt;I had expected to see an overall improvement following the Networking Best Practices Guide. The improvement is seen only with 8K and 32K random write IOs and with sequential reads, probably due to the 64 KB allocation unit size. But 64 KB random write IOs fell. That is not what I had expected, and it is why I am questioning my configuration. Were my assumptions of an overall improvement with jumbo frames/flow control/RSTP/static LACP trunk wrong?&lt;BR /&gt;&lt;BR /&gt;I am thinking of testing again without jumbo frames, and testing with HD RAID 10, before deciding on the production setup.&lt;BR /&gt;&lt;BR /&gt;Pointers appreciated.</description>
      <pubDate>Tue, 09 Nov 2010 08:36:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709456#M1755</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-09T08:36:35Z</dc:date>
    </item>
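Fred's spindle-count rule of thumb can be written out explicitly (hypothetical helpers for illustration; real results also depend on the RAID write penalty and on network RAID, which this rule ignores):

```python
def aggregate_iops(disks: int, iops_per_disk: int = 150) -> int:
    """Rule of thumb: ~150 IOPS per SAS spindle, summed across the stripe."""
    return disks * iops_per_disk

def throughput_mbps(iops_per_disk: int, disks: int, segment_kb: int) -> float:
    """The formula from the post:
    (IOPS * number of disks * segment size in KB) / 1024 -> MBps."""
    return iops_per_disk * disks * segment_kb / 1024

print(aggregate_iops(16))            # 2400 -> the 16x150 figure in the post
print(throughput_mbps(150, 16, 64))  # 150.0 -> the 150 MBps figure in the post
```

Note the rule gives a raw ceiling; RAID 5 costs roughly four back-end IOs per random write and RAID 10 roughly two, so effective write IOPS land well below the aggregate number.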
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709457#M1756</link>
      <description>When reconfiguring one node, I removed it from the cluster and management group, and the remaining node had to be changed to nRAID 0.&lt;BR /&gt;I copied the 25 GB SQLIO test file back over and noticed that the transfer speed doubled from 75 MB/s to 150 MB/s.&lt;BR /&gt;So I have half the spindles, 8 instead of 16, but without network RAID, and still the speed doubles. Is there such a high price in performance for nRAID?</description>
      <pubDate>Tue, 09 Nov 2010 10:39:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709457#M1756</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-09T10:39:19Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709458#M1757</link>
      <description>Hmmm,&lt;BR /&gt;&lt;BR /&gt;that doesn't sound correct...&lt;BR /&gt;&lt;BR /&gt;I'd start by enabling SNMP on your switch, then collect interface statistics:&lt;BR /&gt;&lt;BR /&gt;Packets in/out&lt;BR /&gt;Errors in/out&lt;BR /&gt;Dropped packets in/out</description>
      <pubDate>Tue, 09 Nov 2010 14:43:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709458#M1757</guid>
      <dc:creator>teledata</dc:creator>
      <dc:date>2010-11-09T14:43:48Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709459#M1758</link>
      <description>I have just changed the HD RAID 5 to RAID 10 and ran an SQLIO test with network RAID 0. 8 KB random writes improved from 2471.28 IOs/sec (19.30 MBs/sec) to 13450.80 IOs/sec (105.08 MBs/sec).&lt;BR /&gt;The volume is currently restriping; I will also test with network RAID 10, expecting to see a drop again to around 20 MBps.&lt;BR /&gt;Will try to find out how to monitor the 2910al switch.</description>
      <pubDate>Tue, 09 Nov 2010 15:16:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709459#M1758</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-09T15:16:41Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709460#M1759</link>
      <description>After nRAID 10 was set and restriping finished:&lt;BR /&gt;8 KB random write IOs&lt;BR /&gt;Throughput metrics:&lt;BR /&gt;IOs/sec: 4818.04&lt;BR /&gt;MBs/sec: 37.64&lt;BR /&gt;&lt;BR /&gt;That is a drop from&lt;BR /&gt;IOs/sec: 13450.80&lt;BR /&gt;MBs/sec: 105.08&lt;BR /&gt;with no network RAID.</description>
      <pubDate>Tue, 09 Nov 2010 15:24:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709460#M1759</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-09T15:24:24Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709461#M1760</link>
      <description>I had a look at the port counters and found that the error counters are mostly zero. There are Tx drops, but way below HP's rule of thumb of 1 in 5000.&lt;BR /&gt;&lt;BR /&gt;Switch 1&lt;BR /&gt;Port 5, server ALB slave NIC connected: no errors.&lt;BR /&gt;Port 9, SAN node 1 ALB slave NIC connected:&lt;BR /&gt;Bytes Tx 1,706,663,918, Unicast Tx 132,150,964, Bcast Tx 267,65&lt;BR /&gt;Drops Tx 183&lt;BR /&gt;Port 15, SAN node 2 ALB slave NIC connected:&lt;BR /&gt;B Tx 31,882,815, Bc Tx 132,883,178, U 268,263&lt;BR /&gt;Drops Tx 11&lt;BR /&gt;&lt;BR /&gt;Strangely, the Trk1 ports show flow control off while it is enabled in the config menu. According to the manual this happens when the port on the other side is not configured for flow control. Guess what: the connected Trk1 ports on switch 2 all show flow control on! Contradictory.&lt;BR /&gt;&lt;BR /&gt;Switch 2 has no drops on the SAN nodes.&lt;BR /&gt;Server NIC port:&lt;BR /&gt;B Tx 1,290,285,957, Bc Tx 316,230,605, U Tx 201,778&lt;BR /&gt;Drops Tx 3245&lt;BR /&gt;&lt;BR /&gt;Should I conclude that the overhead of network RAID 10 is the reason for the complaints about P4300 performance?</description>
      <pubDate>Wed, 10 Nov 2010 12:00:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709461#M1760</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-10T12:00:25Z</dc:date>
    </item>
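The 1-in-5000 rule of thumb from the post above can be checked mechanically against the port counters (a hypothetical helper; counter names and units vary by switch):

```python
def tx_drops_acceptable(tx_drops: int, tx_frames: int,
                        threshold: float = 1 / 5000) -> bool:
    """True if a port's Tx drop ratio is at or below the rule-of-thumb
    threshold of one dropped frame per 5000 transmitted."""
    if tx_frames == 0:
        return True  # nothing transmitted, nothing to judge
    return tx_drops / tx_frames <= threshold

# Switch 1, port 9 from the post: 183 drops vs ~132 million unicast frames.
print(tx_drops_acceptable(183, 132_150_964))  # True: roughly 1 in 722,000
```

By this test none of the quoted ports are in trouble, supporting the post's conclusion that the switch drops alone don't explain the latency.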
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709462#M1761</link>
      <description>Volume / Access specification / IOPS / Read IOPS / Write IOPS / MBps&lt;BR /&gt;NR0 8K; 55% Read; 80% random 842 462 380 6.6&lt;BR /&gt;NR-10 8K; 55% Read; 80% random 513 282 230 4.0&lt;BR /&gt;&lt;BR /&gt;NR0 16K; 66% Read; 100% random 923 619 305 14.4&lt;BR /&gt;NR-10 16K; 66% Read; 100% random 485 325 160 7.6&lt;BR /&gt;&lt;BR /&gt;NR0 64K; 66% Read; 100% random 470 315 155 29.4&lt;BR /&gt;NR-10 64K; 66% Read; 100% random 304 204 100 19.0&lt;BR /&gt;&lt;BR /&gt;NR0 4K; 75% Read; 80% random 829 621 207 3.2&lt;BR /&gt;NR-10 4K; 75% Read; 80% random 606 455 151 2.4&lt;BR /&gt;&lt;BR /&gt;NR0 32K; 55% Read; 80% random 541 297 244 16.9&lt;BR /&gt;NR-10 32K; 55% Read; 80% random 377 207 170 11.8&lt;BR /&gt;&lt;BR /&gt;I ran a quick test... All I had handy though was a pair of VSAs (on ESXi 3.5, each VSA has 16 500GB SATA drives), so there is a lot more network overhead than on a physical node, but even here you can see that the drop in performance isn't as large as you are seeing in your test...</description>
      <pubDate>Thu, 11 Nov 2010 03:11:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709462#M1761</guid>
      <dc:creator>teledata</dc:creator>
      <dc:date>2010-11-11T03:11:58Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709463#M1762</link>
      <description>Thanks for the effort. IMHO the IOMeter access specification with 55% reads masks the outcome, though a significant drop is already noticeable. With 100% 8K random writes (SQL/Exchange server) the network RAID overhead will be more apparent. Our SAN is intended as the target for our Hyper-V SQL 2008 server, and mixed reads/writes approach reality better.&lt;BR /&gt;&lt;BR /&gt;The switch 1 'no flow control' issue is now gone, as I exchanged the dual-personality ports for 10/100/1000 ports on the 2910al.&lt;BR /&gt;&lt;BR /&gt;I have attached the results of my SQLIO tests so far on a P4300 7.2 TB 2-node system: ALB with no jumbo frames, no flow control and no trunk versus ALB with jumbo frames, trunk, flow control and RSTP; HD RAID 5 versus RAID 10; network RAID 0 versus network RAID 10.&lt;BR /&gt;&lt;BR /&gt;Would there be an improvement in sequential reads and writes when adding a third node? Roughly what order of improvement?&lt;BR /&gt;&lt;BR /&gt;TIA.</description>
      <pubDate>Thu, 11 Nov 2010 09:40:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709463#M1762</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2010-11-11T09:40:03Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709464#M1763</link>
      <description>Did you ever get anywhere on this?&lt;BR /&gt;&lt;BR /&gt;I'm currently looking into the P4300 solution, but all I'm finding are people complaining about the performance of the network RAID (the whole reason to purchase the SAN).</description>
      <pubDate>Sun, 12 Jun 2011 23:07:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709464#M1763</guid>
      <dc:creator>AuZZZie</dc:creator>
      <dc:date>2011-06-12T23:07:33Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709465#M1764</link>
      <description>@Auzzie&lt;BR /&gt;&lt;BR /&gt;We are currently using HP's High Availability Bundle Midrange Rack HA. The P4300 is configured RAID 10, nRAID 10 with ALB. I have compared my test results with those provided by HP and seen comparable numbers. The bottleneck is the P4300's two nodes for load balancing: per HP's information, adding a third or fourth node will give a 50% and 100% performance increase respectively. Basically, two nodes is a poor man's solution.&lt;BR /&gt;I am currently running a W2008 R2 fail-over cluster with a W2008 R2 Hyper-V server running a Progress database server. Performance is acceptable. We are going to add more nodes before going live with further Hyper-V servers (SQL Server, RDS, SBS server).&lt;BR /&gt;&lt;BR /&gt;The SAN capabilities in combination with W2008 R2 Hyper-V are a definite plus. Two nodes is not the recommended config and a big question mark for performance-critical database servers. In such instances multiple P4500s with 10 GbE ports, or a server with solid-state disks, may be a better solution.</description>
      <pubDate>Wed, 15 Jun 2011 13:25:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709465#M1764</guid>
      <dc:creator>Fred Blum</dc:creator>
      <dc:date>2011-06-15T13:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709466#M1765</link>
      <description>We have a two-node P4300 cluster with ALB on redundant HP 2910al switches. Clients are two DL380 G7s with ESXi 4.1.&lt;BR /&gt;&lt;BR /&gt;Attached is a screenshot of a robocopy job and of the HP SAN Performance Monitor. As you can see, we are able to reach 122 MByte/s (the max. for a 1 Gbit/s link is 125 MByte/s).&lt;BR /&gt;&lt;BR /&gt;Source of the robocopy job is a Win2003 server using the MS iSCSI initiator; target is a Win2003 server on VMFS.&lt;BR /&gt;&lt;BR /&gt;All volumes are Network RAID 10 (volumes mirrored).&lt;BR /&gt;&lt;BR /&gt;Flow control, jumbo frames and Rapid Spanning Tree are enabled.&lt;BR /&gt;&lt;BR /&gt;Of course this is not an IO test, but it shows that a P4300 cluster can operate at the max. throughput limit of a 1 Gbit/s link.&lt;BR /&gt;&lt;BR /&gt;Thomas</description>
      <pubDate>Fri, 17 Jun 2011 08:04:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709466#M1765</guid>
      <dc:creator>Thomas Halwax</dc:creator>
      <dc:date>2011-06-17T08:04:18Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709467#M1766</link>
      <description>@teledata&lt;BR /&gt;&lt;BR /&gt;comparing your results with 2 x P4300 G2 Cluster, each node has 8 x 450 GB SAS, Node Raid 5, Network Raid 10 (mirror)&lt;BR /&gt;&lt;BR /&gt;Using IOMeter on a 5GB raw disk via MS iSCSI initiator (iops total, iops read, iops write,mbps):&lt;BR /&gt;&lt;BR /&gt;4k, 75% Read, 80% Random: 2244,1684,559,8&lt;BR /&gt;8k, 55% Read, 80% Random: 1886,1038,848,14&lt;BR /&gt;16k, 66% Read, 100% Random: 2193,1443,750,34&lt;BR /&gt;32k, 55% Read, 80% Random: 1456,801,654,45&lt;BR /&gt;64k, 66% Read, 100% Random: 1192,786,405,74&lt;BR /&gt;&lt;BR /&gt;Thomas&lt;BR /&gt;</description>
      <pubDate>Fri, 17 Jun 2011 10:29:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709467#M1766</guid>
      <dc:creator>Thomas Halwax</dc:creator>
      <dc:date>2011-06-17T10:29:12Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709468#M1767</link>
      <description>Thomas,&lt;BR /&gt;what does your server-to-SAN connection look like? I see the G2 cluster has 2 internal 10/100 cards. It's hard to believe you're seeing that performance out of those cards.</description>
      <pubDate>Fri, 17 Jun 2011 22:32:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709468#M1767</guid>
      <dc:creator>mggates</dc:creator>
      <dc:date>2011-06-17T22:32:55Z</dc:date>
    </item>
    <item>
      <title>Re: High latency, low IO's, MBps</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709469#M1768</link>
      <description>I think you're mistaken. The G2 has 2 X 10/100/1000 nics per node.</description>
      <pubDate>Fri, 17 Jun 2011 22:49:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/high-latency-low-io-s-mbps/m-p/4709469#M1768</guid>
      <dc:creator>AuZZZie</dc:creator>
      <dc:date>2011-06-17T22:49:56Z</dc:date>
    </item>
  </channel>
</rss>

