<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Poor performance P4500 Campus SAN with 4 x switches in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/5604855#M5000</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have 2 x P4500 G2, 2 x ESXi 5 hosts, and H3C switches, all running at 10Gb.&lt;/P&gt;&lt;P&gt;P4500 G2: Network RAID 10, 802.3ad bonding.&lt;/P&gt;&lt;P&gt;ESXi 5 x 2 with software iSCSI, Round Robin (RR) path policy.&lt;/P&gt;&lt;P&gt;H3C switches stacked using IRF.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a simple test, two Windows guests copy data to and delete data from each other.&amp;nbsp; During this test the disk queue length on the CMC monitor goes over 6, and the &quot;disk latency increased&quot; error message comes up frequently.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help...&lt;/P&gt;</description>
    <pubDate>Sat, 31 Mar 2012 14:34:50 GMT</pubDate>
    <dc:creator>Snakehead</dc:creator>
    <dc:date>2012-03-31T14:34:50Z</dc:date>
    <item>
      <title>Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766991#M2496</link>
      <description>We have a P4500 campus SAN that was originally connected to 2 x 3Com GB switches with a 2Gb trunk connection, using VMware Round Robin and two GB connections per ESX host - 1 switch per site. To provide switch redundancy at each site, we added a second switch to each site - site 1 now has 2 x switches with a 2Gb trunk, site 2 has the same, and we have a separate 2Gb trunk linking the sites. When we move the P4500 so that one GB connection goes into each switch, performance decreases significantly. If we move both GB connections to the same switch, performance is much better.&lt;BR /&gt;&lt;BR /&gt;Any ideas?</description>
      <pubDate>Fri, 18 Mar 2011 11:21:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766991#M2496</guid>
      <dc:creator>Paul Large</dc:creator>
      <dc:date>2011-03-18T11:21:42Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766992#M2497</link>
      <description>Have you read the best practice for performance article dealing with jumbo frames, adaptive load balancing, and flow control? &lt;A href="http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01750150/c01750150.pdf?jumpid=reg_R1002_USEN" target="_blank"&gt;http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01750150/c01750150.pdf?jumpid=reg_R1002_USEN&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 19 Mar 2011 03:41:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766992#M2497</guid>
      <dc:creator>Cajuntank MS</dc:creator>
      <dc:date>2011-03-19T03:41:17Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766993#M2498</link>
      <description>Yes, I have LACP for the 2Gb connection (2 x 1Gb) between the switches and Flow Control enabled on all switch ports. Does flow control need to be enabled anywhere else - on the P4000, for example?&lt;BR /&gt;&lt;BR /&gt;I have not enabled jumbo frames, as I have read that it rarely makes any difference.</description>
      <pubDate>Sat, 19 Mar 2011 13:27:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766993#M2498</guid>
      <dc:creator>Paul Large</dc:creator>
      <dc:date>2011-03-19T13:27:59Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766994#M2499</link>
      <description>Yes. Flow control must be an end-to-end implementation: NICs on SAN nodes, ports on switches, and NICs on servers.&lt;BR /&gt;You said you have LACP enabled between your switches, but did not say what you were doing on your SAN... ALB? LACP?&lt;BR /&gt;&lt;BR /&gt;On the jumbo frames part, some switches don't play nice with jumbo frames and flow control turned on at the same time. I have HP ProCurve switches, which I have not had an issue with myself.</description>
      <pubDate>Sat, 19 Mar 2011 20:47:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766994#M2499</guid>
      <dc:creator>Cajuntank MS</dc:creator>
      <dc:date>2011-03-19T20:47:49Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766995#M2500</link>
      <description>OK, I did not have Flow Control enabled on the P4000 which I have now enabled. Initial tests show a great improvement in performance :)</description>
      <pubDate>Sun, 20 Mar 2011 20:21:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766995#M2500</guid>
      <dc:creator>Paul Large</dc:creator>
      <dc:date>2011-03-20T20:21:14Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766996#M2501</link>
      <description>Here is a small list that I check...&lt;BR /&gt;Please check on the node, switch, and host server:&lt;BR /&gt;1.) Enable flow control on network switches and adapters. Flow control lets a receiver make the sender pace its speed and is important in avoiding data loss. (Recommended)&lt;BR /&gt;2.) Ensure the Spanning Tree loop-detection algorithm is turned OFF. Loop detection introduces a delay before a port becomes usable for data transfer and may lead to application timeouts.&lt;BR /&gt;3.) Segregate SAN and LAN traffic. iSCSI SAN interfaces should be separated from other corporate network traffic (LAN).&lt;BR /&gt;&lt;BR /&gt;Networking best practices:&lt;BR /&gt;• Use non-blocking switches and set the negotiated speed on the switches.&lt;BR /&gt;• Disable unicast storm control on iSCSI ports. Most switches have unicast storm control disabled by default. If your switch has it enabled, disable it on the ports connected to iSCSI hosts and targets to avoid packet loss.&lt;BR /&gt;• Servers should use dedicated NICs for SAN traffic. Deploying iSCSI disks on a separate network helps minimize network congestion and latency. Additionally, iSCSI volumes are more secure when SAN and LAN traffic are separated using port-based VLANs or physically separate networks.&lt;BR /&gt;• Configure additional paths for high availability; use either Microsoft MPIO or MCS (multiple connections per session) with additional NICs in the server to create additional connections to the iSCSI storage array through redundant Ethernet switch fabrics.&lt;BR /&gt;• Unbind File and Print Sharing from the iSCSI NIC: on the NICs which connect only to the iSCSI SAN, unbind File and Print Sharing.&lt;BR /&gt;</description>
      <pubDate>Sun, 20 Mar 2011 20:27:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/4766996#M2501</guid>
      <dc:creator>Jitun</dc:creator>
      <dc:date>2011-03-20T20:27:17Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/5604855#M5000</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have 2 x P4500 G2, 2 x ESXi 5 hosts, and H3C switches, all running at 10Gb.&lt;/P&gt;&lt;P&gt;P4500 G2: Network RAID 10, 802.3ad bonding.&lt;/P&gt;&lt;P&gt;ESXi 5 x 2 with software iSCSI, Round Robin (RR) path policy.&lt;/P&gt;&lt;P&gt;H3C switches stacked using IRF.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a simple test, two Windows guests copy data to and delete data from each other.&amp;nbsp; During this test the disk queue length on the CMC monitor goes over 6, and the &quot;disk latency increased&quot; error message comes up frequently.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help...&lt;/P&gt;</description>
      <pubDate>Sat, 31 Mar 2012 14:34:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/5604855#M5000</guid>
      <dc:creator>Snakehead</dc:creator>
      <dc:date>2012-03-31T14:34:50Z</dc:date>
    </item>
    <item>
      <title>Re: Poor performance P4500 Campus SAN with 4 x switches</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/5656695#M5196</link>
      <description>&lt;P&gt;A disk queue length over 6 is probably not something to worry about. We have one cluster in our setup (2 x P4300 G2) that can hit up to about 80 on the queue length, and the customers do not seem to notice. Real-world testing is always going to be better than numbers in the CMC. We have found that we can run up to about 300 RDS users with their SBS servers on a&amp;nbsp;single pair of P4300s without the customers reporting any performance issues. Throughput seems to be much less important than latency; make sure you have flow control enabled on the iSCSI ports on your VM hosts&amp;nbsp;and the P4000 ports on your switches. Jumbo frames appear to cause issues on the ProCurve 2810 switches; I don't know about the 2910al.&lt;/P&gt;</description>
      <pubDate>Mon, 14 May 2012 09:53:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/poor-performance-p4500-campus-san-with-4-x-switches/m-p/5656695#M5196</guid>
      <dc:creator>David_Tocker</dc:creator>
      <dc:date>2012-05-14T09:53:48Z</dc:date>
    </item>
  </channel>
</rss>

