<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Virtual Connect switches throughput issues in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/virtual-connect-switches-throughput-issues/m-p/4585794#M9303</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We're having some major NFS read throughput issues with the VC 1/10Gb-F switches. Reads from our NFS server to a blade top out at 3.6 MB/s, whereas a typical NFS read can reach 90 MB/s.&lt;BR /&gt;&lt;BR /&gt;Has anybody seen this problem too? Thanks.</description>
    <pubDate>Wed, 17 Feb 2010 16:25:52 GMT</pubDate>
    <dc:creator>gratchie</dc:creator>
    <dc:date>2010-02-17T16:25:52Z</dc:date>
    <item>
      <title>Virtual Connect switches throughput issues</title>
      <link>https://community.hpe.com/t5/bladesystem-general/virtual-connect-switches-throughput-issues/m-p/4585794#M9303</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We're having some major NFS read throughput issues with the VC 1/10Gb-F switches. Reads from our NFS server to a blade top out at 3.6 MB/s, whereas a typical NFS read can reach 90 MB/s.&lt;BR /&gt;&lt;BR /&gt;Has anybody seen this problem too? Thanks.</description>
      <pubDate>Wed, 17 Feb 2010 16:25:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/virtual-connect-switches-throughput-issues/m-p/4585794#M9303</guid>
      <dc:creator>gratchie</dc:creator>
      <dc:date>2010-02-17T16:25:52Z</dc:date>
    </item>
    <item>
      <title>Re: Virtual Connect switches throughput issues</title>
      <link>https://community.hpe.com/t5/bladesystem-general/virtual-connect-switches-throughput-issues/m-p/4585795#M9304</link>
      <description>We are having a similar problem. About two months ago our storage vMotions took about 20 minutes for 10 GB of data; now they take about 6 hours, although no one is complaining about performance. We have two c7000 enclosures with 27 BL460c G1 servers showing the same problem. We are using NFS storage on two NetApp FAS3170 clusters with 10 Gb links, and the links are not saturated anywhere. vMotion works fine on a new enclosure with Flex-10 VC and vSphere, so it's not the storage. We just built an empty enclosure with the same VC switches and updated all firmware. The farm is ESX 3.5 Update 3; the new server in the empty enclosure is ESX 3.5 Update 5. Will post if we find something.&lt;BR /&gt;&lt;BR /&gt;RJ</description>
      <pubDate>Wed, 17 Feb 2010 20:53:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/virtual-connect-switches-throughput-issues/m-p/4585795#M9304</guid>
      <dc:creator>Robert_266</dc:creator>
      <dc:date>2010-02-17T20:53:15Z</dc:date>
    </item>
  </channel>
</rss>

