<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Slow Disk Read Performance in 6 Node Cluster in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/slow-disk-read-performance-in-6-node-cluster/m-p/5562043#M4735</link>
    <description>Have you already tried splitting the backup job in two and running the two jobs simultaneously? SAN/iQ limits (caps) single-threaded read jobs.</description>
    <pubDate>Thu, 23 Feb 2012 21:50:58 GMT</pubDate>
    <dc:creator>M.Braak</dc:creator>
    <dc:date>2012-02-23T21:50:58Z</dc:date>
    <item>
      <title>Slow Disk Read Performance in 6 Node Cluster</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/slow-disk-read-performance-in-6-node-cluster/m-p/5559819#M4717</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We've been running a P4000 setup for a few months now and during this time we've generally had a good experience.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We've now come to the point where we need to implement a new backup solution, and we're looking at using Data Protector in conjunction with an HP Tape Library to back up a variety of servers, but particularly the data that's held on the SAN.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The problem I'm facing is that the backup throughput simply isn't quick enough. &amp;nbsp;With the current data load and the speeds I'm seeing, the backup is going to take in excess of 72 hours to complete. &amp;nbsp;I've done some rough calculations, and if we can achieve around 70% of the maximum throughput of the backup drive then we should be able to complete the backup in &amp;lt;8 hours. &amp;nbsp;But somewhere along the line we're seeing delays.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Initially it was thought the network might be the issue, but some tests using JPerf proved that not to be the case.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Following on from that, I did some I/O tests on the SAN disks from a number of VMware servers. &amp;nbsp;When doing this I was seeing read rates ranging between 18.6 megabytes/s and 120 megabytes/s. &amp;nbsp;I also used VMware to monitor the 3 vmNICs we are using to communicate with the SAN and could see that all three NICs were being utilised, with each achieving a peak transfer rate of around 30,000 kbps.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've also checked the NICs on each of the nodes in the cluster, and the majority of the interfaces are only showing 10-20% utilisation. &amp;nbsp;When each disk read test is performed I see a spike up to around 50% on just one node, and I'm guessing that node is the gateway for the specific volume I'm testing.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm not sure whether these figures are reasonable for a 6 node P4500 cluster, so I'd appreciate some feedback on this. &amp;nbsp;To give a bit of background, we operate 3 x VMware ESX 4 servers which are connected to a 6 node P4500 cluster. &amp;nbsp;The cluster is configured as multi-site, so data from 3 of the nodes is mirrored to the other nodes in the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Unfortunately, until we can achieve a backup in a reasonable time-frame I'm not able to migrate any more systems and/or data into the environment.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance for any help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Pete&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Feb 2012 10:15:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/slow-disk-read-performance-in-6-node-cluster/m-p/5559819#M4717</guid>
      <dc:creator>Peter J West</dc:creator>
      <dc:date>2012-02-22T10:15:07Z</dc:date>
    </item>
    <item>
      <title>Re: Slow Disk Read Performance in 6 Node Cluster</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/slow-disk-read-performance-in-6-node-cluster/m-p/5562043#M4735</link>
      <description>Have you already tried splitting the backup job in two and running the two jobs simultaneously? SAN/iQ limits (caps) single-threaded read jobs.</description>
      <pubDate>Thu, 23 Feb 2012 21:50:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/slow-disk-read-performance-in-6-node-cluster/m-p/5562043#M4735</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-02-23T21:50:58Z</dc:date>
    </item>
  </channel>
</rss>