<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Performance degradation on Drive Mapped to EVA5000 in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528266#M12328</link>
    <description>It seems we had the luxury of our quorum becoming corrupt. Rather than try to fix it, we decided to rebuild the system from scratch. We used the diskpar utility as described above. The FTP transfers on both nodes are running extremely well at this point in time.&lt;BR /&gt;&lt;BR /&gt;Thanks for your help.</description>
    <pubDate>Mon, 25 Apr 2005 19:19:08 GMT</pubDate>
    <dc:creator>Jeff Marquez</dc:creator>
    <dc:date>2005-04-25T19:19:08Z</dc:date>
    <item>
      <title>Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528260#M12322</link>
      <description>I currently have an issue that seems to be related to disk performance on a drive mapped to an EVA5000.&lt;BR /&gt;&lt;BR /&gt;Our system consists of the following:&lt;BR /&gt;Windows 2000 Cluster server&lt;BR /&gt;Each node has:&lt;BR /&gt;3 GB RAM&lt;BR /&gt;Dual 2.4 GHz CPUs&lt;BR /&gt;Local C: drive&lt;BR /&gt;P: drive is a cluster resource, 220 GB, connected via fibre to the EVA5000&lt;BR /&gt;X: drive is a 128 MB RAM drive.&lt;BR /&gt;&lt;BR /&gt;Our problem seems to be writing to the P: drive, but it only occurs on Node 1 of the cluster, not Node 2.&lt;BR /&gt;&lt;BR /&gt;We have been running continuous tests with FTPs from various different computers into the system. At any time, it does not matter which computer the FTPs originate from; the transfer times are consistent with other machines.&lt;BR /&gt;&lt;BR /&gt;The transfers were set up to copy files consecutively to the C: drive, then the P: drive, then the X: drive. All FTPs use the same destination IP address and follow the same network path from the origin.&lt;BR /&gt;&lt;BR /&gt;When the system is running without degradation, all FTPs take about 10 seconds.&lt;BR /&gt;Once the system degrades, the FTPs to the C: drive and X: drive remain at 10 seconds, but the FTP to the P: drive takes over 40 seconds.&lt;BR /&gt;&lt;BR /&gt;We can force the system to go into degradation mode by moving a cluster group containing RepliStor resources to Node 2. Node 1 is rebooted and then the cluster group is moved back.&lt;BR /&gt;&lt;BR /&gt;We have observed that the performance does not degrade if either virus scanning or RepliStor is turned off when the group is moved back to Node 1. However, once the system has degraded, turning virus scanning and RepliStor off does not rectify the problem.
Also, if Node 1 is not rebooted and the group is simply moved to Node 2 and moved back, then the degradation does not occur.&lt;BR /&gt;&lt;BR /&gt;When the system is in degraded mode, the disk queue length on the P: drive is always very low; however, the inetinfo.exe service utilises nearly all of the CPU. The same is apparent with an RCP service if files are being transferred via RCP. I have made the assumption that the inetinfo.exe service is not able to deliver the file to the P: drive for some reason. It is significant to note here that delivery to the C: drive and X: drive does not have a problem.&lt;BR /&gt;&lt;BR /&gt;It would seem that excess disk activity causes the system to go into a ‘slow’ mode which it is not able to recover from.&lt;BR /&gt;&lt;BR /&gt;Any suggestions would be greatly appreciated.</description>
      <pubDate>Tue, 19 Apr 2005 22:36:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528260#M12322</guid>
      <dc:creator>Jeff Marquez</dc:creator>
      <dc:date>2005-04-19T22:36:35Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528261#M12323</link>
      <description>A further note: once the system is in the 'degraded' mode, we can get it to recover by taking the cluster resource P: drive offline and then bringing it back online.</description>
      <pubDate>Tue, 19 Apr 2005 23:25:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528261#M12323</guid>
      <dc:creator>Jeff Marquez</dc:creator>
      <dc:date>2005-04-19T23:25:32Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528262#M12324</link>
      <description>Jeff,&lt;BR /&gt;you might want to check the following customer advisory:&lt;BR /&gt;"Windows applications may experience performance issues with EVA Virtual Disks during a heavy write load"&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=PSD_OI040301_CW02&amp;amp;printver=true" target="_blank"&gt;http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=PSD_OI040301_CW02&amp;amp;printver=true&lt;/A&gt;</description>
      <pubDate>Wed, 20 Apr 2005 03:41:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528262#M12324</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-20T03:41:36Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528263#M12325</link>
      <description>Right on the money, Uwe!</description>
      <pubDate>Wed, 20 Apr 2005 11:03:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528263#M12325</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-04-20T11:03:11Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528264#M12326</link>
      <description>No, the advice is for free, Nelson ;-)</description>
      <pubDate>Wed, 20 Apr 2005 11:09:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528264#M12326</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-20T11:09:23Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528265#M12327</link>
      <description>In addition to Uwe's reply, you could also check whether the HBA driver version and registry parameters are the same on both nodes. If, for example, you have a greater queue depth on the server that is running fine than on the other, that could also be the problem.&lt;BR /&gt;&lt;BR /&gt;Kind regards,&lt;BR /&gt;Erwin van Londen</description>
      <pubDate>Sat, 23 Apr 2005 06:41:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528265#M12327</guid>
      <dc:creator>Erwin van Londen</dc:creator>
      <dc:date>2005-04-23T06:41:28Z</dc:date>
    </item>
    <item>
      <title>Re: Performance degradation on Drive Mapped to EVA5000</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528266#M12328</link>
      <description>It seems we had the luxury of our quorum becoming corrupt. Rather than try to fix it, we decided to rebuild the system from scratch. We used the diskpar utility as described above. The FTP transfers on both nodes are running extremely well at this point in time.&lt;BR /&gt;&lt;BR /&gt;Thanks for your help.</description>
      <pubDate>Mon, 25 Apr 2005 19:19:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/performance-degradation-on-drive-mapped-to-eva5000/m-p/3528266#M12328</guid>
      <dc:creator>Jeff Marquez</dc:creator>
      <dc:date>2005-04-25T19:19:08Z</dc:date>
    </item>
  </channel>
</rss>

