HPE EVA Storage

Jeff Marquez
New Member

Performance degradation on Drive Mapped to EVA5000

I currently have an issue that seems to be related to disk performance on a drive mapped to an EVA5000.

Our system consists of the following:
Windows 2000 Cluster server
Each node has:
3 GB RAM
Dual 2.4 GHz CPUs
Local C: drive
P: drive: a 220 GB cluster resource connected via fibre to the EVA5000
X: drive: a 128 MB RAM drive

Our problem seems to be with writing to the P: drive, but it only occurs on Node 1 of the cluster, not Node 2.

We have been running continuous tests with FTPs from various computers into the system. Regardless of which computer the FTPs originate from, the transfer times are consistent across machines.

The transfers were set up to copy files consecutively to the C: drive, then the P: drive, then the X: drive. All FTPs use the same destination IP address and follow the same network path from the origin.

When the system is running without degradation, all FTPs take about 10 seconds.
Once the system degrades, the FTPs to the C: drive and X: drive remain at about 10 seconds, but the FTP to the P: drive takes over 40 seconds.
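
For reference, the timing test is roughly what this Python sketch does (the host, share paths, and credentials below are placeholders, not our real values):

# Rough sketch of the consecutive-upload timing test.
import time
from ftplib import FTP

TARGETS = {"C:": "/c_share", "P:": "/p_share", "X:": "/x_share"}  # placeholder shares

def time_upload(ftp, remote_dir, local_file):
    ftp.cwd(remote_dir)
    start = time.time()
    with open(local_file, "rb") as f:
        ftp.storbinary("STOR testfile.bin", f)
    return time.time() - start

ftp = FTP("cluster-node1.example.com")   # placeholder host
ftp.login("user", "password")            # placeholder credentials
for drive, path in TARGETS.items():
    elapsed = time_upload(ftp, path, "testfile.bin")
    print("%s drive: %.1f s" % (drive, elapsed))
ftp.quit()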

We can force the system into degradation mode by moving a cluster group containing RepliStor resources to Node 2, rebooting Node 1, and then moving the cluster group back.
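
The reproduction sequence, sketched here with cluster.exe via Python (the group and node names are placeholders for ours):

import subprocess

def move_group(group, node):
    # cluster.exe: CLUSTER GROUP <name> /MOVETO:<node>
    subprocess.run(["cluster", "group", group, "/moveto:" + node], check=True)

move_group("RepliStor Group", "NODE2")   # step 1: move the group off Node 1
# step 2: reboot Node 1 (done manually)
move_group("RepliStor Group", "NODE1")   # step 3: move the group back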

We have observed that the performance does not degrade if either virus scanning or RepliStor is turned off when the group is moved back to Node 1. However, once the system has degraded, turning virus scanning and RepliStor off does not rectify the problem. Also, if Node 1 is not rebooted and the group is simply moved to Node 2 and back, the degradation does not occur.

When the system is in degraded mode, the Disk Queue Length for the P: drive is always very low; however, the inetinfo.exe service utilises nearly all of the CPU. The same is apparent with an RCP service when files are transferred via RCP. I have made the assumption that the inetinfo.exe service is unable to deliver the file to the P: drive for some reason. It is significant to note that delivery to the C: drive and X: drive does not have this problem.
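
These are the two Perfmon counters we watch; a quick way to sample them is via typeperf (assuming it is available, e.g. from the Resource Kit on Windows 2000; the counter instance names may differ on your system):

import subprocess

COUNTERS = [
    r"\Process(inetinfo)\% Processor Time",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
]

# take 10 samples, one per second, printed as CSV
subprocess.run(["typeperf"] + COUNTERS + ["-si", "1", "-sc", "10"], check=True)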

It would seem that excess disk activity causes the system to go into a 'slow' mode from which it is not able to recover.

Any suggestions would be greatly appreciated.
6 REPLIES
Jeff Marquez
New Member

Re: Performance degradation on Drive Mapped to EVA5000

A further note: once the system is in the 'degraded' mode, we can get it to recover by taking the cluster resource for the P: drive offline and then bringing it back online.
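
The workaround, sketched with cluster.exe via Python (the resource name "Disk P:" is a placeholder for our actual P: drive resource):

import subprocess

def bounce_resource(res):
    subprocess.run(["cluster", "resource", res, "/offline"], check=True)
    subprocess.run(["cluster", "resource", res, "/online"], check=True)

bounce_resource("Disk P:")
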
Uwe Zessin
Honored Contributor

Re: Performance degradation on Drive Mapped to EVA5000

Jeff,
you might want to check the following customer advisory:
""Windows applications may experience performance issues with EVA Virtual Disks during a heavy write load""

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=PSD_OI040301_CW02&printver=true
.
Alzhy
Honored Contributor

Re: Performance degradation on Drive Mapped to EVA5000

Right on the money, Uwe!
Hakuna Matata.
Uwe Zessin
Honored Contributor

Re: Performance degradation on Drive Mapped to EVA5000

No, the advice is for free, Nelson ;-)
.
Erwin van Londen
Valued Contributor

Re: Performance degradation on Drive Mapped to EVA5000

In addition to Uwe's reply, you could also check whether the HBA driver version and registry parameters are the same on both nodes. If, for example, you have a greater queue depth on the server that is running fine than on the other, that could also be the problem.
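
For example, a quick Python sketch to compare a driver parameter on both nodes via the remote registry; the key path and value name below are placeholders, as the real ones depend on your HBA vendor and driver:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\<hba_driver>\Parameters"  # placeholder path
VALUE = "QueueDepth"  # placeholder value name

for node in (r"\\NODE1", r"\\NODE2"):
    reg = winreg.ConnectRegistry(node, winreg.HKEY_LOCAL_MACHINE)
    try:
        key = winreg.OpenKey(reg, KEY)
        val, _ = winreg.QueryValueEx(key, VALUE)
        print("%s: %s = %s" % (node, VALUE, val))
    finally:
        winreg.CloseKey(reg)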

Kind regards,
Erwin van Londen
https://erwinvanlonden.net
Jeff Marquez
New Member

Re: Performance degradation on Drive Mapped to EVA5000

It seems we had the luxury of our quorum becoming corrupt. Rather than try to fix it, we decided to rebuild the system from scratch. We used the dispar utility described above. The FTP transfers on both nodes are running extremely well at this point in time.

Thanks for your help.