<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: HASS disk failure in Disk</title>
    <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080244#M4300</link>
    <description>The symptom of a drive failing, then being fixed by unplugging and re-inserting it, is very common in a drive that is going to fail. I have over 100 of these drives and their failure rate is about typical. The default timeout is fine and should only be increased for real arrays -- not JBODs. It's not unusual for me to replace one or two of these drives per month. If you have a small number of drives and they are consistently failing, then I would check two things: 1) Power supply voltages 2) Cooling -- both the JBOD fans and the ambient temperature.&lt;BR /&gt;</description>
    <pubDate>Mon, 29 Sep 2003 10:01:34 GMT</pubDate>
    <dc:creator>A. Clay Stephenson</dc:creator>
    <dc:date>2003-09-29T10:01:34Z</dc:date>
    <item>
      <title>HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080238#M4294</link>
      <description>Hi guys&lt;BR /&gt;Have any of you been getting any hardware failures on the Seagate ST39173WC?&lt;BR /&gt;The local HP office won't confirm or deny the cause, or even the MTBF for this model. A little bird whispered in my ear and said I should try extending the PV timeout to 180 seconds (pvchange -t 180 pvname). At the moment, the timeout value is at its default. &lt;BR /&gt;I've had a string of 16 failures in the last 11 months.</description>
      <pubDate>Mon, 29 Sep 2003 02:52:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080238#M4294</guid>
      <dc:creator>Jakes Louw</dc:creator>
      <dc:date>2003-09-29T02:52:53Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080239#M4295</link>
      <description>We've got lots of HASS drives, including many ST39173WC, and we don't have any more failures than usual (rare). We don't adjust our PV timeout values either; the default is sufficient.&lt;BR /&gt;&lt;BR /&gt;I think you need to look at what type of failures you are having. What error messages do you get when they fail? E.g., are they all power failures, SCSI errors, or something else?</description>
      <pubDate>Mon, 29 Sep 2003 04:45:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080239#M4295</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-09-29T04:45:01Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080240#M4296</link>
      <description>I'll have to check on the last one, but if my memory serves me correctly, we get POWERFAIL errors, the disks go to "NO H/W" on an ioscan, and go unavailable on a vgdisplay. Usually the HP engineer will then replace the disk, but we have played around with unseating the disk, rebooting, and then seating the disk again, and quite often the disk is visible and usable again once a vgsync is performed. Which tells me that the diagnostic firmware has flagged the disk due to excessive timeouts -- or am I off base here?</description>
      <pubDate>Mon, 29 Sep 2003 06:14:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080240#M4296</guid>
      <dc:creator>Jakes Louw</dc:creator>
      <dc:date>2003-09-29T06:14:28Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080241#M4297</link>
      <description>Hi there.&lt;BR /&gt;Have you tried checking the Jamaica box with the STM tool (cstm/xstm)?&lt;BR /&gt;It should give an overview of the real problems. We have been using these boxes for some time now and have had very few problems.&lt;BR /&gt;Rgds&lt;BR /&gt;Alexander M. Ermes&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Sep 2003 06:42:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080241#M4297</guid>
      <dc:creator>Alexander M. Ermes</dc:creator>
      <dc:date>2003-09-29T06:42:34Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080242#M4298</link>
      <description>I would check the disk firmware revisions (that they are not too old) and the SCSI buses - cables and termination. You should not see any SCSI/disk error messages in the syslogs while the server is functioning. If there are messages, then something is wrong. If you've daisy-chained the HASS's buses A and B - how long is your daisy-chain cable?&lt;BR /&gt;If there are a lot of disks on the SCSI bus (let's say, 8 disks) and the bus is loaded heavily (for example, having root/boot/swap/database disks on it), then I would split the disks across different buses to split the load. If that's not possible - indeed try increasing the PV timeout for low-priority disks. Remember: SCSI ID 7 has the highest priority, then down to 0, then 15 down to 8, which has the lowest priority.&lt;BR /&gt;Eugeny</description>
      <pubDate>Mon, 29 Sep 2003 07:15:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080242#M4298</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-09-29T07:15:49Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080243#M4299</link>
      <description>Hi Eugeny&lt;BR /&gt;&lt;BR /&gt;1) HP checked the firmware: they are happy&lt;BR /&gt;2) The disks are root/boot/swap, as you suggest, so high usage on occasion, hence my comment regarding timeouts&lt;BR /&gt;3) There are only 4 x HASS per SCSI card, so no heavy chaining&lt;BR /&gt;4) The cables are standard factory 5m or 10m units, installed by HP.&lt;BR /&gt;5) With only 2 x SCSI cards (one for the primary, one for the mirror), I cannot spread the load.&lt;BR /&gt;The parallel option for the mirroring should allow the fastest disk to update first, surely?</description>
      <pubDate>Mon, 29 Sep 2003 07:29:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080243#M4299</guid>
      <dc:creator>Jakes Louw</dc:creator>
      <dc:date>2003-09-29T07:29:37Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080244#M4300</link>
      <description>The symptom of a drive failing, then being fixed by unplugging and re-inserting it, is very common in a drive that is going to fail. I have over 100 of these drives and their failure rate is about typical. The default timeout is fine and should only be increased for real arrays -- not JBODs. It's not unusual for me to replace one or two of these drives per month. If you have a small number of drives and they are consistently failing, then I would check two things: 1) Power supply voltages 2) Cooling -- both the JBOD fans and the ambient temperature.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Sep 2003 10:01:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080244#M4300</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2003-09-29T10:01:34Z</dc:date>
    </item>
    <item>
      <title>Re: HASS disk failure</title>
      <link>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080245#M4301</link>
      <description>Again, just a sign-off on this:&lt;BR /&gt;&lt;BR /&gt;we started replacing the ST disks with IBM 9GB spindles (IBM DGHS09Y), and haven't had repeat failures after these replacements like we had with the Seagates.&lt;BR /&gt;&lt;BR /&gt;Makes one think....</description>
      <pubDate>Wed, 14 Jan 2004 07:25:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk/hass-disk-failure/m-p/3080245#M4301</guid>
      <dc:creator>Jakes Louw</dc:creator>
      <dc:date>2004-01-14T07:25:13Z</dc:date>
    </item>
  </channel>
</rss>