<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Detecting failed disks in a Raid in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698393#M67390</link>
    <description>Sorry for spamming the forum by replying to myself. But in case there are other Debian users - the simple solution is&lt;BR /&gt;&lt;BR /&gt;apt-get install cpqarrayd&lt;BR /&gt;&lt;BR /&gt;Hope this helps someone else :)</description>
    <pubDate>Wed, 28 Dec 2005 05:38:00 GMT</pubDate>
    <dc:creator>Guttorm Fjørtoft</dc:creator>
    <dc:date>2005-12-28T05:38:00Z</dc:date>
    <item>
      <title>Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698389#M67386</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I just installed Debian Sarge Linux on a ProLiant ML 350 G4p server, and everything works great!&lt;BR /&gt;&lt;BR /&gt;It's a web server, so I didn't install any GUI (X Window System) at all; the only change I made from the standard install was to use the 2.6 kernel.&lt;BR /&gt;&lt;BR /&gt;The only thing I'm missing is a way to get an alert if a hard disk fails. I saw there are some monitoring tools available for Linux, but they seemed to have quite a few requirements.&lt;BR /&gt;&lt;BR /&gt;So I looked around in /proc for a way to find out if the Raid is OK, but couldn't find one.&lt;BR /&gt;&lt;BR /&gt;One file looked promising:&lt;BR /&gt;&lt;BR /&gt;cat /proc/driver/cciss/cciss0 &lt;BR /&gt;cciss0: HP Smart Array 642 Controller&lt;BR /&gt;Board ID: 0x409b0e11&lt;BR /&gt;Firmware Version: 2.58&lt;BR /&gt;IRQ: 201&lt;BR /&gt;Logical drives: 1&lt;BR /&gt;Current Q depth: 0&lt;BR /&gt;Current # commands on controller: 0&lt;BR /&gt;Max Q depth since init: 159&lt;BR /&gt;Max # commands on controller since init: 261&lt;BR /&gt;Max SG entries since init: 31&lt;BR /&gt;       Sequential access devices: 0&lt;BR /&gt;&lt;BR /&gt;cciss/c0d0:       72.83GB       RAID 1(1+0)&lt;BR /&gt;&lt;BR /&gt;But when I tried removing one of the hotswap disks, I could see no difference.&lt;BR /&gt;&lt;BR /&gt;Does anyone know where to look for Raid status information? I could then write my own little script to send an alert if a hard disk fails.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Guttorm Fjørtoft</description>
      <pubDate>Tue, 27 Dec 2005 07:53:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698389#M67386</guid>
      <dc:creator>Guttorm Fjørtoft</dc:creator>
      <dc:date>2005-12-27T07:53:03Z</dc:date>
    </item>
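The "write my own little script" idea in the post above can be sketched as a small cron-able poll-and-diff check: snapshot the controller status file and report when its contents change. This is a hypothetical sketch, not a tool from the thread (and, as the poster found, pulling a disk may not change this particular file - the thread's eventual answer was cpqarrayd); the fallback sample text is taken from the output quoted in the post so the sketch runs on machines without a Smart Array controller.

```shell
#!/bin/sh
# Poll-and-diff sketch for the cciss status file.
# Usage: cciss-check.sh [status-file] [snapshot-file]

STATUS_FILE="${1:-/proc/driver/cciss/cciss0}"
SNAPSHOT="${2:-/tmp/cciss.last}"

# Fall back to the sample output quoted in the post so the sketch is
# self-contained on machines without the cciss driver.
if [ ! -r "$STATUS_FILE" ]; then
    STATUS_FILE=$(mktemp)
    cat > "$STATUS_FILE" <<'EOF'
cciss0: HP Smart Array 642 Controller
Logical drives: 1
cciss/c0d0:       72.83GB       RAID 1(1+0)
EOF
fi

# Compare against the last snapshot; any change is worth an alert.
if [ -f "$SNAPSHOT" ] && ! cmp -s "$STATUS_FILE" "$SNAPSHOT"; then
    echo "RAID status changed"     # a real script would mail root here
else
    echo "RAID status unchanged"
fi
cp "$STATUS_FILE" "$SNAPSHOT"
```

Run from cron every few minutes; the first run only records the baseline snapshot.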
    <item>
      <title>Re: Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698390#M67387</link>
      <description>The Red Hat commands are:&lt;BR /&gt;&lt;BR /&gt;mdadm  Manage software RAID (mdadm --detail /dev/md0)&lt;BR /&gt;partprobe [-s]  Inform OS of partition table changes&lt;BR /&gt;"watch" or "cat /proc/mdstat"&lt;BR /&gt;&lt;BR /&gt;mpstat  View a RAID's status&lt;BR /&gt;raidstart /dev/md0   Start the array&lt;BR /&gt;mkraid  Create the array&lt;BR /&gt;Also see "/etc/raidtab"&lt;BR /&gt;&lt;BR /&gt;Here's a sample configuration file:&lt;BR /&gt;&lt;BR /&gt;       #&lt;BR /&gt;       # sample raiddev configuration file&lt;BR /&gt;       #&lt;BR /&gt;       raiddev /dev/md0&lt;BR /&gt;           raid-level              0&lt;BR /&gt;           nr-raid-disks           2  # Specified below&lt;BR /&gt;           persistent-superblock   0&lt;BR /&gt;           chunk-size              8&lt;BR /&gt;&lt;BR /&gt;           # device #1:&lt;BR /&gt;           #&lt;BR /&gt;           device                  /dev/hda1&lt;BR /&gt;           raid-disk               0&lt;BR /&gt;&lt;BR /&gt;           # device #2:&lt;BR /&gt;           #&lt;BR /&gt;           device                  /dev/hdb1&lt;BR /&gt;           raid-disk               1&lt;BR /&gt;&lt;BR /&gt;       # A new section always starts with the &lt;BR /&gt;       # keyword 'raiddev'&lt;BR /&gt; &lt;BR /&gt;       raiddev /dev/md1&lt;BR /&gt;           raid-level              5&lt;BR /&gt;           nr-raid-disks           3  # Specified below&lt;BR /&gt;           nr-spare-disks          1  # Specified below&lt;BR /&gt;           persistent-superblock   1&lt;BR /&gt;           parity-algorithm        left-symmetric&lt;BR /&gt;&lt;BR /&gt;           # Devices to use in the RAID array:&lt;BR /&gt;           #&lt;BR /&gt;           device                  /dev/sda1&lt;BR /&gt;           raid-disk               0&lt;BR /&gt;           device                  /dev/sdb1&lt;BR /&gt;           raid-disk               1&lt;BR /&gt;           device                  /dev/sdc1&lt;BR /&gt;           raid-disk               2&lt;BR /&gt;&lt;BR /&gt;           # The spare disk:&lt;BR /&gt;           device                  /dev/sdd1&lt;BR /&gt;           spare-disk              0&lt;BR /&gt;</description>
      <pubDate>Wed, 28 Dec 2005 03:50:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698390#M67387</guid>
      <dc:creator>Andrew Cowan</dc:creator>
      <dc:date>2005-12-28T03:50:31Z</dc:date>
    </item>
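For the software-RAID (md) case the reply above describes, a failed member disk shows up as a "_" in the bracketed [UU] status flags of /proc/mdstat. A minimal check can be sketched as follows; the embedded sample mdstat text is illustrative, not taken from the thread, and applies to md arrays only, not the hardware cciss controller in the original question.

```shell
#!/bin/sh
# Sketch: flag a degraded md array by looking for "_" in the [UU]
# status field of /proc/mdstat.
# Usage: mdstat-check.sh [mdstat-file]

MDSTAT="${1:-/proc/mdstat}"

# Use an illustrative sample when /proc/mdstat is absent, so the sketch
# is self-contained on non-Linux or non-md machines.
if [ ! -r "$MDSTAT" ]; then
    MDSTAT=$(mktemp)
    cat > "$MDSTAT" <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
EOF
fi

# A "_" inside the U/_ flag group means a missing or failed member disk,
# e.g. [U_] on a two-disk mirror with one disk gone.
if grep -q '\[U*_[U_]*\]' "$MDSTAT"; then
    echo "DEGRADED: check mdadm --detail"
else
    echo "md arrays OK"
fi
```

For real deployments, mdadm's own --monitor mode can mail alerts directly, which makes a hand-rolled check like this unnecessary on md setups.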
    <item>
      <title>Re: Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698391#M67388</link>
      <description>Thank you, but this won't work. I am using hardware Raid - not software.&lt;BR /&gt;&lt;BR /&gt;The Linux kernel can use the Raid controller just fine - rebuilding also works - but I can't find any way to ask the driver for the Raid status.&lt;BR /&gt;&lt;BR /&gt;Unplugging a disk and putting it back in produces nothing in the logs, which I think is kind of strange. But maybe I'm looking in the wrong places?</description>
      <pubDate>Wed, 28 Dec 2005 05:05:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698391#M67388</guid>
      <dc:creator>Guttorm Fjørtoft</dc:creator>
      <dc:date>2005-12-28T05:05:24Z</dc:date>
    </item>
    <item>
      <title>Re: Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698392#M67389</link>
      <description>Someone pointed me to&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.ussg.iu.edu/hypermail/linux/kernel/0302.0/1066.html" target="_blank"&gt;http://www.ussg.iu.edu/hypermail/linux/kernel/0302.0/1066.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;There seems to be a way :)&lt;BR /&gt;&lt;BR /&gt;Now I'll try to get those rpms installed.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 28 Dec 2005 05:14:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698392#M67389</guid>
      <dc:creator>Guttorm Fjørtoft</dc:creator>
      <dc:date>2005-12-28T05:14:53Z</dc:date>
    </item>
    <item>
      <title>Re: Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698393#M67390</link>
      <description>Sorry for spamming the forum by replying to myself. But in case there are other Debian users - the simple solution is&lt;BR /&gt;&lt;BR /&gt;apt-get install cpqarrayd&lt;BR /&gt;&lt;BR /&gt;Hope this helps someone else :)</description>
      <pubDate>Wed, 28 Dec 2005 05:38:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698393#M67390</guid>
      <dc:creator>Guttorm Fjørtoft</dc:creator>
      <dc:date>2005-12-28T05:38:00Z</dc:date>
    </item>
    <item>
      <title>Re: Detecting failed disks in a Raid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698394#M67391</link>
      <description>As you are using HW RAID, I would suggest&lt;BR /&gt;looking at your RAID controller's driver docs for ways to check the disks' state.&lt;BR /&gt;If any custom utilities exist, they should be mentioned there.&lt;BR /&gt;The docs should also say where the RAID's disk status appears in procfs (or even sysfs, if the driver is that current).&lt;BR /&gt;Maybe you are lucky and the driver source is available, in which case you could look for comments and implementation details.&lt;BR /&gt;If you haven't done so yet, install the kernel sources and search them for any hints relating to your controller's driver.&lt;BR /&gt;</description>
      <pubDate>Wed, 28 Dec 2005 05:41:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/detecting-failed-disks-in-a-raid/m-p/3698394#M67391</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2005-12-28T05:41:15Z</dc:date>
    </item>
  </channel>
</rss>

