<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Fail event on /dev/md1 in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017198#M64299</link>
    <description>The md devices are "Multiple Devices", i.e. software RAID devices. They act as containers for the physical disks in RAID or multipath configurations.&lt;BR /&gt;&lt;BR /&gt;For example, if your /dev/md1 is configured in RAID1 (=mirroring) mode, you'll find that the /dev/md1 device "contains" /dev/sde and at least one other /dev/sd* disk device.&lt;BR /&gt;&lt;BR /&gt;The simplest way to view your /dev/md* configuration is to run "cat /proc/mdstat".&lt;BR /&gt;&lt;BR /&gt;The kernel shows error messages because the actual disk device /dev/sde is failing. The mdadm daemon has noticed that the failing /dev/sde belongs to /dev/md1.&lt;BR /&gt;&lt;BR /&gt;In RAID1 or RAID5 configurations, the warning from mdadm is useful because a single failing disk does not cause any immediate problems. But if a second disk from the same RAID set fails, you will lose data: in this case, the message from mdadm means you're no longer protected against another disk failure.&lt;BR /&gt;&lt;BR /&gt;So you should replace the failing /dev/sde and then re-sync the RAID set. If you don't know how to do this, google for Linux Software-RAID HOWTO.&lt;BR /&gt;&lt;BR /&gt;If you have a RAID0 configuration (to increase performance, not reliability), you may have already lost some data. If you don't have your backups up to date, back up anything important on the disks *now*!!!&lt;BR /&gt;&lt;BR /&gt;MK</description>
    <pubDate>Mon, 11 Jun 2007 14:56:20 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2007-06-11T14:56:20Z</dc:date>
    <item>
      <title>Fail event on /dev/md1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017195#M64296</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I received a mail from the mdadm daemon with this text:&lt;BR /&gt;&lt;BR /&gt;"A Fail event had been detected on md device /dev/md1."&lt;BR /&gt;&lt;BR /&gt;And regularly, in the kernel log file:&lt;BR /&gt;&lt;BR /&gt;WARNING:  Kernel Errors Present&lt;BR /&gt;   Info fld=0x2acc396, Current sde: sense key Recovered Error...:  1 Time(s)&lt;BR /&gt;   Info fld=0x49c3ed1, Current sde: sense key Recovered Error...:  1 Time(s)&lt;BR /&gt;   Info fld=0x5925fb1, Current sde: sense key Recovered Error...:  1 Time(s)&lt;BR /&gt;   Info fld=0x5c82101, Current sde: sense key Recovered Error...:  1 Time(s)&lt;BR /&gt;   Info fld=0x6967dd9, Current sde: sense key Recovered Error...:  1 Time(s)&lt;BR /&gt;   Info fld=0xb0e6ec, Current sde: sense key Medium Error...:  1 Time(s)&lt;BR /&gt;   end_request: I/O error, dev sde, sector...:  1 Time(s)&lt;BR /&gt;&lt;BR /&gt;What does this mean? Am I losing the hard disks?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;C.L.</description>
      <pubDate>Mon, 11 Jun 2007 06:21:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017195#M64296</guid>
      <dc:creator>Cédric L.</dc:creator>
      <dc:date>2007-06-11T06:21:00Z</dc:date>
    </item>
    <item>
      <title>Re: Fail event on /dev/md1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017196#M64297</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;You definitely have a bad sector and should plan to replace the failing disk that belongs to /dev/md1.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 11 Jun 2007 07:27:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017196#M64297</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-06-11T07:27:52Z</dc:date>
    </item>
    <item>
      <title>Re: Fail event on /dev/md1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017197#M64298</link>
      <description>Thanks,&lt;BR /&gt;&lt;BR /&gt;And what about sde? Is it the same thing?&lt;BR /&gt;&lt;BR /&gt;C.L</description>
      <pubDate>Mon, 11 Jun 2007 07:36:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017197#M64298</guid>
      <dc:creator>Cédric L.</dc:creator>
      <dc:date>2007-06-11T07:36:53Z</dc:date>
    </item>
    <item>
      <title>Re: Fail event on /dev/md1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017198#M64299</link>
      <description>The md devices are "Multiple Devices", i.e. software RAID devices. They act as containers for the physical disks in RAID or multipath configurations.&lt;BR /&gt;&lt;BR /&gt;For example, if your /dev/md1 is configured in RAID1 (=mirroring) mode, you'll find that the /dev/md1 device "contains" /dev/sde and at least one other /dev/sd* disk device.&lt;BR /&gt;&lt;BR /&gt;The simplest way to view your /dev/md* configuration is to run "cat /proc/mdstat".&lt;BR /&gt;&lt;BR /&gt;The kernel shows error messages because the actual disk device /dev/sde is failing. The mdadm daemon has noticed that the failing /dev/sde belongs to /dev/md1.&lt;BR /&gt;&lt;BR /&gt;In RAID1 or RAID5 configurations, the warning from mdadm is useful because a single failing disk does not cause any immediate problems. But if a second disk from the same RAID set fails, you will lose data: in this case, the message from mdadm means you're no longer protected against another disk failure.&lt;BR /&gt;&lt;BR /&gt;So you should replace the failing /dev/sde and then re-sync the RAID set. If you don't know how to do this, google for Linux Software-RAID HOWTO.&lt;BR /&gt;&lt;BR /&gt;If you have a RAID0 configuration (to increase performance, not reliability), you may have already lost some data. If you don't have your backups up to date, back up anything important on the disks *now*!!!&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Mon, 11 Jun 2007 14:56:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/fail-event-on-dev-md1/m-p/4017198#M64299</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2007-06-11T14:56:20Z</dc:date>
    </item>
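    <!-- Editor's note: the replacement procedure described in the last reply
         (check /proc/mdstat, fail and remove the bad disk, add a replacement,
         let the array re-sync) is sketched below. Device names (sde1 = failing
         member, sdf = new disk, sda = a healthy member) are illustrative
         assumptions only; all commands require root. -->

```shell
# Sketch of replacing a failed member of /dev/md1. Assumed names: /dev/sde1 is
# the failing member, /dev/sdf the new disk, /dev/sda a healthy member of the
# same array. Verify the real members with "mdadm --detail" first.

# 1. Inspect the array: a failed member is flagged (F) and the status line
#    shows a missing slot, e.g. [U_] instead of [UU].
cat /proc/mdstat
mdadm --detail /dev/md1

# 2. Mark the failing member faulty (if mdadm has not already) and remove it.
mdadm /dev/md1 --fail /dev/sde1
mdadm /dev/md1 --remove /dev/sde1

# 3. After physically swapping the disk, clone the partition layout from a
#    healthy member so the new partition matches, then add it to the array.
sfdisk -d /dev/sda | sfdisk /dev/sdf
mdadm /dev/md1 --add /dev/sdf1

# 4. The kernel re-syncs the array in the background; watch the progress.
watch -n 5 cat /proc/mdstat
```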
  </channel>
</rss>

