<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: EXT3-fs error (device dm-6) in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671290#M41799</link>
    <description>Thanks for the information; now I'm starting to see what's going on.&lt;BR /&gt;&lt;BR /&gt;The "dmesg" command lists the kernel message buffer. Old messages will only be removed from the buffer when overwritten by newer messages. The size of the message buffer used to be about 16 KB, but it may have been increased in newer kernels. When you run "dmesg", you get everything that's in the buffer - whether the messages are new or old.&lt;BR /&gt;If you want to clear the message buffer (to make it easier to see which messages are new), run "dmesg -c".&lt;BR /&gt;&lt;BR /&gt;The last four messages seem to indicate a state change of some sort on /dev/sda. If that's the point where you hot-swapped the bad disk, it might have caused these messages. &lt;BR /&gt;&lt;BR /&gt;If no new "Buffer I/O error" messages appear after the lines:&lt;BR /&gt;&lt;BR /&gt;SCSI device sda: 859525120 512-byte hdwr sectors (440077 MB)&lt;BR /&gt;sda: Write Protect is off&lt;BR /&gt;sda: Mode Sense: 06 00 10 00&lt;BR /&gt;SCSI device sda: drive cache: write back w/ FUA&lt;BR /&gt;&lt;BR /&gt;then your RAID5 set is probably OK now.&lt;BR /&gt;&lt;BR /&gt;In RHEL 5.1, the dm-* devices no longer exist in /dev, but the kernel error messages still refer to them. No matter: the &lt;NUMBER&gt; in "dm-&lt;NUMBER&gt;" is the minor number of the respective device-mapper device. Based on your /dev/mapper/* listing, the major number of the device-mapper subsystem is 253, so the problematic device is major 253 minor 6, or /dev/mapper/VolGroup00-oraclelv (also known as /dev/VolGroup00/oraclelv).&lt;BR /&gt;&lt;BR /&gt;&amp;gt;ext3_abort called.&lt;BR /&gt;&amp;gt;EXT3-fs error (device dm-6): ext3_journal_start_sb: Detected aborted journal&lt;BR /&gt;&amp;gt;Remounting filesystem read-only&lt;BR /&gt;&lt;BR /&gt;These messages indicate that an error was detected at the filesystem level, and the filesystem was switched to read-only mode to protect the data. 
You can try to switch it back to read-write mode with:&lt;BR /&gt;&lt;BR /&gt;mount -o remount,rw /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;but usually the system will block this command until the filesystem has been checked first.&lt;BR /&gt;&lt;BR /&gt;To check the filesystem, you must stop the applications using it (i.e. Oracle) and unmount it:&lt;BR /&gt;&lt;BR /&gt;umount /dev/VolGroup00/oraclelv&lt;BR /&gt;fsck -C 0 /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;If the filesystem check finds no errors (or can fix all the errors it finds), you can mount the filesystem again and resume using it:&lt;BR /&gt;&lt;BR /&gt;mount /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;MK</description>
    <pubDate>Tue, 10 Aug 2010 17:28:13 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2010-08-10T17:28:13Z</dc:date>
    <item>
      <title>EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671286#M41795</link>
      <description>Good morning,&lt;BR /&gt;&lt;BR /&gt;I just replaced the bad disk yesterday and am still getting errors. I am getting the following errors in dmesg, and one of the filesystems is still mounted read-only. Any ideas?&lt;BR /&gt;&lt;BR /&gt;sd 0:0:0:0: SCSI error: return code = 0x08000002&lt;BR /&gt;sda: Current: sense key: Hardware Error&lt;BR /&gt;    Add. Sense: Internal target failure&lt;BR /&gt;&lt;BR /&gt;Info fld=0x0&lt;BR /&gt;end_request: I/O error, dev sda, sector 235753109&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704297&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704298&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;sd 0:0:0:0: SCSI error: return code = 0x08000002&lt;BR /&gt;sda: Current: sense key: Hardware Error&lt;BR /&gt;    Add. Sense: Internal target failure&lt;BR /&gt;&lt;BR /&gt;Info fld=0x0&lt;BR /&gt;end_request: I/O error, dev sda, sector 235752965&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704279&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704280&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704281&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704282&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704283&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704284&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704285&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 2704286&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Aborting journal on device dm-6.&lt;BR 
/&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;__journal_remove_journal_head: freeing b_committed_data&lt;BR /&gt;ext3_abort called.&lt;BR /&gt;EXT3-fs error (device dm-6): ext3_journal_start_sb: Detected aborted journal&lt;BR /&gt;Remounting filesystem read-only&lt;BR /&gt;SCSI device sda: 859525120 512-byte hdwr sectors (440077 MB)&lt;BR /&gt;sda: Write Protect is off&lt;BR /&gt;sda: Mode Sense: 06 00 10 00&lt;BR /&gt;SCSI device sda: drive cache: write back w/ FUA&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2010 11:00:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671286#M41795</guid>
      <dc:creator>Qcheck</dc:creator>
      <dc:date>2010-08-06T11:00:53Z</dc:date>
    </item>
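As an aside, the failing device and sector numbers can be pulled out of dmesg-style output mechanically. A minimal sketch, assuming the exact "end_request" message format shown above (the sample lines are copied from that post):

```shell
# Extract "device sector" pairs from end_request I/O error lines.
# Sample input copied from the dmesg output quoted above.
dmesg_sample='end_request: I/O error, dev sda, sector 235753109
end_request: I/O error, dev sda, sector 235752965'
# Split on runs of spaces/commas; field 5 is the device, field 7 the sector.
printf '%s\n' "$dmesg_sample" | awk -F'[ ,]+' '/end_request/ {print $5, $7}'
# prints: sda 235753109
#         sda 235752965
```

On a live system you would pipe real `dmesg` output through the same awk filter instead of the sample variable.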
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671287#M41796</link>
      <description>Can anyone please explain what the dm-6 error is? How can I tell which disk is bad? We have four 146 GB drives with hardware RAID 5. We replaced the disk in slot 2, which was bad, last week. We still get errors in the log file indicating a hardware failure, and we have now found that one of the filesystems is mounted read-only. How can I tell which disk has failed from the O/S side? I have no hardware monitoring tools.</description>
      <pubDate>Mon, 09 Aug 2010 13:34:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671287#M41796</guid>
      <dc:creator>Qcheck</dc:creator>
      <dc:date>2010-08-09T13:34:17Z</dc:date>
    </item>
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671288#M41797</link>
      <description>&amp;gt; We have four 146 GB drives with hardware RAID 5.&lt;BR /&gt;&amp;gt; How can I tell which disk has failed from the O/S side? I have no hardware monitoring tools.&lt;BR /&gt;&lt;BR /&gt;Your question would be much easier to answer if you had given more information about your set-up:&lt;BR /&gt;- name and version of the Linux distribution&lt;BR /&gt;- system manufacturer and model&lt;BR /&gt;- RAID hardware model (if applicable)&lt;BR /&gt;&lt;BR /&gt;First, let's try to find the persistent device-mapper device name that corresponds to /dev/dm-6:&lt;BR /&gt;&lt;BR /&gt;ls -l /dev/dm-6 /dev/mapper/* /dev/md*&lt;BR /&gt;&lt;BR /&gt;The device that has the same major and minor device numbers as /dev/dm-6 is the device you're looking for.&lt;BR /&gt;&lt;BR /&gt;The next step would be to find out what /dev/dm-6 does and which hardware-level devices are associated with it. From the error messages, I assume /dev/sda is one of them; but are there others?&lt;BR /&gt;&lt;BR /&gt;Possibly useful commands:&lt;BR /&gt;dmsetup table&lt;BR /&gt;dmsetup ls --tree&lt;BR /&gt;cat /proc/mdstat&lt;BR /&gt;pvs&lt;BR /&gt;&lt;BR /&gt;True hardware RAID usually hides the actual physical disks: the only way to get information about the state of the disks is to ask the driver. Usually some RAID-manufacturer-specific diagnostic program is required to get the full report, but basic information may be available in the /proc filesystem. Look into /proc/scsi/&lt;RAID_DRIVER_NAME&gt;/ or /proc/driver/&lt;RAID_DRIVER_NAME&gt;/.&lt;BR /&gt;&lt;BR /&gt;For example, if it's an HP SmartArray hardware RAID controlled by the "cciss" driver module, then "cat /proc/driver/cciss/0" would display basic information about the first SmartArray controller on the system (controller 0).&lt;BR /&gt;&lt;BR /&gt;If you had the "hpacucli" (HP Array Configuration Utility CLI) tool installed, the command "hpacucli controller all show config detail" would produce a more verbose report about the SmartArray controllers, including the state, model and serial numbers of all physical disks attached to them.&lt;BR /&gt;&lt;BR /&gt;There's also the "Array Diagnostic Utility", which can produce an even more verbose report.&lt;BR /&gt;&lt;BR /&gt;If you don't have any RAID diagnostic programs installed and cannot install them, the only way to identify a failed disk might be to look at the disk diagnostic LEDs on the server's front panel (if the RAID controller has such LEDs).&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Mon, 09 Aug 2010 15:42:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671288#M41797</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-08-09T15:42:54Z</dc:date>
    </item>
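One quick sanity check is to count how many devices the controller actually exposes to the OS. A minimal sketch parsing /proc/scsi/scsi-style output (the sample lines mirror the listing posted in this thread; on a live system you would read the real file with `cat /proc/scsi/scsi`):

```shell
# Count SCSI devices by counting "Host:" stanzas in /proc/scsi/scsi
# style output. Sample data mirrors the listing in this thread.
scsi_sample='Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603
Host: scsi0 Channel: 01 Id: 01 Lun: 00
  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603'
printf '%s\n' "$scsi_sample" | grep -c '^Host:'   # prints: 2
```

Note that with true hardware RAID the kernel usually sees only the logical drive, so a full per-disk count like this depends on the controller exposing the member disks at all.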
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671289#M41798</link>
      <description>Dear MK,&lt;BR /&gt;&lt;BR /&gt;Wow, thank you so much for the response.&lt;BR /&gt;&lt;BR /&gt;Basic environment info:&lt;BR /&gt;RHEL 5.1 with kernel 2.6.18-53.el5&lt;BR /&gt;SunFire X4150, and RAID is done at the BIOS level.&lt;BR /&gt;(4) 146 GB HDs with RAID 5.&lt;BR /&gt;&lt;BR /&gt;From the BIOS, I can see all four disk drives in a solid (healthy) state. Also, from the /proc/scsi/scsi file, I can see all four disks as listed below:&lt;BR /&gt; &lt;BR /&gt;Attached devices:&lt;BR /&gt;Host: scsi0 Channel: 00 Id: 00 Lun: 00&lt;BR /&gt;  Vendor: Sun      Model: sys_root         Rev: V1.0&lt;BR /&gt;  Type:   Direct-Access                    ANSI SCSI revision: 02&lt;BR /&gt;Host: scsi0 Channel: 01 Id: 00 Lun: 00&lt;BR /&gt;  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603&lt;BR /&gt;  Type:   Direct-Access                    ANSI SCSI revision: 05&lt;BR /&gt;Host: scsi0 Channel: 01 Id: 01 Lun: 00&lt;BR /&gt;  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603&lt;BR /&gt;  Type:   Direct-Access                    ANSI SCSI revision: 05&lt;BR /&gt;Host: scsi0 Channel: 01 Id: 02 Lun: 00&lt;BR /&gt;  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603&lt;BR /&gt;  Type:   Direct-Access                    ANSI SCSI revision: 05&lt;BR /&gt;Host: scsi0 Channel: 01 Id: 03 Lun: 00&lt;BR /&gt;  Vendor: SEAGATE  Model: ST914602SSUN146G Rev: 0603&lt;BR /&gt;  Type:   Direct-Access                    ANSI SCSI revision: 05&lt;BR /&gt;&lt;BR /&gt;And from the commands you asked me to try:&lt;BR /&gt;[root@mtstalpd-rac3 sg]# ls -l /dev/dm-6 /dev/mapper/* /dev/md*&lt;BR /&gt;ls: /dev/dm-6: No such file or directory&lt;BR /&gt;crw------- 1 root root  10, 63 Aug  9 09:01 /dev/mapper/control&lt;BR /&gt;brw-rw---- 1 root disk 253,  0 Aug  9 13:01 /dev/mapper/VolGroup00-LogVol00&lt;BR /&gt;brw-rw---- 1 root disk 253,  9 Aug  9 09:01 /dev/mapper/VolGroup00-LogVol01&lt;BR /&gt;brw-rw---- 1 root disk 253,  4 Aug  9 13:01 /dev/mapper/VolGroup00-LogVol02&lt;BR /&gt;brw-rw---- 1 root 
disk 253,  2 Aug  9 13:01 /dev/mapper/VolGroup00-LogVol03&lt;BR /&gt;brw-rw---- 1 root disk 253,  3 Aug  9 13:01 /dev/mapper/VolGroup00-LogVol04&lt;BR /&gt;brw-rw---- 1 root disk 253,  1 Aug  9 13:01 /dev/mapper/VolGroup00-LogVol05&lt;BR /&gt;brw-rw---- 1 root disk 253,  7 Aug  9 13:12 /dev/mapper/VolGroup00-oracleadminlv&lt;BR /&gt;brw-rw---- 1 root disk 253,  6 Aug  9 13:12 /dev/mapper/VolGroup00-oraclelv&lt;BR /&gt;brw-rw---- 1 root disk 253,  5 Aug  9 13:01 /dev/mapper/VolGroup00-standby&lt;BR /&gt;brw-rw---- 1 root disk 253,  8 Aug  9 13:01 /dev/mapper/VolGroup00-swaplv&lt;BR /&gt;brw-r----- 1 root disk   9,  0 Aug  9 13:01 /dev/md0&lt;BR /&gt;[root@mtstalpd-rac3 sg]# dmsetup table&lt;BR /&gt;VolGroup00-standby: 0 134217728 linear 8:2 79692160&lt;BR /&gt;VolGroup00-LogVol05: 0 8388608 linear 8:2 16777600&lt;BR /&gt;VolGroup00-oraclelv: 0 33554432 linear 8:2 213909888&lt;BR /&gt;VolGroup00-oraclelv: 33554432 4194304 linear 8:2 515899776&lt;BR /&gt;VolGroup00-LogVol04: 0 16777216 linear 8:2 33554816&lt;BR /&gt;VolGroup00-LogVol03: 0 8388608 linear 8:2 25166208&lt;BR /&gt;VolGroup00-LogVol02: 0 29360128 linear 8:2 50332032&lt;BR /&gt;VolGroup00-LogVol01: 0 134217728 linear 8:2 381682048&lt;BR /&gt;VolGroup00-oracleadminlv: 0 33554432 linear 8:2 247464320&lt;BR /&gt;VolGroup00-LogVol00: 0 16777216 linear 8:2 384&lt;BR /&gt;VolGroup00-swaplv: 0 100663296 linear 8:2 281018752&lt;BR /&gt;[root@mtstalpd-rac3 sg]# dmsetup ls --tree&lt;BR /&gt;VolGroup00-standby (253:5)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol05 (253:1)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-oraclelv (253:6)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol04 (253:3)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol03 (253:2)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol02 (253:4)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol01 (253:9)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-oracleadminlv (253:7)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-LogVol00 (253:0)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;VolGroup00-swaplv (253:8)&lt;BR /&gt; └─ (8:2)&lt;BR /&gt;[root@mtstalpd-rac3 sg]# cat /proc/mdstat&lt;BR /&gt;Personalities :&lt;BR /&gt;unused devices: &lt;none&gt;&lt;BR /&gt;[root@mtstalpd-rac3 sg]# pvs&lt;BR /&gt;  PV         VG         Fmt  Attr PSize   PFree&lt;BR /&gt;  /dev/sda2  VolGroup00 lvm2 a-   409.72G 161.72G&lt;BR /&gt;[root@mtstalpd-rac3 sg]#&lt;BR /&gt;&lt;BR /&gt;I didn't get any useful information from these. So is there a possibility that the disk controller is bad?&lt;BR /&gt;&lt;BR /&gt;Thank you so much for your valuable time; I really appreciate it.&lt;BR /&gt;</description>
      <pubDate>Tue, 10 Aug 2010 13:04:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671289#M41798</guid>
      <dc:creator>Qcheck</dc:creator>
      <dc:date>2010-08-10T13:04:25Z</dc:date>
    </item>
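Since the number in "dm-6" is the device-mapper minor number, the matching /dev/mapper name can be looked up mechanically from the "ls -l" listing above. A minimal sketch (the sample lines are copied from that output, with the minor number of interest hard-coded):

```shell
# Find the /dev/mapper name whose minor number matches the 6 in dm-6.
# Sample lines copied from the "ls -l /dev/mapper/*" output above.
minor=6
ls_sample='brw-rw---- 1 root disk 253,  6 Aug  9 13:12 /dev/mapper/VolGroup00-oraclelv
brw-rw---- 1 root disk 253,  7 Aug  9 13:12 /dev/mapper/VolGroup00-oracleadminlv'
# Field 5 is the major number (with a comma) and field 6 the minor;
# print the device path on a match.
printf '%s\n' "$ls_sample" | awk -v m="$minor" '$6 == m {print $NF}'
# prints: /dev/mapper/VolGroup00-oraclelv
```

On a live system you would feed it the real `ls -l /dev/mapper/*` output instead of the sample variable.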
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671290#M41799</link>
      <description>Thanks for the information; now I'm starting to see what's going on.&lt;BR /&gt;&lt;BR /&gt;The "dmesg" command lists the kernel message buffer. Old messages will only be removed from the buffer when overwritten by newer messages. The size of the message buffer used to be about 16 KB, but it may have been increased in newer kernels. When you run "dmesg", you get everything that's in the buffer - whether the messages are new or old.&lt;BR /&gt;If you want to clear the message buffer (to make it easier to see which messages are new), run "dmesg -c".&lt;BR /&gt;&lt;BR /&gt;The last four messages seem to indicate a state change of some sort on /dev/sda. If that's the point where you hot-swapped the bad disk, it might have caused these messages. &lt;BR /&gt;&lt;BR /&gt;If no new "Buffer I/O error" messages appear after the lines:&lt;BR /&gt;&lt;BR /&gt;SCSI device sda: 859525120 512-byte hdwr sectors (440077 MB)&lt;BR /&gt;sda: Write Protect is off&lt;BR /&gt;sda: Mode Sense: 06 00 10 00&lt;BR /&gt;SCSI device sda: drive cache: write back w/ FUA&lt;BR /&gt;&lt;BR /&gt;then your RAID5 set is probably OK now.&lt;BR /&gt;&lt;BR /&gt;In RHEL 5.1, the dm-* devices no longer exist in /dev, but the kernel error messages still refer to them. No matter: the &lt;NUMBER&gt; in "dm-&lt;NUMBER&gt;" is the minor number of the respective device-mapper device. Based on your /dev/mapper/* listing, the major number of the device-mapper subsystem is 253, so the problematic device is major 253 minor 6, or /dev/mapper/VolGroup00-oraclelv (also known as /dev/VolGroup00/oraclelv).&lt;BR /&gt;&lt;BR /&gt;&amp;gt;ext3_abort called.&lt;BR /&gt;&amp;gt;EXT3-fs error (device dm-6): ext3_journal_start_sb: Detected aborted journal&lt;BR /&gt;&amp;gt;Remounting filesystem read-only&lt;BR /&gt;&lt;BR /&gt;These messages indicate that an error was detected at the filesystem level, and the filesystem was switched to read-only mode to protect the data. 
You can try to switch it back to read-write mode with:&lt;BR /&gt;&lt;BR /&gt;mount -o remount,rw /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;but usually the system will block this command until the filesystem has been checked first.&lt;BR /&gt;&lt;BR /&gt;To check the filesystem, you must stop the applications using it (i.e. Oracle) and unmount it:&lt;BR /&gt;&lt;BR /&gt;umount /dev/VolGroup00/oraclelv&lt;BR /&gt;fsck -C 0 /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;If the filesystem check finds no errors (or can fix all the errors it finds), you can mount the filesystem again and resume using it:&lt;BR /&gt;&lt;BR /&gt;mount /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Tue, 10 Aug 2010 17:28:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671290#M41799</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-08-10T17:28:13Z</dc:date>
    </item>
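The unmount / check / remount procedure above can be collected into a small script. A sketch with a dry-run guard, so it only prints the commands; clear the guard and run as root to actually perform the check (the LV path is the one identified in this thread):

```shell
# Unmount / check / remount sequence for the affected logical volume,
# as described above. "run=echo" makes this a dry run that only prints
# the commands; set run= (empty) and run as root to execute for real.
LV=/dev/VolGroup00/oraclelv
run=echo
$run umount "$LV"
$run fsck -C 0 "$LV"
$run mount "$LV"
```

Remember to stop any application using the filesystem (Oracle, here) before the umount step, as the post says.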
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671291#M41800</link>
      <description>MK,&lt;BR /&gt;&lt;BR /&gt;Thank you again for your kind and detailed response.&lt;BR /&gt;&lt;BR /&gt;Yes, you are right, it was the /oracle filesystem which is having the issue. I ran fsck on /oracle twice yesterday. But whenever Oracle starts, the read-only mount comes back. So I am guessing the disk controller itself must be bad.&lt;BR /&gt;&lt;BR /&gt;Again, after I saw your response, I did fsck:&lt;BR /&gt;&lt;BR /&gt;umount /dev/VolGroup00/oraclelv&lt;BR /&gt;fsck -C 0 /dev/VolGroup00/oraclelv&lt;BR /&gt;(fixed one journal)&lt;BR /&gt;mount /dev/VolGroup00/oraclelv&lt;BR /&gt;&lt;BR /&gt;Now I am able to touch files, but I haven't started Oracle yet as the DBA is not here.&lt;BR /&gt;&lt;BR /&gt;We are still getting I/O errors even after the following:&lt;BR /&gt;SCSI device sda: 859525120 512-byte hdwr sectors (440077 MB)&lt;BR /&gt;sda: Write Protect is off&lt;BR /&gt;sda: Mode Sense: 06 00 10 00&lt;BR /&gt;SCSI device sda: drive cache: write back w/ FUA&lt;BR /&gt;&lt;BR /&gt;So do you think the RAID 5 set was corrupted?&lt;BR /&gt;&lt;BR /&gt;But when I called Sun, they think it is a bad disk. But like I said, I can see all four disks. So I am guessing it is something to do with the controller.&lt;BR /&gt;&lt;BR /&gt;Again, thank you so much for teaching me and explaining. So nice of you. &lt;BR /&gt;</description>
      <pubDate>Tue, 10 Aug 2010 18:05:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/4671291#M41800</guid>
      <dc:creator>Qcheck</dc:creator>
      <dc:date>2010-08-10T18:05:40Z</dc:date>
    </item>
    <item>
      <title>Re: EXT3-fs error(device dm-6)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/5508031#M53511</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It should be much easier to identify the device causing the issue by just checking the Array Diagnostic Utility logs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Example:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Symptom:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;Buffer I/O error on device dm-6, logical block 8201&lt;BR /&gt;lost page write due to I/O error on dm-6&lt;BR /&gt;REISERFS: abort (device dm-6): Journal write error in flush_commit_list&lt;BR /&gt;REISERFS: Aborting journal for filesystem on dm-6&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Matching error in the ADU report:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Smart Array P400 in Embedded Slot : Storage Enclosure at Port 1I : Box 1 : Drive Cage on Port 1I : Physical Drive 1I:1:4 : Monitor and Performance Parameter Control&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Bus Faults: 8452 (0x2104)&lt;BR /&gt;Hot Plug Count: 0x2104&lt;BR /&gt;Track Rewrite Errors: 0x2902&lt;BR /&gt;Write Errors After Remap: 0x2102&lt;BR /&gt;Background Firmware Revision: 0x0848&lt;BR /&gt;Media Failures: 0x2102&lt;BR /&gt;Hardware Errors: 0x2102&lt;BR /&gt;Aborted Command Failures: 0x2102&lt;BR /&gt;Spin Up Failures: 0x2102&lt;BR /&gt;Bad Target Count: 8450 (0x2102)&lt;BR /&gt;Predictive Failure Errors: 0x2104&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The hard drive on port 1I:1:4 is the root cause of the bus faults/timeouts. Usually a firmware upgrade of the hard drive itself solves the problem; if not, a classic HW replacement will definitely fix it.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Jan 2012 11:52:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/ext3-fs-error-device-dm-6/m-p/5508031#M53511</guid>
      <dc:creator>Marwen</dc:creator>
      <dc:date>2012-01-25T11:52:54Z</dc:date>
    </item>
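The drive identifier can be pulled straight out of an ADU report heading like the one above. A minimal sketch (the sample line is copied from that post; on a live system you would grep the saved ADU report file instead):

```shell
# Pull the "Physical Drive" identifier out of an ADU report heading.
# Sample line copied from the ADU output quoted above.
adu_sample='Smart Array P400 in Embedded Slot : Storage Enclosure at Port 1I : Box 1 : Drive Cage on Port 1I : Physical Drive 1I:1:4 : Monitor and Performance Parameter Control'
printf '%s\n' "$adu_sample" | grep -o 'Physical Drive [^ ]*'
# prints: Physical Drive 1I:1:4
```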
  </channel>
</rss>

