<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Software Raid issue in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967784#M77149</link>
    <description>Red Hat search on lsraid.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf" target="_blank"&gt;http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I'm not running Oracle on this server; I wonder where the darned thing came from.&lt;BR /&gt;&lt;BR /&gt;Will do further research and try to find you an alternate command for checking raid status.&lt;BR /&gt;&lt;BR /&gt;My guess, based on the data, the response you posted, and prior fun with Red Hat RAID, is that there is a problem with the raid.&lt;BR /&gt;&lt;BR /&gt;Alternate solution: use fdisk, cfdisk, or the graphical tool to blow away and re-create the filesystem, and see what happens.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Wed, 07 May 2003 18:45:55 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2003-05-07T18:45:55Z</dc:date>
    <item>
      <title>Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967778#M77143</link>
      <description>I recently had to rebuild a system.  In the process I tar'd up 2 20GB files to my RH8.0 server onto a 2x64GB disk stripe of 128GB.&lt;BR /&gt;&lt;BR /&gt;After deleting these files, I am now not able to add any data to the stripe.  I get an error that the disk is full, even though only 6GB is used out of 130GB.  This was verified with both du -sk and df -k.&lt;BR /&gt;&lt;BR /&gt;Since these were plain files (tar archives) I know that nothing would have had them open.  Just to be sure, I rebooted the Linux server, and still cannot create data on the stripe; I get the error "no space left on device".&lt;BR /&gt;&lt;BR /&gt;[root@masterc 1b]# df -k .&lt;BR /&gt;Filesystem           1k-blocks      Used Available Use% Mounted on&lt;BR /&gt;/dev/md1             143347736   5902848 130276160   5% /1b&lt;BR /&gt;&lt;BR /&gt;[root@masterc 1b]# touch test&lt;BR /&gt;touch: creating `test': No space left on device&lt;BR /&gt;&lt;BR /&gt;I have looked with lsof, and see no open files on the device.&lt;BR /&gt;&lt;BR /&gt;Any ideas?&lt;BR /&gt;&lt;BR /&gt;Thanks!&lt;BR /&gt;Shannon</description>
      <pubDate>Wed, 07 May 2003 15:17:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967778#M77143</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-07T15:17:28Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967779#M77144</link>
      <description>Hi Shannon,&lt;BR /&gt;&lt;BR /&gt;Check for quotas:&lt;BR /&gt;#repquota&lt;BR /&gt;&lt;BR /&gt;If you don't want them, just turn them off:&lt;BR /&gt;#quotaoff&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;What does your raid configuration look like?&lt;BR /&gt;&lt;BR /&gt;#lsraid&lt;BR /&gt;</description>
      <pubDate>Wed, 07 May 2003 16:27:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967779#M77144</guid>
      <dc:creator>Sachin Patel</dc:creator>
      <dc:date>2003-05-07T16:27:39Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967780#M77145</link>
      <description>Greetings Shannon,&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Here is my output on a working raid 1/0 system.&lt;BR /&gt;&lt;BR /&gt;[root@jerusalem root]# lsraid -A -g -a /dev/md0&lt;BR /&gt;[dev   9,   0] /dev/md0         E21817F0.29578AF2.8ED33C38.7B44A0A9 online&lt;BR /&gt;[dev   3,   2] /dev/hda2        E21817F0.29578AF2.8ED33C38.7B44A0A9 good&lt;BR /&gt;[dev   3,  66] /dev/hdb2        E21817F0.29578AF2.8ED33C38.7B44A0A9 good&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;That is actual use of the lsraid command.&lt;BR /&gt;&lt;BR /&gt;In your context I'd change it to:&lt;BR /&gt;&lt;BR /&gt;lsraid -A -g -a /dev/md1&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Wed, 07 May 2003 17:52:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967780#M77145</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-07T17:52:08Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967781#M77146</link>
      <description>Shannon,&lt;BR /&gt;&lt;BR /&gt;Would you be so kind as to post your results from the command? Just curious as to what can cause them.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Wed, 07 May 2003 18:31:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967781#M77146</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-07T18:31:18Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967782#M77147</link>
      <description>Quotas are not enabled.&lt;BR /&gt;&lt;BR /&gt;There is no such tool as lsraid on Red Hat 8; I'll have to check 9.  But here are the contents of /etc/raidtab.&lt;BR /&gt;&lt;BR /&gt;raiddev             /dev/md0&lt;BR /&gt;raid-level                  0&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/sdc&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/sdd&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;&lt;BR /&gt;raiddev         /dev/md1&lt;BR /&gt;raid-level              0&lt;BR /&gt;nr-raid-disks           2&lt;BR /&gt;chunk-size              64k&lt;BR /&gt;persistent-superblock   1&lt;BR /&gt;nr-spare-disks          0&lt;BR /&gt;        device  /dev/sda&lt;BR /&gt;        raid-disk 0&lt;BR /&gt;        device  /dev/sdb&lt;BR /&gt;        raid-disk 1&lt;BR /&gt;&lt;BR /&gt;There is no issue that I can see with the raid device; the drivers are loaded, and it's online.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Shannon</description>
      <pubDate>Wed, 07 May 2003 18:32:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967782#M77147</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-07T18:32:37Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967783#M77148</link>
      <description>Here is my /etc/raidtab for comparison's sake.&lt;BR /&gt;&lt;BR /&gt;[root@jerusalem root]# more /etc/raidtab&lt;BR /&gt;raiddev             /dev/md7&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda1&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb1&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md2&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda5&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb5&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md0&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda2&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb2&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md4&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda7&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb7&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md6&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda8&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb8&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md1&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda3&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb3&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md3&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda6&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb6&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;raiddev             /dev/md5&lt;BR /&gt;raid-level                  1&lt;BR /&gt;nr-raid-disks               2&lt;BR /&gt;chunk-size                  64k&lt;BR /&gt;persistent-superblock       1&lt;BR /&gt;nr-spare-disks              0&lt;BR /&gt;    device          /dev/hda9&lt;BR /&gt;    raid-disk     0&lt;BR /&gt;    device          /dev/hdb9&lt;BR /&gt;    raid-disk     1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Are you sure about the lsraid command? I can find no desupport notice, though my results are admittedly off a Red Hat 7.3 server.&lt;BR /&gt;&lt;BR /&gt;[root@jerusalem root]# find / -name lsraid&lt;BR /&gt;/sbin/lsraid&lt;BR /&gt;[root@jerusalem root]# ll /sbin/lsraid&lt;BR /&gt;-rwxr-xr-x   1 root     root        67272 Apr 15  2002 /sbin/lsraid&lt;BR /&gt;&lt;BR /&gt;Note that it's not on the PATH of non-root users.&lt;BR /&gt;&lt;BR /&gt;SEP&lt;BR /&gt;</description>
      <pubDate>Wed, 07 May 2003 18:41:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967783#M77148</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-07T18:41:19Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967784#M77149</link>
      <description>Red Hat search on lsraid.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf" target="_blank"&gt;http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I'm not running Oracle on this server; I wonder where the darned thing came from.&lt;BR /&gt;&lt;BR /&gt;Will do further research and try to find you an alternate command for checking raid status.&lt;BR /&gt;&lt;BR /&gt;My guess, based on the data, the response you posted, and prior fun with Red Hat RAID, is that there is a problem with the raid.&lt;BR /&gt;&lt;BR /&gt;Alternate solution: use fdisk, cfdisk, or the graphical tool to blow away and re-create the filesystem, and see what happens.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Wed, 07 May 2003 18:45:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967784#M77149</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-07T18:45:55Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967785#M77150</link>
      <description>Hi Shannon,&lt;BR /&gt;&lt;BR /&gt;There is a command lsraid in 8.0.&lt;BR /&gt;&lt;BR /&gt;Try raidstart. What does /proc/mdstat show?&lt;BR /&gt;&lt;BR /&gt;Is the raidtools rpm installed?&lt;BR /&gt;&lt;BR /&gt;Sachin</description>
      <pubDate>Wed, 07 May 2003 19:38:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967785#M77150</guid>
      <dc:creator>Sachin Patel</dc:creator>
      <dc:date>2003-05-07T19:38:41Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967786#M77151</link>
      <description>Obviously the lsraid command is not installed on Shannon's system.&lt;BR /&gt;&lt;BR /&gt;Here is where to get it.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://rpmfind.net/linux/RPM/redhat/8.0/i386/raidtools-1.00.2-3.3.i386.html" target="_blank"&gt;http://rpmfind.net/linux/RPM/redhat/8.0/i386/raidtools-1.00.2-3.3.i386.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Download it, install it, run the command, and post the output; it should lead us to a cause.&lt;BR /&gt;&lt;BR /&gt;A pleasure working with you.  The answer would have been faster, but itrc was a dog today.&lt;BR /&gt;&lt;BR /&gt;For you, Shannon, I'd be happy to arrange a phone call to work this through (your dime).&lt;BR /&gt;&lt;BR /&gt;stevenprotter@juf.org&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 08 May 2003 00:47:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967786#M77151</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-08T00:47:29Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967787#M77152</link>
      <description>Sorry, these tools are probably also on your CDs for Red Hat.  That's where mine came from.&lt;BR /&gt;&lt;BR /&gt;SEP&lt;BR /&gt;&lt;BR /&gt;:-)</description>
      <pubDate>Thu, 08 May 2003 00:49:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967787#M77152</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-08T00:49:13Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967788#M77153</link>
      <description>I had the raidtools installed from the CD, but am trying to download the linked software and install that one.&lt;BR /&gt;&lt;BR /&gt;I know there is no lsraid, as I run slocate -u every day, and locate lsraid produced nada ;)&lt;BR /&gt;&lt;BR /&gt;raidstart is run at boot; note I have 2 separate raid stripes, and both are mounted, active, and working.&lt;BR /&gt;&lt;BR /&gt;That they are active and working is verified with "cat /proc/mdstat".&lt;BR /&gt;&lt;BR /&gt;[root@masterc root]# cat /proc/mdstat&lt;BR /&gt;Personalities : [raid0]&lt;BR /&gt;read_ahead 1024 sectors&lt;BR /&gt;md1 : active raid0 sdb[1] sda[0]&lt;BR /&gt;      143374592 blocks 64k chunks&lt;BR /&gt;&lt;BR /&gt;md0 : active raid0 sdd[1] sdc[0]&lt;BR /&gt;      71687168 blocks 64k chunks&lt;BR /&gt;&lt;BR /&gt;unused devices: &amp;lt;none&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Shannon</description>
      <pubDate>Thu, 08 May 2003 11:46:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967788#M77153</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-08T11:46:10Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967789#M77154</link>
      <description>Installed the raidtools package, and the output looks much like cat /proc/mdstat.  Here are the results.&lt;BR /&gt;&lt;BR /&gt;[root@masterc tmp]# lsraid -A -g -a /dev/md1&lt;BR /&gt;[dev   9,   1] /dev/md1 E475ADD5.0CB7B814.11161BB8.D84F5860 online&lt;BR /&gt;[dev   8,   0] /dev/sda E475ADD5.0CB7B814.11161BB8.D84F5860 good&lt;BR /&gt;[dev   8,  16] /dev/sdb E475ADD5.0CB7B814.11161BB8.D84F5860 good&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Shannon</description>
      <pubDate>Thu, 08 May 2003 12:12:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967789#M77154</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-08T12:12:39Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967790#M77155</link>
      <description>Okay, I'm stumped.&lt;BR /&gt;&lt;BR /&gt;Perhaps remove and rebuild the filesystem, using the newly installed raid tools. If it's empty when it's rebuilt, it should be fine.&lt;BR /&gt;&lt;BR /&gt;I hate messing with Linux disk layout after it's running; I'm an LVM man.&lt;BR /&gt;&lt;BR /&gt;If I think of anything actually useful, I'll post.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 08 May 2003 12:40:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967790#M77155</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-08T12:40:53Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967791#M77156</link>
      <description>Yeah, I'm stumped too.  All looks okay, and I had no problems until I made those tar files.  I was trying to avoid rebuilding the filesystem, as this raid device has some production data on it.&lt;BR /&gt;&lt;BR /&gt;fsck reveals no errors, so it's very strange.&lt;BR /&gt;&lt;BR /&gt;I'll let you know if I resolve it ;/&lt;BR /&gt;&lt;BR /&gt;Shannon</description>
      <pubDate>Thu, 08 May 2003 14:06:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967791#M77156</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-08T14:06:26Z</dc:date>
    </item>
    <item>
      <title>Re: Software Raid issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967792#M77157</link>
      <description>Looks like it was the raidtools I was using that caused the problem.  After updating, I unmounted and fsck'd.  The fsck found a ton of errors and fixed everything up.  Since the raidtools didn't mess with fsck, it must be something else in the package.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks for the info, and at least I don't have to rebuild the fs ;)&lt;BR /&gt;&lt;BR /&gt;Shannon</description>
      <pubDate>Thu, 08 May 2003 14:50:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-issue/m-p/2967792#M77157</guid>
      <dc:creator>Shannon Petry</dc:creator>
      <dc:date>2003-05-08T14:50:47Z</dc:date>
    </item>
  </channel>
</rss>

