<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: software raid 0 or LVM in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707154#M42609</link>
    <description>If the LUN from the disk array is RAID 5 or RAID 1, then when there is a failure the disk array will take care of it and you just replace the disk and are done with it.&lt;BR /&gt;&lt;BR /&gt;No disk array or OS interruption.&lt;BR /&gt;&lt;BR /&gt;If the LUN in the disk array is RAID 0 and there is a failure, you will have to rebuild the RAID set and then take an interruption at the OS level.&lt;BR /&gt;&lt;BR /&gt;That's the nice thing about RAID.&lt;BR /&gt;&lt;BR /&gt;Heck, nowadays with blades and boot-from-SAN you do not even have to mirror the boot disk any more, since the blades have a RAID controller and with boot-from-SAN the boot disk can be RAID 1 or RAID 5.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Sun, 31 Oct 2010 22:58:10 GMT</pubDate>
    <dc:creator>Emil Velez</dc:creator>
    <dc:date>2010-10-31T22:58:10Z</dc:date>
    <item>
      <title>software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707153#M42608</link>
      <description>Hi all,&lt;BR /&gt;I am not sure whether this topic has been discussed here before.&lt;BR /&gt;In our test servers we already have two disks presented from the SAN:&lt;BR /&gt;one is 100G and the other is 50G, on vRAID5 of an EVA4400.&lt;BR /&gt;I don't want to touch the storage as of now. The DBA has asked me if he can get a single 150G partition. This will be used for a test instance of Oracle Apps and Database. Performance is not an issue, as only 10-12 technical consultants will be using it.&lt;BR /&gt;I am wondering whether I should do a software RAID 0 of the two disks or use LVM.&lt;BR /&gt;&lt;BR /&gt;Please suggest.&lt;BR /&gt;In both cases, even if one LUN fails the whole data is gone, isn't it?&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Sun, 31 Oct 2010 18:09:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707153#M42608</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-10-31T18:09:41Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707154#M42609</link>
      <description>If the LUN from the disk array is RAID 5 or RAID 1, then when there is a failure the disk array will take care of it and you just replace the disk and are done with it.&lt;BR /&gt;&lt;BR /&gt;No disk array or OS interruption.&lt;BR /&gt;&lt;BR /&gt;If the LUN in the disk array is RAID 0 and there is a failure, you will have to rebuild the RAID set and then take an interruption at the OS level.&lt;BR /&gt;&lt;BR /&gt;That's the nice thing about RAID.&lt;BR /&gt;&lt;BR /&gt;Heck, nowadays with blades and boot-from-SAN you do not even have to mirror the boot disk any more, since the blades have a RAID controller and with boot-from-SAN the boot disk can be RAID 1 or RAID 5.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 31 Oct 2010 22:58:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707154#M42609</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2010-10-31T22:58:10Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707155#M42610</link>
      <description>Hi, thank you for the nice explanation.&lt;BR /&gt;Could you also please throw more light on my first question:&lt;BR /&gt;whether I am better off creating an LVM volume or a software RAID in the above-mentioned scenario?</description>
      <pubDate>Mon, 01 Nov 2010 00:45:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707155#M42610</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-01T00:45:38Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707156#M42611</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;we already have two disks presented from the SAN&lt;BR /&gt;one is 100G and the other is 50G, on vRAID5 of an EVA4400&lt;BR /&gt;&lt;BR /&gt;Since the LUNs are vRAID5 at the SAN level, you can go with LVM and build a simple non-mirrored volume from them.&lt;BR /&gt;&lt;BR /&gt;I'm not really sure what you mean by software RAID; with LVM or another volume manager you can achieve a software RAID.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Viktor</description>
      <pubDate>Mon, 01 Nov 2010 09:02:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707156#M42611</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2010-11-01T09:02:47Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707157#M42612</link>
      <description>Use LVM.&lt;BR /&gt;&lt;BR /&gt;Just concatenate the 100G and 50G "already RAIDed" physical volumes from your SAN into a single 150GB LVM LVOL.&lt;BR /&gt;&lt;BR /&gt;No need for software RAID or any other kind of RAID, as the EVA already stripes your 100 and 50 GB RAID disks across however many disks you have in your EVA4400.&lt;BR /&gt;</description>
      <pubDate>Mon, 01 Nov 2010 12:28:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707157#M42612</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-11-01T12:28:21Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707158#M42613</link>
      <description>Stripe across them. Use LVM with distributed stripes. If you use just RAID 0, your stripe sizes will be small. I think you'll be better off with the larger stripe sizes given by LVM's distributed stripes. An exception would be if the number of physical drives the EVA is giving you for the two containers is small; then I'd go with plain RAID 0 as your best chance. But because you're on RAID 5, with all your data on just two drive containers, it's going to be slow regardless, IMHO. Unless you've got a large chunk of RAM in the servers, so that the DBA can cache a lot of I/O in the db_block_buffers area.</description>
      <pubDate>Mon, 01 Nov 2010 14:52:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707158#M42613</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2010-11-01T14:52:19Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707159#M42614</link>
      <description>TP,&lt;BR /&gt;if he does a software RAID 0 (md) of the 100G and 50G LUNs, he'll end up with just 100GB. Plus I don't THINK it matters.&lt;BR /&gt;&lt;BR /&gt;These EVA "disks" are already "striped" (RAIDed) behind the EVA -- albeit RAID 5... it will still be "fast".&lt;BR /&gt;&lt;BR /&gt;So PV1 - 100GB, PV2 - 50GB...&lt;BR /&gt;&lt;BR /&gt;Carve an LVM VG out of these two.&lt;BR /&gt;Carve an LVOL of up to the full 150GB out of these two.&lt;BR /&gt;&lt;BR /&gt;Makes no difference.&lt;BR /&gt;</description>
      <pubDate>Mon, 01 Nov 2010 14:59:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707159#M42614</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-11-01T14:59:04Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707160#M42615</link>
      <description>Like I said, it depends how many physical hard drives are behind the array(s). If it is a low number, 10 or fewer, I'd definitely use a plain old stripe. If it is a lot of drives -- that is, multiple chunks of 14 drives across three or four EVA racks, across duplexed controllers, etc. -- I'd follow the rules of the paper produced about 6 years ago expounding the idea of SAME (Stripe And Mirror Everything), which recommends pulling large chunks, 1MB or larger at a time.&lt;BR /&gt;&lt;BR /&gt;Is it going to matter for 12 technical consultants? Probably not nearly as much as how bad the consultants' code for their mods will probably be. This is because most of the development for the consultants (either in setup or actual code mods) will run off of the "Vision" demo database, which doesn't have enough rows in it to present any kind of reality for what the system will look like when finally done.&lt;BR /&gt;&lt;BR /&gt;Like most things, your mileage will vary.&lt;BR /&gt;&lt;BR /&gt;The reason I answered in detail is that he bothered to ask. If he gets used to a standard set of methods in deploying these, he will learn, bit by bit, how and why things work best in his environment as he moves toward go-live. It's a pretty bad thing to go all the way to go-live with no experience or feel for the successes/failures of the setup.&lt;BR /&gt;&lt;BR /&gt;Therefore, only out of admiration for a sysadmin caring enough to take a decent starter shot at a setup, I thought I'd offer what I thought would be best for the solution, even if the results of this single decision may only be incremental.&lt;BR /&gt;&lt;BR /&gt;Besides, even though it's small, it's still how I attempt to set them up if and when I have the resources to set up test/dev systems. Sometimes, due to resource constraints, I just don't get to.</description>
      <pubDate>Mon, 01 Nov 2010 18:21:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707160#M42615</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2010-11-01T18:21:36Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707161#M42616</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I'm with Alzhy: I don't see the point in setting another level of striping on top of the out-of-the-box RAID 5. The throughput was already maximized with RAID 5; why overcomplicate it with another level of striping?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Viktor&lt;BR /&gt;</description>
      <pubDate>Tue, 02 Nov 2010 10:53:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707161#M42616</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2010-11-02T10:53:50Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707162#M42617</link>
      <description>Thank you all for your detailed explanations. I will create an LVM volume out of the two disks.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Use LVM with distributed stripes?&lt;BR /&gt;I didn't understand this. I normally create LVM volumes with default options.</description>
      <pubDate>Tue, 02 Nov 2010 18:17:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707162#M42617</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-02T18:17:08Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707163#M42618</link>
      <description>&amp;gt;&amp;gt; Use LVM with distributed stripes?&lt;BR /&gt;&amp;gt; I didn't understand this. I normally create LVM volumes with default options.&lt;BR /&gt;&lt;BR /&gt;LVM is capable of doing a RAID 0, a.k.a. a striped volume; that is what he meant. But as your LUNs are already striped across the physical storage disks, I don't think you would gain anything by creating a striped LV. So stay with the default values! ;)&lt;BR /&gt;&lt;BR /&gt;To read about LVM striping, here is a doc:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/B2355-90672/ch08.html" target="_blank"&gt;http://docs.hp.com/en/B2355-90672/ch08.html&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 02 Nov 2010 21:39:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707163#M42618</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2010-11-02T21:39:17Z</dc:date>
    </item>
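For reference, a striped LV as discussed above would look like the sketch below on Linux LVM2. The volume group name and sizes are hypothetical; note that a two-way stripe can only use as much space per PV as the smaller PV offers, so the full 150G is not reachable striped, which is one more reason to stay linear here.

```shell
# Sketch only -- assumes a hypothetical VG "vg_ora" already built from the two LUNs.
# -i 2    : stripe across 2 physical volumes (RAID 0 at the LVM level)
# -I 1024 : 1024 KiB stripe size (a large stripe, in the spirit of SAME)
# A 2-way stripe is bounded by the smaller (50G) PV, so ~100G is the ceiling here.
lvcreate -i 2 -I 1024 -L 90G -n lv_striped vg_ora
mkfs.ext3 /dev/vg_ora/lv_striped
```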
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707164#M42619</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;no need to use striping here, as the LUNs are of different sizes. LVM is the best solution for you here, as redundancy is already handled at the storage level.&lt;BR /&gt;&lt;BR /&gt;You can create a volume group with these two LUNs and then create a logical volume, which is pretty simple.&lt;BR /&gt;&lt;BR /&gt;Software RAID levels are available, but they don't have the flexibility that LVM can give.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jayakrishnan G Naik</description>
      <pubDate>Wed, 03 Nov 2010 02:47:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707164#M42619</guid>
      <dc:creator>Jayakrishnan G Naik</dc:creator>
      <dc:date>2010-11-03T02:47:11Z</dc:date>
    </item>
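A minimal sketch of the linear (default, non-striped) layout recommended above, assuming the two LUNs appear as /dev/sdb and /dev/sdc; the device, VG, and LV names are hypothetical, and the commands need root plus real devices:

```shell
pvcreate /dev/sdb /dev/sdc             # label both SAN LUNs as LVM physical volumes
vgcreate vg_ora /dev/sdb /dev/sdc      # one volume group spanning both PVs
lvcreate -l 100%FREE -n lv_ora vg_ora  # a single linear LV covering all ~150G
mkfs.ext3 /dev/vg_ora/lv_ora           # filesystem of choice
mount /dev/vg_ora/lv_ora /ora          # presented to the DBA as one mount point
```

With a linear LV, extents fill the first PV before spilling onto the second, so the unequal LUN sizes are irrelevant; the LV can also be grown later with `lvextend` if another LUN is added to the VG.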
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707165#M42620</link>
      <description>I created an LVM volume out of sda2 and sdb1.&lt;BR /&gt;&lt;BR /&gt;/dev/sda1              55G   41G  6.7G  86% /ora1&lt;BR /&gt;/dev/sda2              48G   46G  1.5G  97% /ora2&lt;BR /&gt;/dev/sdb1              101G   78G   16G  83% /ora3&lt;BR /&gt;&lt;BR /&gt;After a few hours the mount point on sda1 started misbehaving, and since the storage admin didn't bother to create zoning in the EVA, it affected all the servers.&lt;BR /&gt;We restarted the server in single-user mode, removed the mount points from /etc/fstab,&lt;BR /&gt;and restarted the server. Now the mount points from the SAN are not mounted.&lt;BR /&gt;&lt;BR /&gt;Is there any relation between my creating an LVM volume out of sda2 and sdb1 and the mount point on sda1 being affected?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 08:22:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707165#M42620</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-03T08:22:04Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707166#M42621</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;after a few hours the mountpoint on sda1 started misbehaving&lt;BR /&gt;&lt;BR /&gt;What do you mean by 'misbehaving'? How could a _mountpoint_ misbehave??? &lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Viktor</description>
      <pubDate>Wed, 03 Nov 2010 11:33:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707166#M42621</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2010-11-03T11:33:47Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707167#M42622</link>
      <description>Sorry for not being clear...&lt;BR /&gt;df -h was still showing the mount point, but I could not list its contents. It simply disappeared.&lt;BR /&gt;&lt;BR /&gt;Have a look at the status of the server in the attached image. It is something to do with this server and the storage, but I don't know what.&lt;BR /&gt;&lt;BR /&gt;Just before this happened I had created the LVM volume I was talking about, and our DBA was cloning the prod DB onto it.&lt;BR /&gt;I want to know whether creating an LVM volume out of the two partitions I mentioned caused this issue.&lt;BR /&gt;&lt;BR /&gt;/dev/sda1 55G 41G 6.7G 86% /ora1&lt;BR /&gt;/dev/sda2 48G 46G 1.5G 97% /ora2&lt;BR /&gt;/dev/sdb1 101G 78G 16G 83% /ora3&lt;BR /&gt;&lt;BR /&gt;sda1 and sda2 are partitions of one block device, and sdb1 is on another.&lt;BR /&gt;sda2 and sdb1 are PVs of my LVM volume group. Would such a configuration have had a direct impact on sda1?&lt;BR /&gt;Did it result in the server throwing too many I/Os at the SAN (with no zoning), affecting all servers?&lt;BR /&gt;I know my explanation will appear vague, but has anyone faced a similar situation?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 11:52:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707167#M42622</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-03T11:52:54Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707168#M42623</link>
      <description>Sir,&lt;BR /&gt;&lt;BR /&gt;do you know what you are doing?&lt;BR /&gt;&lt;BR /&gt;/dev/sda1 55G 41G 6.7G 86% /ora1&lt;BR /&gt;/dev/sda2 48G 46G 1.5G 97% /ora2&lt;BR /&gt;/dev/sdb1 101G 78G 16G 83% /ora3&lt;BR /&gt;&lt;BR /&gt;You claim the above to be LVM -- it is not, sir, unless something is just lost in translation.&lt;BR /&gt;&lt;BR /&gt;If it is indeed true that you have mounted filesystems on individual disks, then questions will come up, and mine will be: are the above disks SAN (EVA4400) disks? Because if they are, then you are NOT using multipathing!&lt;BR /&gt;&lt;BR /&gt;I suggest you take a very deep breath and let us go over your problem again -- if you still want our help.&lt;BR /&gt;&lt;BR /&gt;Shukran.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 12:17:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707168#M42623</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-11-03T12:17:56Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707169#M42624</link>
      <description>I will start again.&lt;BR /&gt;/dev/sda1 55G 41G 6.7G 86% /ora1&lt;BR /&gt;/dev/sda2 48G 46G 1.5G 97% /ora2&lt;BR /&gt;/dev/sdb1 101G 78G 16G 83% /ora3&lt;BR /&gt;&lt;BR /&gt;The above was my partition layout before I did the LVM config.&lt;BR /&gt;I did the LVM config after unmounting /dev/sda2 and /dev/sdb1.&lt;BR /&gt;I am not sure about the multipathing, as I didn't do the setup.&lt;BR /&gt;My question was:&lt;BR /&gt;sda1 and sda2 are partitions of one block device, and sdb1 is on another.&lt;BR /&gt;sda2 and sdb1 are PVs of my LVM volume group. Would such a configuration have had a direct impact on sda1?&lt;BR /&gt;I was unable to access sda1 at all, and the state of the server was as shown in the JPEG in my previous post.&lt;BR /&gt;I have now restarted the server after disabling the SAN mount points in /etc/fstab,&lt;BR /&gt;and have manually mounted just /dev/sda1. All data is intact.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 15:27:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707169#M42624</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-03T15:27:00Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707170#M42625</link>
      <description>Pardon my ignorance,&lt;BR /&gt;but how can you tell that multipathing is not working correctly?&lt;BR /&gt;In fact, an HP consultant who came yesterday told us that the people who did the configuration (HP) did not do the cabling correctly!</description>
      <pubDate>Wed, 03 Nov 2010 15:28:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707170#M42625</guid>
      <dc:creator>iinfi1</dc:creator>
      <dc:date>2010-11-03T15:28:47Z</dc:date>
    </item>
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707171#M42626</link>
      <description>Hmmm... HP should have excellent people there.&lt;BR /&gt;&lt;BR /&gt;I guess get your act together, sir. Chase whoever manages the EVA4400 to make sure it is zoned and/or configured correctly if you or others think it is zoned incorrectly.&lt;BR /&gt;&lt;BR /&gt;/dev/sdNN are not the correct names for EVA multipathed devices!&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 15:43:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707171#M42626</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-11-03T15:43:33Z</dc:date>
    </item>
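To check whether dm-multipath is actually in play, the stock device-mapper-multipath tooling can be used as sketched below; map names such as mpath0 are hypothetical and the commands require root on the affected server:

```shell
multipath -ll    # list active multipath maps and the /dev/sdX paths behind each
ls /dev/mapper/  # multipathed LUNs show up here, e.g. mpath0, mpath1
# LVM should be layered on the multipath device, not on one raw /dev/sdX path:
pvcreate /dev/mapper/mpath0
```

If `multipath -ll` prints nothing, I/O is going down a single raw path, and the loss of that path (bad cabling, zoning changes) takes the filesystem with it, which matches the symptoms described above.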
    <item>
      <title>Re: software raid 0 or LVM</title>
      <link>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707172#M42627</link>
      <description>Re: Hello,&lt;BR /&gt;&lt;BR /&gt;I'm with Alzhy: I don't see the point in setting another level of striping on top of the out-of-the-box RAID 5. The throughput was already maximized with RAID 5; why overcomplicate it with another level of striping?&lt;BR /&gt;&lt;BR /&gt;If you go and read the paper on SAME for big arrays that are already striped, you'll see that there is immense value in using distributed striping on hardware RAID arrays, as I've indicated. Like I said before, feel free to ignore it.&lt;BR /&gt;&lt;BR /&gt;As far as something being "already maximized" because it's RAID 5 -- well, you've just missed the big truck leaving town. There are so many other things to consider: balancing I/O, balancing cards, balancing controllers, balancing SAN ports, NOT using RAID 5 in certain areas. Just saying that something is "maximized" because it's RAID 5 leaves so much other stuff out. Which is exactly what his question was about -- the other things out there.&lt;BR /&gt;&lt;BR /&gt;No one should ever consider:&lt;BR /&gt;"Already maximized" = "RAID 5"&lt;BR /&gt;&lt;BR /&gt;That statement says a lot more about what's not being considered in the setup than what has been.&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Nov 2010 17:31:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/software-raid-0-or-lvm/m-p/4707172#M42627</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2010-11-03T17:31:18Z</dc:date>
    </item>
  </channel>
</rss>

