<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Hardware RAID in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018883#M8987</link>
    <description>Mike,&lt;BR /&gt;&lt;BR /&gt;The Bird in Hand pub, huh?  Sounds interesting!&lt;BR /&gt;&lt;BR /&gt;Anyway, I've run both RAID5 and RAID0/1 with no difficulties.  Currently I'm running a mix of both on an FC60 (two SC10's RAID0/1 mirrored to each other and another SC10 set up for RAID5).  I haven't had any disk failures in the RAID5 portion of this setup so I can't really report on rebuild time.  However, in the past I had a Model20 Nike array that was entirely RAID5 and used to swap disks out all the time.  Never even noticed a blip on the performance radar.&lt;BR /&gt;&lt;BR /&gt;Hope this helps!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete "it's not even 7:00 AM and Mike's already got me thinking about beer" Randall&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Wed, 09 Jul 2003 09:46:13 GMT</pubDate>
    <dc:creator>Pete Randall</dc:creator>
    <dc:date>2003-07-09T09:46:13Z</dc:date>
    <item>
      <title>Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018881#M8985</link>
      <description>Hello campers&lt;BR /&gt;&lt;BR /&gt;HARDWARE RAID QUERY&lt;BR /&gt;**********************&lt;BR /&gt;An email from one of our clients reads:&lt;BR /&gt;&lt;BR /&gt;Our Database Management System (STAR1000) is configured to copy every data set to...&lt;BR /&gt;1] Optical Disc (Jukebox)&lt;BR /&gt;2] DAT (4 mm)&lt;BR /&gt;3] System External Cache (SCSI Disc)&lt;BR /&gt;&lt;BR /&gt;Therefore we have two permanent backups&lt;BR /&gt;&amp;amp; one backup on Cache for fast access&lt;BR /&gt;The latter will be deleted periodically by the system to make room for new ones&lt;BR /&gt;We'd like to replace the Optical Disc &amp;amp; &lt;BR /&gt;the external Cache with a HARDWARE RAID System&lt;BR /&gt;&lt;BR /&gt;The STAR1000 Software people have recommended RAID5, &lt;BR /&gt;but I have read in &lt;A href="http://www.acnc.com/04_01_00html" target="_blank"&gt;www.acnc.com/04_01_00html&lt;/A&gt; that RAID5 is difficult to rebuild in the event of disc failure&lt;BR /&gt;&lt;BR /&gt;Q1] Is this true?&lt;BR /&gt;Q2] If so, is there a better RAID level which would be suitable for us?&lt;BR /&gt;&lt;BR /&gt;We require a RAID System which can...&lt;BR /&gt;Easily be set up&lt;BR /&gt;Expanded (add more disks)&lt;BR /&gt;Maintained&lt;BR /&gt;&amp;amp; in the event of disc failure, can easily be rebuilt&lt;BR /&gt;We also expect a good rate of Read and Write Performance&lt;BR /&gt;***************&lt;BR /&gt;&lt;BR /&gt;All your insights/experiences/advice please&lt;BR /&gt;On this balmy July day in the heart of England&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Mike "A beer garden less than a mile away at The Bird In Hand pub &amp;amp; I'm stuck indoors" Fisher</description>
      <pubDate>Wed, 09 Jul 2003 09:33:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018881#M8985</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-09T09:33:38Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018882#M8986</link>
      <description>Hi Mike,&lt;BR /&gt;&lt;BR /&gt;Hardware RAIDs should normally repair themselves, either with an existing spare disk or when you insert a new one, so I would disagree that RAID 5 is difficult to rebuild. The important point is that you will lose data if more than 1 disk fails. RAID 5 is seen as a very good compromise for price/availability.&lt;BR /&gt;&lt;BR /&gt;You might like to consider RAID 1, which is effectively disk mirroring; it gives better redundancy but costs more in disk investment. You will only lose data if both sides of the same disk mirror fail.&lt;BR /&gt;&lt;BR /&gt;In any case, ensure you keep up the tape backups!&lt;BR /&gt;&lt;BR /&gt;I'd suggest checking out HP's XP storage solutions at &lt;A href="http://www.hp.com" target="_blank"&gt;www.hp.com&lt;/A&gt;!&lt;BR /&gt;&lt;BR /&gt;Ollie.</description>
      <pubDate>Wed, 09 Jul 2003 09:44:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018882#M8986</guid>
      <dc:creator>Ollie R</dc:creator>
      <dc:date>2003-07-09T09:44:57Z</dc:date>
    </item>
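Ollie's failure-mode comparison above can be sanity-checked with a small sketch (Python assumed here; the six-disk counts and the mirror-pair layout are hypothetical, not from the thread): a RAID 5 group survives exactly one failed disk, while a mirrored layout only loses data when both halves of the same mirror pair fail.

```python
from itertools import combinations

def raid5_loses_data(failed):
    """RAID 5 has one disk's worth of redundancy: a second failure is fatal."""
    return len(failed) > 1

def mirror_loses_data(failed, pairs):
    """Mirroring is fatal only when both members of one pair have failed."""
    return any(a in failed and b in failed for a, b in pairs)

# Hypothetical 6-disk mirrored layout: disks 0/1, 2/3, 4/5 are pairs.
pairs = [(0, 1), (2, 3), (4, 5)]

# Enumerate every possible double failure among 6 disks (15 combinations).
fatal_mirror = sum(mirror_loses_data(set(c), pairs)
                   for c in combinations(range(6), 2))
fatal_raid5 = sum(raid5_loses_data(set(c))
                  for c in combinations(range(6), 2))

print(fatal_mirror, "of 15 double failures are fatal for mirroring")
print(fatal_raid5, "of 15 double failures are fatal for RAID 5")
```

This is why Ollie says mirroring "gives better redundancy": only 3 of the 15 possible double failures destroy a mirrored set, whereas any double failure destroys a single RAID 5 group.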
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018883#M8987</link>
      <description>Mike,&lt;BR /&gt;&lt;BR /&gt;The Bird in Hand pub, huh?  Sounds interesting!&lt;BR /&gt;&lt;BR /&gt;Anyway, I've run both RAID5 and RAID0/1 with no difficulties.  Currently I'm running a mix of both on an FC60 (two SC10's RAID0/1 mirrored to each other and another SC10 set up for RAID5).  I haven't had any disk failures in the RAID5 portion of this setup so I can't really report on rebuild time.  However, in the past I had a Model20 Nike array that was entirely RAID5 and used to swap disks out all the time.  Never even noticed a blip on the performance radar.&lt;BR /&gt;&lt;BR /&gt;Hope this helps!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete "it's not even 7:00 AM and Mike's already got me thinking about beer" Randall&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Jul 2003 09:46:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018883#M8987</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-07-09T09:46:13Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018884#M8988</link>
      <description>Hi,&lt;BR /&gt;It is not more difficult to rebuild a Raid 5 set (it is handled automatically by the controller), but it may take longer to complete than rebuilding a Raid 10 set. It will also (at least with most controllers) take longer to create a Raid 5 set (some hours for a large set).&lt;BR /&gt;&lt;BR /&gt;When maintaining or expanding, the difference comes down more to the Raid controller than the Raid level.&lt;BR /&gt;&lt;BR /&gt;Raid 5 has good read performance; write performance is perhaps less good. If you want maximum read/write performance you should choose Raid 10.</description>
      <pubDate>Wed, 09 Jul 2003 10:17:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018884#M8988</guid>
      <dc:creator>Leif Halvarsson_2</dc:creator>
      <dc:date>2003-07-09T10:17:32Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018885#M8989</link>
      <description>Ollie:&lt;BR /&gt;Useful thoughts&lt;BR /&gt;XP range is too expensive I think&lt;BR /&gt;&lt;BR /&gt;Pete:&lt;BR /&gt;Can your mixture idea be achieved via hardware RAID?&lt;BR /&gt;If so - how? in general terms please&lt;BR /&gt;&lt;BR /&gt;And just to drive you crazy Pete:&lt;BR /&gt;On my way home from work I pass these pubs It's 8 miles of country roads with the odd village [Some of them very odd indeed]&lt;BR /&gt;&lt;BR /&gt;Bird in Hand&lt;BR /&gt;Bull&lt;BR /&gt;Black Bull&lt;BR /&gt;Rose &amp;amp; Crown&lt;BR /&gt;Lyggon Arms &lt;BR /&gt;Why Not?&lt;BR /&gt;Bell&lt;BR /&gt;Oddfellows Arms&lt;BR /&gt;Red Lion&lt;BR /&gt;White Lion&lt;BR /&gt;Coach House&lt;BR /&gt;Fleece&lt;BR /&gt;Black Eagle&lt;BR /&gt;Gate Hangs Well [no one knows why]&lt;BR /&gt;Stag&lt;BR /&gt;Seven Stars&lt;BR /&gt;Archers&lt;BR /&gt;Park&lt;BR /&gt;&lt;BR /&gt;I'll stop there because then I enter town&lt;BR /&gt;[Winebar country - Aaaaagh !!]</description>
      <pubDate>Wed, 09 Jul 2003 10:31:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018885#M8989</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-09T10:31:08Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018886#M8990</link>
      <description>Thanks Leif&lt;BR /&gt;&lt;BR /&gt;Any more for any more?&lt;BR /&gt;&lt;BR /&gt;Supplementary question:&lt;BR /&gt;&lt;BR /&gt;Can the Hardware RAID controllers be set to any type of RAID, such as 1, 2, 3, 4, 5, 6, 7, 10, 53, 0+1 ['cept for HVD10-type]?</description>
      <pubDate>Wed, 09 Jul 2003 10:53:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018886#M8990</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-09T10:53:12Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018887#M8991</link>
      <description>Mike,&lt;BR /&gt;&lt;BR /&gt;You asked "Can your mixture idea be achieved via hardware RAID?".  The answer is yes - that's what an FC60/SC10 is - hardware RAID.  You use ammgr software to set up the LUNs and define what type of RAID they are: 1, 0/1, or 5.&lt;BR /&gt;&lt;BR /&gt;In answer to your question about what RAID levels can be set up, it may differ depending on the hardware, but, in general, the commonly seen levels (and the only ones really worth bothering with) are 1 (mirrored), 0/1 (striped and mirrored), and 5 (striped with parity).  Usable capacity for RAID 1 or 0/1 is 50%; for RAID5 it ranges from 66% for a 3-disk LUN to 83% for a 6-disk LUN, with a 5-disk LUN at 80% being the most common.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete&lt;BR /&gt;&lt;BR /&gt;With all those pubs, how do you ever manage to get home?&lt;BR /&gt;;^)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Jul 2003 11:28:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018887#M8991</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-07-09T11:28:22Z</dc:date>
    </item>
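Pete's capacity percentages above follow from simple arithmetic: mirroring stores every block twice, so half the raw space is usable, while an n-disk RAID 5 LUN spends one disk's worth on parity, leaving (n-1)/n. A minimal sketch (Python assumed; the function name is illustrative):

```python
def usable_fraction(level, n_disks):
    """Fraction of raw capacity available for data at a given RAID level."""
    if level in ("1", "0/1"):
        # Mirroring: every block is written twice.
        return 0.5
    if level == "5":
        if n_disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        # One disk's worth of capacity holds distributed parity.
        return (n_disks - 1) / n_disks
    raise ValueError("unsupported level: %s" % level)

for n in (3, 5, 6):
    print("RAID 5, %d-disk LUN: %.0f%% usable" % (n, 100 * usable_fraction("5", n)))
```

With 3, 5, and 6 disks this yields roughly the 66%, 80%, and 83% Pete quotes, and it is also consistent with his later figure of 144GB usable from 180GB of raw RAID 5 space (0.8 × 180GB).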
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018888#M8992</link>
      <description>That more or less answers my questions&lt;BR /&gt;To give others a chance I'll leave the bunnies 'til later&lt;BR /&gt;&lt;BR /&gt;Pete:&lt;BR /&gt;I don't always&lt;BR /&gt;Once you've seen the barmaids you wouldn't want to either&lt;BR /&gt;&lt;BR /&gt;Mike "PC" Fisher&lt;BR /&gt;&lt;BR /&gt;Before anyone starts&lt;BR /&gt;The barmaids call themselves barmaids&lt;BR /&gt;However the term "barwench" guarantees that you'll get home&lt;BR /&gt;[via the ER]</description>
      <pubDate>Wed, 09 Jul 2003 11:42:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018888#M8992</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-09T11:42:58Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018889#M8993</link>
      <description>It is not harder to rebuild RAID 5; it simply uses another algorithm to rebuild. RAID 5 is more capacity-efficient. RAID 1 should be used for audio/video streaming (logging/record-keeping applications etc., large block size transfers, heavy writes), RAID 5 for multitasking applications, transaction processing, and databases (varying block size transfers, heavy reads).&lt;BR /&gt;Eugeny</description>
      <pubDate>Wed, 09 Jul 2003 12:25:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018889#M8993</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-07-09T12:25:11Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018890#M8994</link>
      <description>Pete [but anyone else please feel free to chip in] &lt;BR /&gt;***&lt;BR /&gt;PROPOSED SOLUTION:&lt;BR /&gt;&lt;BR /&gt;RAID 5&lt;BR /&gt;2 x HBA's&lt;BR /&gt;FC60 with dual controllers&lt;BR /&gt;2 x SC10's each with 5 x 73GB Discs&lt;BR /&gt;***&lt;BR /&gt;HOWEVER YOU'VE WRITTEN:&lt;BR /&gt;&lt;BR /&gt;"Anyway, I've run both RAID 5 and RAID 0/1 with no difficulties&lt;BR /&gt;Currently I'm running a mix of both on an FC60 [two SC10's RAID 0/1 mirrored to each other and another SC10 set up for RAID 5]..."&lt;BR /&gt;***&lt;BR /&gt;QUESTIONS:&lt;BR /&gt;&lt;BR /&gt;Now I'm looking at your "mixture" idea&lt;BR /&gt;To see if it might be of benefit to my client&lt;BR /&gt;Frankly I haven't grasped what you're doing :)&lt;BR /&gt; &lt;BR /&gt;The 3rd SC10 is outside the FC60?&lt;BR /&gt;What's the advantage[s] of this mixture over pure RAID 5?&lt;BR /&gt;&lt;BR /&gt;Mike "Running in. Please overtake" Fisher</description>
      <pubDate>Fri, 11 Jul 2003 10:58:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018890#M8994</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-11T10:58:07Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018891#M8995</link>
      <description>Mike,&lt;BR /&gt;in the FC60, when you create a LUN you tell the disk array which RAID level it will be. For example, you create LUN 0 as RAID 1 with disks in SC10-0 and SC10-1, and LUN 1 with disks in SC10-2 and SC10-3, etc., and at the same time you can create LUN 2 as RAID 5 with 4 disks from all 4 SC10s. Just an example.&lt;BR /&gt;There's an important note: when you create a LUN, use disks installed in different SC10 enclosures. In case one enclosure fails completely, you'll have redundancy loss, not data loss.&lt;BR /&gt;Eugeny</description>
      <pubDate>Fri, 11 Jul 2003 11:22:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018891#M8995</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-07-11T11:22:13Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018892#M8996</link>
      <description>Mike,&lt;BR /&gt;&lt;BR /&gt;The third SC10 is also part of the FC60.  It originally contained just the first two SC10s, which were configured as RAID0/1 (mirrored and striped).  Later, we added the third SC10, which we wanted to use for our development copy of the database (we share this FC60 between two non-ServiceGuard servers - definitely unsupported, don't tell anybody).  Since we needed to fit 180GB of mirrored data (occupying 360GB of space) into just 180GB, we chose to use RAID5, which gave us 144GB of usable space (obviously we had to trim a little extraneous data but we made it fit).&lt;BR /&gt;&lt;BR /&gt;That's the difference between RAID0/1 and RAID5.  The RAID0/1 configuration gives the most protection and performance.  The RAID5 gives the most storage efficiency while sacrificing some protection and some performance.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2003 11:41:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018892#M8996</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-07-11T11:41:50Z</dc:date>
    </item>
    <item>
      <title>Re: Hardware RAID</title>
      <link>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018893#M8997</link>
      <description>Eugeny &amp;amp; Pete&lt;BR /&gt;&lt;BR /&gt;Thank you - It makes sense now&lt;BR /&gt;Not applicable to this client,&lt;BR /&gt;but useful for something else that I have in mind&lt;BR /&gt;&lt;BR /&gt;Mike "2hrs 56mins from The Gate" Fisher</description>
      <pubDate>Fri, 11 Jul 2003 12:04:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/hardware-raid/m-p/3018893#M8997</guid>
      <dc:creator>Mike Fisher_5</dc:creator>
      <dc:date>2003-07-11T12:04:18Z</dc:date>
    </item>
  </channel>
</rss>

