<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: autoraid 12H disk array in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669509#M4189</link>
    <description>Thanks for the information; it looks like it's &lt;BR /&gt;a problem with Oracle, not the hardware.</description>
    <pubDate>Thu, 21 Feb 2002 19:05:40 GMT</pubDate>
    <dc:creator>james gould</dc:creator>
    <dc:date>2002-02-21T19:05:40Z</dc:date>
    <item>
      <title>autoraid 12H disk array</title>
      <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669506#M4186</link>
      <description>Have an AutoRAID 12H with a mix of 4 GB and&lt;BR /&gt;18 GB drives.  Replaced two of the 4 GB drives with two&lt;BR /&gt;18 GB drives.  The bottom four drives were already&lt;BR /&gt;changed from 4 GB to 18 GB some months ago.&lt;BR /&gt;&lt;BR /&gt;Should there be any kind of performance&lt;BR /&gt;problem from doing this? I'm noticing that certain&lt;BR /&gt;LUNs are taking a big hit.  Also, an upgrade to the Oracle database was done this weekend, so&lt;BR /&gt;I am trying to see whether the hardware is causing&lt;BR /&gt;any of the problems.</description>
      <pubDate>Thu, 21 Feb 2002 18:09:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669506#M4186</guid>
      <dc:creator>james gould</dc:creator>
      <dc:date>2002-02-21T18:09:17Z</dc:date>
    </item>
    <item>
      <title>Re: autoraid 12H disk array</title>
      <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669507#M4187</link>
      <description>No, quite the reverse: &lt;BR /&gt;you should have more space available for RAID 1/0, giving you better performance.&lt;BR /&gt;&lt;BR /&gt;Having said that, DURING the migration of data from one disk to another (i.e. pull out one 4 GB drive, replace it with an 18 GB drive, and a rebuild occurs), the rebuild will take up a lot of the AutoRAID CPU and I/O.  If it really bothers you, you can lower the rebuild priority, but the rebuild will take longer in that case.  Then remove the final 4 GB drive and insert the 2nd 18 GB drive.  The same will happen, but this time the data will be moved into RAID 1/0.&lt;BR /&gt;When all is finished, things will be great!&lt;BR /&gt;(unless you use the extra space to create a LUN and fill it up with data...)&lt;BR /&gt;&lt;BR /&gt;Post up the output of&lt;BR /&gt;arraydsp -a &lt;ID&gt;&lt;BR /&gt;&lt;BR /&gt;where&lt;BR /&gt;arraydsp -i&lt;BR /&gt;returns the &lt;ID&gt; (serial number).&lt;BR /&gt;&lt;BR /&gt;Later,&lt;BR /&gt;Bill</description>
      <pubDate>Thu, 21 Feb 2002 18:29:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669507#M4187</guid>
      <dc:creator>Bill McNAMARA_1</dc:creator>
      <dc:date>2002-02-21T18:29:39Z</dc:date>
    </item>
    <item>
      <title>Re: autoraid 12H disk array</title>
      <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669508#M4188</link>
      <description>No, this should improve your performance, since you now have more unallocated space on the 12H. This leaves more room for RAID 1/0.&lt;BR /&gt;You will take a performance hit until the rebuild/balance completes.&lt;BR /&gt;&lt;BR /&gt;Bear in mind that performance tools will often mislead you about how busy a LUN is on an array. On the 12H, there is really no reason to divide a VG into more than 2 LUNs, with primary path X on LUN A and primary path Y on LUN B. You then stripe each LVOL in this VG across both LUNs. You could create this same VG with 8 LUNs, and things would APPEAR better because no single LUN would APPEAR to be a bottleneck. In reality, the actual throughput is no better with 8 LUNs than with 2, because the 12H distributes the I/O across all the available physical disks anyway.&lt;BR /&gt;&lt;BR /&gt;Regards, Clay</description>
      <pubDate>Thu, 21 Feb 2002 18:31:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669508#M4188</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2002-02-21T18:31:51Z</dc:date>
    </item>
    <item>
      <title>Re: autoraid 12H disk array</title>
      <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669509#M4189</link>
      <description>Thanks for the information; it looks like it's &lt;BR /&gt;a problem with Oracle, not the hardware.</description>
      <pubDate>Thu, 21 Feb 2002 19:05:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669509#M4189</guid>
      <dc:creator>james gould</dc:creator>
      <dc:date>2002-02-21T19:05:40Z</dc:date>
    </item>
    <item>
      <title>Re: autoraid 12H disk array</title>
      <link>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669510#M4190</link>
      <description>points? ;)</description>
      <pubDate>Fri, 22 Feb 2002 12:25:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/autoraid-12h-disk-array/m-p/2669510#M4190</guid>
      <dc:creator>Bill McNAMARA_1</dc:creator>
      <dc:date>2002-02-22T12:25:14Z</dc:date>
    </item>
  </channel>
</rss>