<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic 12H performance in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619625#M3413</link>
    <description>Hi&lt;BR /&gt;I have a performance problem on the 12H. I used arraydsp -r for a recommendation; it recommended adding a disk. After doing so, it now reports "no recommendation, performance is optimal", but the performance problem persists. Using &lt;BR /&gt;arraydsp -m to get performance metrics, the SCSI queue depth is very high (98, 100, ...).&lt;BR /&gt;Do you have any idea how to solve this problem?&lt;BR /&gt;The LUNs are used as raw devices for an Oracle database.&lt;BR /&gt;Regards</description>
    <pubDate>Sun, 25 Nov 2001 11:05:12 GMT</pubDate>
    <dc:creator>Farid Feknous</dc:creator>
    <dc:date>2001-11-25T11:05:12Z</dc:date>
    <item>
      <title>12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619625#M3413</link>
      <description>Hi&lt;BR /&gt;I have a performance problem on the 12H. I used arraydsp -r for a recommendation; it recommended adding a disk. After doing so, it now reports "no recommendation, performance is optimal", but the performance problem persists. Using &lt;BR /&gt;arraydsp -m to get performance metrics, the SCSI queue depth is very high (98, 100, ...).&lt;BR /&gt;Do you have any idea how to solve this problem?&lt;BR /&gt;The LUNs are used as raw devices for an Oracle database.&lt;BR /&gt;Regards</description>
      <pubDate>Sun, 25 Nov 2001 11:05:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619625#M3413</guid>
      <dc:creator>Farid Feknous</dc:creator>
      <dc:date>2001-11-25T11:05:12Z</dc:date>
    </item>
    <item>
      <title>Re: 12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619626#M3414</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;AutoRAID works in both RAID 0/1 and RAID 5. RAID 0/1 is disk striping with mirroring, and RAID 5 is disk striping with distributed parity.&lt;BR /&gt;&lt;BR /&gt;AutoRAID migrates data between RAID 0/1 and RAID 5 according to frequency of use. RAID 0/1 stores the most frequently used data and RAID 5 the least frequently used, since RAID 0/1 is much faster but uses more space, whereas RAID 5 is slower but uses space more efficiently.&lt;BR /&gt;&lt;BR /&gt;Once utilization reaches a certain level, the array continuously migrates data back and forth between RAID 0/1 and RAID 5, which reduces efficiency. It is recommended that you leave 50% of the LUN empty so that plenty of space remains available for RAID 0/1.&lt;BR /&gt;&lt;BR /&gt;In your case, I think you have to add an extra hard disk, i.e. two hard disks altogether. Also, if you add one hard disk of higher capacity than the others, the array will only allocate as much of it as the size of the other disks. You have to add two such disks in order to use the full capacity of the larger disks.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Vince</description>
      <pubDate>Mon, 26 Nov 2001 14:02:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619626#M3414</guid>
      <dc:creator>Vincent Farrugia</dc:creator>
      <dc:date>2001-11-26T14:02:05Z</dc:date>
    </item>
    <item>
      <title>Re: 12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619627#M3415</link>
      <description>Hi Vincent,&lt;BR /&gt;&lt;BR /&gt;Do you mean 50% of the LUN, or 50% of the entire disk array?&lt;BR /&gt;&lt;BR /&gt;Regards</description>
      <pubDate>Tue, 27 Nov 2001 04:59:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619627#M3415</guid>
      <dc:creator>Farid Feknous</dc:creator>
      <dc:date>2001-11-27T04:59:16Z</dc:date>
    </item>
    <item>
      <title>Re: 12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619628#M3416</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;It has to be 50% of the total space of the disk array, so that half the space is free for mirroring in RAID 0/1.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Vince</description>
      <pubDate>Tue, 27 Nov 2001 15:01:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619628#M3416</guid>
      <dc:creator>Vincent Farrugia</dc:creator>
      <dc:date>2001-11-27T15:01:06Z</dc:date>
    </item>
    <item>
      <title>Re: 12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619629#M3417</link>
      <description>The 12H uses unallocated space for disk mirroring. I haven't seen recommended percentages of unallocated space, but I like to keep my LUNs at a workable size without excessive free space. It is also recommended to create more, smaller LUNs and combine them into one VG. Be careful, because the maximum number of LUNs on a 12H is 7.&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Tue, 27 Nov 2001 17:05:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619629#M3417</guid>
      <dc:creator>Steve Labar</dc:creator>
      <dc:date>2001-11-27T17:05:16Z</dc:date>
    </item>
    <item>
      <title>Re: 12H performance</title>
      <link>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619630#M3418</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;If you really want to make your 12H perform, there are two criteria:&lt;BR /&gt;&lt;BR /&gt;1) Each volume group should be comprised of 2 LUNs of equal size. LUN A should have primary path Controller X (alternate Y), and LUN B should have primary path Controller Y (alternate X). You then stripe each logical volume in the volume group across both LUNs in 64k stripes.&lt;BR /&gt;If all of your Oracle data is going on the 12H, it is perfectly fine to put everything in one volume group with multiple LUNs. The important point is that you are then fully utilizing both external SCSI paths and thus all four internal paths.&lt;BR /&gt;&lt;BR /&gt;2) Allocate no more than about 60% of the array as LUNs. This keeps the AutoRAID in RAID 10 all the time.&lt;BR /&gt;&lt;BR /&gt;The number of LUNs (other than as outlined above) has no impact on performance, and actually 8 LUNs (0-7) are allowed. If you have OnlineJFS, you can use the mount options convosync=direct,mincache=direct,delaylog,nodatainlog,suid,rw to achieve performance results which are indistinguishable from raw disk, with the convenience of conventional files. Those options bypass the UNIX file buffers just like raw I/O. I would use those options for data and indices, and use the options delaylog,suid,rw,nodatainlog for archive and redo logs. If all of this is going to an array, you can actually do this with just two filesystems. DBAs panic, but there is no need; the 12H distributes the data across all the disks.&lt;BR /&gt;</description>
      <pubDate>Tue, 27 Nov 2001 21:11:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/12h-performance/m-p/2619630#M3418</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2001-11-27T21:11:22Z</dc:date>
    </item>
  </channel>
</rss>

