<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Any ideas about improving I/O performance? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542419#M875076</link>
    <description>Hi Phil.&lt;BR /&gt;&lt;BR /&gt;First, thank you.&lt;BR /&gt;&lt;BR /&gt;Second, I looked for the SAME methodology on the Oracle site but couldn't find anything (what a "same" ...). Please send me the whitepaper if you can.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
    <pubDate>Fri, 29 Jun 2001 06:21:48 GMT</pubDate>
    <dc:creator>Lukas Grijander</dc:creator>
    <dc:date>2001-06-29T06:21:48Z</dc:date>
    <item>
      <title>Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542409#M875066</link>
      <description>I have an Oracle DB across 4 FC60 LUNs (data, idx, rbs, etc., balanced across all the LUNs).&lt;BR /&gt;&lt;BR /&gt;Each LUN has 4 disks in RAID-5, but I'm planning to change to RAID-0/1 (as soon as possible).&lt;BR /&gt;&lt;BR /&gt;Any ideas about the FC60 stripe size of the LUNs? It is currently 8 KB.&lt;BR /&gt;&lt;BR /&gt;Any ideas about the filesystem block size? It is currently 8 KB.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.</description>
      <pubDate>Tue, 19 Jun 2001 16:07:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542409#M875066</guid>
      <dc:creator>Lukas Grijander</dc:creator>
      <dc:date>2001-06-19T16:07:36Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542410#M875067</link>
      <description>Hi Rafael,&lt;BR /&gt;&lt;BR /&gt;according to the documentation, "The optimum stripe segment size is the smallest size that will rarely force I/Os to a second stripe." Suppose you use 4 disks to create RAID-0/1 striping, i.e. 2 disks for the original data and 2 disks to hold the mirror. If all I/Os are 8 KB, then a stripe size of 4 KB would be the optimum. But if the vxfs driver groups multiple 8 KB I/Os into one 64 KB I/O, then the optimum would be 32 KB, because all stripes needed for the I/O request can then be read (or written) simultaneously. Please note that the largest block size vxfs supports is 8 KB.&lt;BR /&gt;&lt;BR /&gt;HTH, cu l8r, Edgar.</description>
      <pubDate>Wed, 20 Jun 2001 07:59:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542410#M875067</guid>
      <dc:creator>Edgar Matzinger</dc:creator>
      <dc:date>2001-06-20T07:59:53Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542411#M875068</link>
      <description>Hi there.&lt;BR /&gt;You talk about LUNs. If you have several controller interfaces, spread the LUNs across the different interfaces as well. Otherwise your controller will be the bottleneck.&lt;BR /&gt;Rgds&lt;BR /&gt;Alexander M. Ermes&lt;BR /&gt;</description>
      <pubDate>Wed, 20 Jun 2001 08:42:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542411#M875068</guid>
      <dc:creator>Alexander M. Ermes</dc:creator>
      <dc:date>2001-06-20T08:42:31Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542412#M875069</link>
      <description>Hi Alexander.&lt;BR /&gt;&lt;BR /&gt;I use 2 controllers.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Wed, 20 Jun 2001 10:39:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542412#M875069</guid>
      <dc:creator>Lukas Grijander</dc:creator>
      <dc:date>2001-06-20T10:39:09Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542413#M875070</link>
      <description>Hi Edgar.&lt;BR /&gt;&lt;BR /&gt;Should I understand that ...?&lt;BR /&gt;&lt;BR /&gt;1.- The FC60 stripe size should be block_size / (number of data disks).&lt;BR /&gt;&lt;BR /&gt;2.- vxfs has an 8 KB block size. I think the OnlineJFS block size may be larger than 8 KB; I read somewhere that "vxfs (with OnlineJFS) likes a block size of 64 KB".&lt;BR /&gt;&lt;BR /&gt;In any case, I don't have OnlineJFS.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;</description>
      <pubDate>Wed, 20 Jun 2001 10:49:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542413#M875070</guid>
      <dc:creator>Lukas Grijander</dc:creator>
      <dc:date>2001-06-20T10:49:27Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542414#M875071</link>
      <description>Use two controllers to access the LUNs, i.e. don't put all data I/O traffic down just one FC60 controller.&lt;BR /&gt;When creating a LUN, make sure that you create it across enclosures (or split buses).&lt;BR /&gt;&lt;BR /&gt;Split-bus mode will be faster, with only 5 disks/bus.&lt;BR /&gt;Full-bus mode will have 10 disks/bus.&lt;BR /&gt;&lt;BR /&gt;What does amdsp -a show?&lt;BR /&gt;&lt;BR /&gt;Is the FC60 the only thing on the FC loop?&lt;BR /&gt;&lt;BR /&gt;Later,&lt;BR /&gt;Bill</description>
      <pubDate>Thu, 21 Jun 2001 09:43:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542414#M875071</guid>
      <dc:creator>Bill McNAMARA_1</dc:creator>
      <dc:date>2001-06-21T09:43:20Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542415#M875072</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;If you want to improve I/O performance, you should purchase OnlineJFS.&lt;BR /&gt;With this software, you can mount the Oracle filesystems with the "-o mincache=direct" option.&lt;BR /&gt;This gives performance comparable to raw-device datafiles.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Patrice.</description>
      <pubDate>Thu, 21 Jun 2001 13:24:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542415#M875072</guid>
      <dc:creator>MARTINACHE</dc:creator>
      <dc:date>2001-06-21T13:24:07Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542416#M875073</link>
      <description>Thanks Patrice.&lt;BR /&gt;&lt;BR /&gt;I'm not sure I have understood everything correctly, sorry, so let me give you more information.&lt;BR /&gt;&lt;BR /&gt;There are 5 HP9000 servers attached to 1 FC60.&lt;BR /&gt;&lt;BR /&gt;The FC60 holds 52 disks, 16 of which are used by the machine in question (Oracle, etc.).&lt;BR /&gt;&lt;BR /&gt;These 16 are spread across the 6 SCSI channels:&lt;BR /&gt;SCSI 1 - 2 disks&lt;BR /&gt;SCSI 2 - 2 disks&lt;BR /&gt;SCSI 3 - 4 disks&lt;BR /&gt;SCSI 4 - 4 disks&lt;BR /&gt;SCSI 5 - 2 disks&lt;BR /&gt;SCSI 6 - 2 disks&lt;BR /&gt;&lt;BR /&gt;Every LUN was created across different SCSI channels.&lt;BR /&gt;&lt;BR /&gt;Is that enough, or do you need more information?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Thu, 21 Jun 2001 15:16:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542416#M875073</guid>
      <dc:creator>Lukas Grijander</dc:creator>
      <dc:date>2001-06-21T15:16:15Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542417#M875074</link>
      <description>Hi there,&lt;BR /&gt;&lt;BR /&gt;Oracle has a methodology called "SAME" (Stripe and Mirror Everything). You may be able to find the whitepaper on this (it was presented at OpenWorld last November); if not, let me know and I'll post a copy.&lt;BR /&gt;&lt;BR /&gt;Basically, you should create a single LUN which is striped and mirrored; in your case, it will be 8 disks plus 8 disks. To keep the controllers balanced, mirror the disks on channels 1-3 to the disks on channels 4-6.&lt;BR /&gt;&lt;BR /&gt;You asked about stripe depth. The SAME methodology suggests as large a stripe depth as possible (up to 1 MB) in order to optimise multiblock operations.&lt;BR /&gt;&lt;BR /&gt;I would also suggest that if you archive, you create two LUNs: one for archives and one for the rest of the database. It would be sufficient to allocate 2 of the 16 disks to your archives (I'd mirror them); this depends, of course, on how big your archive area needs to be compared with your disk size.&lt;BR /&gt;&lt;BR /&gt;This configuration should allow for good performance (90-95% of optimal in most situations). In order to squeeze the last 5-10% out of the system, you will need to I/O-profile your application, but you'll be limited by only having 16 disks.&lt;BR /&gt;&lt;BR /&gt;You can make things a lot more complicated and try to fine-tune the performance, but unfortunately, as the nature of your application usage changes over time, it is likely your disk configuration would need to change as well.&lt;BR /&gt;&lt;BR /&gt;Using the SAME methodology eliminates a lot of the administrative nonsense that you would otherwise have to go through to achieve good performance.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;:-Phil</description>
      <pubDate>Thu, 28 Jun 2001 19:43:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542417#M875074</guid>
      <dc:creator>Phil Miesle</dc:creator>
      <dc:date>2001-06-28T19:43:31Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542418#M875075</link>
      <description>We've recently been doing the same tests on an FC60 with 3 SC10s for an Oracle DB.&lt;BR /&gt;&lt;BR /&gt;After much testing, the fastest throughput we got was 166 MB/s.&lt;BR /&gt;&lt;BR /&gt;This was using RAID-1 on the FC60 (not RAID-0/1) with a 4 KB stripe size, and using LVM striping with a 64 KB block size. Both stripe sizes are the optimum. The FC60 manual quotes 170 MB/s as the max throughput, so we're getting very close!&lt;BR /&gt;&lt;BR /&gt;Once you set up your lvols this way, test them with:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/vgXX/rlvolYY of=/dev/null bs=1024k count=50&lt;BR /&gt;&lt;BR /&gt;and see if you can get the time down to 0.3 s (50 MB / 0.3 s = 166 MB/s). For some as yet unknown reason, RAID-0/1 was a bit slower (just over 120 MB/s).&lt;BR /&gt;</description>
      <pubDate>Thu, 28 Jun 2001 20:36:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542418#M875075</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2001-06-28T20:36:18Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542419#M875076</link>
      <description>Hi Phil.&lt;BR /&gt;&lt;BR /&gt;First, thank you.&lt;BR /&gt;&lt;BR /&gt;Second, I looked for the SAME methodology on the Oracle site but couldn't find anything (what a "same" ...). Please send me the whitepaper if you can.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Fri, 29 Jun 2001 06:21:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542419#M875076</guid>
      <dc:creator>Lukas Grijander</dc:creator>
      <dc:date>2001-06-29T06:21:48Z</dc:date>
    </item>
    <item>
      <title>Re: Any ideas about improving I/O performance?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542420#M875077</link>
      <description>Rafael (et al.),&lt;BR /&gt;&lt;BR /&gt;Attached is the SAME methodology paper presented at OpenWorld. I hope you find it useful!&lt;BR /&gt;&lt;BR /&gt;:-Phil</description>
      <pubDate>Fri, 29 Jun 2001 08:08:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-idea-about-improve-i-o-performance/m-p/2542420#M875077</guid>
      <dc:creator>Phil Miesle</dc:creator>
      <dc:date>2001-06-29T08:08:22Z</dc:date>
    </item>
  </channel>
</rss>

