<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic performance i/o in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619735#M816065</link>
    <description>I have a question. The machine is an HP rp3410 with a SAM storage unit, and the disks are configured as raw devices. The question is: which layout handles the load on the controllers better when there are several volumes and 10 internal disks, distributing access by disk or by volume? I am sending the distribution below; I would like your opinion on the best layout so that I/O is as fast as possible.&lt;BR /&gt;&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr012 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr013 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr014 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr015 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr016 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr017 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr018 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr019 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr020 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr021 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr022 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr023 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr024 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr025 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr026 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr027 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr028 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr029 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr030 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr031 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr032 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr033 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr034 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr001 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr002 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr003 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr004 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr005 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr006 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr007 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr008 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr009 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr010 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr011 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr012 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr013 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr014 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr015 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr016 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr017 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr018 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr019 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr020 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr022 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr023 1996&lt;BR /&gt;</description>
    <pubDate>Tue, 06 Sep 2005 10:57:35 GMT</pubDate>
    <dc:creator>Fredy Correa</dc:creator>
    <dc:date>2005-09-06T10:57:35Z</dc:date>
    <item>
      <title>performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619735#M816065</link>
      <description>I have a question. The machine is an HP rp3410 with a SAM storage unit, and the disks are configured as raw devices. The question is: which layout handles the load on the controllers better when there are several volumes and 10 internal disks, distributing access by disk or by volume? I am sending the distribution below; I would like your opinion on the best layout so that I/O is as fast as possible.&lt;BR /&gt;&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr012 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr013 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr014 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr015 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr016 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr017 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr018 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr019 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr020 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr021 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr022 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr023 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr024 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr025 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr026 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr027 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr028 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr029 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr030 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr031 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr032 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr033 1996&lt;BR /&gt;DATOS1 /dev/vgcx001/rlvdwmpr034 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr001 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr002 1996&lt;BR /&gt;DATOS1 /dev/vgcx002/rlvdwmpr003 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr004 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr005 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr006 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr007 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr008 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr009 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr010 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr011 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr012 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr013 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr014 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr015 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr016 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr017 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr018 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr019 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr020 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr022 1996&lt;BR /&gt;DATOS2 /dev/vgcx002/rlvdwmpr023 1996&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Sep 2005 10:57:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619735#M816065</guid>
      <dc:creator>Fredy Correa</dc:creator>
      <dc:date>2005-09-06T10:57:35Z</dc:date>
    </item>
    <item>
      <title>Re: performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619736#M816066</link>
      <description>I know this is not the answer you are looking for, but when you talk about databases spanning 10 or more disk volumes and worry about disk performance in the same sentence as the server model rp3410, the server itself makes the question largely moot. In my opinion, if you are talking about a big database with disk I/O performance issues, you need to get rid of the desktop-workstation-replacement-class rp3410 and move up to something like an rp4440. After that, you can worry about disk I/O performance.</description>
      <pubDate>Tue, 06 Sep 2005 13:11:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619736#M816066</guid>
      <dc:creator>Mel Burslan</dc:creator>
      <dc:date>2005-09-06T13:11:02Z</dc:date>
    </item>
    <item>
      <title>Re: performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619737#M816067</link>
      <description>I don't think the machine in question is right for large database service.&lt;BR /&gt;&lt;BR /&gt;That being the case, RAID 1 is recommended for data, indexes, and hot logs that are written to heavily.&lt;BR /&gt;&lt;BR /&gt;You need data protection, and RAID 1 will provide decent performance as long as the database is not too large.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 06 Sep 2005 13:21:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619737#M816067</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2005-09-06T13:21:31Z</dc:date>
    </item>
    <item>
      <title>Re: performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619738#M816068</link>
      <description>I understand that this model is not recommended for large databases. The question is: at the UNIX level, for disk access, is it better to use raw logical volumes or raw disks?</description>
      <pubDate>Tue, 06 Sep 2005 14:15:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619738#M816068</guid>
      <dc:creator>Fredy Correa</dc:creator>
      <dc:date>2005-09-06T14:15:58Z</dc:date>
    </item>
    <item>
      <title>Re: performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619739#M816069</link>
      <description>The overhead of LVM is so small as to be negligible. All that happens is one extra level of I/O indirection in the device switch table, so that logical block 5000, for example, is translated into block 645 of /dev/rdsk/cXtYdZ. In theory, if you could balance the I/O perfectly, raw disk would be better, but the real-world differences are extremely difficult to measure, and balancing is much easier and more convenient with logical volumes.&lt;BR /&gt;&lt;BR /&gt;I haven't used actual raw disk I/O for databases in years; instead I have used raw logical volumes.&lt;BR /&gt;&lt;BR /&gt;Even more surprising (unless you have multiple hosts accessing the data, e.g. Oracle RAC), you might find that fully cooked I/O gives better performance. Also, don't overlook that you can use the OnlineJFS vxfs mount options convosync=direct,mincache=direct to completely bypass the buffer cache while still using regular files.&lt;BR /&gt;&lt;BR /&gt;You really need to do some measurement with your hardware and database, because there is no "one size fits all" answer.</description>
      <pubDate>Tue, 06 Sep 2005 15:03:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619739#M816069</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2005-09-06T15:03:42Z</dc:date>
    </item>
    <item>
      <title>Re: performance i/o</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619740#M816070</link>
      <description>In response to your last posting:&lt;BR /&gt;If I were choosing between raw disks and LVM, both without file systems, I'd definitely choose LVM. You keep the speed advantage of raw I/O while also gaining the flexibility of LVM: you can mirror disks, stripe for speed, use LVM commands to move or duplicate data, and so on. I'm pretty sure that LVM itself imposes very little overhead on the system, especially when it isn't maintaining a mirror. Also, consider that with raw disks you'd have to put all of your data into only 10 tablespaces (one per disk). With LVM you'd have the ability to create new tablespaces as needed or wanted simply by carving a new lvol out of an existing VG.&lt;BR /&gt;&lt;BR /&gt;Anyway, I vote for using LVM.</description>
      <pubDate>Tue, 06 Sep 2005 17:01:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o/m-p/3619740#M816070</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-09-06T17:01:58Z</dc:date>
    </item>
  </channel>
</rss>