<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Oracle and LV in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318028#M876941</link>
    <description>&lt;BR /&gt;I'm with the others here in suggesting striping the disks together as a first choice, with #3 as the alternative. I would just use extent-based striping (with an /etc/lvmpvg file and lvcreate -L NNNNN -D y -s g -n lv.. vg..).&lt;BR /&gt;&lt;BR /&gt;The main reason for my reply is to ask why you would 'bother' with LVM for 1 PV -&amp;gt; VG -&amp;gt; 1 LV configurations. What value is LVM adding besides potentially nicer naming and more CPU overhead? Kindly explain why you would not simply create a filesystem on the PV device itself.&lt;BR /&gt;(Future growth? Software mirroring? ...)&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 29 Jun 2004 23:06:03 GMT</pubDate>
    <dc:creator>Hein van den Heuvel</dc:creator>
    <dc:date>2004-06-29T23:06:03Z</dc:date>
    <item>
      <title>Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318019#M876932</link>
      <description>Hi All, we have to migrate from a Symmetrix to a CLARiiON storage array and at the same time we have to modify the DB layout structure.&lt;BR /&gt;We must have three LVs on different spindles into which to move all the datafiles.&lt;BR /&gt;There are three different scenarios:&lt;BR /&gt;&lt;BR /&gt;1)&lt;BR /&gt;LV1=Index1+index2&lt;BR /&gt;LV2=Data1+Data2&lt;BR /&gt;LV3=system+redo+rbs+temp&lt;BR /&gt;2)&lt;BR /&gt;LV1=data1+index1&lt;BR /&gt;LV2=data2+index2&lt;BR /&gt;LV3=system+redo+rbs+temp&lt;BR /&gt;3)&lt;BR /&gt;LV1=data1+index2&lt;BR /&gt;LV2=data2+index1&lt;BR /&gt;LV3=system+redo+rbs+temp&lt;BR /&gt;&lt;BR /&gt;There are two types of data (data1+data2) that we often have to join.&lt;BR /&gt;We assume Oracle sorts in RAM.&lt;BR /&gt;The DB instance is a 'DSS' one.&lt;BR /&gt;Which configuration would you choose?&lt;BR /&gt;&lt;BR /&gt;I hope this question isn't off-topic...</description>
      <pubDate>Tue, 29 Jun 2004 08:07:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318019#M876932</guid>
      <dc:creator>Mauro Gatti</dc:creator>
      <dc:date>2004-06-29T08:07:57Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318020#M876933</link>
      <description>Hi Mauro,&lt;BR /&gt;I don't get the "must have three LVs on different spindles" part: how are you managing that with LVM, unless you put them (spindles=PVs?) in different VGs?&lt;BR /&gt;&lt;BR /&gt;My solution would be "Stripe!!!"&lt;BR /&gt;But alternatively I would choose scenario 3.&lt;BR /&gt;&lt;BR /&gt;All the best&lt;BR /&gt;Victor</description>
      <pubDate>Tue, 29 Jun 2004 08:20:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318020#M876933</guid>
      <dc:creator>Victor BERRIDGE</dc:creator>
      <dc:date>2004-06-29T08:20:29Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318021#M876934</link>
      <description>Seems like striping will get you more spindles.&lt;BR /&gt;&lt;BR /&gt;Oracle recommends RAID 1 or RAID 10 for data and indexes.&lt;BR /&gt;&lt;BR /&gt;Take that into account; this is an opportunity, not a problem.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 29 Jun 2004 08:30:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318021#M876934</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2004-06-29T08:30:08Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318022#M876935</link>
      <description>This is the LV structure:&lt;BR /&gt;&lt;BR /&gt;LV1=PV1&lt;BR /&gt;LV2=PV2&lt;BR /&gt;LV3=PV3&lt;BR /&gt;&lt;BR /&gt;PV1=LUN1=disk1+disk2+disk3+disk4+disk5 in RAID 5&lt;BR /&gt;PV2=LUN2=disk6+disk7+disk8+disk9+disk10 in RAID 5&lt;BR /&gt;PV3=LUN3=disk11+disk12+disk13+disk14+disk15 in RAID 5&lt;BR /&gt;&lt;BR /&gt;RAID (and striping) is managed by the storage array.</description>
      <pubDate>Tue, 29 Jun 2004 08:36:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318022#M876935</guid>
      <dc:creator>Mauro Gatti</dc:creator>
      <dc:date>2004-06-29T08:36:12Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318023#M876936</link>
      <description>I know this isn't a problem.&lt;BR /&gt;&lt;BR /&gt;EMC wrote in its white papers that RAID 5 on a CLARiiON based on 5 disks is very fast, so its performance is similar to RAID 1.</description>
      <pubDate>Tue, 29 Jun 2004 08:38:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318023#M876936</guid>
      <dc:creator>Mauro Gatti</dc:creator>
      <dc:date>2004-06-29T08:38:51Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318024#M876937</link>
      <description>I understand you are using a complete array per LV, is that so?&lt;BR /&gt;Of course RAID 5 done well has similar performance... more LUNs do improve performance.&lt;BR /&gt;Also, you haven't mentioned how they will be connected: 2 SCSI controllers? 4 SCSI controllers? 2 FC?&lt;BR /&gt;&lt;BR /&gt;Depending on your answer I can tell you what I would suggest if you are looking for performance; otherwise go ahead...&lt;BR /&gt;&lt;BR /&gt;All the best&lt;BR /&gt;Victor</description>
      <pubDate>Tue, 29 Jun 2004 08:54:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318024#M876937</guid>
      <dc:creator>Victor BERRIDGE</dc:creator>
      <dc:date>2004-06-29T08:54:05Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318025#M876938</link>
      <description>I recommend again reading this document from Oracle (needs creation of an account on MetaLink). It explicitly states Oracle's position on the subject. If you don't want to read the whole document, have a look at the table at the end.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&amp;amp;p_id=30286.1" target="_blank"&gt;http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&amp;amp;p_id=30286.1&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Personally, I would go for the third possibility, and I think having LV3 on RAID 1 would be better.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Jun 2004 08:54:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318025#M876938</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-06-29T08:54:16Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318026#M876939</link>
      <description>Victor, I've got 2 FC with PowerPath.</description>
      <pubDate>Tue, 29 Jun 2004 09:09:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318026#M876939</guid>
      <dc:creator>Mauro Gatti</dc:creator>
      <dc:date>2004-06-29T09:09:01Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318027#M876940</link>
      <description>Mauro,&lt;BR /&gt;I understand better now: since you have PowerPath you hope to dynamically balance the load across the 2 HBAs, so in that case there is (theoretically, that is...) no need for striping.&lt;BR /&gt;I must say that my experience has shown that the more LUNs you have, the better your throughput (but that doesn't mean you should create tons of small ones...). The machine here with the best throughput is configured with LUNs in the same VG coming from different arrays.&lt;BR /&gt;So even if EMC says otherwise... I would create 3 LUNs per array and pick one from each to put in a volume group; you would end up with 3 VGs, each having 3 PVs from 3 different arrays.&lt;BR /&gt;It is then up to you whether to create just one FS in the VG, or one for the data and one for the indexes (my choice).&lt;BR /&gt;&lt;BR /&gt;Oh, I have some stats:&lt;BR /&gt;On an HDS 9980V, two partitions from an IBM p650 access the bay, both holding the same Oracle database (one is for test) of about 200 GB. The first box has it in 1 VG/1 LV/1 PV;&lt;BR /&gt;the other has 3 VGs and a total of 12 PVs.&lt;BR /&gt;The same (big) query takes 40 min on the second&lt;BR /&gt;and more than 2 hours on the first. They both have the same HBAs, go to the same switch, and have dynamic load balancing. Unfortunately the first one is production, but I'm pleased I had nothing to do with that configuration...&lt;BR /&gt;&lt;BR /&gt;All the best&lt;BR /&gt;Victor</description>
      <pubDate>Tue, 29 Jun 2004 09:33:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318027#M876940</guid>
      <dc:creator>Victor BERRIDGE</dc:creator>
      <dc:date>2004-06-29T09:33:54Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318028#M876941</link>
      <description>&lt;BR /&gt;I'm with the others here in suggesting striping the disks together as a first choice, with #3 as the alternative. I would just use extent-based striping (with an /etc/lvmpvg file and lvcreate -L NNNNN -D y -s g -n lv.. vg..).&lt;BR /&gt;&lt;BR /&gt;The main reason for my reply is to ask why you would 'bother' with LVM for 1 PV -&amp;gt; VG -&amp;gt; 1 LV configurations. What value is LVM adding besides potentially nicer naming and more CPU overhead? Kindly explain why you would not simply create a filesystem on the PV device itself.&lt;BR /&gt;(Future growth? Software mirroring? ...)&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Jun 2004 23:06:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318028#M876941</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-06-29T23:06:03Z</dc:date>
    </item>
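    <!--
    A sketch of the extent-based striping Hein describes above. The VG and device
    names are hypothetical; HP-UX LVM commands, shown only as an illustration of
    the "-D y -s g" form from his post:

```shell
# /etc/lvmpvg defines physical volume groups; "lvcreate -s g" then distributes
# extents round-robin across those groups. Hypothetical contents for a VG
# "vgora" with two PVs:
#
#   VG /dev/vgora
#   PVG pvg0
#   /dev/dsk/c0t1d0
#   PVG pvg1
#   /dev/dsk/c1t1d0
#
# Create a 4096 MB LV whose extents are distributed across the PV groups:
lvcreate -L 4096 -D y -s g -n lvdata vgora
```
    -->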
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318029#M876942</link>
      <description>You would have a better chance of disk I/O balance by having a large number of LUNs and striping across the whole lot.&lt;BR /&gt;&lt;BR /&gt;E.g. for 32 LUNs using a 64 KB stripe size:&lt;BR /&gt;&lt;BR /&gt;lvcreate -n lvname -i 32 -I 64 -l size /dev/myvg&lt;BR /&gt;&lt;BR /&gt;Then also have a look at the scsictl queue depth. The default is 8, but I would suggest 16.</description>
      <pubDate>Tue, 29 Jun 2004 23:49:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318029#M876942</guid>
      <dc:creator>Michael Tully</dc:creator>
      <dc:date>2004-06-29T23:49:20Z</dc:date>
    </item>
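    <!--
    A sketch of the LVM striping and queue-depth tuning Michael suggests above.
    The LV, VG, size, and device names are hypothetical; HP-UX commands, shown
    only as an illustration:

```shell
# Stripe one LV across 32 LUNs with a 64 KB stripe size
# (-i = number of stripes/PVs, -I = stripe size in KB, -L = size in MB):
lvcreate -n lvdata -i 32 -I 64 -L 8192 /dev/myvg

# Display current device attributes, then raise the SCSI queue depth
# on one of the LUNs (default is 8):
scsictl -a /dev/rdsk/c4t0d1
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1
```
    -->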
    <item>
      <title>Re: Oracle and LV</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318030#M876943</link>
      <description>RAID 5 is "so fast" on the CLARiiON only by comparison to RAID S on the Symmetrix. Cache can mask the write penalty, but if you are doing enough writes, the cache runs out of space and you are left with the back-end performance of the array, which is mostly dictated by the number of spindles. Benchmarks are always done with many spindles, so beware if the number of disks in the RAID group is less than what you had on the Symmetrix for the equivalent data.</description>
      <pubDate>Wed, 30 Jun 2004 17:24:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-and-lv/m-p/3318030#M876943</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2004-06-30T17:24:37Z</dc:date>
    </item>
  </channel>
</rss>

