<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3 in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192563#M650217</link>
    <description>Well, LVM is a must nowadays, especially in a 24x7 mission-critical environment. And since we use LVM, I see an I/O sequence (from the OS perspective) against a single LUN (translated into a PV) as sequential; with multiple LUNs (multiple PVs in a VG) I see it as parallel (especially if the LV is striped). &lt;BR /&gt;Am I right, or is it just my imagination? &lt;BR /&gt;&lt;BR /&gt;Thank you.&lt;BR /&gt;L.</description>
    <pubDate>Thu, 13 Aug 2009 10:21:17 GMT</pubDate>
    <dc:creator>Liviu I.</dc:creator>
    <dc:date>2009-08-13T10:21:17Z</dc:date>
    <item>
      <title>Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192558#M650212</link>
      <description>Hello, &lt;BR /&gt;&lt;BR /&gt;Considering an EVA 8100 (or any EVA with active-active capability) and an HP-UX 11iv3 server, what would be the best design principle to maximize storage access performance (IOPS and bandwidth): 1 vdisk of 500GB (LVM: 1 PV in 1 VG with single or multiple LVs) or 5 vdisks of 100GB (LVM: 5 PVs in 1 VG with single or multiple LVs)? &lt;BR /&gt;&lt;BR /&gt;I think that since the server has 2 dual-port HBAs (dual-fabric SAN), the OS supports native multipathing, and the storage supports active-active, the best idea is to exploit parallelism at every level given multiple I/O operations, so I would adopt the latter ... &lt;BR /&gt;&lt;BR /&gt;I would appreciate any opinion/suggestion and possibly documentation.&lt;BR /&gt;&lt;BR /&gt;Thank you&lt;BR /&gt;L.</description>
      <pubDate>Wed, 12 Aug 2009 05:42:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192558#M650212</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-12T05:42:49Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192559#M650213</link>
      <description>Hi,&lt;BR /&gt;I would use multiple paths and disks to get the benefits of LVM striping and of using both controllers/caches.&lt;BR /&gt;&lt;BR /&gt;Remember that the EVA uses multiples of 8 for expansion, and the good old binary system.&lt;BR /&gt;&lt;BR /&gt;For example, 4 paths and 8 disks of 64GB sounds nice.&lt;BR /&gt;&lt;BR /&gt;Some hands-on material for the EVA:&lt;BR /&gt;&lt;A href="http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-2787ENW.pdf" target="_blank"&gt;http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-2787ENW.pdf&lt;/A&gt;</description>
      <pubDate>Wed, 12 Aug 2009 06:22:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192559#M650213</guid>
      <dc:creator>Stephan._1</dc:creator>
      <dc:date>2009-08-12T06:22:46Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192560#M650214</link>
      <description>Liviu,&lt;BR /&gt;&lt;BR /&gt;If said disk is to be used under ASM in an Oracle environment, I would suggest going with, say, 5x100GB vdisks, with path preference alternating between the A and B controllers. That way you fully harness the bandwidth to your EVA and the dual controllers.&lt;BR /&gt;&lt;BR /&gt;But you can also do a comparative performance test, since it is quite EASY to collapse and re-carve vdisks (LUNs) on an EVA.&lt;BR /&gt;&lt;BR /&gt;I suggest you use Oracle's ORION tool, which is very easy to use and simulates comparative I/O and even the striping that happens in ASM. The tool is at: &lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.oracle.com/technology/software/tech/orion/index.html" target="_blank"&gt;http://www.oracle.com/technology/software/tech/orion/index.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Let us know your results.&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Aug 2009 17:38:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192560#M650214</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2009-08-12T17:38:47Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192561#M650215</link>
      <description>Thank you for your ideas. &lt;BR /&gt;Well, unfortunately Orion testing is not an option so far, since I am talking about a production environment (and the latest tests I ran, with IOmeter, left the storage unresponsive). ASM is also not in use now. But I certainly appreciate the idea. &lt;BR /&gt;After reading the HP whitepaper suggested above, I drew a conclusion, and I would appreciate any comments on it. Page 22 (controller balancing influences performance): I would split the disk group into 5 vdisks (LUNs) in order to have flexibility in balancing the workload. But the following phrase can upend all your plans: &lt;BR /&gt;&lt;BR /&gt;"HP Continuous Access EVA requires that all LUNs within a DR group be owned by the same controller. Load balancing is performed at the DR group level and not the individual LUN. Additional configuration information can be found in the HP StorageWorks Continuous Access EVA implementation guide." &lt;BR /&gt;&lt;BR /&gt;Thus the flexibility is reduced when using Continuous Access. &lt;BR /&gt;As an example: if one has 2 applications, each needing 500GB of space, and a DR group for each, then it makes no sense to split each 500GB into 5 LUNs, since all of them will be owned by the same controller. &lt;BR /&gt;Another note states that there is no performance improvement from splitting on the same controller: &lt;BR /&gt;&lt;BR /&gt;"Striping LUNs within a disk group on the same controller provides no additional performance value. The EVA automatically stripes each LUN across all disks in a disk group."&lt;BR /&gt;&lt;BR /&gt;Since the discussion has somewhat turned into a storage design issue (well, the two are very much related, and I'd be glad to have members here with experience in both domains), I would like to ask what you think about my question from the LVM point of view. I am referring to your experience in balancing filesystem access for performance. Logical volumes striped or not? &lt;BR /&gt;&lt;BR /&gt;Thank you. &lt;BR /&gt;L. &lt;BR /&gt;</description>
      <pubDate>Thu, 13 Aug 2009 07:08:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192561#M650215</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-13T07:08:19Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192562#M650216</link>
      <description>Hi,&lt;BR /&gt;in my opinion you will only get a final answer by testing and comparing the results.&lt;BR /&gt;&lt;BR /&gt;Following the whitepaper, splitting and striping doesn't make sense, so the only reason to use LVM would be the flexibility if you need to change things later (increase, ...).&lt;BR /&gt;&lt;BR /&gt;In our environment we use EVAs as described above, even if, according to the whitepaper, we get no benefit from it.&lt;BR /&gt;&lt;BR /&gt;If you look at &lt;A href="http://communities.vmware.com/thread/73745" target="_blank"&gt;http://communities.vmware.com/thread/73745&lt;/A&gt; - these guys tested a lot of EVAs with IOmeter without any problems, so perhaps give it a second try.</description>
      <pubDate>Thu, 13 Aug 2009 09:22:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192562#M650216</guid>
      <dc:creator>Stephan._1</dc:creator>
      <dc:date>2009-08-13T09:22:06Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192563#M650217</link>
      <description>Well, LVM is a must nowadays, especially in a 24x7 mission-critical environment. And since we use LVM, I see an I/O sequence (from the OS perspective) against a single LUN (translated into a PV) as sequential; with multiple LUNs (multiple PVs in a VG) I see it as parallel (especially if the LV is striped). &lt;BR /&gt;Am I right, or is it just my imagination? &lt;BR /&gt;&lt;BR /&gt;Thank you.&lt;BR /&gt;L.</description>
      <pubDate>Thu, 13 Aug 2009 10:21:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192563#M650217</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-13T10:21:17Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192564#M650218</link>
      <description>My experience is limited to HP-UX 11i v2; I have no v3 running here yet.&lt;BR /&gt;I also use LVM, multipathing, mirroring, ... and noticed that HP-UX has an I/O queue per LUN.&lt;BR /&gt;&lt;BR /&gt;So, if you have 1 big LUN of 500GB, all I/O is done sequentially. If you have 5 LUNs of 100GB, 5 I/Os can be done in parallel. So, in theory, you can flush your I/O to the EVA's NVRAM 5 times faster with 5 LUNs than with 1 LUN, provided that your I/O card is not a bottleneck and the EVA's disks can keep up with the throughput.&lt;BR /&gt;Reading is the same: 5 reads can be done in parallel, but at the bottom line the EVA has to get the data from the same set of disks, so if the EVA itself is heavily loaded, it probably won't make any difference.</description>
      <pubDate>Wed, 19 Aug 2009 05:45:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192564#M650218</guid>
      <dc:creator>Wim Rombauts</dc:creator>
      <dc:date>2009-08-19T05:45:37Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192565#M650219</link>
      <description>Thank you for your reply; you confirm what I thought. I can conclude that it's better to use parallelism as much as possible: in the worst case, the bottleneck will be somewhere else on the SAN, EVA, etc. But from the point of view of HP-UX admin design, it should be the best architecture one can achieve. &lt;BR /&gt;Liviu.</description>
      <pubDate>Wed, 19 Aug 2009 06:22:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192565#M650219</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-19T06:22:08Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192566#M650220</link>
      <description>L - the following may provide ye with good reading.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf" target="_blank"&gt;http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf&lt;/A&gt;</description>
      <pubDate>Wed, 19 Aug 2009 12:54:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192566#M650220</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2009-08-19T12:54:44Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192567#M650221</link>
      <description>Now this is a great paper, although a little old. It would be great to find one covering newer versions of HP-UX, Oracle and the EVA. But it comes as confirmation of what I almost knew (or guessed), and it can possibly be used in a real-world design scenario. &lt;BR /&gt;&lt;BR /&gt;Anyway, I still have to understand the LVM stripe-size mechanisms and the fine tuning for different data types. &lt;BR /&gt;&lt;BR /&gt;L.</description>
      <pubDate>Wed, 19 Aug 2009 14:31:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192567#M650221</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-19T14:31:16Z</dc:date>
    </item>
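    <!-- The LVM setup discussed above (multiple PVs in one VG, LV striped across them) can be sketched with HP-UX LVM commands. This is only an illustrative sketch, not from the thread: the device files, VG name, LV size and the 64KB stripe size are hypothetical placeholders.

    ```shell
    # Hypothetical sketch: build one VG from 5 x 100GB vdisks and stripe an LV
    # across all 5 PVs (HP-UX 11iv3 persistent device files assumed).
    pvcreate /dev/rdisk/disk10
    pvcreate /dev/rdisk/disk11
    pvcreate /dev/rdisk/disk12
    pvcreate /dev/rdisk/disk13
    pvcreate /dev/rdisk/disk14

    mkdir /dev/vgora
    mknod /dev/vgora/group c 64 0x020000
    vgcreate /dev/vgora /dev/disk/disk10 /dev/disk/disk11 \
             /dev/disk/disk12 /dev/disk/disk13 /dev/disk/disk14

    # -i 5: stripe across 5 PVs; -I 64: 64KB stripe size; -L: LV size in MB
    lvcreate -i 5 -I 64 -L 409600 -n lvdata /dev/vgora
    ```

    With -i equal to the number of PVs, LVM round-robins extents across all 5 LUNs, which is what lets the per-LUN I/O queues discussed later in the thread work in parallel. -->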
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192568#M650222</link>
      <description>My HP-UX (11.11/11.31) on EVA8XXX / Oracle recipe is to provide 100 to 200GB vdisk sizes with a volume or a filesystem sitting on top of each vdisk. I set the SCSI queue depth to 32. I leave it to the DBAs to stripe their datafiles across these filesystems.&lt;BR /&gt;&lt;BR /&gt;Under ASM it is the same -- 100 or 200GB vdisks. We expect ASM to just stripe/balance across these volumes.&lt;BR /&gt;&lt;BR /&gt;Several years ago, though, I had a mini experiment with an adventurous DBA. I had him compare the performance of a DB on a single 1TB LUN versus 5 200GB LUNs. For the 1TB LUN experiment I set my queue depth to 128. There was hardly any difference in performance. The back-end RAID setup was VRAID10.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Aug 2009 14:42:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192568#M650222</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2009-08-19T14:42:23Z</dc:date>
    </item>
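    <!-- The per-LUN queue depth mentioned above can be inspected and changed from the OS. A sketch with hypothetical device files (the attribute names and commands differ between 11iv3 and 11i v2):

    ```shell
    # HP-UX 11iv3 (agile addressing): read and set a LUN's queue depth
    scsimgr get_attr -D /dev/rdisk/disk10 -a max_q_depth
    scsimgr set_attr -D /dev/rdisk/disk10 -a max_q_depth=32

    # HP-UX 11i v2 (legacy device files):
    scsictl -a /dev/rdsk/c14t0d0                  # show current attributes
    scsictl -m queue_depth=32 /dev/rdsk/c14t0d0   # set queue depth to 32
    ```

    Note that on 11iv3 set_attr changes the running value only; making it persist across reboots requires saving it (scsimgr save_attr). -->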
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192569#M650223</link>
      <description>I did the same with 100GB vdisks, but the drawback I find is harder management: having to present many "small" vdisks instead of 1 "big" one, and having to scan/import many PVs. But this is maybe my own fault, since I should have had scripts for this. &lt;BR /&gt;&lt;BR /&gt;My conclusion to this thread would be that it is better to have parallel access at every layer of an I/O stream of transactions, even if it brings no performance improvement in some situations. It carries no notable performance penalty while also keeping you on the safe side. &lt;BR /&gt;&lt;BR /&gt;Thank you all who replied. &lt;BR /&gt;L. &lt;BR /&gt;</description>
      <pubDate>Wed, 19 Aug 2009 17:50:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192569#M650223</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-19T17:50:28Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192570#M650224</link>
      <description>thank you</description>
      <pubDate>Tue, 25 Aug 2009 03:26:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192570#M650224</guid>
      <dc:creator>Liviu I.</dc:creator>
      <dc:date>2009-08-25T03:26:41Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple LUNs vs single LUN in HPUX 11iv3</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192571#M650225</link>
      <description>I realize this is an old thread, but I haven't seen this sort of information posted after searching for it. We have had favorable performance results from splitting the filesystem of an Oracle database onto multiple LUNs in the same physical disk group on an EMC SAN. The original queue length of 2-3 per path (c14 and c16 are separate FC cards) led us to believe that Oracle and the filesystem were parallelizing read requests sufficiently well, but that somewhere in the block-transfer layer I/Os were being serialized (the SAN never reported a queue depth of &amp;gt;1 on the LUN, and the backing disks were not saturated). This is HP-UX 11iv2, Oracle 10g, on LVM/VxFS with EMC PowerPath 5.1.0. Some sar -d output before and after the split:&lt;BR /&gt;&lt;BR /&gt;device   %busy   avque   r+w/s  blks/s  avwait  avserv &lt;BR /&gt;           c14t0d0   98.01    2.48     481    8234    2.14    4.43 &lt;BR /&gt;           c16t0d0   97.51    2.80     499    8554    2.23    4.25&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;device   %busy   avque   r+w/s  blks/s  avwait  avserv &lt;BR /&gt;          c14t1d0   80.00    0.50     768   75824    0.00    1.61&lt;BR /&gt;          c16t1d0   75.00    0.50     713   70936    0.00    1.54&lt;BR /&gt;          c14t1d1   25.00    0.50     208    7216    0.00    2.03&lt;BR /&gt;          c16t1d1   30.50    0.50     226    8960    0.00    2.15&lt;BR /&gt;          c14t1d2   40.50    0.50     461   15776    0.00    1.77&lt;BR /&gt;          c16t1d2   47.00    0.50     528   18032    0.00    1.87&lt;BR /&gt;          c14t1d3   30.50    0.50     386   13288    0.01    1.81&lt;BR /&gt;          c16t1d3   35.00    0.50     430   14360    0.00    1.89&lt;BR /&gt;          c14t1d4    0.50    0.50       2      40    0.00    2.93&lt;BR /&gt;          c16t1d4    1.50    0.50       3      48    0.00    6.45&lt;BR /&gt;          c14t1d5    7.50    0.50     101    2557    0.00    1.66&lt;BR /&gt;          c16t1d5    9.00    0.51     100    2115    0.00    1.78&lt;BR /&gt;&lt;BR /&gt;Application performance is improved, and tools like sar and GlancePlus no longer indicate a disk bottleneck. This was achieved with no hardware changes.</description>
      <pubDate>Wed, 20 Jan 2010 15:24:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/multiple-luns-vs-single-lun-in-hpux-11iv3/m-p/5192571#M650225</guid>
      <dc:creator>R Bray</dc:creator>
      <dc:date>2010-01-20T15:24:57Z</dc:date>
    </item>
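    <!-- The before/after sar -d samples above can be summarized by summing the blks/s column (field 5) per sample. A small sketch, with the data copied from the post:

    ```shell
    # Aggregate throughput (blks/s) before and after the LUN split,
    # using the sar -d samples quoted in the post above.
    before="c14t0d0 98.01 2.48 481 8234 2.14 4.43
    c16t0d0 97.51 2.80 499 8554 2.23 4.25"

    after="c14t1d0 80.00 0.50 768 75824 0.00 1.61
    c16t1d0 75.00 0.50 713 70936 0.00 1.54
    c14t1d1 25.00 0.50 208 7216 0.00 2.03
    c16t1d1 30.50 0.50 226 8960 0.00 2.15
    c14t1d2 40.50 0.50 461 15776 0.00 1.77
    c16t1d2 47.00 0.50 528 18032 0.00 1.87
    c14t1d3 30.50 0.50 386 13288 0.01 1.81
    c16t1d3 35.00 0.50 430 14360 0.00 1.89
    c14t1d4 0.50 0.50 2 40 0.00 2.93
    c16t1d4 1.50 0.50 3 48 0.00 6.45
    c14t1d5 7.50 0.50 101 2557 0.00 1.66
    c16t1d5 9.00 0.51 100 2115 0.00 1.78"

    # Sum field 5 (blks/s) over all device lines in a sample
    sum_blks() { printf '%s\n' "$1" | awk '{ s += $5 } END { print s }'; }

    echo "before: $(sum_blks "$before") blks/s total"
    echo "after:  $(sum_blks "$after") blks/s total"
    ```

    This gives 16788 blks/s before versus 229152 after; some of that gap reflects the workload at the two sampling times rather than the split itself, but the per-LUN avque dropping from ~2.5 to 0.50 is the clearer sign that queuing moved off the host. -->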
  </channel>
</rss>

