<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514315#M625201</link>
    <description>I think you are possibly looking at this upside down...  load-sharing your LUNs over all 8 HBAs means you are unlikely to suffer an HBA/port bottleneck due to throughput/bandwidth.  &lt;BR /&gt;&lt;BR /&gt;If you are sure that there is no chance of a bottleneck happening at the HBA/ports with 2 or 4 ports, then you can use that number; though I would keep it an even number.&lt;BR /&gt;&lt;BR /&gt;I for one would probably not opt for the latter, as you are more likely to wish that you had load-shared the LUNs over more ports rather than fewer... but that is my opinion...&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
    <pubDate>Wed, 30 Mar 2005 17:37:41 GMT</pubDate>
    <dc:creator>Tim D Fulford</dc:creator>
    <dc:date>2005-03-30T17:37:41Z</dc:date>
    <item>
      <title>Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514306#M625192</link>
      <description>Those that have large server configurations and more than 1 pair of FC-HBAs (or more than 2 FC-HBAs) - do you present/access your LUNs across however many HBAs, or do you simply dual-path each LUN and divvy up the LUNs across however many pairs of FC-HBAs?&lt;BR /&gt;&lt;BR /&gt;As I am preparing to cut over big servers (8 HBAs) to an XP12K, I am faced with a dilemma: whether to present each LUN across all 8 HBAs, hence having 8 paths for each LUN, or simply use a pair for each LUN. &lt;BR /&gt;&lt;BR /&gt;For example:&lt;BR /&gt;I have 80 LUNs to a server that has 8 HBAs. On an EVA, the traditional practice at our site was to present each LUN to each HBA - so each LUN has 8 paths. Would it not be more efficient on an XP to instead present 20 LUNs to each pair of HBAs? Or does it really not matter?&lt;BR /&gt;&lt;BR /&gt;Thanks for any advice.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Mar 2005 09:09:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514306#M625192</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-03-30T09:09:03Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514307#M625193</link>
      <description>If I understood your posting correctly, you have 8 HBAs. That means a PV has 8 alternate paths. Without software like Secure Path, you can only use them as alternate paths, not load balancing in its true sense.&lt;BR /&gt;&lt;BR /&gt;Say, for a VG, you have three PVs.&lt;BR /&gt;&lt;BR /&gt;Now, I would set a different primary path for each of the three PVs, going through three different HBAs, with alternate paths through another three HBAs.&lt;BR /&gt;&lt;BR /&gt;Anil</description>
      <pubDate>Wed, 30 Mar 2005 09:27:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514307#M625193</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2005-03-30T09:27:06Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514308#M625194</link>
      <description>I plan to use both VxVM DMP (active/active) as well as SecurePath for XP (actually AutoPath - which is also active/active pathing).&lt;BR /&gt;&lt;BR /&gt;So is there an advantage to, say (in my example), multipathing each of the 80 LUNs to all 8 FC-HBAs?&lt;BR /&gt;&lt;BR /&gt;Or would I reap better results if I simply use pairs of FC-HBAs (4 pairs in this case) and present/dual-path each LUN to a pair (20 to each pair)?&lt;BR /&gt;&lt;BR /&gt;One advantage that I can see with the latter is that I may be able to isolate FC paths/LUNs for really heavy-hitting volumes/filesystems - i.e. redo logs, temp tablespaces, etc.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Mar 2005 09:31:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514308#M625194</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-03-30T09:31:38Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514309#M625195</link>
      <description>Unless you have software like EMC PowerPath, load balancing will have to be manual...&lt;BR /&gt;&lt;BR /&gt;One way to do it: after you create your VGs, vgreduce a path, then vgextend it back in... the primary path will then be the next path...&lt;BR /&gt;&lt;BR /&gt;Example:&lt;BR /&gt;&lt;BR /&gt;Say I have 4 paths:&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c60t0d5&lt;BR /&gt;   PV Name                     /dev/dsk/c62t0d5 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c56t0d5 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c58t0d5 Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4314&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c60t0d7&lt;BR /&gt;   PV Name                     /dev/dsk/c62t0d7 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c56t0d7 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c58t0d7 Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4314&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I could vgreduce /dev/dsk/c60t0d7 then vgextend it back in:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c60t0d5&lt;BR /&gt;   PV Name                     /dev/dsk/c62t0d5 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c56t0d5 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c58t0d5 Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4314&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c62t0d7&lt;BR /&gt;   PV Name                     /dev/dsk/c56t0d7 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c58t0d7 Alternate Link&lt;BR /&gt;   PV Name                     /dev/dsk/c60t0d7 Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4314&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;A lot of what you want to do depends on the number of VGs - if you have 1 big one, then put all the paths in it... if you have multiple VGs, then you might want to split them up across, say, pairs of HBAs...&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Mar 2005 09:36:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514309#M625195</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2005-03-30T09:36:20Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514310#M625196</link>
      <description>The maximum we see on our hosts would be 4 paths, and we use all of them, interleaving the primary path to achieve a bit of load balancing.&lt;BR /&gt;&lt;BR /&gt;With eight paths, I'd definitely do it the same way, but build some scripts around it to avoid human errors (i.e. one device having 5 paths, one 8, and one 7, with the primaries overloading only the first three HBAs), as this might have you end up with a hundred or more disk paths.&lt;BR /&gt;I'd also emphasize deleting old device files &lt;BR /&gt;(showing up as "at ???" in lssf /dev/dsk/ctd...) to prevent false alarms from EMS.&lt;BR /&gt;&lt;BR /&gt;And actually I'd stop using PVlinks in favour of a more transparent solution like PowerPath, where a simple command will supply a good overview of the link states.</description>
      <pubDate>Wed, 30 Mar 2005 09:37:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514310#M625196</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-03-30T09:37:07Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514311#M625197</link>
      <description>If you are going to use VxVM DMP and Secure Path, then certainly go for all 8 HBAs; in effect you are distributing the I/Os over the 8 HBAs, which is certainly good.&lt;BR /&gt;&lt;BR /&gt;I hope there is no restriction on the number of paths over which you can load balance in DMP or with Secure Path. And even if there is, I do not think it would be that low.&lt;BR /&gt;&lt;BR /&gt;Anil</description>
      <pubDate>Wed, 30 Mar 2005 09:37:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514311#M625197</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2005-03-30T09:37:22Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514312#M625198</link>
      <description>Nelson,&lt;BR /&gt;Strictly from a maintenance perspective, I feel that 8 paths is too many to keep alive to each LUN.  I've got the same setup as you do, but have elected to follow the logic that I'd like one connection to each ACP in the XP *per LUN* (we still use all 8 connections considering all LUNs).  I make sure that when creating a VG, I've got a nice rotation of the HBAs and the SAN switch ports too (rotate the paths on your switch, or at least make them one-to-one with your HBAs).  That way, when the LVs are created, you can see a rotation of hardware paths (if your SAN switch is handled logically as one-to-one with the HBAs) as you progress down the extent list.  Keep in mind that each LUN will have 4 different HBAs of its own, but since you've got many LUNs, you should end up with all of the HBAs in play in each lvol of each VG - so that you'd be hitting a different HBA/SAN port in/SAN port out/CL port/ACP CPU for each next grab of data...&lt;BR /&gt;&lt;BR /&gt;Of course, the LUNs I've built work the same way; I allocate space for LUNs in a circular queue - one from ACP0, the next from ACP1, the next from ACP2, the next from ACP3, the next from ACP0... etc...&lt;BR /&gt;Actually, it's quite a lot of work to get this all done...&lt;BR /&gt;&lt;BR /&gt;If you rotate and match HBA to SAN port to CL port to ACP to LUN in rotation, you'll end up with a nice, redundant and well-balanced I/O load on the XP.  Be aware that, from what I'm told, if you've got AutoPath all this work is not necessary.  Still, even if you have AutoPath, I like the idea of having an alternate path through each CL port, following on to each different ACP (4 ACPs is the max, but you can certainly have fewer, which might mean you want a different rotation method).&lt;BR /&gt;&lt;BR /&gt;Anyway, what this method should do is give lots of available hardware width to get I/O transfers done, and I believe it is a better solution than just two paths.  Of course, following the same logic, one could say the same of using 8 ports, or even reducing to two and spreading your I/O using other balancing methods.</description>
      <pubDate>Wed, 30 Mar 2005 10:32:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514312#M625198</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-03-30T10:32:05Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514313#M625199</link>
      <description>Messr Joubert:&lt;BR /&gt;&lt;BR /&gt;For my VxVM 3.5 DMP systems, I actually use an ASL (array support library) that removes the manual work of picking and ensuring my VxVM volumes get built so the component LUNs are chosen considering the array's innards - ACP, array group, CHIP presentation and HBA paths.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;What I am simply unsure of is the wisdom, necessity and performance need of presenting each LDEV/LUN to 8 front-ends (CHIP ports) and to 8 HBAs. Am I better off presenting each LDEV/LUN to pairs of CHIP ports and hence to pairs of HBAs - so 80 LDEVs would mean 20 LUNs presented to each of the 4 pairs?&lt;BR /&gt;&lt;BR /&gt;What do you think?&lt;BR /&gt;&lt;BR /&gt;In an AutoPath environment, which does active/active as well, I think the same argument applies.&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Mar 2005 11:37:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514313#M625199</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-03-30T11:37:54Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514314#M625200</link>
      <description>OK, so to put it simply - faced with the same issue, I elected to present 4 paths to each LUN; it matched the hardware I had for a round-robin path allocation strategy.  &lt;BR /&gt;&lt;BR /&gt;I like the idea of two per LUN for administrative simplicity - however, I use four to get an edge in performance.  I think managing 8 is too many; however, if you've no problem managing that many, then I would use all 8.&lt;BR /&gt;&lt;BR /&gt;This one (for me) would follow the Cajun saying &lt;BR /&gt;"De More Dat - De More Better", but it's FAR (oh so far) from the last word on things - it's just how I felt I could best utilize the resources I have.</description>
      <pubDate>Wed, 30 Mar 2005 12:24:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514314#M625200</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-03-30T12:24:03Z</dc:date>
    </item>
    <item>
      <title>Re: Server with more than 2 FC-HBAs - LoadBalance/Presentation of LUNs  from an Array</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514315#M625201</link>
      <description>I think you are possibly looking at this upside down...  load-sharing your LUNs over all 8 HBAs means you are unlikely to suffer an HBA/port bottleneck due to throughput/bandwidth.  &lt;BR /&gt;&lt;BR /&gt;If you are sure that there is no chance of a bottleneck happening at the HBA/ports with 2 or 4 ports, then you can use that number; though I would keep it an even number.&lt;BR /&gt;&lt;BR /&gt;I for one would probably not opt for the latter, as you are more likely to wish that you had load-shared the LUNs over more ports rather than fewer... but that is my opinion...&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Wed, 30 Mar 2005 17:37:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/server-with-more-than-2-fc-hbas-loadbalance-presentation-of-luns/m-p/3514315#M625201</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2005-03-30T17:37:41Z</dc:date>
    </item>
  </channel>
</rss>

