<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Distributed/strict allocation in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687340#M644552</link>
    <description>This is not possible with LVM.&lt;BR /&gt;&lt;BR /&gt;You have an alternate link, which is there for redundancy, not load balancing.&lt;BR /&gt;The fully licensed version of VxVM on HP-UX 11i would give you that functionality.</description>
    <pubDate>Wed, 20 Mar 2002 18:41:54 GMT</pubDate>
    <dc:creator>melvyn burnard</dc:creator>
    <dc:date>2002-03-20T18:41:54Z</dc:date>
    <item>
      <title>Distributed/strict allocation</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687339#M644551</link>
      <description>I have an L3000/11.0 server with two Fibre Channel cards (A5158A) connected to a Shark array. Everything is working OK, but I'm having some problems configuring the volume group the way I want.&lt;BR /&gt;&lt;BR /&gt;There are two device files which point to the same LUN on the Shark, one file for each Fibre card: c7t0d0 and c9t0d0.&lt;BR /&gt;&lt;BR /&gt;I created the physical volume, the /dev/vgibm directory, and the group file, then created the volume group with this command:&lt;BR /&gt;&lt;BR /&gt;vgcreate -g PVG3 /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0&lt;BR /&gt;&lt;BR /&gt;I then created the logical volume using this command:&lt;BR /&gt;&lt;BR /&gt;lvcreate -D y -s g -n ora_rbs /dev/vgibm&lt;BR /&gt;&lt;BR /&gt;Then I tried to extend the volume using this command:&lt;BR /&gt;&lt;BR /&gt;lvextend -L 256 /dev/vgibm/ora_rbs PVG3&lt;BR /&gt;&lt;BR /&gt;When I attempt the lvextend, I get this message:&lt;BR /&gt;&lt;BR /&gt;lvextend: Not enough free physical extents available.&lt;BR /&gt;Logical volume "/dev/vgibm/ora_rbs" could not be extended.&lt;BR /&gt;Failure possibly caused by PVG-Strict or Distributed allocation policies.&lt;BR /&gt;&lt;BR /&gt;I think the nature of the problem is that I'm trying to do extent-based striping over two device files that really point to the same device.&lt;BR /&gt;&lt;BR /&gt;What I'm trying to accomplish is load balancing across both of the Fibre cards. I didn't want to simply have pv-links for failover.&lt;BR /&gt;&lt;BR /&gt;I've done this before with an AutoRAID array; however, I had two different LUNs in the PVG. The difference now is that I'm pointing to a single LUN and trying to force it to load balance between the cards. Does anyone know if this is possible?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Tim</description>
      <pubDate>Wed, 20 Mar 2002 18:28:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687339#M644551</guid>
      <dc:creator>Tim Medford</dc:creator>
      <dc:date>2002-03-20T18:28:24Z</dc:date>
    </item>
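The failing sequence in the post above can be sketched end to end. The vgcreate, lvcreate, and lvextend lines are taken from the post; the pvcreate and group-file steps are assumed reconstructions of the setup the post only mentions in passing:

```shell
# Make the LUN an LVM physical volume (both device files reach the same LUN)
pvcreate /dev/rdsk/c7t0d0

# Assumed setup steps: volume-group directory and group device file
mkdir /dev/vgibm
mknod /dev/vgibm/group c 64 0x010000

# -g PVG3 records both paths as members of physical volume group PVG3
# in /etc/lvmpvg; LVM treats the second path as an alternate link
vgcreate -g PVG3 /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0

# -D y requests distributed allocation, -s g PVG-strict allocation
lvcreate -D y -s g -n ora_rbs /dev/vgibm

# Fails: distributed allocation needs free extents on at least two
# distinct physical volumes in the PVG, but PVG3 holds one LUN twice
lvextend -L 256 /dev/vgibm/ora_rbs PVG3
```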
    <item>
      <title>Re: Distributed/strict allocation</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687340#M644552</link>
      <description>This is not possible with LVM.&lt;BR /&gt;&lt;BR /&gt;You have an alternate link, which is there for redundancy, not load balancing.&lt;BR /&gt;The fully licensed version of VxVM on HP-UX 11i would give you that functionality.</description>
      <pubDate>Wed, 20 Mar 2002 18:41:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687340#M644552</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2002-03-20T18:41:54Z</dc:date>
    </item>
    <item>
      <title>Re: Distributed/strict allocation</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687341#M644553</link>
      <description>Melvyn is right. What you are trying to do will not work.&lt;BR /&gt;&lt;BR /&gt;You can't load balance over two controllers to the same disk by striping. To stripe, you have to write to different physical disks (or meta volumes).&lt;BR /&gt;&lt;BR /&gt;Unless you have something like EMC PowerPath to load balance, the best you can hope for with your setup is redundancy in case the primary controller goes bad.&lt;BR /&gt;&lt;BR /&gt;For your setup I would just do:&lt;BR /&gt;&lt;BR /&gt;# vgcreate /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0&lt;BR /&gt;&lt;BR /&gt;This will give you your alternate paths.&lt;BR /&gt;&lt;BR /&gt;# lvcreate -L 256 -n ora_rbs /dev/vgibm&lt;BR /&gt;&lt;BR /&gt;This will create your 256 MB logical volume.&lt;BR /&gt;&lt;BR /&gt;There's no point in setting this up in a PVG or using PVG-strict allocation, since there is really just one disk.</description>
      <pubDate>Wed, 20 Mar 2002 18:54:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687341#M644553</guid>
      <dc:creator>Patrick Wallek</dc:creator>
      <dc:date>2002-03-20T18:54:56Z</dc:date>
    </item>
    <item>
      <title>Re: Distributed/strict allocation</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687342#M644554</link>
      <description>Thanks for the information; I suspected this was the case and just wanted to confirm.&lt;BR /&gt;&lt;BR /&gt;I think what I will do is have our mainframe guy change the allocation on the Shark. If he gives me two 50 GB LUNs instead of a single 100 GB LUN, I should be able to balance them the way I want.&lt;BR /&gt;&lt;BR /&gt;Thanks again.</description>
      <pubDate>Wed, 20 Mar 2002 19:04:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/distributed-strict-allocation/m-p/2687342#M644554</guid>
      <dc:creator>Tim Medford</dc:creator>
      <dc:date>2002-03-20T19:04:48Z</dc:date>
    </item>
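The two-LUN layout proposed in the reply above is what lets distributed allocation work. A sketch, assuming the second 50 GB LUN appears as c7t0d1 and c9t0d1 (hypothetical device names not given in the thread):

```shell
# LUN0: c7t0d0 (card c7) / c9t0d0 (card c9)
# LUN1: c7t0d1 (card c7) / c9t0d1 (card c9)  -- assumed names
pvcreate /dev/rdsk/c7t0d0
pvcreate /dev/rdsk/c9t0d1

# Primary paths deliberately alternate between the two cards, so I/O to
# the two LUNs is spread across both cards; both LUNs join PVG3
vgcreate -g PVG3 /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d1

# The remaining paths become alternate (pv-link) paths for failover
vgextend /dev/vgibm /dev/dsk/c9t0d0 /dev/dsk/c7t0d1

# With two distinct physical volumes in the PVG, distributed PVG-strict
# allocation can spread extents across the LUNs
lvcreate -D y -s g -n ora_rbs /dev/vgibm
lvextend -L 256 /dev/vgibm/ora_rbs PVG3
```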
  </channel>
</rss>

