Operating System - HP-UX
Distributed/strict allocation
03-20-2002 10:28 AM
I have a L3000/11.0 server with 2 Fibre Channel cards (a5158a) connected to a Shark array. Everything is working ok, but I'm having some problems configuring the volume group the way I want.
There are two device files which point to the same LUN on the Shark, one file for each Fibre card. c7t0d0 and c9t0d0.
I created the physical volume, the /dev/vgibm directory and the group file. Then created the volume group with this command:
vgcreate -g PVG3 /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0
I then created the logical volume using this command:
lvcreate -D y -s g -n ora_rbs /dev/vgibm
Then I try to extend the volume using this command:
lvextend -L 256 /dev/vgibm/ora_rbs PVG3
When I attempt the lvextend I get this message:
lvextend: Not enough free physical extents available.
Logical volume "/dev/vgibm/ora_rbs" could not be extended.
Failure possibly caused by PVG-Strict or Distributed allocation policies.
I think the nature of the problem is that I'm trying to do extent-based striping over two device files that really point to the same device.
What I'm trying to accomplish is load-balancing across both of the Fibre cards. I didn't want to simply have pv-links for fail-over.
I've done this before with an AutoRAID array, however I had 2 different LUNs in the PVG. The difference now is that I'm pointing to a single LUN and trying to force it to load balance between the cards. Does anyone know if this is possible?
Thanks,
Tim
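Pieced together from the post, the full sequence would have looked something like this (the group-file minor number is illustrative; the commands and device files are from the post):

```shell
# Initialize the LUN as an LVM physical volume (one LUN, two paths)
pvcreate /dev/rdsk/c7t0d0

# Create the VG directory and group device file
# (minor number 0x010000 is just an example; it must be unique per VG)
mkdir /dev/vgibm
mknod /dev/vgibm/group c 64 0x010000

# Create the VG with a physical volume group (PVG) spanning both paths
vgcreate -g PVG3 /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0

# Create an LV with distributed (-D y), PVG-strict (-s g) allocation
lvcreate -D y -s g -n ora_rbs /dev/vgibm

# Extend to 256 MB across the PVG -- this is the step that fails,
# because both "PVs" in the PVG are really the same LUN
lvextend -L 256 /dev/vgibm/ora_rbs PVG3
```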
3 REPLIES
03-20-2002 10:41 AM
Solution
This is not possible with LVM.
You have an alternate link that is there for redundancy, not load balancing.
VxVM with the fully licensed version on HP-UX 11i would give you that functionality.
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
03-20-2002 10:54 AM
Re: Distributed/strict allocation
Melvyn is right. What you are trying to do will not work.
You can't load balance over 2 controllers to the same disk by striping. To do striping you have to be going to different physical disks (or meta volumes).
Unless you have something like EMC PowerPath to load balance, the best you can hope for with your setup is redundancy in case the primary controller goes bad.
For your setup I would just do:
# vgcreate /dev/vgibm /dev/dsk/c7t0d0 /dev/dsk/c9t0d0
This will give you your alternate paths.
# lvcreate -L 256 -n ora_rbs /dev/vgibm
To create your 256 MB logical volume.
There's no point in setting this up in a PVG or doing the PVG-strict allocation since there is really just one disk.
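With the simple vgcreate above, LVM registers the second path as an alternate link automatically. Assuming standard HP-UX LVM tooling, the result can be checked like this:

```shell
# Show the VG and its physical volumes; the second path to the LUN
# should appear as an alternate link rather than as a separate PV
vgdisplay -v /dev/vgibm

# Show the new LV and which PV its extents were allocated on
lvdisplay -v /dev/vgibm/ora_rbs
```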
03-20-2002 11:04 AM
Re: Distributed/strict allocation
Thanks for the information, I suspected this was the case and just wanted to confirm.
I think what I will do is have our mainframe guy change the allocation on the Shark. If he gives me two 50 GB LUNs instead of a single 100 GB LUN, I should be able to balance them the way I want.
Thanks again.
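Once the Shark presents two separate 50 GB LUNs, the original distributed-allocation approach should work, since the extents can now come from two distinct physical volumes. A sketch, with hypothetical device files (c7t0d1, c7t0d2 and their c9 counterparts) and an example minor number:

```shell
# New VG for the two 50 GB LUNs (device files are hypothetical)
mkdir /dev/vgibm2
mknod /dev/vgibm2/group c 64 0x020000

pvcreate /dev/rdsk/c7t0d1
pvcreate /dev/rdsk/c7t0d2

# One PVG spanning two distinct LUNs this time
vgcreate -g PVG3 /dev/vgibm2 /dev/dsk/c7t0d1 /dev/dsk/c7t0d2

# Add the paths through the second Fibre card as alternate links
vgextend /dev/vgibm2 /dev/dsk/c9t0d1 /dev/dsk/c9t0d2

# Distributed, PVG-strict allocation now alternates extents
# between the two LUNs, so the extend succeeds
lvcreate -D y -s g -n ora_rbs /dev/vgibm2
lvextend -L 256 /dev/vgibm2/ora_rbs PVG3
```

This gives extent-distributed striping across the two LUNs for load balancing, plus alternate links on each LUN for path failover.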