How many physical volumes can and should you have in one VG?
01-02-2002 09:45 AM
My question is: can we add any more disks to vg01 so that we can complete the mirroring, or is there a workaround to mirror outside vg01?
The error received when trying to extend the volume group:
# vgextend /dev/vg01 /dev/dsk/c0t10d0
vgextend: Couldn't install the physical volume "/dev/dsk/c0t10d0".
Too many links
Here are the contents of /dev/vg01:
brw-r----- 1 root sys 64 0x010001 Sep 16 2000 /dev/vg01/lvol1
brw-r----- 1 root sys 64 0x01000d Sep 16 2000 /dev/vg01/lvol13
brw-r----- 1 root sys 64 0x01000e Sep 16 2000 /dev/vg01/lvol14
brw-r----- 1 root sys 64 0x01000f Sep 16 2000 /dev/vg01/lvol15
brw-r----- 1 root sys 64 0x010010 Sep 16 2000 /dev/vg01/lvol16
brw-r----- 1 root sys 64 0x010011 Sep 16 2000 /dev/vg01/lvol17
brw-r----- 1 root sys 64 0x010012 Sep 16 2000 /dev/vg01/lvol18
brw-r----- 1 root sys 64 0x010013 Sep 16 2000 /dev/vg01/lvol19
brw-r----- 1 root sys 64 0x010002 Sep 16 2000 /dev/vg01/lvol2
brw-r----- 1 root sys 64 0x010014 Sep 16 2000 /dev/vg01/lvol20
brw-r----- 1 root sys 64 0x010015 Sep 16 2000 /dev/vg01/lvol21
brw-r----- 1 root sys 64 0x010016 Sep 16 2000 /dev/vg01/lvol22
brw-r----- 1 root sys 64 0x010017 Sep 16 2000 /dev/vg01/lvol23
brw-r----- 1 root sys 64 0x010018 Sep 16 2000 /dev/vg01/lvol24
brw-r----- 1 root sys 64 0x010019 Sep 16 2000 /dev/vg01/lvol25
brw-r----- 1 root sys 64 0x01001a Sep 16 2000 /dev/vg01/lvol26
brw-r----- 1 root sys 64 0x01001b Sep 16 2000 /dev/vg01/lvol27
brw-r----- 1 root sys 64 0x01001c Sep 16 2000 /dev/vg01/lvol28
brw-r----- 1 root sys 64 0x01001d Sep 16 2000 /dev/vg01/lvol29
brw-r----- 1 root sys 64 0x010003 Sep 16 2000 /dev/vg01/lvol3
brw-r----- 1 root sys 64 0x01001e Sep 16 2000 /dev/vg01/lvol30
brw-r----- 1 root sys 64 0x01001f Sep 16 2000 /dev/vg01/lvol31
brw-r----- 1 root sys 64 0x010020 Sep 16 2000 /dev/vg01/lvol32
brw-r----- 1 root sys 64 0x010022 Sep 16 2000 /dev/vg01/lvol33
brw-r----- 1 root sys 64 0x010023 Sep 16 2000 /dev/vg01/lvol34
brw-r----- 1 root sys 64 0x010024 Sep 16 2000 /dev/vg01/lvol35
brw-r----- 1 root sys 64 0x010025 Sep 16 2000 /dev/vg01/lvol36
brw-r----- 1 root sys 64 0x010009 Oct 18 20:39 /dev/vg01/lvol37
brw-r----- 1 root sys 64 0x010004 Sep 16 2000 /dev/vg01/lvol4
brw-r----- 1 root sys 64 0x010005 Sep 16 2000 /dev/vg01/lvol5
brw-r----- 1 root sys 64 0x010006 Sep 16 2000 /dev/vg01/lvol6
brw-r----- 1 root sys 64 0x010007 Sep 16 2000 /dev/vg01/lvol7
brw-r----- 1 root sys 64 0x010008 Sep 16 2000 /dev/vg01/lvol8
Here is the lvmtab output for vg01:
/dev/vg01
/dev/dsk/c0t3d0
/dev/dsk/c0t4d0
/dev/dsk/c1t6d0
/dev/dsk/c2t3d0
/dev/dsk/c2t4d0
/dev/dsk/c2t5d0
/dev/dsk/c1t5d0
/dev/dsk/c1t8d0
/dev/dsk/c1t9d0
/dev/dsk/c2t11d0
/dev/dsk/c2t12d0
/dev/dsk/c2t13d0
/dev/dsk/c2t14d0
/dev/dsk/c2t15d0
/dev/dsk/c2t2d0
/dev/dsk/c2t6d0
Any assistance would be GREAT!!!
Regards,
Tony Escujuri
01-02-2002 09:47 AM
Re: How many physical volumes can and should you have in one VG?
We don't want to lose the data on the LVs in vg01.
Thanks,
Tony Escujuri
tony@unixadm.net
01-02-2002 09:49 AM
Re: How many physical volumes can and should you have in one VG?
The default limit on the number of PVs in a volume group is 16. If you didn't specify the -p option when creating the volume group, you are limited to 16 and will need to recreate it: take a backup, recreate the volume group and its logical volumes, then restore.
The maximum number of PVs a VG can have is 255. Check out the vgcreate man page for the limits on PVs, LVs, alternate links, etc.
-Sri
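A rough sketch of the check-and-rebuild sequence described above (the disk paths, minor number, and -p value are placeholders for this system, not confirmed values; take verified backups first, since the rebuild destroys the group):

```shell
# See the current limit -- the "Max PV" field in the output:
vgdisplay /dev/vg01

# After backing up and removing every logical volume (lvremove),
# remove and recreate the group with a higher PV limit:
vgremove /dev/vg01
mkdir -p /dev/vg01                    # recreate directory/group file if needed
mknod /dev/vg01/group c 64 0x010000   # minor number must be unique per VG
vgcreate -p 32 /dev/vg01 /dev/dsk/c0t3d0 /dev/dsk/c0t4d0
vgextend /dev/vg01 /dev/dsk/c1t6d0    # ...then the remaining disks, then restore
```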
01-02-2002 09:53 AM
Re: How many physical volumes can and should you have in one VG?
Unless vg01 was created with a 'max_pv' specification larger than the default value of 16, you are "out of luck"!
The value of 'max_pv' (among others) is set during the volume group creation ('vgcreate').
A 'vgdisplay /dev/vg01' (in this case) will show the limit as "Max PV". In order to increase it, you will need to destroy and recreate the volume group.
Regards!
...JRF...
01-02-2002 09:54 AM
Re: How many physical volumes can and should you have in one VG?
You cannot reconfigure an existing volume group....
If you hit the max on physical volumes, then
JUST CREATE A NEW VOLUME GROUP..
and apply the option as mentioned above:
vgcreate -p _ _ _
I generally slice mine up into 4GB per disk (it's how they sliced up the EMC before I came, so I held with it for continuity). At 4GB per slice you can create a volume group that will allow for 60 disks. Obviously, a larger slice per disk means fewer physical volumes; with a 1GB slice you could have 255 physical volumes....
BUT the easiest way is to just create another volume group..... Honestly, the developers and DBAs will gobble them up no matter what VG the disks are connected to..
Rgrds,
Rit
01-02-2002 09:54 AM
Re: How many physical volumes can and should you have in one VG?
GL,
C
01-02-2002 09:59 AM
Re: How many physical volumes can and should you have in one VG?
As stated, you need to "rebuild" your VG. So make sure you have TWO good and verifiable backups. Then delete the VG, recreate it with enough PVs (-p 32 [or higher] in vgcreate), then restore your data!
live free or die
harry
01-02-2002 10:18 AM
Re: How many physical volumes can and should you have in one VG?
More input:
Unfortunately, where I'm at they don't like backups other than mirroring.
So my game plan is this:
Break all the mirrors.
Throw in a few Jamaicas with 18GB drives (they are currently using 8GB disks) and then mirror to these. Break the newly mirrored drives and then make the lvol*b LVs the primary in the fstab.
Then switch out the old drives after the database is proven to be functioning correctly for a few days, and then rename the lvol*b minus the "b".
Next I will switch out the 8GB drives with 18GB drives.
Since I'm stuck with poor mirroring, no "REAL" backups, and I can't lose the data.... is this a good way to proceed?
Any suggestions would be GREAT..... Thanks!!
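Per logical volume, the mirror-swing described above might look roughly like this (MirrorDisk/UX commands; all device paths and the lvol name are placeholders, not taken from this system):

```shell
# Break the existing mirror copy and free that disk from the group:
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c2t3d0
vgreduce /dev/vg01 /dev/dsk/c2t3d0

# Bring in an 18GB disk and re-mirror onto it:
pvcreate /dev/rdsk/c3t0d0
vgextend /dev/vg01 /dev/dsk/c3t0d0
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c3t0d0

# Once resynced, split the new copy off as its own volume:
lvsplit /dev/vg01/lvol1    # default suffix "b" -> /dev/vg01/lvol1b
```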
01-02-2002 10:23 AM
GL,
C
01-02-2002 11:11 AM
Re: How many physical volumes can and should you have in one VG?
What would happen if someone accidentally did an 'rm -r *' from a database directory or from the / directory?
You have absolutely GOT to have some sort of backup other than mirroring. Mirroring will not solve the problem of someone doing an rm from a really bad location.
01-02-2002 11:22 AM
Re: How many physical volumes can and should you have in one VG?
I am afraid your solution may not work unless the first PV in the volume group is already 18GB, or you created the volume group with a non-default -e (max_pe) value. If not, when you extend the volume group with an 18GB disk, it will allocate only 8GB of it.
A backup really is the only safe way here.
-Sri
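The cap Sri describes is simply max_pe extents of pe_size each, fixed at vgcreate time. A minimal sketch of the arithmetic (the 2048-extent / 4MB figures are illustrative values for an ~8GB-disk layout, not values read from this system):

```python
def usable_mb(disk_mb: int, max_pe: int, pe_size_mb: int) -> int:
    """Space LVM can allocate from one physical volume: at most
    max_pe extents of pe_size_mb each, regardless of disk size
    (slightly less in practice due to on-disk LVM metadata)."""
    return min(disk_mb, max_pe * pe_size_mb)

# A VG laid out for ~8GB disks strands most of an 18GB disk:
new_disk = 18 * 1024                    # 18GB disk, in MB
usable = usable_mb(new_disk, 2048, 4)   # -> 8192 MB allocatable
stranded = new_disk - usable            # -> 10240 MB wasted
print(usable, stranded)
```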
01-02-2002 11:50 AM
Re: How many physical volumes can and should you have in one VG?
Sridhar's point is well taken. In all likelihood, your volume group was created with default values for 'max_pv' but also other even more important attributes like 'max_pe'. Again, have a look at the man pages for 'vgcreate'. Once set, 'max_pe', 'max_lv', 'max_pv' and 'pe_size' are fixed.
For the long term, I would urge you to use your new 18GB disk as the starting point for creating a *new* volume group to which you then copy your existing data.
Once you create a new volume group, with appropriate values for the aforementioned attributes, create a temporary mountpoint for the new volume group's logical volumes, and using 'cpio' copy your data from the old mountpoint to the new one (below). Then simply edit /etc/fstab replacing the new mountpoint name with the old. When you remount, your data will be visible on the new physical disk. Thus:
# cd olddir
# find . -depth -print|cpio -pudlmv newdir
...then unmount both 'olddir' and 'newdir'; edit /etc/fstab so that 'olddir' mounts the *new* volume group's logical volume(s) and the 'newdir' entry is removed; then re-mount 'olddir'.
The 'cpio' options used preserve file modification timestamps and create (sub)directories as necessary.
BTW, remember that mirroring is done at the *logical volume* level, not by physical disk. Thus, mirroring is confined to a logical volume *within* its associated volume group.
Regards!
...JRF...