02-09-2010 06:30 AM
Re: VG creation
Currently in vguat1, /dev/dsk/c3t4d0 has 4374 extents of 32 MB each, so its total size is 4374 * 32 MB = 139968 MB.
But the vgoradata volume group has a PE size of 16 MB and a "Max PE per PV" value of 1618. Without modifying these parameters, you can use only 1618 * 16 MB = 25888 MB, or just about 1/5 of the total capacity of /dev/dsk/c3t4d0.
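The arithmetic above can be sanity-checked with plain shell arithmetic (the numbers come from the vgdisplay output discussed in this thread; this is just a calculation, not an LVM command):

```shell
# Total size of /dev/dsk/c3t4d0 as seen in vguat1: extents * 32 MB extent size
echo $(( 4374 * 32 ))    # 139968 MB

# Usable size in vgoradata without vgmodify: Max PE per PV * 16 MB PE size
echo $(( 1618 * 16 ))    # 25888 MB

# Extents needed to cover the whole disk with 16 MB extents
echo $(( 139968 / 16 ))  # 8748
```

The last number is where the "-e 8748" value in the vgmodify commands below comes from.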
You have HP-UX 11.23, so the vgmodify command should be available if the appropriate patch (PHCO_35524 or a superseding patch) has been installed.
The extent size of vgoradata was chosen at the time it was created, and the vgmodify command cannot change it. Instead, you can use vgmodify to increase the "Max PE per PV" value.
To cover the full size of c3t4d0 using 16 MB extents, you need to increase the "Max PE per PV" on vgoradata to 8748. This may or may not be possible.
To see if it's possible to make the change, run:
vgmodify -p 10 -n -e 8748 -r vgoradata
If vgmodify says:
[...] VGRA for the disk is too big for the specified parameters. Decrease max_PVs and/or max_PEs.
then vgmodify cannot help you.
If vgmodify says:
Review complete. Volume group not modified
this means "OK, vgmodify can do it".
If it says:
vgmodify: New configuration does not require PE renumbering. Re-run without -n.
This is even better: you don't need to use the -n option. Leaving it out makes the operation simpler and safer.
The procedure for modifying the VG:
First, run "man vgmodify" and read it to understand what you're going to do, and to learn how to recover if the vgmodify process gets interrupted or fails.
You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:
(shutdown Oracle)
umount /dev/vgoradata/arch
umount /dev/vgoradata/data1
umount /dev/vgoradata/data2
...
vgchange -a n vgoradata
vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)
After this, you can re-activate the VG and mount the filesystems again.
If the vgmodify was successful, it should be possible to move the PV to this VG, just like smatador suggested:
vgreduce vguat1 /dev/dsk/c3t4d0
vgextend vgoradata /dev/dsk/c3t4d0
MK
02-10-2010 03:46 AM
Re: VG creation
Could you please explain the steps if it is in a cluster environment?
02-10-2010 05:05 AM
Re: VG creation
Anyway, the basic procedure is mostly the same.
Do the VG modification on the primary node.
Just replace "stop oracle, unmount the vgoradata LVs and deactivate the VG" in the basic procedure with "halt the package" (on all nodes, if a RAC cluster), and "re-activate the VG and mount the filesystems" with "restart the package".
There is one extra procedure at the end.
Whenever you add or remove PVs to/from cluster VGs, you must create a new VG map file and re-import the VG to all other nodes, to make the other nodes aware of the change.
-------------
1.) Begin by creating a new map file on the node you used to change the VG:
vgexport -v -s -p -m vgoradata.map vgoradata
With the '-p' option, this command does not actually export the VG: it only creates the map file.
Also check the minor device number of the VG group file:
ll /dev/vgoradata/group
The response will be something like:
crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group
In this example, the minor device number is 0x020000. Find the respective number in your system and remember it: it should be unique to each VG on the cluster.
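A small sketch of how to pull that minor number out of the ll output (the awk field position assumes the exact listing format shown above; the sample line is hardcoded here for illustration):

```shell
# Sample ll output for the VG group file, as shown above
line='crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group'

# The minor device number is the 6th whitespace-separated field
echo "$line" | awk '{print $6}'    # prints 0x020000

# On a live system, this would list any minor numbers used by more than one VG
# (it should print nothing, since minor numbers must be unique per VG):
# ll /dev/*/group | awk '{print $6}' | sort | uniq -d
```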
Copy the new vgoradata.map file from the primary node to all failover nodes, and on each of them, export and re-import the VG so that the nodes will become aware of the changes done on the primary node:
2.) Export the vgoradata VG:
vgexport vgoradata
3.) Re-create the group file:
mkdir /dev/vgoradata
mknod /dev/vgoradata/group c 64 0xNN0000
(Replace the 0xNN0000 with the correct minor device number: it must be the same as on the primary node.)
4.) Re-import the VG.
vgimport -v -s -m vgoradata.map vgoradata
5.) If you have more than two nodes, repeat steps 2-4 on each failover node as necessary.
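Steps 2-4 could be sketched as a loop over the failover nodes. This is only a dry run that prints the commands (remove the echo to actually run them); the node names and the use of remsh are assumptions, and it presumes vgoradata.map has already been copied to each node as described above:

```shell
#!/usr/bin/sh
# Dry-run sketch of steps 2-4 for each failover node.
# Assumptions: node names node2/node3 are hypothetical, remsh access is
# configured, and vgoradata.map is already present on each node.
MINOR=0x020000          # must match the minor number on the primary node

for node in node2 node3; do
  echo "remsh $node vgexport vgoradata"
  echo "remsh $node mkdir /dev/vgoradata"
  echo "remsh $node mknod /dev/vgoradata/group c 64 $MINOR"
  echo "remsh $node vgimport -v -s -m vgoradata.map vgoradata"
done
```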
-----------
If vguat1 is a cluster volume group too, you must do the same procedure (steps 1-5) with vguat1 too.
MK
02-10-2010 06:41 AM
Re: VG creation
02-11-2010 12:33 PM
Re: VG creation
umask 655
prior to the directory creation
02-11-2010 06:15 PM
Re: VG creation
Follow the advice from MK:
You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:
(shutdown Oracle)
umount /dev/vgoradata/arch
umount /dev/vgoradata/data1
umount /dev/vgoradata/data2
...
vgchange -a n vgoradata
vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)
After this, you can re-activate the VG and mount the filesystems again.
If the vgmodify was successful, it should be possible to move the PV to this VG, just like smatador suggested:
vgreduce vguat1 /dev/dsk/c3t4d0
vgextend vgoradata /dev/dsk/c3t4d0
If you mean a basic Serviceguard cluster, I would assume you have a two-node setup (one active node, one passive/failover node).
For a basic Serviceguard cluster, below are the steps you need to perform on the adoptive/failover node, so that in case of a package failover the adoptive node can take over successfully.
Active Node or Primary Node
Create a map file
# vgexport -pvs -m
Copy the map file to the other node (you can use ftp, rcp, or scp).
Activate in share mode
# vgchange -a s /dev/vgxx
On the other node:
1. Export the VG
# vgexport /dev/vgxx
2. Recreate the directory
# mkdir -p /dev/vgxx
3. Recreate the VG group file
# mknod /dev/vgxx/group c 64 0xMM0000
where MM is a unique identifier (e.g. 01 for vg01)
4. Preview the vgimport to check for any possible error
# vgimport -pvs -m
where mapfile is the one copied from the first node
5. If no error, remove the preview mode
# vgimport -vs -m
6. Activate in share mode
# vgchange -a s /dev/vgxx
Regards,
Johnson
02-11-2010 06:51 PM
Re: VG creation
If you are running vgmodify under Serviceguard:
# cmhaltpkg -n
# vgchange -a n vgoradata
# vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)
# vgextend vgoradata /dev/dsk/c3t4d0
vgexport -pvs -m
# scp *.map abc@node02a:/home/abc
Copy the map file to the other node (you can use ftp, rcp, or scp).
# vgchange -a s /dev/vgxx
(Take note: if you use a different VG in the package configuration file, you need to run cmapplyconf after any changes to the cluster package configuration file.
In your case, I assume you are just doing vgmodify only.)
On the other node:
1. Export the VG
# vgexport /dev/vgxx
2. Recreate the directory
# mkdir -p /dev/vgxx
3. Recreate the VG group file
# mknod /dev/vgxx/group c 64 0xMM0000
where MM is a unique identifier (e.g. 01 for vg01)
4. Preview the vgimport to check for any possible error
# vgimport -pvs -m
where mapfile is the one copied from the first node
5. If no error, remove the preview mode
# vgimport -vs -m
6. Activate in share mode
# vgchange -a s /dev/vgxx