
VG creation

SOLVED
Go to solution
V.P
Frequent Advisor

VG creation

Dear Admins,
We need to move an unused disk (PV) out of the VG vguat1 and add it to the VG vgoradata, since vgoradata is running short of space.
I can see that /dev/dsk/c3t4d0 is not used. Enclosed is the file for your reference. Please let me know if you need any more details.
Server: HP-UX B.11.23 U ia64
Could anyone help me accomplish this task?
Thanks in advance.
Regards,
V.P
16 REPLIES
V.P
Frequent Advisor

Re: VG creation

VG: vgoradata details enclosed.
smatador
Honored Contributor

Re: VG creation

Hi,
If c3t4d0 is not used, you could detach it with vgreduce vguat1 /dev/dsk/c3t4d0.
After that, you could add it to the other VG with vgextend vgoradata /dev/dsk/c3t4d0.
HTH
James R. Ferguson
Acclaimed Contributor

Re: VG creation

Hi:

Since '/dev/dsk/c3t4d0' is unused, 'vgreduce' it from the volume group, 'vguat1'. To add it to the 'vgoradata' volume group, you would use 'vgextend'.

The problem that you have, however, is that the maximum number of physical extents for any physical volume in 'vgoradata' is 1618 (Max PE per PV). The physical volume you want to add can hold 4374 extents.

Thus, if you merely 'vgextend' c3t4d0 into the 'vgoradata' volume group, you are only going to gain 1618 extents of usable space.

You should 'vgmodify' the 'vgoradata' volume group to increase the maximum number of physical extents that can be allocated from any of the physical volumes in the volume group.

See the manpages for 'vgreduce', 'vgextend' and 'vgmodify' for more information.

Regards!

...JRF...
Robert Salter
Respected Contributor

Re: VG creation

How big is the c3t4d0 disk? In your vguat1 volume group the biggest disk can be 140 GB; in vgoradata the biggest disk can only be 25 GB.

Check your Max PE per PV settings in the volume groups. If the disk is truly 140 GB, then you'd be wasting a lot of disk space by putting it in vgoradata.
Time to smoke and joke
V.P
Frequent Advisor

Re: VG creation

Dear JRF/Robert,

Can you please provide some details.

Thanks again
V.P
Frequent Advisor

Re: VG creation

Dear Robert,

HOSTNAME /paradmin >diskinfo /dev/rdsk/c3t4d0
SCSI describe of /dev/rdsk/c3t4d0:
vendor: HP
product id: DG0146FAMWL
type: direct access
size: 143374744 Kbytes
bytes per sector: 512
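As a rough cross-check of what that size buys you in extents (a sketch only; the counts LVM actually reports are slightly lower because some space goes to on-disk LVM metadata):

```shell
size_kb=143374744              # from the diskinfo output above

# Extents at vguat1's 32 MiB PE size (1 extent = 32768 KB)
echo $(( size_kb / 32768 ))    # 4375 raw; LVM reports 4374

# Extents at vgoradata's 16 MiB PE size (1 extent = 16384 KB)
echo $(( size_kb / 16384 ))    # 8750 raw
```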
James R. Ferguson
Acclaimed Contributor

Re: VG creation

Hi (again):

> Can you please provide some details.

The manpages include some good examples. I urge you to read them.

Regards!

...JRF...
Robert Salter
Respected Contributor
Solution

Re: VG creation

It's just as James said: the number of extents the disk can provide is greater than the number of extents per PV that the vgoradata volume group is set up to use. In vguat1 (32 MB extents) the disk holds 4374 extents; vgoradata (16 MB extents) would see about 8748 extents on it, but will only use 1618 of them, wasting most of the disk.

You will have to increase the Max PE per PV setting in vgoradata. You can use vgmodify; the man pages will help. You are using 11.23, so you may have to download and install a patch, PHCO_35524, to get 'vgmodify'. Here's a link to a PDF on vgmodify.

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01920387/c01920387.pdf (BSC link updated by admin)
Time to smoke and joke
V.P
Frequent Advisor

Re: VG creation

Thanks a lot :)
Matti_Kurkela
Honored Contributor

Re: VG creation

If you simply follow smatador's advice, you'll find you won't get to use the full capacity of /dev/dsk/c3t4d0.

Currently in vguat1, /dev/dsk/c3t4d0 has 4374 extents of size 32 MiB, so its total size is 4374 * 32 MiB = 139968 MiB.

But the vgoradata volume group has a PE size of 16 MiB and a "Max PE per PV" value of 1618. Without modifying these parameters, you can get only 1618 * 16 MiB = 25888 MiB, or just about 1/5 of the total capacity of /dev/dsk/c3t4d0.

You have HP-UX 11.23, so the vgmodify command should be available if the appropriate patch (PHCO_35524 or a superseding patch) has been installed.

The extent size of vgoradata was chosen at the time it was created, and the vgmodify command cannot change it. Instead, you can use vgmodify to increase the "Max PE per PV" value.

To cover the full size of c3t4d0 using 16 MiB extents, you need to increase the "Max PE per PV" on vgoradata to 8748. This may or may not be possible.
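That arithmetic can be double-checked in the shell (all numbers come from the vgdisplay output quoted earlier in this thread):

```shell
# Capacity of c3t4d0 as currently carved up in vguat1: 4374 extents x 32 MiB
total_mib=$(( 4374 * 32 ))       # 139968 MiB

# What vgoradata can use without vgmodify: 1618 extents x 16 MiB
usable_mib=$(( 1618 * 16 ))      # 25888 MiB, about 1/5 of the disk

# "Max PE per PV" needed to cover the whole disk with 16 MiB extents
needed_pe=$(( total_mib / 16 ))  # 8748
echo "$total_mib $usable_mib $needed_pe"
```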

To see if it's possible to make the change, run:
vgmodify -p 10 -n -e 8748 -r vgoradata

If vgmodify says:

[...] VGRA for the disk is too big for the specified parameters. Decrease max_PVs and/or max_PEs.

then vgmodify cannot help you.

If vgmodify says:

Review complete. Volume group not modified

this means "OK, vgmodify can do it".

If it says:

vgmodify: New configuration does not require PE renumbering. Re-run without -n.

This is even better: you don't need to use the -n option. Leaving it out makes the operation simpler and safer.


The procedure for modifying the VG:

First, run "man vgmodify" and read it to understand what you're going to do, and to learn how to recover if the vgmodify process gets interrupted or fails.

You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:

(shutdown Oracle)
umount /dev/vgoradata/arch
umount /dev/vgoradata/data1
umount /dev/vgoradata/data2
...
vgchange -a n vgoradata

vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)

After this, you can re-activate the VG and mount the filesystems again.

If the vgmodify was successful, it should be possible to move the PV to this VG, just like smatador suggested:

vgreduce vguat1 /dev/dsk/c3t4d0
vgextend vgoradata /dev/dsk/c3t4d0

MK
V.P
Frequent Advisor

Re: VG creation

Thanks a lot MK.

Could you please explain the steps, if it is in cluster environment?
Matti_Kurkela
Honored Contributor

Re: VG creation

A basic ServiceGuard cluster, or a RAC active/active cluster?

Anyway, the basic procedure is mostly the same.

Do the VG modification on the primary node.

Just replace "stop oracle, unmount the vgoradata LVs and deactivate the VG" in the basic procedure with "halt the package" (on all nodes, if a RAC cluster), and "re-activate the VG and mount the filesystems" with "restart the package".

There is one extra procedure in the end.

Whenever you add or remove PVs to/from cluster VGs, you must create a new VG map file and re-import the VG to all other nodes, to make the other nodes aware of the change.

-------------

1.) Begin by creating a new map file on the node you used to change the VG:

vgexport -v -s -p -m vgoradata.map vgoradata

With the '-p' option, this command does not actually export the VG: it only creates the map file.

Also check the minor device number of the VG group file:

ll /dev/vgoradata/group

The response will be something like:

crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group

In this example, the minor device number is 0x020000. Find the respective number in your system and remember it: it should be unique to each VG on the cluster.
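If you want to script this check, the minor number can be parsed out of the ll output. A small sketch (the field position is assumed from the example line above):

```shell
# Example line as printed by "ll /dev/vgoradata/group" above
line='crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group'

# In this layout the major number (64) is field 5 and the minor number field 6
minor=$(echo "$line" | awk '{ print $6 }')
echo "$minor"
```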

Copy the new vgoradata.map file from the primary node to all failover nodes, and on each of them, export and re-import the VG so that the nodes will become aware of the changes done on the primary node:

2.) Export the vgoradata VG:

vgexport vgoradata

3.) Re-create the group file:

mkdir /dev/vgoradata
mknod /dev/vgoradata/group c 64 0xNN0000

(Replace the 0xNN0000 with the correct minor device number: it must be the same as on the primary node.)

4.) Re-import the VG.

vgimport -v -s -m vgoradata.map vgoradata

5.) If you have more than two nodes, repeat steps 2-4 on each failover node as necessary.
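Since steps 2-4 are identical on every failover node, a small dry-run generator can print the per-node command sequence before you touch anything (the node names and map file location are hypothetical; the minor number must match the primary node):

```shell
nodes="node02 node03"    # hypothetical failover node names
minor="0x020000"         # must match the primary node's group file

for n in $nodes; do
  echo "# on $n:"
  echo "vgexport vgoradata"
  echo "mkdir /dev/vgoradata"
  echo "mknod /dev/vgoradata/group c 64 $minor"
  echo "vgimport -v -s -m vgoradata.map vgoradata"
done
```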

-----------

If vguat1 is a cluster volume group too, you must do the same procedure (steps 1-5) with vguat1 too.

MK
V.P
Frequent Advisor

Re: VG creation

Basic Service Guard Cluster.
Gordon Sjodin
Frequent Advisor

Re: VG creation

I would run:

umask 655

prior to the directory creation.
Johnson Punniyalingam
Honored Contributor

Re: VG creation

> Basic Service Guard Cluster.

Follow the advice from MK above: stop Oracle, unmount the vgoradata LVs, deactivate the VG, run vgmodify, then move the PV with vgreduce/vgextend.


If you say "Basic ServiceGuard Cluster", I would assume you have a 2-node cluster setup (one active node and one passive/failover node).


For a basic ServiceGuard cluster, below are the steps you need to do on the adoptive/failover node, so that in case of a package failover the adoptive node can take over successfully.

Active Node or Primary Node

Create a map file
# vgexport -pvs -m /tmp/vgxx.map /dev/vgxx

Copy the map file to the other node.
You can use ftp/rcp/scp to copy it to the adoptive/failover node.

Activate in share mode
# vgchange -a s /dev/vgxx

On the other node:
1. Export the VG
# vgexport /dev/vgxx

2. Recreate the directory
# mkdir -p /dev/vgxx

3. Recreate the VG group file
# mknod /dev/vgxx/group c 64 0xMM0000
where MM is a unique identifier (e.g. 01 for vg01)

4. Preview the vgimport to check for any possible error
# vgimport -pvs -m /tmp/vgxx.map /dev/vgxx
where /tmp/vgxx.map is the map file copied from the first node

5. If no error, remove the preview mode
# vgimport -vs -m /tmp/vgxx.map /dev/vgxx

6. Activate in share mode
# vgchange -a s /dev/vgxx


Regards,
Johnson
Problems are common to all, but attitude makes the difference
Johnson Punniyalingam
Honored Contributor

Re: VG creation

Apologies for missing out some things.
Here is the sequence when you run "vgmodify" under ServiceGuard:

# cmhaltpkg -v <package_name>
# vgchange -a n vgoradata
# vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)
# vgreduce vguat1 /dev/dsk/c3t4d0
# vgextend vgoradata /dev/dsk/c3t4d0
# vgexport -pvs -m /tmp/vgoradata.map /dev/vgoradata
# scp /tmp/vgoradata.map abc@node02a:/home/abc
(Copy the map file to the other node; you can use ftp/rcp/scp.)
# vgchange -a s /dev/vgxx

(Take note: if you change which VGs are used in the package configuration file, you need to run cmapplyconf after any change to your cluster package configuration file.)

(In your case, I assume you are just doing the vgmodify.)

On the other node, repeat steps 1-6 from my previous reply: export the VG, recreate the directory and the VG group file, preview the vgimport, run the vgimport, then activate the VG.
Problems are common to all, but attitude makes the difference