
Re: VG creation

 
Matti_Kurkela
Honored Contributor

Re: VG creation

If you simply follow smatador's advice, you'll find you won't get to use the full capacity of /dev/dsk/c3t4d0.

Currently in vguat1, /dev/dsk/c3t4d0 has 4374 extents of size 32 MB, so its total size is 4374 * 32 MB = 139968 MB.

But the vgoradata volume group has a PE size of 16 MB and a "Max PE per PV" value of 1618. Without modifying these parameters, you can use only 1618 * 16 MB = 25888 MB of the disk, or just about 1/5 of the total capacity of /dev/dsk/c3t4d0.
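(If you want to double-check these values on your own system, they come from the vgdisplay output: the "Max PE per PV" and "PE Size (Mbytes)" lines of the VG, and the "Total PE" line of each PV in the verbose listing:

vgdisplay -v vgoradata | more
vgdisplay -v vguat1 | more
)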

You have HP-UX 11.23, so the vgmodify command should be available if the appropriate patch (PHCO_35524 or a superseding patch) has been installed.
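(One way to check is with swlist, e.g. something like:

swlist -l patch | grep -i PHCO_35524

If that shows nothing, the patch may still have been superseded by a newer one, so you can also simply check whether /usr/sbin/vgmodify exists.)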

The extent size of vgoradata was chosen at the time it was created, and the vgmodify command cannot change it. Instead, you can use vgmodify to increase the "Max PE per PV" value.

To cover the full size of c3t4d0 using 16 MB extents, you need to increase the "Max PE per PV" on vgoradata to 8748. This may or may not be possible.

To see if it's possible to make the change, run:
vgmodify -p 10 -n -e 8748 -r vgoradata

If vgmodify says:

[...] VGRA for the disk is too big for the specified parameters. Decrease max_PVs and/or max_PEs.

then vgmodify cannot help you.

If vgmodify says:

Review complete. Volume group not modified

this means "OK, vgmodify can do it".

If it says:

vgmodify: New configuration does not require PE renumbering. Re-run without -n.

that is even better: you don't need to use the -n option at all. Leaving it out makes the operation simpler and safer.
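(If you get the first response and 8748 extents turns out not to be achievable, then if I remember correctly vgmodify can also print a table of the max_PV / max_PE combinations that would fit in the existing VGRA, which tells you the best you can do without recreating the VG:

vgmodify -v -t vgoradata

Treat this as a hint to verify against "man vgmodify"; the exact option letters may differ on your patch level.)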


The procedure for modifying the VG:

First, run "man vgmodify" and read it to understand what you're going to do, and to learn how to recover if the vgmodify process gets interrupted or fails.

You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:

(shutdown Oracle)
umount /dev/vgoradata/arch
umount /dev/vgoradata/data1
umount /dev/vgoradata/data2
...
vgchange -a n vgoradata

vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)

After this, you can re-activate the VG and mount the filesystems again.
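(In other words, roughly like this, with your real mount points in place of the placeholders:

vgchange -a y vgoradata
mount /dev/vgoradata/arch /your/arch/mountpoint
mount /dev/vgoradata/data1 /your/data1/mountpoint
...
(start Oracle)
)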

If the vgmodify was successful, it should be possible to move the PV to this VG, just like smatador suggested:

vgreduce vguat1 /dev/dsk/c3t4d0
vgextend vgoradata /dev/dsk/c3t4d0
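To verify the result, check that the disk now shows up under vgoradata with all its extents usable, and that /etc/lvmtab agrees:

vgdisplay -v vgoradata | more
strings /etc/lvmtab

It also does not hurt to take fresh LVM configuration backups of both VGs afterwards:

vgcfgbackup vguat1
vgcfgbackup vgoradata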

MK
V.P
Frequent Advisor

Re: VG creation

Thanks a lot MK.

Could you please explain the steps, if it is in cluster environment?
Matti_Kurkela
Honored Contributor

Re: VG creation

A basic ServiceGuard cluster, or a RAC active/active cluster?

Anyway, the basic procedure is mostly the same.

Do the VG modification on the primary node.

Just replace "stop oracle, unmount the vgoradata LVs and deactivate the VG" in the basic procedure with "halt the package" (on all nodes, if a RAC cluster), and "re-activate the VG and mount the filesystems" with "restart the package".
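(For example, assuming the package is called oradata_pkg, which is just an example name, substitute your real package name:

cmhaltpkg -v oradata_pkg
... do the vgmodify procedure ...
cmrunpkg -v oradata_pkg
cmmodpkg -e oradata_pkg

The last command re-enables package switching, which cmhaltpkg normally leaves disabled.)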

There is one extra procedure at the end.

Whenever you add or remove PVs to/from cluster VGs, you must create a new VG map file and re-import the VG to all other nodes, to make the other nodes aware of the change.

-------------

1.) Begin by creating a new map file on the node you used to change the VG:

vgexport -v -s -p -m vgoradata.map vgoradata

With the '-p' option, this command does not actually export the VG: it only creates the map file.

Also check the minor device number of the VG group file:

ll /dev/vgoradata/group

The response will be something like:

crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group

In this example, the minor device number is 0x020000. Find the respective number in your system and remember it: it should be unique to each VG on the cluster.
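(A quick way to see all the group file minor numbers currently in use on a node is:

ll /dev/*/group
)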

Copy the new vgoradata.map file from the primary node to all failover nodes, and on each of them, export and re-import the VG so that the nodes will become aware of the changes done on the primary node:

2.) Export the vgoradata VG:

vgexport vgoradata

3.) Re-create the group file:

mkdir /dev/vgoradata
mknod /dev/vgoradata/group c 64 0xNN0000

(Replace the 0xNN0000 with the correct minor device number: it must be the same as on the primary node.)

4.) Re-import the VG.

vgimport -v -s -m vgoradata.map vgoradata

5.) If you have more than two nodes, repeat steps 2-4 on each failover node as necessary.
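After the re-import, you can confirm that each failover node now knows about all the PVs of vgoradata by looking at its LVM table, for example:

strings /etc/lvmtab

The vgoradata entry should list the same disks as on the primary node (the device file names may differ if the hardware paths differ).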

-----------

If vguat1 is a cluster volume group too, you must do the same procedure (steps 1-5) with vguat1 too.

MK
V.P
Frequent Advisor

Re: VG creation

Basic Service Guard Cluster.
Gordon Sjodin
Frequent Advisor

Re: VG creation

I would run:

umask 655

prior to the directory creation.
Johnson Punniyalingam
Honored Contributor

Re: VG creation

>> Basic Service Guard Cluster <<


Follow the advice from MK above:

You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:

(shutdown Oracle)
umount /dev/vgoradata/arch
umount /dev/vgoradata/data1
umount /dev/vgoradata/data2
...
vgchange -a n vgoradata

vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)

After this, you can re-activate the VG and mount the filesystems again.

If the vgmodify was successful, it should be possible to move the PV to this VG, just like smatador suggested:

vgreduce vguat1 /dev/dsk/c3t4d0
vgextend vgoradata /dev/dsk/c3t4d0


Since you say it is a basic Service Guard cluster, I would assume you have a 2-node cluster setup: one active node and one passive/failover node.


For a basic Service Guard cluster, below are the steps you need to do on the adoptive/failover node, so that in case of a package failover the adoptive node can take over successfully.

Active Node or Primary Node

Create a map file
# vgexport -pvs -m vgxx.map /dev/vgxx

Copy the map file to the other node; you can use ftp/rcp/scp to get it to the adoptive/failover node.

Activate in share mode
# vgchange -a s /dev/vgxx

On the other node:
1. Export the VG
# vgexport /dev/vgxx

2. Recreate the directory
# mkdir -p /dev/vgxx

3. Recreate the VG group file
# mknod /dev/vgxx/group c 64 0xMM0000
where MM is a unique identifier (ex 01 for vg01)

4. Preview the vgimport to check for any possible error
# vgimport -pvs -m vgxx.map /dev/vgxx
where vgxx.map is the map file copied from the first node

5. If no error, remove the preview mode
# vgimport -vs -m vgxx.map /dev/vgxx

6. Activate in share mode
# vgchange -a s /dev/vgxx


Regards,
Johnson
Problems are common to all, but attitude makes the difference
Johnson Punniyalingam
Honored Contributor

Re: VG creation

Apologies for missing out some things, since you are doing vgmodify under Service Guard:

# cmhaltpkg -v <package_name>
# vgchange -a n vgoradata
# vgmodify -p 10 -n -e 8748 vgoradata
(or if the "-n" option was found to be unnecessary, don't use it.)
# vgextend vgoradata /dev/dsk/c3t4d0
# vgexport -pvs -m vgxx.map /dev/vgxx
# scp *.map abc@node02a:/home/abc
(Copy the map file to the other node; you can use ftp/rcp/scp to get it to the adoptive/failover node.)
# vgchange -a s /dev/vgxx

(Take note: if you are using a different VG in the package configuration file, you also need to run cmapplyconf whenever you change the cluster package configuration file.

In your case I assume you are just doing vgmodify only.)
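(If you do end up editing the package configuration file, the usual sequence would be something like the following; the file name here is only an example, use your actual package ascii file:

# cmcheckconf -v -P /etc/cmcluster/pkg/oradata/oradata.conf
# cmapplyconf -v -P /etc/cmcluster/pkg/oradata/oradata.conf
)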

On the other node:
1. Export the VG
# vgexport /dev/vgxx

2. Recreate the directory
# mkdir -p /dev/vgxx

3. Recreate the VG group file
# mknod /dev/vgxx/group c 64 0xMM0000
where MM is a unique identifier (ex 01 for vg01)

4. Preview the vgimport to check for any possible error
# vgimport -pvs -m vgxx.map /dev/vgxx
where vgxx.map is the map file copied from the first node

5. If no error, remove the preview mode
# vgimport -vs -m vgxx.map /dev/vgxx

6. Activate in share mode
# vgchange -a s /dev/vgxx
Problems are common to all, but attitude makes the difference