Add new Vg in cluster server

 
axshah
Occasional Contributor


Dear all,

 

I need to create a new filesystem in a new volume group on a server that is part of a cluster environment.

 

The LUN will be assigned from an HP MSA P2000 storage array.

 

Please see the cmviewcl output from the server below:

# cmviewcl

CLUSTER           STATUS
eccdbciv_cluster  up

  NODE       STATUS   STATE
  eccdbciv   up       running

    PACKAGE   STATUS   STATE     AUTO_RUN   NODE
    eccRPV    up       running   enabled    eccdbciv

  NODE       STATUS   STATE
  eccapp1v   up       running

 

I need a step-by-step procedure for assigning the LUN on the storage side, and then the steps to follow on the server side.

singh sanjeev
Trusted Contributor

Re: Add new Vg in cluster server

Assign the LUN from storage and make it visible to both nodes, then identify the disk device files on both nodes.


PROCESS
The following example uses a fictitious volume group to demonstrate the
process.

1. On one node in the cluster, create the volume group using SAM or manual
commands. In SAN environments, be careful not to select a disk already in
use by another server outside the cluster. SAM cannot ensure this!
# pvcreate <options> /dev/dsk/____
Repeat as needed.

Create the volume group directory and its "group" device file:
# mkdir /dev/vg07
# mknod /dev/vg07/group c 64 0x070000

The "group" special file must have a minor number that is unique among the
node's other group files. If the VG will support NFS mounts, the minor
number must be the same across all nodes in the cluster.
# vgcreate vg07 /dev/dsk/___ ...
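The 0xNN0000 minor-number convention can be illustrated with a small sketch (vg_num=7 stands in for the vg07 example above; on a real node, `ll /dev/*/group` shows the minor numbers already in use):

```shell
# Build the group-file minor number for a given VG number. The second hex
# pair encodes the VG (0x070000 for vg07), matching the mknod example above.
vg_num=7
minor=$(printf '0x%02x0000' "$vg_num")
echo "$minor"    # prints 0x070000
```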

2. With the node in cluster state (cmviewcl), imprint the cluster ID on the
VG. Doing so prevents accidental VG activation and permits the VG to be
activated by the package control script:
# vgchange -c y vg07

3. Activate the VG (note that exclusive activation mode is now required):
# vgchange -a e vg07

4. Create logical volumes in the new VG. Mirroring can be accomplished at
this time if desired. Example:
# lvcreate -L 70 vg07 ...
# lvextend -m 1 /dev/vg07/lvol1

5. Create a file system on the new logical volume if needed. Example:
# newfs -F vxfs /dev/vg07/rlvol1
# mkdir <mount point directory> (do this on all nodes)
Repeat for each logical volume as needed.

6. If its host package is up, mount the logical volumes on their file
systems:
# mount /dev/vg07/lvol1 /vg07/lvol1_mount
...

The VG and logical volumes are now ready for use.

If the VG was created with the package down, deactivate the VG:
# vgchange -a n vg07

7. Create a map file prior to vgimport'ing the VG on other nodes.

# vgexport -pvs -m /etc/lvmconf/map.vg07 vg07

This command produces a file in the following format, where the first
column is the lvol number and the second is the custom lvol name (the
-s option adds the unique VGID header line):

VGID 2c80715b3462331e
2 lv2
3 lv3
4 lv4
1 lv1
5 lv5

Copy the map file to the other nodes. Example:
# rcp /etc/lvmconf/map.vg07 (othernode):/etc/lvmconf/map.vg07
NOTE: "othernode" is a reference to the hostname of the destination
server
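To make the map-file layout concrete, here is a small sketch that recreates the sample file from above (using the fictitious VGID from the example) and lists the lvol number/name pairs it carries:

```shell
# Recreate the sample map file (fictitious VGID from the example above).
cat > /tmp/map.vg07 <<'EOF'
VGID 2c80715b3462331e
2 lv2
3 lv3
4 lv4
1 lv1
5 lv5
EOF

# Skip the VGID header and print the lvol number/name pairs in order.
awk 'NR > 1 { print $1, $2 }' /tmp/map.vg07 | sort -n
```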

8. On the other nodes in the cluster, prepare to vgimport the new VG:
# mkdir /dev/vg07
# mknod /dev/vg07/group c 64 0x0N0000
... where N is a minor number unique among the node's group files
(note the NFS restriction on the minor number in step 1 above).

9. The map file created in step 7 avoids the need to specify each disk with
the vgimport command. When used with the '-s' option and a map
file headed by the VGID, vgimport causes LVM to inspect all attached
disks, loading /etc/lvmtab with those matching the VGID.
Import the new VG on the adoptive node:
# vgimport -vs -m /etc/lvmconf/map.vg07 vg07

10. To ensure that future cmapplyconf operations do not uncluster the VG,
locate the cluster configuration file and add the new volume group
name to it.

Locating the cluster configuration file:
There is no fixed naming convention for this file. The SAM utility names
it /etc/cmcluster/cmclconfig.ascii; admins sometimes call it
cluster.ascii. If the file cannot be found on one of the nodes,
reconstitute it with:
# cmgetconf cluster.ascii

Add a reference for the new VG to the cluster configuration file:

VOLUME_GROUP /dev/vg07

Copy the file to the other nodes as backup.

11. Add the VG, LVOL and mount points to the package control script that
controls the new VG.

Example lines added to the package control script:

VG[7]="vg07"

LV[4]="/dev/vg07/lvol1"; FS[4]="/sg1";   FS_MOUNT_OPT[4]="-o rw"
LV[5]="/dev/vg07/lvol2"; FS[5]="/sg2";   FS_MOUNT_OPT[5]="-o rw"
LV[6]="/dev/vg07/lvol3"; FS[6]="/dump5"; FS_MOUNT_OPT[6]="-o rw"
LV[7]="/dev/vg07/lvol4"; FS[7]="/depot"; FS_MOUNT_OPT[7]="-o rw"

Note the consecutive, incremented index values.
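The reason the indices must increment consecutively is that the control script walks these arrays by index and stops at the first unset slot. A simplified POSIX-shell sketch of that loop (LV_4/FS_4 are stand-ins for the real ksh array entries; this is not the actual Serviceguard script):

```shell
# Stand-ins for the LV[4..5]/FS[4..5] array entries in the control script.
LV_4="/dev/vg07/lvol1"; FS_4="/sg1"
LV_5="/dev/vg07/lvol2"; FS_5="/sg2"

# Walk the entries by index; a gap in the numbering ends the loop early,
# which is why the index values must be consecutive.
i=4
while eval "[ -n \"\${LV_$i:-}\" ]"; do
    eval "echo \"would mount \${LV_$i} on \${FS_$i}\""
    i=$((i + 1))
done
```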

12. Check the script for syntax errors:
# sh -n <pkg.cntl script>
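A quick way to see what `sh -n` does, using a throwaway file rather than a real control script: it parses without executing, so a clean script exits 0 and prints nothing, while a broken one is reported.

```shell
# Write a tiny stand-in for a control-script fragment (throwaway file).
cat > /tmp/pkg.cntl.test <<'EOF'
VG="vg07"
echo "activating $VG"
EOF

# -n: read and parse only, execute nothing.
sh -n /tmp/pkg.cntl.test && echo "syntax OK"

# A broken fragment, by contrast, fails the check (unclosed 'if').
printf 'if true; then\n' > /tmp/pkg.cntl.bad
sh -n /tmp/pkg.cntl.bad 2>/dev/null || echo "syntax error caught"
```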

13. Copy the updated control script to the adoptive node(s).

14. Ensure the modified package control script works by testing package
startup and shutdown when downtime is available.

To stop a currently running package:
# cmhaltpkg <package name>

To start a package on a specific node:
# cmrunpkg -n <nodename> <pkg name>
Drop the '-n <nodename>' if the package is to be started on the
current node.


NOTE: It is not necessary to 'cmapplyconf' the cluster.ascii since the
cluster ID is already imprinted on the new VG. It is also not necessary to
cmapplyconf the package configuration file, since it was not modified. Only
the package control script was updated.
Sanjeev Singh