Operating System - HP-UX

Re: serviceguard / lockdisk1

 
SOLVED
galenr
Advisor

serviceguard / lockdisk1

Hello All - I am confused here. After running cmapplyconf, it fails, indicating that my lock disk on the client node is not a directory...?

:unable to stat /dev/lockdisk1/group,Not a directory

:couldn't access the list of logical volumes for volume group "/dev/lockdisk1"

:initializing exclusive VG /dev/lockdisk1 for node ginsu

:unable to open /dev/lockdisk1: not a directory

These statements are from the client's /var/adm/syslog/syslog.log. Is there a specific log file with error messages for MCSG?

Since I can cd to /dev/lockdisk1/group, what does "not a directory" mean?
Sridhar Bhaskarla
Honored Contributor
Solution

Re: serviceguard / lockdisk1

Hi,

Can you do a vgdisplay -v lockdisk1? The name itself is confusing. How about naming it lockvg1?

group should be a character device file, not a directory. If your cd into it succeeds, that in itself is the problem.

I believe the volume group was not created properly. Try recreating it.
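As a quick sanity check (a sketch only; /dev/lockdisk1/group is the path from your error messages, and the check logic is portable shell, not anything Serviceguard-specific):

```shell
#!/bin/sh
# /dev/<vg>/group must be a character (c) device file, never a directory.
check_group() {
    if [ -c "$1" ]; then
        echo "character device (correct)"
    elif [ -d "$1" ]; then
        echo "directory (wrong: recreate the VG)"
    else
        echo "missing"
    fi
}

check_group /dev/lockdisk1/group
```

If this prints "directory", the VG control file was created as a directory and the volume group needs to be rebuilt.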

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
galenr
Advisor

Re: serviceguard / lockdisk1

SRI-
Seems you know MCSG inside out. If at all possible, can you walk me through the actual procedure for creating a lock disk in a two-node cluster?
Sridhar Bhaskarla
Honored Contributor

Re: serviceguard / lockdisk1

Hi Galen,

First of all, I am not an expert. I only try to share what I know. If you address it in general, then you may get a better and faster response from the other members.

You do not need to create a separate lock VG. The cluster lock disk can be part of any of your data volume groups, but you can follow this procedure even if you want a separate lock VG. You cannot configure a two-node cluster without a cluster lock. I'll confine this discussion to lock disks.

1. Make sure the disks are seen on both nodes intended for all the shared volume groups.

Start with Node2, the failover node.

2. Create the volume groups on node2 using the following procedure. I am using example VG and disk names.

#pvcreate (-f) /dev/rdsk/c0t0d0
#mkdir /dev/vg01
#mknod /dev/vg01/group c 64 0x010000
(the 01 is the minor number and must be unique across volume groups. Do an ll /dev/vg*/group, note down the minor numbers already in use, and use the next free one)
#vgcreate -n vg01 -p 255 -l 255 /dev/dsk/c0t0d0
(customize with other options if you want to)
#vgextend vg01 /dev/dsk/c1t0d0
(c1t0d0 is alternate link to c0t0d0)
#vgextend vg01 /dev/dsk/c0t0d1
#vgextend vg01 /dev/dsk/c1t0d1
(extend the vg01 with all the other pvs as planned)
#lvcreate -n datavol1 -L 1000 vg01
#newfs -F vxfs /dev/vg01/rdatavol1

(repeat above step for all other logical volumes)
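To make the "next free minor number" step from the mknod above concrete, here is a small sketch (an assumption on my part, not a Serviceguard tool) that parses ll /dev/*/group-style output and prints the next free 0xNN0000 minor:

```shell
#!/bin/sh
# Sketch: pick the next free VG minor number from "ll /dev/*/group" output
# fed on stdin. Field 6 of the long listing is the minor, e.g. 0x010000.
next_minor() {
    max=0
    while read -r line; do
        set -- $line
        n=$(printf '%d' "$6" 2>/dev/null) || continue
        n=$((n / 65536))          # keep only the NN byte of 0xNN0000
        [ "$n" -gt "$max" ] && max=$n
    done
    printf '0x%02x0000\n' $((max + 1))
}

# Example with sample listing lines (vg00 and vg01 already exist):
printf '%s\n' \
  'crw-r--r-- 1 root sys 64 0x010000 Jan 1 /dev/vg00/group' \
  'crw-r--r-- 1 root sys 64 0x020000 Jan 1 /dev/vg01/group' | next_minor
```

On a real system you would feed it the actual listing, e.g. ll /dev/*/group | next_minor.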

3. Repeat step 2 for all the other shared volume groups.

4. Preview export the volume groups to generate the map files.

vgexport -p -v -s -m /tmp/vg01.map vg01

(you can also use vgexport with the -f option to generate a disk list file, which is very helpful for preserving the lvmtab order)

Repeat this step for all the shared volume groups.

5. De-activate the shared volume groups on node2

vgchange -a n vg01

Do the following on node1.

6. Copy /tmp/*.map files from node2 into /tmp directory.

7. Import the volume groups

#mkdir /dev/vg01
#mknod /dev/vg01/group c 64 0x010000
(use the same minor number that you used on node2)
#vgimport -v -s -m /tmp/vg01.map vg01
#vgchange -a y vg01

Repeat the above step for all the other volume groups.

8. Let's say we use vg01 as the lock VG with c0t0d0 as the lock disk. Generate the cluster configuration file using cmquerycl. If you have already done this, you can skip this step.

#cmquerycl -C /etc/cmcluster/cmclconfig.ascii -n node1 -n node2

Edit the cmclconfig.ascii file and replace the value of FIRST_CLUSTER_LOCK_VG with /dev/vg01. Under the definition of each node you will see FIRST_CLUSTER_LOCK_PV; replace its value with the device file for c0t0d0. If c0t0d0 is the disk on node2, there is no guarantee it will have the same name on node1; usually only the controller (c#) part changes. Do a strings /etc/lvmtab, note down the device file on each system, and set FIRST_CLUSTER_LOCK_PV under each node to the corresponding device file. Do not use the alternate link here. If you want to use c0t0d0 on node1 and c1t0d0 on node2, then you will have to make c1t0d0 the primary link on node2.
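For illustration, the relevant lines might look something like this (node names, lan interfaces, and device files are the examples from the steps above and will differ on your systems):

```
CLUSTER_NAME                cluster1
FIRST_CLUSTER_LOCK_VG       /dev/vg01

NODE_NAME                   node1
  NETWORK_INTERFACE         lan0
  FIRST_CLUSTER_LOCK_PV     /dev/dsk/c0t0d0

NODE_NAME                   node2
  NETWORK_INTERFACE         lan0
  FIRST_CLUSTER_LOCK_PV     /dev/dsk/c0t0d0
```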

Make other modifications to the config file. Each parameter is detailed in that file to help you.

Once you are sure the configuration file is ready, go ahead and verify and apply it:

#cmcheckconf -C /etc/cmcluster/cmclconfig.ascii

It may report errors if it finds any issues with the network, disks, etc. Fix them.

#cmapplyconf -C /etc/cmcluster/cmclconfig.ascii

This will take care of initializing cluster lock vg and disk along with other components.

Use the following document as the reference.

http://docs.hp.com/hpux/onlinedocs/B3936-90026/B3936-90026.html

Try it and post any issues you encounter here. Someone may assist you.


-Sri
You may be disappointed if you fail, but you are doomed if you don't try
galenr
Advisor

Re: serviceguard / lockdisk1

Sri-
Thanx much.... I obviously did not understand the concept and was attempting to create a lock disk on separate volumes on both nodes. Thanx again for the clarification... and in the immortal words of Mr. Han from Bruce Lee's Enter The Dragon: "your skills are extraordinary"