problem of device file after vgextend in cluster

Frequent Advisor


I have a cluster of two rp5450 nodes managed by MC/ServiceGuard.
My configuration is active/standby.

Since I didn't have enough space in the volume group, I took one disk from another disk array and then did the following:

1) Added the disk. I created LUN 1 with the control panel of the disk array.

2) Scanned for the new disk:

# ioscan -fnC disk

The new disk path is /dev/dsk/cXtYdZ.

3) Created the physical volume:

# pvcreate -f /dev/dsk/cXtYdZ

4) Extended the volume group:

# vgextend /dev/vgXX /dev/dsk/cXtYdZ

Then, to extend the file system:

1) Unmounted the mount point:

# umount /<>

2) Extended the logical volume and then the file system (lvextend takes the logical volume, and extendfs its raw device):

# lvextend -L <> /dev/vgXX/<lvol>
# extendfs /dev/vgXX/r<lvol>

3) Mounted the file system:

# mount /<>

Until the reboot everything worked correctly.
When I tried to restart the cluster, I got the following errors from SAM (Cluster Configuration, View Cluster Requirements):

"disk at /dev/dsk/c0t1d0 on node dbmsfin2 does not have ID or disk label"

"vgora is configured differently on node dbmsfin1 than on node dbmsfin2"

"/dev/vgora on node dbmsfin2 does not appear to have a physical volume corresponding to /dev/dsk/c6t0d1 on node dbmsfin1"

I noticed that ioscan does not return the same device file for the same hardware path on the two nodes.

How can I get the same configuration on the two nodes?
Should I perform the same operation on both nodes?
See the attached ioscan reports from the two nodes.

For example, matching the same hardware address to its device file:

Disk 3 on dbmsfin1: disk 3 0/4/0/0.1.0 sdisk CLAIMED DEVICE HP C5447A
                    /dev/dsk/c5t1d0 /dev/rdsk/c5t1d0
Disk 3 on dbmsfin2: disk 3 0/4/0/0.1.0 sdisk CLAIMED DEVICE HP C5447A
                    /dev/dsk/c4t1d0 /dev/rdsk/c4t1d0

And the difference continues for every disk of the disk array.
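One way to spot such mismatches systematically is to capture the disk list on each node and diff the two captures. A minimal sketch of the comparison step — the two files below are faked two-line samples standing in for real `ioscan -fnC disk` output that you would save on each node and copy to one place:

```shell
# Hypothetical captures: hardware path + block device file per disk, one file per node.
cat > /tmp/disks.node1 <<'EOF'
0/4/0/0.1.0  /dev/dsk/c5t1d0
EOF
cat > /tmp/disks.node2 <<'EOF'
0/4/0/0.1.0  /dev/dsk/c4t1d0
EOF

# diff exits non-zero when the two views disagree, flagging the mismatch.
if diff /tmp/disks.node1 /tmp/disks.node2 >/dev/null; then
  echo "device files match"
else
  echo "device files differ"
fi
```

With the sample data above this prints "device files differ", which is exactly the situation described in the post.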

Thank you for your help.
Bharat Katkar
Honored Contributor

Re: problem of device file after vgextend in cluster

To make them identical, I would work it this way:

On dbmsfin1:
# vgexport -p -s -v -m mapfilename vgname

On dbmsfin2:

(ftp the mapfile from node 1 to node 2, and note the minor number of the /dev/vgxx/group file on node 1)
# vgexport vgname
# mkdir /dev/vgname
# mknod /dev/vgname/group c 64 0xNN0000   (use the minor number noted on node 1)
# vgimport -s -v -m mapfilename vgname

This will give you an identical special-file structure for that VG on both nodes.

Hope that helps.

p.s. Be careful while using vgexport command without -p option. See man vgexport for more details.
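The minor number mentioned above can be read from the sixth field of `ll /dev/vgname/group` output. A minimal sketch of that parsing step, using a hypothetical sample line rather than a live device file:

```shell
# Hypothetical `ll /dev/vg01/group` output line: major number is field 5 (64),
# minor number is field 6 (0x010000).
line='crw-r--r--   1 root  sys   64 0x010000 Jan  1 12:00 group'
minor=$(echo "$line" | awk '{print $6}')
echo "$minor"   # -> 0x010000
```

That value is what you would pass as the last argument of the `mknod /dev/vgname/group c 64 ...` command on the second node.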
You need to know a lot to actually know how little you know
Ashwani Kashyap
Honored Contributor

Re: problem of device file after vgextend in cluster

OK, the error you are getting is because you have a cluster-aware volume group that has X disks on one node but one disk fewer on the second node.
You have to make sure that you have the same number of disks in the VGs on both nodes.

When you add a disk/LUN, make sure that it is seen from both nodes.
Make a note of the hardware path and the device file of the new disk/LUN on both servers; they might differ.

Once you have that, follow Bharat's procedure above, and you should be all right.
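As a sanity check for the "same number of disks" point: on HP-UX, `vgdisplay -v` prints one "PV Name" line per physical volume, so counting those lines on each node shows whether the VG sees the same number of disks. A sketch of the counting step — the two outputs below are faked sample captures, not real `vgdisplay -v /dev/vgora` output:

```shell
# Hypothetical vgdisplay -v excerpts, one capture per node.
out_node1='PV Name    /dev/dsk/c5t1d0
PV Name    /dev/dsk/c6t0d1'
out_node2='PV Name    /dev/dsk/c4t1d0'

# Count PV Name lines on each node; a mismatch means the VG is
# configured differently on the two nodes.
n1=$(echo "$out_node1" | grep -c "PV Name")
n2=$(echo "$out_node2" | grep -c "PV Name")
echo "node1=$n1 node2=$n2"   # -> node1=2 node2=1
```

Here the counts disagree (2 vs 1), which matches the "configured differently" error SAM reported.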