Operating System - HP-UX

SOLVED
Deniz Cendere
Frequent Advisor

Problem at adding new disk to VG in cluster env.

Hi all,

I have added a new PV to vg01 on node1 in a high availability cluster.
I extended the logical volume and the file system.
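(Roughly, the node1 side looked like this -- the logical volume name, mount point, and sizes below are only examples, not my exact values:)

pvcreate /dev/rdsk/c12t2d3                              # new EMC LUN, primary path
vgextend vg01 /dev/dsk/c12t2d3 /dev/dsk/c13t2d3         # add both paths (PV links)
lvextend -L 4096 /dev/vg01/lvol1                        # grow the LV to 4096 MB (example size)
fsadm -F vxfs -b 4194304 /data                          # OnlineJFS: new size in 1 KB sectors (4096 MB x 1024)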
Then I exported the VG information on node1.
vgexport -pvs -m /tmp/map.file vg01

I copied the map file to node2 (as /tmp/vg01.map) and imported it there:
vgexport vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgimport -v -s -m /tmp/vg01.map vg01

I am using EMC storage.

When I do "ioscan -fnC disk" on node1 and on node2, I see the disks with different adresses.
The controller numbers are different.
node1:
/dev/dsk/c12t2d3
/dev/dsk/c13t2d3 (alternate link)
node2:
/dev/dsk/c10t2d3
/dev/dsk/c8t2d3 (alternate link)


My first problem:
1- After the vgimport, when I look at /etc/lvmtab on NODE2 (the failover node), I can't see the new disks.
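(I'm checking with something like this, since lvmtab is a binary file:)

strings /etc/lvmtab | more          # lists the VG names and the PV paths under each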

My second problem:

2- Although I see the new disks in /etc/lvmtab on NODE1 (the primary node), the output of "vgscan -pav" says that the new disks are not part of a volume group and contain no LVM information.

My third problem:
3- On NODE1 and NODE2, when I look at the VGIDs of the disks in volume group vg01, I see the VGID of all the old disks as 418F2EDE, but the VGID of the new disk as 0; see below:

echo 2000?8c+8x | adb /dev/dsk/c12t3d1    (old disk)
2000:  LVMREC01  0xD7A2 9760 421F 398C  0xD7A2 9760 418F2EDE

echo 2000?8c+8x | adb /dev/dsk/c12t2d3    (new disk)
2000:  LVMREC01  0xD7A2 975F 4296 0xED3A  0 0 0 0

The output for the VGIDs is the same on NODE1 and NODE2.

Are the three problems related to each other?

Thanks

Deniz
8 REPLIES
Devender Khatana
Honored Contributor

Re: Problem at adding new disk to VG in cluster env.

Hi,

It seems you have not presented the new LUN properly to your system. There is a difference in the controllers of your target disk, which means you are possibly accessing this LUN through different controllers on the two nodes, or you have not made the same settings on both nodes for accessing it.

Possibly this LUN is different on the two nodes, causing the VGID to differ. You can check this by creating a new, separate VG (for testing) on this disk on one node and then trying to access the data by importing it on the other node.

How many FC controllers does each of your systems have?

Have you created a new zone to get this new LUN presented? Was this LUN already there on the system earlier, or is it newly created?
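A quick way to compare what the two nodes actually see of this LUN would be something like this (device files taken from your ioscan output; syminq only if EMC Solutions Enabler is installed):

node1 # ioscan -fnC fc                    # list the FC HBAs and their hardware paths
node1 # diskinfo /dev/rdsk/c12t2d3        # vendor, product and size of the LUN as node1 sees it
node2 # diskinfo /dev/rdsk/c10t2d3        # should report the same product and size if it is the same LUN
node2 # syminq /dev/rdsk/c10t2d3          # the EMC serial number should match what node1 reports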

HTH,
Devender
Impossible itself mentions "I m possible"
Deniz Cendere
Frequent Advisor

Re: Problem at adding new disk to VG in cluster env.

Hi,

I have 2 FC controllers on each node.
I already have 4 disks in this VG and am adding a new one now.

There is no problem when I check with the "ioscan -fnC disk" and "pvdisplay" commands on both nodes.

I think I was misunderstood.

I see the VGID of the new disk as 0 on both nodes. That is, the new disk has the same VGID on both nodes, but that VGID differs from the other disks in the same VG.
vgexport -s puts the VGID in the mapfile, and when I do vgimport -s on the second node I can't see the new disk in that node's /etc/lvmtab.
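(If it helps, the VGID that vgexport -s records can be seen at the top of the mapfile -- I check it with something like:)

cat /tmp/vg01.map          # the first line of a mapfile made with -s should show the VGID (the 418F2EDE one)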

There is no problem on the primary node when I look at the vgdisplay output and /etc/lvmtab. But as I said, when I look at the VGID of the new disk I see 0. And when I run vgscan -pav, it says that the new disk doesn't belong to a volume group, although I do see it in the vgdisplay output.
Bharat Katkar
Honored Contributor

Re: Problem at adding new disk to VG in cluster env.

Hi Deniz,
Try using the vgimport command like this:

vgimport -v -s -m /tmp/vg01.map vg01 PV1 PV2 PV3 ....

If you know which PVs belong to vg01.
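For example, with the device files from your node2 ioscan output it would look roughly like this (if the PV paths are listed explicitly, the mapfile can be a plain one made without -s; the stale vg01 entry on node2 is exported first, and all five disks' paths would go on the command line -- only the new disk's two paths are shown):

node1 # vgexport -pv -m /tmp/vg01.map vg01
node2 # vgexport vg01                          # drop the stale entry from /etc/lvmtab
node2 # mkdir /dev/vg01
node2 # mknod /dev/vg01/group c 64 0x010000
node2 # vgimport -v -m /tmp/vg01.map vg01 /dev/dsk/c10t2d3 /dev/dsk/c8t2d3 ...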

Hope that works.
Regards,
You need to know a lot to actually know how little you know
Deniz Cendere
Frequent Advisor

Re: Problem at adding new disk to VG in cluster env.

Hi Bharat,
thanks for your reply,
I thought about trying this; maybe it will solve the problem on the second node by updating /etc/lvmtab,
but I am wondering about the VGID.
I'm afraid there could be a problem with it.
The output of "vgscan -pav" says that the new disk doesn't belong to vg01. So if vgscan runs at some point in the future, it will update /etc/lvmtab and delete the new disk from it. I read that for EMC disks, vgscan queries the VGID to find out which disks belong to the VG.

Do you think that doing the vgimport (as you suggested) on the second node will put the VGID of vg01 into the VGRA of the physical device?
Bharat Katkar
Honored Contributor

Re: Problem at adding new disk to VG in cluster env.

Hi Deniz,
I suggested that because you have different device names for the same LUN on the two nodes, and I believe it should solve your problem.
And yes, the VGID after this should remain the same, because you are explicitly specifying the PV names.
Regards,
You need to know a lot to actually know how little you know
Bob_Vance
Esteemed Contributor
Solution

Re: Problem at adding new disk to VG in cluster env.

Of course the VGIDs will be the same when viewed on the 2 nodes, because we are looking at the same disk. But the VGID should also match the old disks.

If you did everything correctly and got no errors, maybe it points to a bug in vgexport/vgimport -s.
For one thing, if the cXtY device names match between the 2 nodes for the original PVs, then they should also match for the new PV.

Notice that the CPUID part of the new disk's PVID (the first 4 bytes are the CPUID; the next 4 bytes are the creation date) is not even right: D7A2 975F instead of the D7A2 9760 seen on the old disks.
Where did you do the 'pvcreate'?
Are you sure that this LUN is not being accessed by another system?

You can get the hex value of the CPUID that should be in the LVM header by doing this command:

... # (echo obase=16 ; uname -i) | bc

If you do this on all your systems, you'll eventually find "D7A2975F" which will tell you where it was created.
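In other words, run it on each box and compare, e.g.:

node1 # (echo obase=16 ; uname -i) | bc      # compare with the D7A2 9760 seen on the old PVs
node2 # (echo obase=16 ; uname -i) | bc      # if neither gives D7A2975F, some other host did the pvcreate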


If the new space in the LVOL/FS has not been used yet, I would consider starting all over:

node2 # vgexport vg01

You may get an error. You could remove /etc/lvmtab, /dev/vg01, and then vgscan.

node1 # fsadm ... ## reduce your FS size back down
node1 # lvreduce -l ... ## reduce your LVOL size
node1 # vgreduce vg01 /dev/dsk/ /dev/dsk/ # remove new PV from vg01 (both paths)
node1 # ioscan
node1 # insf -e
node1 # ioscan -fnC disk
....... determine device names of new LUN
node1 # pvcreate /dev/rdsk/ # just one of the names
node1 # vgextend vg01 /dev/dsk/ /dev/dsk/ # both paths

At this point, verify that vg01 on node1 is correct.

Then continue with node2.

node1 # vgexport -pvs -m /tmp/vg01.map vg01
node1 # scp /tmp/vg01.map node2:/tmp/vg01.map

node2 # mkdir /dev/vg01
node2 # mknod /dev/vg01/group c 64 0x010000
node2 # vgimport -v -s -m /tmp/vg01.map vg01
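A quick sanity check on node2 afterwards would be something like:

node2 # strings /etc/lvmtab                       # both new paths (c10t2d3 / c8t2d3) should now appear under /dev/vg01
node2 # echo 2000?8c+8x | adb /dev/dsk/c10t2d3    # the VGID field should now end in 418F2EDE like the old disks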


hth
bv
"The lyf so short, the craft so long to lerne." - Chaucer
Deniz Cendere
Frequent Advisor

Re: Problem at adding new disk to VG in cluster env.

Hi all,

I thought something was wrong with the LVM record of the disk. Maybe I corrupted that area by doing something wrong on the secondary node. I did a vgcfgrestore for this volume group on node1.
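(Roughly what I ran on node1 -- the VG has to be deactivated for the restore, so the package was halted first; in a Serviceguard setup the activation may be exclusive, i.e. vgchange -a e:)

node1 # vgchange -a n vg01
node1 # vgcfgrestore -n /dev/vg01 /dev/rdsk/c12t2d3    # restore the LVM headers on the new disk from /etc/lvmconf/vg01.conf
node1 # vgchange -a y vg01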
Then I tried to do vgexport/vgimport.
This time it was SUCCESSFUL. I saw the new disk in /etc/lvmtab, and the output of vgscan -pav was right on both nodes.

THANKS TO ALL,

Deniz
Deniz Cendere
Frequent Advisor

Re: Problem at adding new disk to VG in cluster env.

vgcfgrestore solved my problem