Problem at adding new disk to VG in cluster env.
05-28-2005 10:34 PM
I have added a new PV to vg01 on node1 in a high availability cluster.
I extended the logical volume and the file system.
Then I exported the VG information on node1.
vgexport -pvs -m /tmp/map.file vg01
I imported the map.file on node2.
vgexport vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgimport -v -s -m /tmp/vg01.map vg01
I am using EMC storage.
When I do "ioscan -fnC disk" on node1 and on node2, I see the disks with different addresses.
The controller numbers are different.
node1:
/dev/dsk/c12t2d3
/dev/dsk/c13t2d3 (alternate link)
node2:
/dev/dsk/c10t2d3
/dev/dsk/c8t2d3 (alternate link)
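One way to confirm that the two device files on each node really refer to the same LUN (just a suggestion; the device names are the ones listed above) is to compare the hardware paths and the size/product information reported for them, for example:
node1 # ioscan -fnC disk | grep -e c12t2d3 -e c13t2d3   ## compare the hardware paths of the two links
node1 # diskinfo /dev/rdsk/c12t2d3                      ## vendor, product id and size
node2 # ioscan -fnC disk | grep -e c10t2d3 -e c8t2d3
node2 # diskinfo /dev/rdsk/c10t2d3                      ## should report the same size/product as on node1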
My first problem is:
1- After the vgimport, when I look at /etc/lvmtab on NODE2 (the failover node), I can't see the new disks.
My second problem is:
2- Although I see the new disks in /etc/lvmtab on NODE1 (the primary node), when I look at the output of "vgscan -pav" it says that the new disks are not part of a volume group and contain no LVM information.
My third problem is:
3- On NODE1 and NODE2, when I look at the VGIDs of the disks in my volume group vg01, I see the VGID of all the old disks as 418F2EDE, but the VGID of the new disk as 0; see below:
echo 2000?8c+8x|adb /dev/dsk/c12t3d1 (old disk)
2000: LVMREC010xD7A2 9760 421F 398C 0xD7A2 9760 418F2EDE
echo 2000?8c+8x|adb /dev/dsk/c12t2d3 (new disk)
2000: LVMREC010xD7A2 975F 4296 0xED3A 0 0 0 0
The output on NODE1 and NODE2 is the same for the VGIDs.
Are the three problems related to each other?
Thanks
Deniz
Solved!
05-28-2005 11:16 PM
Re: Problem at adding new disk to VG in cluster env.
It seems you have not presented the new LUN properly to your system. There is a difference in the controllers of your target disk, which means you are possibly accessing this LUN through different controllers on the two nodes, or you have not made the same settings on both nodes for accessing this LUN.
Possibly this LUN is different on the two nodes, causing the VGID to differ. You can check this by creating a new separate VG (for testing) on this disk on one node and then trying to access the data by importing it on the other node; a rough sketch follows below.
How many FC controllers does your system have?
Have you created a new zone for getting this new LUN accessed? Was this LUN already there in the system earlier, or is it newly created?
HTH,
Devender
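A rough sketch of the test Devender describes, assuming the new disk has first been taken back out of vg01 (pvcreate destroys the LVM header, so only do this while the new space carries no data) and assuming minor number 0x020000 is free on both nodes; the device names are taken from the first post:
node1 # pvcreate -f /dev/rdsk/c12t2d3              ## CAUTION: wipes the LVM header on the disk
node1 # mkdir /dev/vgtest
node1 # mknod /dev/vgtest/group c 64 0x020000
node1 # vgcreate vgtest /dev/dsk/c12t2d3
node1 # vgexport -p -s -m /tmp/vgtest.map vgtest   ## preview export; -s records the VGID in the map file
node1 # scp /tmp/vgtest.map node2:/tmp/vgtest.map
node2 # mkdir /dev/vgtest
node2 # mknod /dev/vgtest/group c 64 0x020000
node2 # vgimport -v -s -m /tmp/vgtest.map vgtest   ## should find the LUN as c10t2d3/c8t2d3 by its VGID
If node2 finds the test VG by its VGID, the LUN is presented consistently to both nodes.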
05-29-2005 03:18 AM
Re: Problem at adding new disk to VG in cluster env.
I have 2 FC controllers on each node.
I already have 4 disks in this VG and am adding a new one now.
There is no problem when I look with the "ioscan -fnC disk" and "pvdisplay" commands on both of the nodes.
I think I was misunderstood.
I see the VGID of the new disk as 0 on both of the nodes. I mean, I see the same VGID for the new disk on both nodes, but the VGID of the new disk differs from that of the other disks in the same VG.
vgexport -s puts the VGID in the map file, and when I do vgimport -s on the second node I can't see the new disk in the /etc/lvmtab file of the second node.
There is no problem on the primary node when I look at the vgdisplay output and /etc/lvmtab. But, as I said, when I look at the VGID of the new disk I see 0. And when I do vgscan -pav, it says that the new disk doesn't belong to a volume group, although I see it in the output of the vgdisplay command.
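As a side note, the VGID that vgexport -s records can be inspected directly in the map file and compared with what node2 currently knows, e.g.:
node1 # cat /tmp/map.file        ## with -s the VGID is written into the map file along with the LV names
node2 # strings /etc/lvmtab      ## shows which PV paths node2 currently holds for vg01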
05-29-2005 04:38 AM
Re: Problem at adding new disk to VG in cluster env.
Try using the vgimport command as:
vgimport -v -s -m /tmp/vg01.map vg01 PV1 PV2 PV3 ....
That is, if you know which PVs belong to vg01 (an example follows below).
Hope that works.
Regards,
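For example, using the node2 device files from the first post for the new disk; the node2 paths of the four existing disks are not given in the thread, so /dev/dsk/c10t0d1 etc. below are only placeholders:
node2 # vgexport vg01            ## drop the incomplete entry created by the earlier import
node2 # mkdir /dev/vg01
node2 # mknod /dev/vg01/group c 64 0x010000
node2 # vgimport -v -s -m /tmp/vg01.map vg01 \
            /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3 /dev/dsk/c10t0d4 \
            /dev/dsk/c10t2d3 /dev/dsk/c8t2d3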
05-29-2005 05:13 AM
Re: Problem at adding new disk to VG in cluster env.
Thanks for your reply.
I thought of trying this; maybe it solves the problem on the second node by updating /etc/lvmtab,
but I am wondering about the VGID.
I'm afraid there could be a problem with it.
The output of "vgscan -pav" says that the new disk doesn't belong to vg01. So if vgscan runs at some time in the future, it will update /etc/lvmtab and delete the new disk from it. I read that for EMC disks vgscan queries the VGID to find out which disks belong to the VG.
Do you think that doing the vgimport (as you said) on the second node puts the VGID of vg01 into the VGRA of the physical device?
05-29-2005 07:31 AM
Re: Problem at adding new disk to VG in cluster env.
I suggested that because you have different device names for the same LUN on the two nodes. And I believe it should solve your problem.
And yes, the VGID after this should remain the same, because you are explicitly specifying the PV names.
Regards,
05-30-2005 02:53 AM
Solution
If you did everything correctly and got no errors, maybe it points to a bug in the vgexport/vgimport -s.
For one thing, if the cXtY device names match between the 2 nodes for the original PVs, then they should match for the new PV as well.
Notice that the CPUID of the new PVID (the first 4 bytes are the CPUID; the next 4 bytes are the creation date) is not even correct: D7A2 975F.
Where did you do the 'pvcreate'?
Are you sure that this LUN is not being accessed by another system?
You can get the hex value of the CPUID that should be in the LVM header by doing this command:
... # (echo obase=16 ; uname -i) | bc
If you do this on all your systems, you'll eventually find "D7A2975F", which will tell you where it was created.
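For instance, run it on each node and compare the result with the CPUID seen in the new PV's header:
node1 # (echo obase=16 ; uname -i) | bc    ## compare with D7A2975F from the adb output above
node2 # (echo obase=16 ; uname -i) | bc    ## whichever node prints D7A2975F is where the pvcreate ran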
If the new space in the LVOL/FS has not been used yet, I would consider starting all over:
node2 # vgexport vg01
You may get an error; in that case you could remove /etc/lvmtab and /dev/vg01, and then run vgscan.
node1 # fsadm ... ## reduce your FS size back down
node1 # lvreduce -l ... ## reduce your LVOL size
node1 # vgreduce vg01 /dev/dsk/
node1 # ioscan
node1 # insf -e
node1 # ioscan -fnCdisk
....... determine device names of new LUN
node1 # pvcreate /dev/rdsk/
node1 # vgextend vg01 /dev/dsk/
At this point, verify that vg01 on node1 is correct.
Then continue with node2.
node1 # vgexport -pvs -m /tmp/map.file vg01
node1 # scp /tmp/map.file node2:/tmp/map.file
node2 # mkdir /dev/vg01
node2 # mknod /dev/vg01/group c 64 0x010000
node2 # vgimport -v -s -m /tmp/map.file vg01
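A couple of checks afterwards, just as a suggestion:
node2 # strings /etc/lvmtab                 ## the new PV paths should now appear under vg01
node1 # vgdisplay -v vg01 | grep "PV Name"  ## confirm the new PV is listed
node1 # vgcfgbackup vg01                    ## refresh the LVM configuration backup on the primary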
hth
bv
05-31-2005 05:56 AM
Re: Problem at adding new disk to VG in cluster env.
I thought something was wrong with the LVMREC of the disk. Maybe I corrupted that area by doing something wrong on the secondary node. I did a vgcfgrestore for this volume group on node1.
Then I tried the vgexport/vgimport again.
This time it was SUCCESSFUL. I saw the new disk in /etc/lvmtab, and the output of vgscan -pav was right on both of the nodes.
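For reference, the restore of the LVM configuration onto the new PV would look roughly like this (device name from the first post; vg01 needs to be deactivated, or the package halted, while restoring):
node1 # vgchange -a n vg01                        ## deactivate vg01 first
node1 # vgcfgrestore -n vg01 /dev/rdsk/c12t2d3    ## rewrite the LVM configuration data onto the PV
node1 # vgchange -a y vg01                        ## reactivate (vgchange -a e if under Serviceguard control)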
THANKS TO ALL,
Deniz