Service Guard vgexport and lvmtab inconsistent
01-19-2006 11:22 PM
I'm attempting to create a cluster and am at the point of exporting the vg01 volume group to the second node.
I've been having trouble importing vg01 from the active node P2 to the new node P1.
Here is the story...
I just removed the current lvmtab on P1 and ran the following to recreate it, in case there was any problem with it:
[LUSMMXP1-root]# vgscan -v -a
Creating "/etc/lvmtab".
Couldn't stat physical volume "/dev/dsk/c0t0d0":
Invalid argument
Physical Volume "/dev/dsk/c2t0d0" contains no LVM information
/dev/vg00
/dev/dsk/c2t1d0s2
vgscan: has no corresponding valid raw device file under /dev/rdsk.
Verification of unique LVM disk id on each disk in the volume group
/dev/vg01 failed.
Scan of Physical Volumes Complete.
Then I ran the following on the new lvmtab:
[LUSMMXP1-root]# strings lvmtab
/dev/vg00
/dev/dsk/c2t1d0s2
Then I created /dev/vg01, did the mknod for the group file, etc., and ran the vgimport, which gave the following:
[LUSMMXP1-root]# vgimport -v -s -m vg01.map /dev/vg01
Beginning the import process on Volume Group "/dev/vg01".
vgimport: Quorum not present, or some physical volume(s) are missing.
Logical volume "/dev/vg01/lvol1" has been successfully created
with lv number 1.
Volume group "/dev/vg01" has been successfully created.
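For comparison, the usual Serviceguard sequence for sharing a volume group between nodes looks roughly like the sketch below. This is not the poster's exact session; the map file path and the group-file minor number are illustrative.

```shell
# On the source node (P2): preview-export the VG and write a map file.
# -p = preview only (the VG stays intact on P2), -s = record the VGID so
# the target node can locate the PVs by scanning its own disks.
vgexport -p -v -s -m /tmp/vg01.map /dev/vg01

# Copy /tmp/vg01.map to the target node (P1), then on P1:
mkdir /dev/vg01
# The minor number (0x010000 here) is an example; it must be unique
# among the volume groups on the importing node.
mknod /dev/vg01/group c 64 0x010000
vgimport -v -s -m /tmp/vg01.map /dev/vg01
```

With -s, vgimport finds the PVs by VGID rather than by device path, so it only succeeds cleanly if the importing node can actually see all the disks in the group.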
At this point, with the help of the forum, I THINK there is a problem with the PVs.
Can anyone have a look at the output of my ioscan and lvmtab files? There are inconsistencies between the two, and I'm not sure whether this is what is causing the problem with my vgimport.
Thanks in advance.
Here are the ioscan and lvmtab outputs; what are the cluster implications if they are different?
[LUSMMXP2-root]# ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
============================================================================
disk 0 0/0/2/0.0.0.0 sdisk CLAIMED DEVICE TEAC DV-28E-C
/dev/dsk/c0t0d0 /dev/rdsk/c0t0d0
disk 1 0/1/1/0.0.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c2t0d0 /dev/rdsk/c2t0d0
disk 2 0/1/1/0.1.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c2t1d0 /dev/dsk/c2t1d0s2 /dev/rdsk/c2t1d0 /dev/rdsk/c2t1d0s2
/dev/dsk/c2t1d0s1 /dev/dsk/c2t1d0s3 /dev/rdsk/c2t1d0s1 /dev/rdsk/c2t1d0s3
disk 6 0/1/1/1.9.0 sdisk CLAIMED DEVICE HP 73.4GST373307LC
/dev/dsk/c3t9d0 /dev/rdsk/c3t9d0
[LUSMMXP2-root]# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0s2
/dev/vg01
,mA5
/dev/dsk/c3t9d0
/dev/dsk/c4t0d0
[LUSMMXP1-root]# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0s2
/dev/vg01
,mA5
/dev/dsk/c4t9d0
[LUSMMXP1-root]# ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
============================================================================
disk 0 0/0/2/0.0.0.0 sdisk CLAIMED DEVICE TEAC DV-28E-C
/dev/dsk/c0t0d0 /dev/rdsk/c0t0d0
disk 1 0/1/1/0.0.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c2t0d0 /dev/rdsk/c2t0d0
disk 2 0/1/1/0.1.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c2t1d0 /dev/dsk/c2t1d0s2 /dev/rdsk/c2t1d0 /dev/rdsk/c2t1d0s2
/dev/dsk/c2t1d0s1 /dev/dsk/c2t1d0s3 /dev/rdsk/c2t1d0s1 /dev/rdsk/c2t1d0s3
disk 3 0/3/1/0.9.0 sdisk CLAIMED DEVICE HP 73.4GST373307LC
/dev/dsk/c4t9d0 /dev/rdsk/c4t9d0
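One quick way to spot this kind of inconsistency is to diff the PV paths recorded in lvmtab against the disks ioscan actually reports. The snippet below simulates that with sample data copied from P2's outputs above; on a real system you would feed it the output of `strings /etc/lvmtab` and `ioscan -fnC disk` instead of here-documents.

```shell
# PV paths recorded in P2's lvmtab (from "strings /etc/lvmtab" above).
cat > /tmp/lvmtab_pvs <<'EOF'
/dev/dsk/c3t9d0
/dev/dsk/c4t0d0
EOF

# Disk device files P2's ioscan actually reports.
cat > /tmp/ioscan_pvs <<'EOF'
/dev/dsk/c0t0d0
/dev/dsk/c2t0d0
/dev/dsk/c2t1d0
/dev/dsk/c3t9d0
EOF

sort -o /tmp/lvmtab_pvs /tmp/lvmtab_pvs
sort -o /tmp/ioscan_pvs /tmp/ioscan_pvs

# comm -23 prints lines unique to the first file: PVs that lvmtab
# remembers but the bus no longer shows.
comm -23 /tmp/lvmtab_pvs /tmp/ioscan_pvs   # prints /dev/dsk/c4t0d0
```

Here the stale entry is /dev/dsk/c4t0d0, which is exactly the cluster lock disk the syslog later complains about.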
01-20-2006 01:23 AM
Re: Service Guard vgexport and lvmtab inconsistent
Check this out from the source node. Moreover, you did not tell us whether there was a problem when you exported the VG from the source.
What is the content of your map file after the import? Do you have 2 disks?
01-20-2006 01:37 AM
Re: Service Guard vgexport and lvmtab inconsistent
I am not sure how the lvmtab contains /dev/dsk/c4t0d0.
Here is the log of importing the volume group...
[LUSMMXP1-root]# vgimport -v -s -m vg01.map /dev/vg01
Beginning the import process on Volume Group "/dev/vg01".
vgimport: Quorum not present, or some physical volume(s) are missing.
Logical volume "/dev/vg01/lvol1" has been successfully created
with lv number 1.
Volume group "/dev/vg01" has been successfully created.
The content of my map file after the import into node P1...
#more vg01_P1.map
VGID 7ebf2c6d4135f38f
1 lvol1
Here is the original MAP file that I exported from P2
# more vg01.map
VGID 7ebf2c6d4135f38f
1 lvol1
So they are both identical.
Unfortunately I'm not 100% sure of the number of disks, as I didn't set up the hardware originally and I am not near the machine ;-(. I think each node has two internal mirrored disks, and the external disk array has two disks (mirrored). The ioscan doesn't seem to pick up the second external 73.4 GB disk; could this be related to the extra entry in the lvmtab?
Thanks
Steve
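When a disk that should be on the bus is not showing up, the usual HP-UX checks are a rescan, recreating device files, and probing the disk directly. A sketch of those steps; the c3t9d0 path is taken from P2's ioscan output above, and on P1 the path would differ:

```shell
# Rescan the I/O tree for hardware that has appeared or vanished.
ioscan -fnC disk

# Recreate any missing device special files for disks ioscan claimed.
insf -e -C disk

# Probe a specific disk to confirm it answers at all
# (raw device path from P2's ioscan output above).
diskinfo /dev/rdsk/c3t9d0
```

If the second external disk still does not appear after this, the problem is cabling, SCSI addressing, or the array itself, not LVM.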
01-20-2006 01:44 AM
Re: Service Guard vgexport and lvmtab inconsistent
01-20-2006 01:51 AM
Re: Service Guard vgexport and lvmtab inconsistent
If you run ioscan -fnC disk and the server does not see the device, yet the device is recorded in lvmtab, then there is a problem if it is currently mounted.
If it is not in the ioscan list, the system does not see it now. If it is in lvmtab, the system has seen it at some point, but it is missing now. Assuming the lvmtab is genuine and not overwritten from an old backup, I wonder how the volume could be mounted without the quorum override being involved.
Next, check your syslog: when the server was trying to come up, what did it say about the volume group in question?
Is the file system mounted? Is the volume group active?
01-20-2006 02:12 AM
Re: Service Guard vgexport and lvmtab inconsistent
The syslog doesn't show any errors on server startup. Below is some trace from when I attempted to start the cluster on just one node using ...
# cmruncl -n LUSMMXP2
Jan 19 01:16:58 LUSMMXP2 CM-CMD[4229]: cmruncl -n LUSMMXP2
Jan 19 01:17:11 LUSMMXP2 vmunix: intl100(0): Unknown ctl_req 80044432
Jan 19 01:17:11 LUSMMXP2 CM-CMD[4229]: Not probing node LUSMMXP1 as it is currently unreachable.
Jan 19 01:17:11 LUSMMXP2 CM-CMD[4229]:
Jan 19 01:17:13 LUSMMXP2 cmclconfd[4251]: Request from root on node LUSMMXP2 to start the cluster on this node
Jan 19 01:17:13 LUSMMXP2 cmcld: Logging level changed to level 0.
Jan 19 01:17:13 LUSMMXP2 cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 1.
Jan 19 01:17:13 LUSMMXP2 cmcld: Global Cluster Information:
Jan 19 01:17:13 LUSMMXP2 cmcld: Heartbeat Interval is 1.00 seconds.
Jan 19 01:17:13 LUSMMXP2 cmcld: Logging level changed to level 0.
Jan 19 01:17:13 LUSMMXP2 cmcld: Node Timeout is 30.00 seconds.
Jan 19 01:17:13 LUSMMXP2 cmcld: Network Polling Interval is 2.00 seconds.
Jan 19 01:17:13 LUSMMXP2 cmcld: Auto Start Timeout is 600.00 seconds.
Jan 19 01:17:13 LUSMMXP2 cmcld: Failover Optimization is disabled.
Jan 19 01:17:13 LUSMMXP2 cmcld: Information Specific to node LUSMMXP2:
Jan 19 01:17:13 LUSMMXP2 cmcld: Cluster lock disk: /dev/dsk/c4t0d0.
Jan 19 01:17:13 LUSMMXP2 cmcld: lan3 0x000f20677a99 172.24.52.7 bridged net:1
Jan 19 01:17:13 LUSMMXP2 cmcld: lan4 0x000f20677a9a 16.1.1.11 bridged net:2
Jan 19 01:17:13 LUSMMXP2 cmcld: Heartbeat Subnet: 172.24.52.0
Jan 19 01:17:13 LUSMMXP2 cmcld: Heartbeat Subnet: 16.1.1.0
Jan 19 01:17:13 LUSMMXP2 cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 2018.
Jan 19 01:17:13 LUSMMXP2 cmcld: bytes is 64
Jan 19 01:17:13 LUSMMXP2 vmunix:
Jan 19 01:17:13 LUSMMXP2 vmunix: SCSI: Reset detected -- path: 0/3/1/0
Jan 19 01:17:13 LUSMMXP2 vmunix: SCSI: -- lbolt: 79463, bus: 4
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->state: 30008
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->offset: ffffffff
Jan 19 01:17:13 LUSMMXP2 vmunix:
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->nominalOffset: 0
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->Cmdindex: 0
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->last_nexus_index: 0
Jan 19 01:17:13 LUSMMXP2 vmunix: lbp->nexus_index: 0
Jan 19 01:17:13 LUSMMXP2 vmunix: uCmdSent: 0 uNexus_offset: 0
Jan 19 01:17:13 LUSMMXP2 vmunix: last lbp->puStatus [e000000113b3d600]:
Jan 19 01:17:13 LUSMMXP2 vmunix: 00000000 00000000 00000000 00000
.
.
.
.
.
.
Jan 19 01:17:13 LUSMMXP2 vmunix: 74: SCRATCHH: 4014d600
Jan 19 01:17:13 LUSMMXP2 vmunix: 78: SCRATCHI: 00000000
Jan 19 01:17:13 LUSMMXP2 vmunix: 7c: SCRATCHJ: 00000000
Jan 19 01:17:13 LUSMMXP2 vmunix: bc: SCNTL4: 00
Jan 19 01:17:13 LUSMMXP2 vmunix: PCI configuration register dump:
Jan 19 01:17:13 LUSMMXP2 vmunix: Command: 0157
Jan 19 01:17:13 LUSMMXP2 vmunix: Latency Timer: c0
Jan 19 01:17:13 LUSMMXP2 vmunix: Cache Line Size: 20
Jan 19 01:17:13 LUSMMXP2 cmcld: rcomm health: Initializing timeout to 120000000 microseconds
Jan 19 01:17:13 LUSMMXP2 cmcld: Total allocated: 18241896 bytes, used: 659744 bytes, unused 17582144 bytes
Jan 19 01:17:13 LUSMMXP2 cmcld: Starting cluster management protocols.
Jan 19 01:17:13 LUSMMXP2 cmcld: Attempting to form a new cluster
Jan 19 01:17:13 LUSMMXP2 cmcld: Beginning standard election
Jan 19 01:17:14 LUSMMXP2 cmcld: Clearing Cluster Lock
Jan 19 01:17:14 LUSMMXP2 cmcld: Request to clear cluster lock /dev/dsk/c4t0d0 failed: Device busy
Jan 19 01:17:14 LUSMMXP2 cmcld: Turning off safety time protection since the cluster
Jan 19 01:17:14 LUSMMXP2 cmcld: now consists of a single node. If Serviceguard
Jan 19 01:17:14 LUSMMXP2 cmlvmd[4257]: Clvmd initialized successfully.
Jan 19 01:17:14 LUSMMXP2 cmcld: fails, this node will not automatically halt
Jan 19 01:17:14 LUSMMXP2 cmcld: 1 nodes have formed a new cluster, sequence #1
Jan 19 01:17:14 LUSMMXP2 cmcld: The new active cluster membership is: LUSMMXP2(id=2)
Jan 19 01:17:14 LUSMMXP2 cmcld: One of the nodes is down.
Jan 19 01:17:14 LUSMMXP2 cmcld: Request from node LUSMMXP2 to start package pkg1 on node LUSMMXP2.
Jan 19 01:17:14 LUSMMXP2 cmcld: Executing '/etc/cmcluster/package1/pkg1.sh start' for package pkg1, as service PKG*10497.
Jan 19 01:17:18 LUSMMXP2 cmcld: WARNING: Cluster lock disk /dev/dsk/c4t0d0 has failed: I/O error
Jan 19 01:17:18 LUSMMXP2 cmcld: Until it is fixed, a single failure could
Jan 19 01:17:18 LUSMMXP2 cmcld: cause all nodes in the cluster to crash
Jan 19 01:17:19 LUSMMXP2 cmclconfd[4254]: Updated file /var/adm/cmcluster/frdump.cmcld.6 for node LUSMMXP2 (length = 49203).
Jan 19 01:17:26 LUSMMXP2 LVM[4266]: vgchange -a e vg01
Jan 19 01:17:27 LUSMMXP2 CM-pkg1[4309]: cmmodnet -a -i 172.24.52.4 172.24.52.0
Jan 19 01:17:27 LUSMMXP2 CM-pkg1[4314]: cmrunserv -r 2 package_monitor.sh >> /etc/cmcluster/package1/pkg1.sh.log 2>&1 /tmp/agent/pro
cess_manager_hpux.py -f /tmp/agent/PROCESSES_VIA_SCRIPTS_INFO_hpux.xml
Jan 19 01:17:27 LUSMMXP2 cmcld: Service PKG*10497 terminated due to an exit(0).
Jan 19 01:17:27 LUSMMXP2 cmcld: Started package pkg1 on node LUSMMXP2.
01-20-2006 02:22 AM
Re: Service Guard vgexport and lvmtab inconsistent
[LUSMMXP2-root]# bdf /shared
Filesystem kbytes used avail %used Mounted on
/dev/vg01/lvol1 10240000 3642022 6186607 37% /shared
There is a warning when I run vgdisplay ...
[LUSMMXP2-root]# vgdisplay vg01
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c4t0d0":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available, exclusive
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 2
Act PV 1
Max PE per PV 17502
VGDA 2
PE Size (Mbytes) 4
Total PE 17499
Alloc PE 2500
Free PE 14999
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
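A quick sanity check on the vgdisplay figures above: 17499 extents at 4 MB each is roughly one 73.4 GB disk, which matches "Act PV 1" despite "Cur PV 2"; only one of the two recorded PVs is actually attached.

```shell
# Capacity implied by the vgdisplay output (integer GB).
pe_size_mb=4
total_pe=17499
echo "$(( total_pe * pe_size_mb / 1024 )) GB"   # prints "68 GB"
```

So the ~68 GB of usable space comes from the single c3t9d0 disk; the second PV (the missing c4t0d0) contributes nothing while it is absent.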
01-20-2006 02:25 AM
Re: Service Guard vgexport and lvmtab inconsistent
Is this an EMC or BOD disk?
The disk is not being seen; find a solution to that first.
01-20-2006 02:31 AM
Re: Service Guard vgexport and lvmtab inconsistent
Run ioscan -fnC disk
on the server that has it mounted; if the disk is displayed there, then we can progress. Also get us the output of
mount -v, so that we can know when this was mounted, and "uptime", so we know when the server was last rebooted.