Operating System - HP-UX
06-15-2001 12:31 AM
cluster lock disk failed
Hi,
How do I replace my failed cluster lock disk when it also contains data?
3 REPLIES
06-15-2001 01:38 AM
Re: cluster lock disk failed
Shut down your system and replace the failing disk.
Boot into single user mode and override the quorum:
ISL> hpux -is -lq /stand/vmunix
Then restore the LVM configuration to the new disk, activate the volume group, and resynchronize the mirrors:
# vgcfgrestore -n /dev/vgxx /dev/rdsk/cXtYdZ
# lvlnboot -Rv
# vgchange -a y /dev/vgxx
# vgsync /dev/vgxx
# shutdown -ry 0
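After the node comes back up and rejoins the cluster, a quick check along these lines should confirm the repair (a rough sketch, using the same /dev/vgxx placeholder as above):
# cmviewcl -v
# vgdisplay -v /dev/vgxx
cmviewcl should show the node running in the cluster again, and vgdisplay should report all physical volumes available and the logical volumes in available/syncd state.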
06-15-2001 01:45 AM
Re: cluster lock disk failed
Hi,
First of all, you may have to get the disk replaced (keeping the same SCSI ID avoids any confusion).
When the cluster lock disk is replaced, or the cluster lock ID is missing on a disk, the cluster lock structure must be recreated before MC/ServiceGuard can use it.
A simple vgcfgrestore command of the form below will suffice, provided a vgcfgbackup was performed after the cluster lock disk was configured into the cluster:
# vgcfgrestore -n vglock /dev/rdsk/cXtYdZ
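Once the restore has succeeded, it is also worth taking a fresh configuration backup so that a later disk swap does not run into the same situation (a suggestion, using the same vglock name as above):
# vgcfgbackup /dev/vglock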
If vgcfgbackup was not performed after the cmapplyconf, another cmapplyconf must be performed with the cluster down (cluster halted) to re-establish the disk listed in the cluster ASCII file as the cluster lock disk.
The command should be:
# cmapplyconf -C cluster_ASCII -P pkg1/pkg1.conf
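A rough sketch of the whole sequence, reusing the file names from the command above (cmcheckconf is optional, but it catches mistakes before anything is distributed):
# cmhaltcl -f -v
# cmcheckconf -C cluster_ASCII -P pkg1/pkg1.conf
# cmapplyconf -C cluster_ASCII -P pkg1/pkg1.conf
# cmruncl -v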
The ServiceGuard cluster checks for the lock disk once every hour; once a check succeeds, the warnings should stop.
Cheers !!!
Mathew
08-03-2001 06:28 AM
Re: cluster lock disk failed
Hi,
Let's see if this helps.
I'm hoping you are using software mirroring with MirrorDisk/UX and that the disk drive is in a hot-plug mechanism.
Use this procedure:
Identify the physical volume name of the failed disk and the name of the volume group in which it was configured. In the following example, the volume group name is /dev/vg_samp and the physical volume name is /dev/dsk/disk_samp. Substitute the volume group and physical volume names that are correct for your system.
Identify the names of any logical volumes that have extents defined on the failed physical volume.
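One way to find them, assuming the /dev/vg_samp example names used here, is something like:
# vgdisplay -v /dev/vg_samp
# lvdisplay -v /dev/vg_samp/lvolname
vgdisplay -v lists every logical volume and physical volume in the group, and lvdisplay -v shows which physical volumes a given logical volume has extents on.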
On the node on which the volume group is currently activated, use the following command for each logical volume that has extents on the failed physical volume:
# lvreduce -m 0 /dev/vg_samp/lvolname /dev/dsk/disk_samp
At this point, remove the failed disk and insert a new one. The new disk will have the same HP-UX device name as the old one.
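Before going any further, it is worth confirming that the system actually sees the replacement drive; something along these lines should do (device names are just the examples from this procedure):
# ioscan -fnC disk
# diskinfo /dev/rdsk/disk_samp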
On the node from which you issued the lvreduce command, issue the following command to restore the volume group configuration data to the newly inserted disk:
# vgcfgrestore -n /dev/vg_samp /dev/rdsk/disk_samp
Issue the following command to extend the logical volume to the newly inserted disk:
# lvextend -m 1 /dev/vg_samp/lvolname /dev/dsk/disk_samp
Finally, use the lvsync command for each logical volume that has extents on the failed physical volume. This synchronizes the extents of the new disk with the extents of the other mirror.
# lvsync /dev/vg_samp/lvolname
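To confirm that the resynchronization has finished, lvdisplay can be checked for stale extents, and a fresh vgcfgbackup afterwards keeps the configuration backup current (again just a sketch with the example names):
# lvdisplay -v /dev/vg_samp/lvolname | grep -i stale
# vgcfgbackup /dev/vg_samp
If the grep returns nothing, all mirror copies are current.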