Failed boot drive cannot remove from LVM
08-29-2011 10:58 AM - edited 08-29-2011 01:05 PM
How can I either fix or remove a failed boot hard drive from LVM so I can get an Ignite backup?
I can query the drive with diskinfo, unlike another server that has the same problem, where diskinfo fails entirely.
Since I can still query the drive on this server, I figure it can be worked with somehow. Basically, I want to either recover the drive or remove it from LVM so I can at least run make_net_recovery of the OS and have HP replace the drive later.
I tried moving /etc/lvmtab to /etc/lvmtab.old and running vgscan -a -v, but vg00 fails because of the bad disk and does not get put back into /etc/lvmtab.
This server is an HP-UX 11.11 9000/800/rp7420.
These logical volumes are stale and use the failed primary boot drive /dev/dsk/c0t8d0:
lvol8 /var
lvol9 /var/adm/crash
lvnsr /nsr
# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c0t10d0
/dev/dsk/c0t8d0
/dev/dsk/c1t10d0
/dev/dsk/c1t8d0
disk 0 1/0/8/1/0/4/0.8.0 sdisk CLAIMED DEVICE HP 36.4GST336754LC
/dev/dsk/c0t8d0 /dev/rdsk/c0t8d0
# lvdisplay -v /dev/vg00/lvol8
lvdisplay: Warning: couldn't query physical volume "/dev/dsk/c0t8d0":
The specified path does not correspond to physical volume attached to
this volume group
lvdisplay: Warning: couldn't query all of the physical volumes.
--- Logical volumes ---
LV Name /dev/vg00/lvol8
VG Name /dev/vg00
LV Permission read/write
LV Status available/stale
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 4096
Current LE 256
Allocated PE 512
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation PVG-strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c0t10d0 25 25
/dev/dsk/c1t10d0 25 25
/dev/dsk/c1t8d0 231 231
.
.
00023 /dev/dsk/c0t10d0 02168 current /dev/dsk/c1t10d0 02168 current
00024 /dev/dsk/c0t10d0 02169 current /dev/dsk/c1t10d0 02169 current
00025 ??? 00000 stale /dev/dsk/c1t8d0 00000 current
00026 ??? 00001 stale /dev/dsk/c1t8d0 00001 current
.
.
# diskinfo /dev/rdsk/c0t8d0
SCSI describe of /dev/rdsk/c0t8d0:
vendor: HP 36.4G
product id: ST336754LC
type: direct access
size: 0 Kbytes
bytes per sector: 0
08-31-2011 12:43 AM
Hi Jerrym,
To do that, you have to remove it via the PV key, which you can get as follows.
# lvdisplay -v -k /dev/vg00/lvol1|more
--- Logical volumes ---
LV Name /dev/vg00/lvol1
VG Name /dev/vg00
LV Permission read/write
LV Status available/syncd
Mirror copies 1
....
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 0 00000 current 1 00000 current
00001 0 00001 current 1 00001 current
00002 0 00002 current 1 00002 current
00003 0 00003 current 1 00003 current
00004 0 00004 current 1 00004 current
00005 0 00005 current 1 00005 current
00006 0 00006 current 1 00006 current
00007 0 00007 current 1 00007 current
"0" and "1" corresponds to pv key from disks belonging to the VG .
Now basically the following steps apply:
# Remove the ghost disk from an LV with mirroring.
# -> The disk must be removed from all LV mirrors before it can be vgreduced.
# -> Get the PV key from the LV (look for the key number on the stale device):
lvdisplay -k -v /dev/vg00/lvol3
# -> Remove the mirror for every LV using that key.
# ---> If some LVs are not using the failed disk, their mirrors will be broken off as well, if I remember correctly.
lvreduce -k -m 0 /dev/vg00/lvol3 <pv-key>
# Reduce the VG:
vgreduce -l vg00 /dev/dsk/<your-disk>
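Put together, the steps above can be sketched as a dry-run script. Everything here is an assumption to be verified: the LV names are the stale ones from the question, the PV key must first be read from `lvdisplay -k -v`, and the script only echoes each HP-UX command instead of executing it:

```shell
# Dry run: print the removal commands instead of executing them.
# PV_KEY and the LV list are assumptions; confirm them with lvdisplay -k -v.
PV_KEY=0                          # key of the failed disk
BAD_DISK=/dev/dsk/c0t8d0          # failed primary boot drive

for lv in lvol8 lvol9 lvnsr; do
    # Break the mirror copy addressed by the failed disk's PV key.
    echo lvreduce -k -m 0 /dev/vg00/$lv $PV_KEY
done

# Once no LV references the disk, drop it from the volume group.
echo vgreduce -l vg00 $BAD_DISK
```

Once the echoed commands look right for your system, drop the echo prefixes and run them for real.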
You will find a good explanation at the following site:
http://www.hpuxtips.es/?q=node/124
Regards,
Thierry
08-31-2011 04:24 PM
Re: Failed boot drive cannot remove from LVM
It works. I also had to remove the disk's entry from the /etc/lvmpvg file, since we use that.
Thank you so much, Thierry. I did not know about using the disk key; the man page does not say much about -k.