Operating System - HP-UX
12-12-2008 02:46 AM
mirror stale on corrupted disk situation.
Hi everybody... a strange situation:
We interrupted a swinstall session (which included a kernel recompilation), so the kernel is now in an inconsistent state. After rebooting, the primary boot disk failed. Mirroring was active, so we booted from the alternate disk and the server is up and running on one disk. I exported vg00 in maintenance mode and recreated vg00 with the one remaining disk. HP then replaced the failed disk, so we now have two disks again. BUT lvdisplay still shows the mirror active on a PHANTOM disk: if I try to mirror onto the new disk, LVM tells me mirror copies is already 1... on the phantom disk.
I can't eliminate the phantom disk; its name shows as "???". If I try lvreduce -m 0, it refuses...
Here are the commands:
omihn31:/etc/lvmconf>lvdisplay -v /dev/vg00/lvol1
--- Logical volumes ---
LV Name /dev/vg00/lvol1
VG Name /dev/vg00
LV Permission read/write
LV Status available/stale
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 304
Current LE 38
Allocated PE 76
Stripes 0
Stripe Size (Kbytes) 0
Bad block off
Allocation strict/contiguous
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c1t0d0 38 38
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 ??? 00000 stale /dev/dsk/c1t0d0 00000 current
00001 ??? 00001 stale /dev/dsk/c1t0d0 00001 current
00002 ??? 00002 stale /dev/dsk/c1t0d0 00002 current
00003 ??? 00003 stale /dev/dsk/c1t0d0 00003 current
and so on.
-----------------
omihn31:/etc/lvmconf>lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c1t2d0
Physical extents on remaining physical volumes are stale or
Remaining physical volumes are not responding.
lvreduce: The LVM device driver failed to reduce mirrors on
the logical volume "/dev/vg00/lvol1".
So I'm not able to lvreduce -m 0.
---------------
omihn31:/etc/lvmconf>lvreduce -m 1 /dev/vg00/lvol1 /dev/dsk/c1t2d0
lvreduce: "MirrorCopies" parameter "1" is not smaller than existing number "1";
therefore no mirrors are removed.
omihn31:/etc/lvmconf>
and I'm not able to reduce the LV to -m 1 either.
---------------
End of the vgdisplay output:
--- Physical volumes ---
PV Name /dev/dsk/c1t0d0
PV Status available
Total PE 4340
Free PE 753
Autoswitch On
PV Name /dev/dsk/c1t2d0
PV Status available
Total PE 4340
Free PE 4340
Autoswitch On
But ALL my LVs are stale...
So I'm in trouble... I think I may have to restore from a make_recovery tape...
Is it possible to lvreduce an LV to remove a disk shown as "???", knowing it was the old primary disk (c1t2d0), given that the replacement c1t2d0 is recognized by the system as a new disk?
I think the kernel is still holding the old data...
Thanks in advance!
1 REPLY
12-12-2008 03:25 AM
Re: mirror stale on corrupted disk situation.
You can reduce the ??? disk from the lv using the -k option:
/usr/sbin/lvreduce [-A autobackup] -k -m mirror_copies lv_path key
Once you have reduced all your lvols, you can vgreduce the disk normally; if that fails, you can try:
vgreduce -f vgname
have a look at this:
http://docs.hp.com/en/5991-1236/When_Good_Disks_Go_Bad_WP.pdf
from the pdf:
If the disk was not available at boot time (pvdisplay failed), then the lvreduce command fails with an error that it could not query the physical volume. You can still remove the mirror copy, but you must specify the physical volume key rather than the name. You can get the key using lvdisplay with the -k option as follows:
# lvdisplay -v -k /dev/vg00/lvol1
...
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 0 00000 stale 1 00000 current
00001 0 00001 stale 1 00001 current
00002 0 00002 stale 1 00002 current
00003 0 00003 stale 1 00003 current
00004 0 00004 stale 1 00004 current
00005 0 00005 stale 1 00005 current
...
Compare this output with the output of lvdisplay without -k, which you did to check the mirror status. The column that contained the failing disk (or "???") now holds the key. For this example, the key is 0. Use this key with lvreduce as follows:
# lvreduce -m 0 -A n -k /dev/vgname/lvname key (if you have a single mirror copy)
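Putting the whole procedure together, a rough sketch of the recovery sequence might look like this. This is NOT a verified script: it assumes HP-UX 11.x LVM, that lvdisplay -v -k reports the phantom mirror's key as 0 for every lvol, and that vg00 holds lvols 1 through 8 (adjust the range and device paths to your system; the paths here follow the thread). Re-making the replacement disk bootable involves additional steps covered in the whitepaper.

```shell
# 1. Find the key of the "???" column for each logical volume:
lvdisplay -v -k /dev/vg00/lvol1

# 2. Drop the stale mirror copy by key (key 0 is an assumption;
#    check the lvdisplay -v -k output for each lvol first):
for n in 1 2 3 4 5 6 7 8; do
    lvreduce -m 0 -A n -k /dev/vg00/lvol$n 0
done

# 3. Remove the phantom PV from the volume group, forcing if needed:
vgreduce /dev/vg00 /dev/dsk/c1t2d0 || vgreduce -f /dev/vg00

# 4. Re-add the replacement disk (as a bootable PV) and re-mirror:
pvcreate -B /dev/rdsk/c1t2d0
vgextend /dev/vg00 /dev/dsk/c1t2d0
for n in 1 2 3 4 5 6 7 8; do
    lvextend -m 1 /dev/vg00/lvol$n /dev/dsk/c1t2d0
done
mkboot /dev/rdsk/c1t2d0   # plus the remaining boot-disk setup per the whitepaper
```

The key-based lvreduce is the essential step: because the kernel can no longer resolve the failed PV by name, only the numeric key from lvdisplay -v -k identifies which mirror copy to drop.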
Windows?, no thanks