root disk gone faulty
02-22-2012 10:25 PM
Hi All,
I am facing a problem on one of my HP-UX 11.11 boxes: its root disk has gone faulty and no longer even shows up in ioscan.
The server cannot query the root disk /dev/dsk/c0t6d0 (0/0/0/3/0.6.0):
#-> strings /etc/lvmtab | more
/dev/vg00
/dev/dsk/c0t6d0
/dev/dsk/c0t5d0
#-> setboot
Primary bootpath : 0/0/0/3/0.6.0
Alternate bootpath : 0/0/0/3/0.5.0 (/dev/dsk/c0t5d0)
#-> echo "boot_string/S" | adb -k /stand/vmunix /dev/kmem
boot_string:
boot_string: disk(0/0/0/3/0.5.0.0.0.0.0;0)/stand/vmunix
#-> lvlnboot -v
lvlnboot: Couldn't query physical volume "/dev/dsk/c0t6d0":
The specified path does not correspond to physical volume attached to
this volume group
[root@cpcwmid5:/.root/sverma8]#
#->
#-> lvdisplay -v /dev/vg00/lvol3 | more
lvdisplay: Warning: couldn't query physical volume "/dev/dsk/c0t6d0":
The specified path does not correspond to physical volume attached to
this volume group
lvdisplay: Warning: couldn't query all of the physical volumes.
--- Logical volumes ---
LV Name /dev/vg00/lvol3
VG Name /dev/vg00
LV Permission read/write
LV Status available/stale
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 512
Current LE 32
Allocated PE 64
Stripes 0
Stripe Size (Kbytes) 0
Bad block off
Allocation strict/contiguous
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c0t5d0 32 32
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 ??? 02032 stale /dev/dsk/c0t5d0 02032 current
00001 ??? 02033 stale /dev/dsk/c0t5d0 02033 current
00002 ??? 02034 stale /dev/dsk/c0t5d0 02034 current
00003 ??? 02035 stale /dev/dsk/c0t5d0 02035 current
00004 ??? 02036 stale /dev/dsk/c0t5d0 02036 current
00005 ??? 02037 stale /dev/dsk/c0t5d0 02037 current
00006 ??? 02038 stale /dev/dsk/c0t5d0 02038 current
00007 ??? 02039 stale /dev/dsk/c0t5d0 02039 current
00008 ??? 02040 stale /dev/dsk/c0t5d0 02040 current
00009 ??? 02041 stale /dev/dsk/c0t5d0 02041 current
00010 ??? 02042 stale /dev/dsk/c0t5d0 02042 current
00011 ??? 02043 stale /dev/dsk/c0t5d0 02043 current
00012 ??? 02044 stale /dev/dsk/c0t5d0 02044 current
00013 ??? 02045 stale /dev/dsk/c0t5d0 02045 current
00014 ??? 02046 stale /dev/dsk/c0t5d0 02046 current
00015 ??? 02047 stale /dev/dsk/c0t5d0 02047 current
00016 ??? 02048 stale /dev/dsk/c0t5d0 02048 current
00017 ??? 02049 stale /dev/dsk/c0t5d0 02049 current
00018 ??? 02050 stale /dev/dsk/c0t5d0 02050 current
00019 ??? 02051 stale /dev/dsk/c0t5d0 02051 current
#-> ioscan -kH 0/0/0/3/0
H/W Path Class Description
=====================================================
0/0/0/3/0 ext_bus SCSI C1010 Ultra160 Wide LVD A6793-60001
0/0/0/3/0.5 target
0/0/0/3/0.5.0 disk HP 73.4G ST373454LC
0/0/0/3/0.7 target
0/0/0/3/0.7.0 ctl Initiator
[root@cpcwmid5:/.root/sverma8]#
My question is: what is the procedure to rectify this, given that the BDRA information also seems to be corrupted?
And is the server currently up on the mirrored disk?
02-23-2012 01:24 AM
Your lvdisplay output says "Mirror copies 1" (that is, original + 1 copy), so at least /dev/vg00/lvol3 is mirrored.
This is confirmed by the fact that Allocated PE = 2 * Current LE, i.e. each logical extent takes 2 physical extents. (In a non-mirrored case, 1 physical extent = 1 logical extent.)
You should check all the other LVs on vg00 and verify that all of them are mirrored... but if you have not yet lost any filesystems in the vg00 volume group, I guess all of them are mirrored.
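A quick way to check them all at once is a small shell loop (a sketch; it assumes the usual /dev/vg00/lvolN device naming):

#-> for lv in /dev/vg00/lvol*
> do
>   echo "=== $lv"
>   lvdisplay $lv | egrep "Mirror copies|Current LE|Allocated PE"
> done

Any LV that reports "Mirror copies 1" with Allocated PE equal to twice Current LE is fully mirrored; an LV showing "Mirror copies 0" has no copy on the surviving disk and would need to be re-mirrored after the replacement.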
The lvdisplay -v shows "???" in the PV1 column simply because the system cannot find the matching PV any more (because the /dev/dsk/c0t6d0 is totally dead).
Replacing a mirrored root disk is standard sysadmin procedure on HP-UX. If the failed disk is in a hot-swap slot, you can even replace it without shutting down the server. It just means you must use "lvreduce -k -m 0 /dev/vg00/lvolN" and "vgreduce -f vg00" to remove the failed mirrors if you need to follow the old procedure (delete the failed LV mirrors, remove failed disk from vg00, replace disk, mirror the root disk again).
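For reference, the old procedure would look roughly like this (a sketch only, reusing the device names from your post; repeat the lvreduce/lvextend lines for every mirrored lvolN, and check the whitepaper for the exact order and options):

#-> lvreduce -k -m 0 /dev/vg00/lvolN       # -k removes the mirror copy even though its PV is gone
#-> vgreduce -f vg00                       # force the dead /dev/dsk/c0t6d0 out of vg00
(physically replace the disk)
#-> pvcreate -f -B /dev/rdsk/c0t6d0        # -B reserves space for boot data
#-> mkboot /dev/rdsk/c0t6d0                # reinstall the boot area on the new disk
#-> vgextend vg00 /dev/dsk/c0t6d0
#-> lvextend -m 1 /dev/vg00/lvolN /dev/dsk/c0t6d0   # re-mirror each LV, starting with lvol1
#-> lvlnboot -R /dev/vg00                  # rebuild the BDRA, which also addresses the corruption you saw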
But if your HP-UX 11.11 is up to date with patches, you can use "pvchange -a N /dev/dsk/c0t6d0" to disable access to the failed disk. Then you can use an easier procedure: replace disk, vgcfgrestore, mkboot, pvchange -a y, vgsync.
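With those patches, the sequence would look something like this (again only a sketch built from the commands above; the "-lq" AUTO string is a common choice so the box can boot from one mirror half when quorum is lost):

#-> pvchange -a N /dev/dsk/c0t6d0          # detach the failed PV
(hot-swap the disk)
#-> vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t6d0   # restore LVM headers from /etc/lvmconf/vg00.conf
#-> mkboot /dev/rdsk/c0t6d0                # reinstall the boot area
#-> mkboot -a "hpux -lq" /dev/rdsk/c0t6d0  # rewrite the AUTO file; -lq ignores lost quorum
#-> pvchange -a y /dev/dsk/c0t6d0          # re-attach the PV
#-> vgsync vg00                            # resynchronize all stale extents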
HP has a whitepaper titled "When Good Disks go Bad". It contains very good step-by-step instructions for replacing a failed root disk. Use Google to find that whitepaper.
02-27-2012 06:18 AM
Re: root disk gone faulty
Hi,
Matti has already said everything.
Read "When Good Disks go Bad"; it is a very good whitepaper, and it will show you which case applies to your situation.
Regards.