LVM Traces of old disk

admin1979
Super Advisor

LVM Traces of old disk

Hello,

We are running SLES 10. Recently, one of the two disks in a RAID 0 set (/dev/sdb1 and /dev/sdc1) failed; the failed disk was /dev/sdb1.
We have replaced the faulty disk with a bigger one, which is now /dev/sdd1.
But the problem is that whenever we run pvdisplay or vgdisplay, we get:

# pvdisplay

/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error

  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               data
  PV Size               68.36 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              17500
  Free PE               7260
  Allocated PE          10240
  PV UUID               qRZIwB-zs6I-klfc-BpL6-MTQ4-tu4J-8ME0iy


So the problem is that the "/dev/sdb1: read failed" error keeps occurring.

I have already tried pvremove /dev/sdb1, but it says:

# pvremove /dev/sdb1

/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
No physical volume label read from /dev/sdb1
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Labels on physical volume "/dev/sdb1" successfully wiped

So how do I remove the traces of the failed/removed disk /dev/sdb1?
Matt Palmer_2
Respected Contributor

Re: LVM Traces of old disk

Hi,

If you really want to remove the entry, have you also tried pvremove -f /dev/sdb1?
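
A minimal sketch of what I mean, assuming the failed PV is /dev/sdb1 (pvremove also has a double-force variant that wipes the label even when the PV still appears to belong to a VG):

# pvremove -f /dev/sdb1
# pvremove -ff /dev/sdb1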

regards

Matt
admin1979
Super Advisor

Re: LVM Traces of old disk

No use!!

# pvremove -f /dev/sdb1
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
No physical volume label read from /dev/sdb1
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Labels on physical volume "/dev/sdb1" successfully wiped

# pvdisplay
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               data
  PV Size               68.36 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              17500
  Free PE               7260
  Allocated PE          10240
  PV UUID               qRZIwB-zs6I-klfc-BpL6-MTQ4-tu4J-8ME0iy

Matt Palmer_2
Respected Contributor

Re: LVM Traces of old disk

Did you try vgreduce before pvremove?
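
The usual order is to drop the PV from the volume group first and only then wipe its label; a minimal sketch, assuming your VG is called data:

# vgreduce data /dev/sdb1
# pvremove /dev/sdb1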

regards

Matt
admin1979
Super Advisor

Re: LVM Traces of old disk

No, I have not tried vgreduce yet.
Matt Palmer_2
Respected Contributor

Re: LVM Traces of old disk

Hi,

See this post for how to use vgreduce on failed disks:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1284100

hope that helps

regards

Matt
admin1979
Super Advisor

Re: LVM Traces of old disk

Are you suggesting vgreduce --test --removemissing?

admin1979
Super Advisor

Re: LVM Traces of old disk

That post is about HP-UX.

The suggestion in that post is to run

vgreduce -l vg pv

but I did not find the -l parameter in man vgreduce.

By the way, I tried the command below, but it did not help.

# vgreduce --removemissing data
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Volume group "data" is already consistent
admin1979
Super Advisor

Re: LVM Traces of old disk

I tried this as well now,

# vgreduce data /dev/sdb1
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Physical Volume "/dev/sdb1" not found in Volume Group "data"
Matt Palmer_2
Respected Contributor

Re: LVM Traces of old disk

Hi,

The approach you take now really depends on the order in which you have executed LVM commands up to this point.

I would do several things first:
- make sure you have a full backup
- make sure you have a backup of your VG configuration (vgcfg) and lvmtab

Then run the few commands near the bottom of this post, which explains better than I can the sort of issues you can run into if you get it wrong :-)

http://bisqwit.iki.fi/story/howto/undopvremove/

Go down to where it says 'problem averted'. The key steps are:

pvs (see if your failed disk is listed here; please let me know)

vgreduce

pvremove

Then run pvs again and see if your failed disk is no longer listed; see the sketch after this list.
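
A minimal sketch of that sequence, assuming the VG is called data and the failed PV is /dev/sdb1:

# pvs
# vgreduce data /dev/sdb1
# pvremove /dev/sdb1
# pvs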

This has worked for me in the past.

regards

Matt


admin1979
Super Advisor

Re: LVM Traces of old disk


# pvs
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda4  system lvm2 a-   112.52G 51.52G
  /dev/sdd1  data   lvm2 a-    68.36G 28.36G

# vgreduce data /dev/sdb1
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Physical Volume "/dev/sdb1" not found in Volume Group "data"

# pvremove /dev/sdb1
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
No physical volume label read from /dev/sdb1
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Labels on physical volume "/dev/sdb1" successfully wiped

# pvs
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda4  system lvm2 a-   112.52G 51.52G
  /dev/sdd1  data   lvm2 a-    68.36G 28.36G
#
admin1979
Super Advisor

Re: LVM Traces of old disk

Referring to your link, I tried this as well, with no luck:

# vgcfgrestore data
/dev/sdb: read failed after 0 of 2048 at 0: Input/output error
/dev/sdb1: read failed after 0 of 2048 at 0: Input/output error
Restored volume group data
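
(For reference, vgcfgrestore restores the VG metadata from the backups under /etc/lvm/backup and /etc/lvm/archive; assuming your LVM2 version supports the --list option, the available backups can be listed with:

# vgcfgrestore --list data
)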

Matt Palmer_2
Respected Contributor

Re: LVM Traces of old disk

Hi,

vgreduce --removemissing

Note that this will also remove any logical volumes that were using the missing physical volume. You can run it with --test first to see what effect it will have.

That's about as far as I can get.

regards

Matt
admin1979
Super Advisor

Re: LVM Traces of old disk

Sorry Matt... but if you notice, I have already tried this. See above.

So it looks like this problem is going to be around for some time. Thanks anyway... you get your share.
Matti_Kurkela
Honored Contributor

Re: LVM Traces of old disk

Your LVM no longer seems to think there is anything on /dev/sdb1. Since it was a RAID 0 component disk, I assume you had to fully re-create the VG and restore the data from backups.

Commands like "pvs" or "pvdisplay" without arguments will display error messages about /dev/sdb1 because they probe *all* the disks the kernel knows about. So the real problem is that the kernel has not been told that /dev/sdb is completely and irrevocably gone. (When a disk device just vanishes while the system is running, the kernel holds on to the device name in case the disk comes back later.)

Rebooting the system would certainly fix it, but there's an easier way to tell the kernel that a particular disk device is gone and won't come back:

echo 1 > /sys/block/sdb/device/delete

This should stop the error messages from tools like "pvdisplay" or "pvs".
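
To verify, you can check that sdb has disappeared from the kernel's view; a quick sketch:

# grep sdb /proc/partitions
# ls /sys/block/ | grep sdb

Both should return nothing once the device has been deleted, and pvs should then run without the I/O errors.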

On SLES, there might also be some files related to persistent device naming in the /etc/udev/rules.d/ directory. If I recall correctly, there is a file that records some physical identifier of each disk (like a WWID or a serial number) together with the assigned device name, like "sdb".

If you want to allow the name /dev/sdb to be reassigned to some future disk, you may have to remove the current association. But if you don't care about that, you don't have to do anything.
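
A sketch for locating any such association, assuming a plain text search is enough to find it:

# grep -rl sdb /etc/udev/rules.d/

Any file it reports can then be inspected, and the line mentioning sdb removed if you want the name freed up.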

MK