eng_galileo
Visitor

Remove a Lun From Redhat

I have a running cluster system, and I need to remove one LUN, "LUN09". I disconnected the LUN from the EMC array, then removed the LV, then the VG "vginfra", then the PV.

 

I found that multipath now shows a new LUN, "lun19", while "lun09" was never removed.

 

I need to clean this up.

 


[root@gcds1 bin]# ./mpathconfig.pl -l
/dev/mapper/lun01 size=358G
/dev/mapper/lun02 size=66G
/dev/mapper/lun04 size=330G
/dev/mapper/lun05 size=750G
/dev/mapper/lun06 size=592G
/dev/mapper/lun07 size=90G
/dev/mapper/lun08 size=90G
/dev/mapper/lun09 size=3.6T
/dev/mapper/lun10 size=5.0T
/dev/mapper/lun11 size=170G
/dev/mapper/lun12 size=170G
/dev/mapper/lun13 size=500G
/dev/mapper/lun14 size=582G
/dev/mapper/lun15 size=500G
/dev/mapper/lun16 size=582G
/dev/mapper/lun17 size=65G
/dev/mapper/lun18 size=65G
/dev/mapper/lun19 size=3.6T
/dev/mapper/lun20 size=7.2T
[root@gcds1 bin]#

 

[root@gcds1 bin]# multipath -ll
sdac: checker msg is "emc_clariion_checker: Logical Unit is unbound or LUNZ"
sdi: checker msg is "emc_clariion_checker: Logical Unit is unbound or LUNZ"
lun18 (36006016016402600eeba9cf34b1fe111) dm-9 DGC,RAID 10
[size=65G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=1][active]
 \_ 3:0:0:17 sdaj 66:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:17 sdp 8:240 [active][ready]

...

lun09 (360060160164026008a01c5e1a226df11) dm-2 DGC,RAID 5
[size=3.6T][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:9 sdi 8:128 [failed][faulty]
\_ round-robin 0 [prio=0][enabled]
 \_ 3:0:0:9 sdac 65:192 [failed][faulty]

...

lun19 (350060160c460156b50060160c460156b) dm-30 DGC,RAID 5
[size=3.6T][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 3:0:0:9 sdac 65:192 [failed][faulty]
 \_ 2:0:0:9 sdi 8:128 [failed][faulty]

...

 

 

[root@gcds1 bin]# pvscan
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125624832: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125682176: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 4096: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118284800: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118342144: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 4096: Input/output error
/dev/mapper/lun19: read failed after 0 of 4096 at 3939125624832: Input/output error
/dev/mapper/lun19: read failed after 0 of 4096 at 3939125682176: Input/output error
/dev/mapper/lun19: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/lun19: read failed after 0 of 4096 at 4096: Input/output error
PV /dev/mapper/lun01 VG vgarc lvm2 [358.00 GB / 0 free]
PV /dev/mapper/lun05 VG vgarc lvm2 [750.00 GB / 0 free]
PV /dev/mapper/lun06 VG vgarc lvm2 [591.85 GB / 62.55 GB free]
PV /dev/mapper/lun20 VG vgbackup lvm2 [7.17 TB / 0 free]
PV /dev/mapper/lun10 VG vgbackup2 lvm2 [5.01 TB / 4.00 MB free]
PV /dev/mapper/lun04 VG vgdb lvm2 [330.00 GB / 0 free]
PV /dev/mapper/lun13 VG vgdb lvm2 [500.00 GB / 0 free]
PV /dev/mapper/lun14 VG vgdb lvm2 [581.85 GB / 0 free]
PV /dev/mapper/lun15 VG vgdb lvm2 [500.00 GB / 103.15 GB free]
PV /dev/mapper/lun16 VG vgdb lvm2 [581.85 GB / 0 free]
PV /dev/mapper/lun11 VG vgglobal lvm2 [170.00 GB / 0 free]
PV /dev/mapper/lun12 VG vgglobal lvm2 [170.00 GB / 0 free]
PV /dev/mapper/lun17 VG vgglobal lvm2 [65.00 GB / 32.00 MB free]
PV /dev/mapper/lun18 VG vgglobal lvm2 [65.00 GB / 32.00 MB free]
PV /dev/mapper/lun02 VG vgredo lvm2 [66.00 GB / 0 free]
PV /dev/mapper/lun07 VG vgredo lvm2 [90.00 GB / 0 free]
PV /dev/mapper/lun08 VG vgredo lvm2 [90.00 GB / 4.21 GB free]
Total: 17 [16.97 TB] / in use: 17 [16.97 TB] / in no VG: 0 [0 ]
[root@gcds1 bin]#

Matti_Kurkela
Honored Contributor

Re: Remove a Lun From Redhat

You should have completed the LVM operations before disconnecting the LUN.
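For reference, the usual order is roughly the following (a sketch using the LV, VG, and device names from your own output; adjust to your environment):

# 1. Remove the LVM objects while the LUN is still reachable:
lvremove /dev/vginfra/lvol1
vgremove vginfra
pvremove /dev/mapper/lun09

# 2. Flush the multipath map:
multipath -f lun09

# 3. Delete the underlying SCSI path devices (sdi and sdac in your case):
echo 1 > /sys/block/sdi/device/delete
echo 1 > /sys/block/sdac/device/delete

# 4. Only now unmap the LUN on the EMC array.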

 

Now the system has a write operation for the PV header of LUN09 in its write cache: it was created by the LV removal and updated by the subsequent VG and PV removal operations. But as the LUN is already gone, the write can never complete.

 

Knowing the name and version of your Linux distribution would have been helpful, or at least the kernel version number.

 

You can try this:

multipath -f /dev/mapper/lun09

multipath -f /dev/mapper/lun19

echo 1 >/sys/block/sdi/device/delete

echo 1 >/sys/block/sdac/device/delete

 

Expect the kernel to log some rather bad-looking messages when you run those commands, as the kernel seriously hates throwing away cached writes it has already promised to complete.

 

I don't really know what might have caused lun19 to appear - perhaps it is a side effect of the way your storage system handles LUN deletion?
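If you want to investigate, you could ask the path devices which WWID they currently report and compare it to the ones multipath shows (a sketch; the scsi_id invocation depends on your distribution version, which you have not told us):

/sbin/scsi_id -g -u -s /block/sdi                    # older (RHEL 4/5 era) syntax
/lib/udev/scsi_id --whitelisted --device=/dev/sdi    # newer (RHEL 6+) syntax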

 

A reboot would also clear this up - and since you said you have a cluster, it should be somewhat tolerable to move all the applications to the other node(s), and then reboot this node. (I realize that it is not always so easy in real life.)
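For example, if you are running Red Hat Cluster Suite with rgmanager (an assumption, since you did not name your cluster software), relocating a service looks like this, with placeholder names:

clusvcadm -r <service_name> -m <other_node>    # relocate the service to another cluster member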

MK
eng_galileo
Visitor

Re: Remove a Lun From Redhat

 


[root@gcds2 ~]# vgs
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125624832: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125682176: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 4096: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118284800: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118342144: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 4096: Input/output error
VG #PV #LV #SN Attr VSize VFree
vgarc 3 1 0 wz--nc 1.66T 62.55G
vgbackup 1 1 0 wz--nc 7.17T 0
vgbackup2 1 1 0 wz--nc 5.01T 4.00M
vgdb 5 1 0 wz--nc 2.44T 103.15G
vgglobal 4 7 0 wz--nc 469.98G 64.00M
vgredo 3 1 0 wz--nc 245.99G 4.21G
[root@gcds2 ~]# multipath -f /dev/mapper/lun09
must provide a map name to remove
[root@gcds2 ~]# multipath -f lun09
lun09: map in use
[root@gcds2 ~]# multipath -f /dev/mapper/vginfra-lvol1
must provide a map name to remove
[root@gcds2 ~]# multipath -f vginfra-lvol1
[root@gcds2 ~]# multipath -f lun09
lun09: map in use
[root@gcds2 ~]# echo 1 >/sys/block/sdi/device/delete
-bash: /sys/block/sdi/device/delete: cannot overwrite existing file
[root@gcds2 ~]# echo 1 >/sys/block/sdac/device/delete
-bash: /sys/block/sdac/device/delete: cannot overwrite existing file
[root@gcds2 ~]#
[root@gcds2 ~]# vgs
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125624832: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 3939125682176: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/lun09: read failed after 0 of 4096 at 4096: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118284800: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118342144: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 0: Input/output error
/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 4096: Input/output error
VG #PV #LV #SN Attr VSize VFree
vgarc 3 1 0 wz--nc 1.66T 62.55G
vgbackup 1 1 0 wz--nc 7.17T 0
vgbackup2 1 1 0 wz--nc 5.01T 4.00M
vgdb 5 1 0 wz--nc 2.44T 103.15G
vgglobal 4 7 0 wz--nc 469.98G 64.00M
vgredo 3 1 0 wz--nc 245.99G 4.21G
[root@gcds2 ~]#

Matti_Kurkela
Honored Contributor

Re: Remove a Lun From Redhat

>/dev/mapper/vginfra-lvol1: read failed after 0 of 4096 at 3939118284800: Input/output error

 

The VG still seems to be active. Apparently your previous VG removal operation failed, too.

 

> multipath -f /dev/mapper/vginfra-lvol1

 

I don't think /dev/mapper/vginfra-lvol1 is a multipath map: it looks like an LVM device map.

 

> -bash: /sys/block/sdi/device/delete: cannot overwrite existing file

 

Is your shell configured to reject overwriting existing files via redirection (i.e. with "set -o noclobber" or "set -C")?
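You can check quickly:

set -o | grep noclobber    # "on" means > will refuse to overwrite an existing file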

 

Try:

vgchange -an vginfra

multipath -f lun09

echo 1 >| /sys/block/sdi/device/delete

echo 1 >| /sys/block/sdac/device/delete

 

(Note the special >| redirection operator: it forces the redirection to overwrite an existing file even when noclobber is set.)

 

If the "vgchange -an vginfra" fails, the LV might still be mounted or otherwise in use by some application. In that case, the simplest way to fix this mess is probably remove or comment out any references to /dev/mapper/vginfra-lvol1 (or /dev/vginfra/lvol1) in /etc/fstab, and then reboot the system.

 

If you absolutely cannot reboot, these commands might help:

dmsetup remove -f /dev/mapper/vginfra-lvol1

dmsetup remove -f /dev/mapper/lun09

 

These commands will skip the LVM and multipath layers and instead send commands directly to the device-mapper, which is the basic subsystem underlying both LVM and dm-multipath.
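To see how the maps are stacked before you force-remove anything, dmsetup can show the whole tree and the open counts:

dmsetup ls --tree      # shows the LVM maps stacked on top of the multipath maps
dmsetup info -c        # one-line summary per map, including the open count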

MK