System Administration
Remove a SAN LUN device from Linux

Adrian Gehrig

Remove a SAN LUN device from Linux

Environment: EVA 6100 and 8000. Proliant DL580G5 and RedHat 5.2.

What we had to do: We had to resize a LUN for OracleASM.

What we did: We created a new LUN with the same LUN number as a previously deleted LUN on the same host.
Added the new WWID to the multipath.conf.
Ran /opt/hp/hp_fibreutils/hp_rescan -a
and multipath -v2
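The steps above can be sketched as a short session; the WWID is a placeholder, not from the original post -- substitute the value shown by the EVA for the new vdisk:

```shell
# Hypothetical WWID of the newly presented LUN (placeholder value)
WWID=3600508b40000000000000000000000xx

# 1. Add $WWID to the multipaths section of /etc/multipath.conf, then:
/opt/hp/hp_fibreutils/hp_rescan -a   # HP fibreutils: rescan all FC HBAs
multipath -v2                        # rebuild multipath maps, verbose
multipath -ll                        # verify the size the map reports
```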

What we've seen: after running multipath -v2, we saw that the size of the LUN wasn't what we had configured on the EVA. It was the same size as the LUN that existed before.

We then changed the LUN number on the EVA, and after that the correct size was recognized...

From my point of view, there seems to be stale data cached for these devices somehow. I guess that after a reboot this would work, but a reboot was not an option for us...
Is there a proven way to remove the multipath and device-mapper devices?

Thanks for your help in advance!

Honored Contributor

Re: Remove a SAN LUN device from Linux

The multipath devices can be removed with "multipath -f <mapname>", once the multipath device is unused, i.e. any filesystems on it are unmounted and the VG is deactivated.
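For example, quiescing the map before flushing it might look like this; "mpathN" and "vg_oradata" are example names, not from the original post:

```shell
# Unmount any filesystem on the map (partition 1 as an example)
umount /dev/mapper/mpathNp1

# Deactivate the LVM volume group that uses the LUN
vgchange -an vg_oradata

# Flush the now-unused multipath map
multipath -f mpathN
```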

To fully purge the system's information about a particular LUN, you should also remove the /dev/sd* devices underlying the multipath device.

Please see this RedHat Knowledge Base document:

Summary: to remove a device:
echo 1 > /sys/block/<device>/device/delete

Of course, you should use the "multipath -l" listing to identify the correct /dev/sd* devices *before* removing the multipath device.
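Putting those two steps together, a sketch might look like the following; "mpathN" and the sdX names are examples to be replaced with the output of your own listing:

```shell
# Note the sdX path devices behind the map BEFORE removing it
multipath -l mpathN

# Flush the unused multipath map
multipath -f mpathN

# Delete each underlying SCSI path device (substitute the names noted above)
for dev in sdb sdc sdd sde; do
    echo 1 > /sys/block/$dev/device/delete
done
```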

Then use hp_rescan or the methods listed in the above-mentioned RHKB document to re-scan the storage. That should re-create the /dev/sd* devices (but the device names are not guaranteed to be the same as before).
Then re-create the multipath device.
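As a sketch of the re-scan step, either the HP helper or the generic sysfs method should work; host numbers vary per system:

```shell
# HP fibreutils helper, rescan all FC HBAs:
/opt/hp/hp_fibreutils/hp_rescan -a

# ...or the generic method: issue a wildcard scan on every SCSI host
# ("- - -" means all channels, all targets, all LUNs)
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done

# Re-create the multipath device from the rediscovered paths
multipath -v2
```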

Adrian Gehrig

Re: Remove a SAN LUN device from Linux

Thanks for your help! This works fine.
Adrian Gehrig

Re: Remove a SAN LUN device from Linux

Thanks again