08-17-2011 09:29 AM
RHEL 5.6 LVM LV vgsi/lvol0 in use: not deactivating
After reading several posts and replies here I realize my mistake. Linux is just too **bleep** helpful (if you didn't get that -- http://en.wikipedia.org/wiki/The_UNIX-HATERS_Handbook).
Here's my setup: RHEL 5.6, EVA8400.
I wanted to move a file system from one box to the other, having used LVM in HPUX for quite a while this seemed like a simple task.
I umounted, fsck'd (just for kicks), vgchange'd, vgexport'd, and removed the fstab entry on box 1.
Then I presented the Vdisk to the new box in Command View.
Then on box 2 I did hp_rescan, found the right mpath device, and vgimport'd.
Everything was cool -- or so I thought. Those of you paying attention will already see my problem: box 1 helpfully recreated the volume group for me, because obviously I didn't want something hanging out there unavailable. I should have at least un-presented the Vdisk from box 1, or I could have blacklisted it... I see the error of my ways.
However, now I'm stuck. DataProtector complained about not being able to open certain files for backup, and looking into it I found the problem. I was able to recover the data: I built a new VG on box 2 and rsync'd the data over from box 1, which was still able to successfully mount (and unmount) the old LV. But now I can't remove the volume group on box 1. Box 2 is doing fine; I removed the multipath info, the disk devices, etc.
The LV is not mounted, lsof reports nothing of interest. In short -- it's not in use.
[root@box1 cache]# vgremove vgsi
Do you really want to remove volume group "vgsi" containing 1 logical volumes? [y/n]: y
Can't remove open logical volume "lvol0"
[root@box1 ~]# vgchange -a n vgsi
Can't deactivate volume group "vgsi" with 1 open logical volume(s)
[root@box1 cache]# lvchange -a n /dev/vgsi/lvol0
LV vgsi/lvol0 in use: not deactivating
Any suggestions? I could just un-present the Vdisk and clean up the mess, or pvremove -ff, but I was hoping for something that most likely won't force a reboot.
Thanks in advance,
Todd
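For the record, the sequence I ran was roughly this (the VG name is from my setup; the mount point and device names are placeholders, and yours will differ):

```shell
# On box 1: release the file system and hand off the VG
umount /mnt/si                  # mount point is illustrative
fsck /dev/vgsi/lvol0            # just for kicks
vgchange -a n vgsi              # deactivate the volume group
vgexport vgsi                   # mark the VG exported
# ...then delete the corresponding /etc/fstab entry

# Present the Vdisk to box 2 in Command View, then on box 2:
hp_rescan                       # HP fibreutils: rescan the SCSI buses
multipath -ll                   # identify the right mpath device
vgimport vgsi
vgchange -a y vgsi
mount /dev/vgsi/lvol0 /mnt/si
```

The missing step, of course, was un-presenting the Vdisk from box 1 (or blacklisting it in /etc/multipath.conf) before box 1 could pick the VG back up on its own.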
08-17-2011 12:06 PM
Re: RHEL 5.6 LVM LV vgsi/lvol0 in use: not deactivating
> The LV is not mounted, lsof reports nothing of interest. In short -- it's not in use.
At the user-space level, sure. But is it exported as an NFS filesystem at the kernel level? Or is the LV in use as a swap device? The lsof command won't see such kernel-level uses. If the kernel says something is in use, I'd tend to believe it over lsof. And if the kernel really has it wrong, trying to avoid a reboot would be foolish: you don't want to run a kernel whose internal data structures are corrupted.
And if the VG is in use in the other system, you really don't want to run "vgremove". That would delete the VG information on the disk.
To cleanly remove disks on Linux, deactivate the VG, then use "multipath -f <multipath-device>" to remove the multipath device. Unpresent the disks, then use commands like "echo 1 > /sys/block/sdXX/device/delete" to tell the kernel that yes, the loss of connection was intentional and the disk is not likely to come back. Then run "vgscan" and *poof* all traces of the disk on the system should be gone.
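Sketched as commands (VG and device names here are placeholders, not from the original poster's system):

```shell
vgchange -a n vgexample                  # 1. deactivate the VG
multipath -f mpath_example               # 2. flush its multipath device
# 3. unpresent the disks on the array, then for each underlying sdXX path:
echo 1 > /sys/block/sdXX/device/delete   #    tell the kernel the disk is gone for good
vgscan                                   # 4. rescan LVM metadata; the VG should vanish
```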
Knowing about HP-UX LVM is useful on Linux as it gives you a good idea of all the nice things LVM can do. But it is also highly misleading: although the LVM commands are very similar between HP-UX and Linux, the implementations are very different. Exporting a VG on Linux is only required if you are moving the VG to another host and you suspect it might already have a VG with the same name: in VG naming conflicts, the non-exported VG wins over the exported one. Then you can use "vgs -o +vg_uuid" to discover the VG UUIDs, and "vgrename" by UUID to resolve the conflict.
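For example, a name conflict between an imported copy and a local copy of "vgsi" could be untangled like this (the UUID argument is a placeholder for the real value):

```shell
vgs -o +vg_uuid          # list all VGs with their UUIDs
# rename one of the clashing VGs, addressing it by UUID instead of by name:
vgrename <VG-UUID> vgsi_old
```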
08-18-2011 12:07 PM
Re: RHEL 5.6 LVM LV vgsi/lvol0 in use: not deactivating
No NFS, No swap.
I don't care about the data, I was able to mount the file system on box 1 after removing all traces of it on box 2 and rsync'd it over to a new volume group on box 2.
I know how to cleanly remove the disks from the system, however the first step -- deactivating the VG isn't working.
"Knowing about HP-UX ... "
Condescending attitude not appreciated.
08-19-2011 02:35 AM
Re: RHEL 5.6 LVM LV vgsi/lvol0 in use: not deactivating
Sorry, it was not my intention to be condescending, only to warn about a common mistake I've seen (and done myself) when transitioning between HP-UX and Linux.
You might try running "dmsetup ls --tree" and see if it offers any clues: it gives you information from the device-mapper layer, which underlies LVM, dm-multipath, and other disk-management features. You might try to force-remove the mappings with "dmsetup remove -f <devicename>". Since this works at a lower level than LVM, it might allow you to remove the mapping.
Here's an excerpt from the dmsetup man page:
remove [-f|--force] device_name
    Removes a device. It will no longer be visible to dmsetup. Open devices cannot be removed except with older kernels that contain a version of device-mapper prior to 4.8.0. In this case the device will be deleted when its open_count drops to zero. From version 4.8.0 onwards, if a device can't be removed because an uninterruptible process is waiting for I/O to return from it, adding --force will replace the table with one that fails all I/O, which might allow the process to be killed.
So "dmsetup remove --force" might be helpful if you currently have some processes stuck in uninterruptible I/O wait, which could also be why the system thinks the LV is still in use. (RHEL 5.6 should have device-mapper version 4.11.5 or thereabouts; see "dmsetup version" for the exact number.)
Processes in uninterruptible I/O wait can be a sign of hardware failure (and therefore I tend to regard them as Bad News unless the cause is known), and getting rid of them may require a reboot. However, I guess changes in SAN disk presentation might cause situations that look like fatal I/O errors to the system: does your "dmesg" listing contain anything that looks like I/O errors?
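As a concrete sketch of the above (the dm name for an LVM LV is <vgname>-<lvname>, so here vgsi-lvol0):

```shell
dmsetup ls --tree              # show all device-mapper devices and how they stack
dmsetup info vgsi-lvol0        # "Open count" shows whether the kernel holds it open
dmsetup version                # confirm the driver version supports --force (>= 4.8.0)
dmsetup remove vgsi-lvol0      # try a plain remove first
dmsetup remove -f vgsi-lvol0   # force: replaces the table with one that fails all I/O
```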
08-24-2011 03:18 PM
Re: RHEL 5.6 LVM LV vgsi/lvol0 in use: not deactivating
Thanks for the additional info. That didn't work either. I finally got it down to a smaller "failure" state -- not the greatest, but workable. At least the database on that system has now been moved, and I can reboot it tomorrow; that should clean everything up.
I resorted to unpresenting the LUN first, then running:
echo 1 > /sys/block/$disk/device/delete    <-- for each $disk in the list of devices pointing to that LUN
Then, in the interactive multipathd shell (multipathd -k):
suspend mpath7
remove path mpath7    <-- failed
remove map mpath7
reconfigure
And finally:
multipath -r mpath7
Got it down to this:
mpath7 (36001438005dea8ce0000800002230000) dm-7 ,
[size=30G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=0][enabled]
\_ #:#:#:# - #:# [active][faulty]
It thinks there is an active disk out there, but it doesn't know where..... ;)
The Lesson of the Day: unpresent your LUNs if you no longer want a machine to use them... because Linux will try to think for you.