lvm cache
01-18-2010 08:58 AM
LVM2, RHEL AS 5.x(64 bit)
01-18-2010 10:30 AM
Re: lvm cache
I find what you propose dangerous. LVM is meant and designed to be manipulated by running commands, not by hacking configuration files.
Why do you feel the need to do this? Context is important in providing you a useful answer.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
01-18-2010 11:42 AM
Re: lvm cache
Editing the file is not recommended.
If you find that LVM probes unnecessary devices (which is likely to happen in multipathed SAN configurations), you should configure a suitable filter expression in /etc/lvm/lvm.conf.
See:
http://kbase.redhat.com/faq/docs/DOC-3651
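As an illustration only (not a drop-in config; the patterns must be adapted to your own device naming), a filter for a multipathed SAN host that accepts the multipath devices and the internal disk while rejecting the individual sd* paths might look like this in /etc/lvm/lvm.conf:

```
# Illustrative sketch -- adjust patterns before use
filter = [ "a|^/dev/mapper/mpath|", "a|^/dev/sda|", "r|.*|" ]
```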
MK
01-19-2010 03:25 AM
Re: lvm cache
01-19-2010 03:35 AM
Re: lvm cache
As long as the multipath device is a member of an active VG, the device is "in use".
Fiddling with the LVM cache has no effect on whether the device is in use or not.
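One way to see which VG is holding a given device (a sketch; the device and VG names here are examples from this thread):

```
# pvs -o pv_name,vg_name,pv_size        # list each PV and the VG that owns it
# vgdisplay -v vg01 | grep "PV Name"    # list the PVs belonging to vg01
```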
MK
01-19-2010 04:56 AM
Re: lvm cache
PV /dev/dm-35 VG vg01 lvm2 [220.06 GB / 40.00 GB free]
PV /dev/dm-33 VG vg01 lvm2 [174.97 GB / 36.00 GB free]
PV /dev/dm-34 VG vg01 lvm2 [505.75 GB / 67.47 GB free]
PV unknown device VG vg01 lvm2 [299.97 GB / 15.16 GB free]
Do you think pvcreate can help here to remove the VG information? I tried vgcfgrestore, but it failed with the following error (which again sounds like the device is in use by the VG):
Couldn't find device with uuid '1WVFuJ-xzRj-o39s-V0gf-aHAN-JFTe-NVGlhr'.
Couldn't find all physical volumes for volume group vg01.
get_pv_from_vg_by_id: vg_read failed to read VG vg01
Can't open /dev/dm-35 exclusively. Mounted filesystem?
01-19-2010 11:02 AM
Solution
In Linux, vgexport is very different and is not helpful here.
First, you're seeing completely dynamic /dev/dm-NN names. Those are not very useful. Please edit your /etc/lvm/lvm.conf: comment out the line:
preferred_names = [ ]
And uncomment the multipath-aware example preferred_names line after it:
# Try to avoid using undescriptive /dev/dm-N names, if present.
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
If you don't care about the data on vg01 (or can easily restore it), you might consider running "vgreduce --removemissing vg01". Since the now-unknown device has extents allocated, this will ask for confirmation, because it causes any vg01 LVs with missing pieces to be *removed completely*.
If you would like to keep the data, do "vgchange -a n vg01" instead. It does not touch the on-disk data, just makes LVM stop accessing it, so that the multipath devices are no longer "in use".
Then, verify that your multipath mappings are consistent. Please run "multipath -v2". If it produces notification messages about creating new multipath devices, it might solve your original access problem.
If your /var filesystem is separate from your root fs, the loss of access might well be caused by this:
http://kbase.redhat.com/faq/docs/DOC-17650
If this is applicable to you, please implement the configuration change described in the RHKB article.
After all your LUNs are visible as multipath devices, delete the LVM cache file /etc/lvm/cache/.cache, then run "vgscan -vvv" to make the LVM system re-detect all PVs. The Very Very Verbose output will also allow you to verify that LVM probes all the devices it should.
Now try activating the VG again:
vgchange -a y vg01
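Pulled together, the recovery sequence above would look roughly like this (a sketch, run at your own risk; the cache file path is the RHEL 5 default):

```
# vgchange -a n vg01           # deactivate the VG, releasing the devices
# multipath -v2                # rebuild any missing multipath maps
# rm /etc/lvm/cache/.cache     # drop the stale LVM device cache
# vgscan -vvv                  # re-probe all devices for PVs
# vgchange -a y vg01           # re-activate the VG
```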
---------
If vgchange -a n vg01 did not work, there is one other way to make LVM stop using the disk.
In Linux 2.6.* kernels, LVM is implemented on top of the device-mapper subsystem. You can bypass LVM and manipulate the device-mapper subsystem directly, using the "dmsetup" command.
If necessary, you could use "dmsetup remove" or even "dmsetup remove --force" to rip out the LVM-generated mappings from the kernel, freeing the multipath devices. Use "dmsetup ls" first to see the LVM device names in the form dmsetup uses.
(You may see other names too: LVM is not the only thing that uses device-mapper. Software RAID, disk encryption and device-mapper multipathing all use the device-mapper subsystem too. Don't misuse dmsetup: it is a very powerful low-level tool.)
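As a last-resort sketch (the mapping name vg01-lvol0 is an example; check "dmsetup ls" output before removing anything):

```
# dmsetup ls                          # list all device-mapper mappings
# dmsetup remove vg01-lvol0           # remove one LVM-generated mapping
# dmsetup remove --force vg01-lvol0   # only if a plain remove fails
```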
MK
03-19-2010 03:49 AM
Re: lvm cache
With the change below, we were able to get the multipath devices back:
~]# cat /etc/scsi_id.config |grep -v '#'
options=-g
vendor=someone, model=nicedrive, options=-g
We also tuned the LVM filter and multipath.conf:
#cat /etc/lvm/lvm.conf |grep -v '#' |grep -e 'filter' -e 'types'
filter = [ "a|/dev/mpath/.*/|","a|/dev/mapper/.*|", "a|/dev/sda|","r|/dev/sd[b-z]|", "r/.*/" ]
types = [ "device-mapper", 1]
#cat /etc/multipath.conf |grep -v '#'
devnode_blacklist {
        devnode "^sda"
}
defaults {
        user_friendly_names yes
}
multipaths {
        multipath {
                wwid 360060160f1731100c642e85121b2dd11
                alias mpath0
        }
        multipath {
                wwid 360060160f173110012f68b1e56b3dd11
                alias mpath1
        }
        multipath {
                wwid 360060160f17311000c55033921b2dd11
                alias mpath2
        }
        multipath {
                wwid 360060160f1731100b254eb42b9b4dd11
                alias mpath3
        }
}
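After editing multipath.conf, the maps can be rebuilt and checked with something like this (a sketch for RHEL 5):

```
# service multipathd restart   # pick up the new configuration
# multipath -v2                # create any missing multipath maps
# multipath -ll                # verify each mpathN alias and its paths
```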
03-31-2010 01:48 AM
Re: lvm cache
options=-g
vendor="DGC",options=-p 0x83