<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: lvextend error on Redhat-cluster suit 5.1 in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192462#M57852</link>
    <description>Thread discussing an lvextend failure ("Error locking on node ...: Volume group for uuid not found") on a clustered volume group under Red Hat Cluster Suite 5.1; the poster's /etc/lvm/lvm.conf is included in the final reply.</description>
    <pubDate>Wed, 07 May 2008 19:26:07 GMT</pubDate>
    <dc:creator>skt_skt</dc:creator>
    <dc:date>2008-05-07T19:26:07Z</dc:date>
    <item>
      <title>lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192456#M57846</link>
      <description>Linux hostname 2.6.18-53.1.14.el5 #1 SMP Tue Feb 19 07:18:46 EST 2008 x86_64 x86_64 x86_64 GNU/Linux&lt;BR /&gt;&lt;BR /&gt;Red Hat Enterprise Linux Server release 5.1 (Tikanga)&lt;BR /&gt;&lt;BR /&gt;# lvextend -L +4000M  /dev/vgec_rde0_pdb/lvol2&lt;BR /&gt;  Extending logical volume lvol2 to 63.91 GB&lt;BR /&gt;  Error locking on node xxxxxx: Volume group for uuid not found: &lt;BR /&gt;&lt;BR /&gt;CyPYYtsmPYFg2M11glsWM2OSzmcVAbkm05WEDGVxERxVhTjHDIl90yjpjq7urtPl&lt;BR /&gt;  Error locking on node xxxxxx: Volume group for uuid not found: &lt;BR /&gt;&lt;BR /&gt;CyPYYtsmPYFg2M11glsWM2OSzmcVAbkm05WEDGVxERxVhTjHDIl90yjpjq7urtPl&lt;BR /&gt;  Failed to suspend lvol2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v vgec_rde0_pdb&lt;BR /&gt;    Using volume group(s) on command line&lt;BR /&gt;    Finding volume group "vgec_rde0_pdb"&lt;BR /&gt;  --- Volume group ---&lt;BR /&gt;  VG Name               vgec_rde0_pdb&lt;BR /&gt;  System ID&lt;BR /&gt;  Format                lvm2&lt;BR /&gt;  Metadata Areas        4&lt;BR /&gt;  Metadata Sequence No  9&lt;BR /&gt;  VG Access             read/write&lt;BR /&gt;  VG Status             resizable&lt;BR /&gt;  Clustered             yes&lt;BR /&gt;  Shared                no&lt;BR /&gt;  MAX LV                255&lt;BR /&gt;  Cur LV                7&lt;BR /&gt;  Open LV               7&lt;BR /&gt;  Max PV                150&lt;BR /&gt;  Cur PV                4&lt;BR /&gt;  Act PV                4&lt;BR /&gt;  VG Size               269.62 GB&lt;BR /&gt;  PE Size               32.00 MB&lt;BR /&gt;  Total PE              8628&lt;BR /&gt;  Alloc PE / Size       6752 / 211.00 GB&lt;BR /&gt;  Free  PE / Size       1876 / 58.62 GB&lt;BR /&gt;  VG UUID               CyPYYt-smPY-Fg2M-11gl-sWM2-OSzm-cVAbkm&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# lvdisplay -v /dev/vgec_rde0_pdb/lvol2&lt;BR /&gt;    Using logical volume(s) on command line&lt;BR /&gt;  --- Logical volume ---&lt;BR /&gt;  LV Name                
/dev/vgec_rde0_pdb/lvol2&lt;BR /&gt;  VG Name                vgec_rde0_pdb&lt;BR /&gt;  LV UUID                05WEDG-VxER-xVhT-jHDI-l90y-jpjq-7urtPl&lt;BR /&gt;  LV Write Access        read/write&lt;BR /&gt;  LV Status              available&lt;BR /&gt;  # open                 1&lt;BR /&gt;  LV Size                60.00 GB&lt;BR /&gt;  Current LE             1920&lt;BR /&gt;  Segments               1&lt;BR /&gt;  Allocation             inherit&lt;BR /&gt;  Read ahead sectors     0&lt;BR /&gt;  Block device           253:15&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Let me know if you have any suggestions.</description>
      <pubDate>Tue, 06 May 2008 12:06:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192456#M57846</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-06T12:06:33Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192457#M57847</link>
      <description>Is the clvmd service running?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 06 May 2008 13:08:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192457#M57847</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-06T13:08:32Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192458#M57848</link>
      <description>Yes, on both nodes.</description>
      <pubDate>Tue, 06 May 2008 13:10:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192458#M57848</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-06T13:10:48Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192459#M57849</link>
      <description>What is the locking type used in /etc/lvm/lvm.conf? Have you run lvmconf --enable-cluster --lockinglibdir /usr/lib? Try restarting clvmd.</description>
      <pubDate>Tue, 06 May 2008 14:45:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192459#M57849</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-06T14:45:05Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192460#M57850</link>
      <description>How can I check whether it is enabled? I don't see a status option for the lvmconf command.</description>
      <pubDate>Wed, 07 May 2008 15:57:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192460#M57850</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-07T15:57:03Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192461#M57851</link>
      <description>Attach your /etc/lvm/lvm.conf file.</description>
      <pubDate>Wed, 07 May 2008 19:06:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192461#M57851</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-07T19:06:09Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192462#M57852</link>
      <description># This is an example configuration file for the LVM2 system.&lt;BR /&gt;# It contains the default settings that would be used if there was no&lt;BR /&gt;# /etc/lvm/lvm.conf file.&lt;BR /&gt;#&lt;BR /&gt;# Refer to 'man lvm.conf' for further information including the file layout.&lt;BR /&gt;#&lt;BR /&gt;# To put this file in a different directory and override /etc/lvm set&lt;BR /&gt;# the environment variable LVM_SYSTEM_DIR before running the tools.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# This section allows you to configure which block devices should&lt;BR /&gt;# be used by the LVM system.&lt;BR /&gt;devices {&lt;BR /&gt;&lt;BR /&gt;    # Where do you want your volume groups to appear ?&lt;BR /&gt;    dir = "/dev"&lt;BR /&gt;&lt;BR /&gt;    # An array of directories that contain the device nodes you wish&lt;BR /&gt;    # to use with LVM2.&lt;BR /&gt;    scan = [ "/dev" ]&lt;BR /&gt;&lt;BR /&gt;    # If several entries in the scanned directories correspond to the&lt;BR /&gt;    # same block device and the tools need to display a name for device,&lt;BR /&gt;    # all the pathnames are matched against each item in the following&lt;BR /&gt;    # list of regular expressions in turn and the first match is used.&lt;BR /&gt;    preferred_names = [ ]&lt;BR /&gt;&lt;BR /&gt;    # preferred_names = [ "^/dev/mpath/", "^/dev/[hs]d" ]&lt;BR /&gt;&lt;BR /&gt;    # A filter that tells LVM2 to only use a restricted set of devices.&lt;BR /&gt;    # The filter consists of an array of regular expressions.  These&lt;BR /&gt;    # expressions can be delimited by a character of your choice, and&lt;BR /&gt;    # prefixed with either an 'a' (for accept) or 'r' (for reject).&lt;BR /&gt;    # The first expression found to match a device name determines if&lt;BR /&gt;    # the device will be accepted or rejected (ignored).  
Devices that&lt;BR /&gt;    # don't match any patterns are accepted.&lt;BR /&gt;&lt;BR /&gt;    # Be careful if there there are symbolic links or multiple filesystem&lt;BR /&gt;    # entries for the same device as each name is checked separately against&lt;BR /&gt;    # the list of patterns.  The effect is that if any name matches any 'a'&lt;BR /&gt;    # pattern, the device is accepted; otherwise if any name matches any 'r'&lt;BR /&gt;    # pattern it is rejected; otherwise it is accepted.&lt;BR /&gt;&lt;BR /&gt;    # Don't have more than one filter line active at once: only one gets used.&lt;BR /&gt;&lt;BR /&gt;    # Run vgscan after you change this parameter to ensure that&lt;BR /&gt;    # the cache file gets regenerated (see below).&lt;BR /&gt;    # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;    # By default we accept every block device:&lt;BR /&gt;    #filter = [ "a/.*/" ]&lt;BR /&gt;    filter = [ "a|/dev/sda1|", "a|/dev/sda2|", "a|/dev/emc*|", "r/.*/" ]&lt;BR /&gt;&lt;BR /&gt;    # Exclude the cdrom drive&lt;BR /&gt;    # filter = [ "r|/dev/cdrom|" ]&lt;BR /&gt;&lt;BR /&gt;    # When testing I like to work with just loopback devices:&lt;BR /&gt;    # filter = [ "a/loop/", "r/.*/" ]&lt;BR /&gt;&lt;BR /&gt;    # Or maybe all loops and ide drives except hdc:&lt;BR /&gt;    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]&lt;BR /&gt;&lt;BR /&gt;    # Use anchors if you want to be really specific&lt;BR /&gt;    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]&lt;BR /&gt;&lt;BR /&gt;    # The results of the filtering are cached on disk to avoid&lt;BR /&gt;    # rescanning dud devices (which can take a very long time).&lt;BR /&gt;    # By default this cache is stored in the /etc/lvm/cache directory&lt;BR /&gt;    # in a file called '.cache'.&lt;BR /&gt;    # It is safe to delete the contents: the tools regenerate it.&lt;BR /&gt;    # (The old setting 'cache' is still respected if neither of&lt;BR 
/&gt;    # these new ones is present.)&lt;BR /&gt;    cache_dir = "/etc/lvm/cache"&lt;BR /&gt;    cache_file_prefix = ""&lt;BR /&gt;&lt;BR /&gt;    # You can turn off writing this cache file by setting this to 0.&lt;BR /&gt;    write_cache_state = 1&lt;BR /&gt;&lt;BR /&gt;    # Advanced settings.&lt;BR /&gt;&lt;BR /&gt;    # List of pairs of additional acceptable block device types found&lt;BR /&gt;    # in /proc/devices with maximum (non-zero) number of partitions.&lt;BR /&gt;    # types = [ "fd", 16 ]&lt;BR /&gt;&lt;BR /&gt;    # If sysfs is mounted (2.6 kernels) restrict device scanning to&lt;BR /&gt;    # the block devices it believes are valid.&lt;BR /&gt;    # 1 enables; 0 disables.&lt;BR /&gt;    sysfs_scan = 1&lt;BR /&gt;&lt;BR /&gt;    # By default, LVM2 will ignore devices used as components of&lt;BR /&gt;    # software RAID (md) devices by looking for md superblocks.&lt;BR /&gt;    # 1 enables; 0 disables.&lt;BR /&gt;    md_component_detection = 1&lt;BR /&gt;&lt;BR /&gt;    # If, while scanning the system for PVs, LVM2 encounters a device-mapper&lt;BR /&gt;    # device that has its I/O suspended, it waits for it to become accessible.&lt;BR /&gt;    # Set this to 1 to skip such devices.  
This should only be needed&lt;BR /&gt;    # in recovery situations.&lt;BR /&gt;    ignore_suspended_devices = 0&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;# This section that allows you to configure the nature of the&lt;BR /&gt;# information that LVM2 reports.&lt;BR /&gt;log {&lt;BR /&gt;&lt;BR /&gt;    # Controls the messages sent to stdout or stderr.&lt;BR /&gt;    # There are three levels of verbosity, 3 being the most verbose.&lt;BR /&gt;    verbose = 0&lt;BR /&gt;&lt;BR /&gt;    # Should we send log messages through syslog?&lt;BR /&gt;    # 1 is yes; 0 is no.&lt;BR /&gt;    syslog = 1&lt;BR /&gt;&lt;BR /&gt;    # Should we log error and debug messages to a file?&lt;BR /&gt;    # By default there is no log file.&lt;BR /&gt;    #file = "/var/log/lvm2.log"&lt;BR /&gt;&lt;BR /&gt;    # Should we overwrite the log file each time the program is run?&lt;BR /&gt;    # By default we append.&lt;BR /&gt;    overwrite = 0&lt;BR /&gt;&lt;BR /&gt;    # What level of log messages should we send to the log file and/or syslog?&lt;BR /&gt;    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.&lt;BR /&gt;    # 7 is the most verbose (LOG_DEBUG).&lt;BR /&gt;    level = 0&lt;BR /&gt;&lt;BR /&gt;    # Format of output messages&lt;BR /&gt;    # Whether or not (1 or 0) to indent messages according to their severity&lt;BR /&gt;    indent = 1&lt;BR /&gt;&lt;BR /&gt;    # Whether or not (1 or 0) to display the command name on each line output&lt;BR /&gt;    command_names = 0&lt;BR /&gt;&lt;BR /&gt;    # A prefix to use before the message text (but after the command name,&lt;BR /&gt;    # if selected).  
Default is two spaces, so you can see/grep the severity&lt;BR /&gt;    # of each message.&lt;BR /&gt;    prefix = "  "&lt;BR /&gt;&lt;BR /&gt;    # To make the messages look similar to the original LVM tools use:&lt;BR /&gt;    #   indent = 0&lt;BR /&gt;    #   command_names = 1&lt;BR /&gt;    #   prefix = " -- "&lt;BR /&gt;&lt;BR /&gt;    # Set this if you want log messages during activation.&lt;BR /&gt;    # Don't use this in low memory situations (can deadlock).&lt;BR /&gt;    # activation = 0&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;# Configuration of metadata backups and archiving.  In LVM2 when we&lt;BR /&gt;# talk about a 'backup' we mean making a copy of the metadata for the&lt;BR /&gt;# *current* system.  The 'archive' contains old metadata configurations.&lt;BR /&gt;# Backups are stored in a human readable text format.&lt;BR /&gt;backup {&lt;BR /&gt;&lt;BR /&gt;    # Should we maintain a backup of the current metadata configuration ?&lt;BR /&gt;    # Use 1 for Yes; 0 for No.&lt;BR /&gt;    # Think very hard before turning this off!&lt;BR /&gt;    backup = 1&lt;BR /&gt;&lt;BR /&gt;    # Where shall we keep it ?&lt;BR /&gt;    # Remember to back up this directory regularly!&lt;BR /&gt;    backup_dir = "/etc/lvm/backup"&lt;BR /&gt;&lt;BR /&gt;    # Should we maintain an archive of old metadata configurations ?&lt;BR /&gt;    # Use 1 for Yes; 0 for No.&lt;BR /&gt;    # On by default.  
Think very hard before turning this off.&lt;BR /&gt;    archive = 1&lt;BR /&gt;&lt;BR /&gt;    # Where should archived files go ?&lt;BR /&gt;    # Remember to back up this directory regularly!&lt;BR /&gt;    archive_dir = "/etc/lvm/archive"&lt;BR /&gt;&lt;BR /&gt;    # What is the minimum number of archive files you wish to keep ?&lt;BR /&gt;    retain_min = 10&lt;BR /&gt;&lt;BR /&gt;    # What is the minimum time you wish to keep an archive file for ?&lt;BR /&gt;    retain_days = 30&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;# Settings for running LVM2 in shell (readline) mode.&lt;BR /&gt;shell {&lt;BR /&gt;&lt;BR /&gt;    # Number of lines of history to store in ~/.lvm_history&lt;BR /&gt;    history_size = 100&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;# Miscellaneous global LVM2 settings&lt;BR /&gt;global {&lt;BR /&gt;    library_dir = "/usr/lib64"&lt;BR /&gt;&lt;BR /&gt;    # The file creation mask for any files and directories created.&lt;BR /&gt;    # Interpreted as octal if the first digit is zero.&lt;BR /&gt;    umask = 077&lt;BR /&gt;&lt;BR /&gt;    # Allow other users to read the files&lt;BR /&gt;    #umask = 022&lt;BR /&gt;&lt;BR /&gt;    # Enabling test mode means that no changes to the on disk metadata&lt;BR /&gt;    # will be made.  Equivalent to having the -t option on every&lt;BR /&gt;    # command.  
Defaults to off.&lt;BR /&gt;    test = 0&lt;BR /&gt;&lt;BR /&gt;    # Default value for --units argument&lt;BR /&gt;    units = "h"&lt;BR /&gt;&lt;BR /&gt;    # Whether or not to communicate with the kernel device-mapper.&lt;BR /&gt;    # Set to 0 if you want to use the tools to manipulate LVM metadata&lt;BR /&gt;    # without activating any logical volumes.&lt;BR /&gt;    # If the device-mapper kernel driver is not present in your kernel&lt;BR /&gt;    # setting this to 0 should suppress the error messages.&lt;BR /&gt;    activation = 1&lt;BR /&gt;&lt;BR /&gt;    # If we can't communicate with device-mapper, should we try running&lt;BR /&gt;    # the LVM1 tools?&lt;BR /&gt;    # This option only applies to 2.4 kernels and is provided to help you&lt;BR /&gt;    # switch between device-mapper kernels and LVM1 kernels.&lt;BR /&gt;    # The LVM1 tools need to be installed with .lvm1 suffixes&lt;BR /&gt;    # e.g. vgscan.lvm1 and they will stop working after you start using&lt;BR /&gt;    # the new lvm2 on-disk metadata format.&lt;BR /&gt;    # The default value is set when the tools are built.&lt;BR /&gt;    # fallback_to_lvm1 = 0&lt;BR /&gt;&lt;BR /&gt;    # The default metadata format that commands should use - "lvm1" or "lvm2".&lt;BR /&gt;    # The command line override is -M1 or -M2.&lt;BR /&gt;    # Defaults to "lvm1" if compiled in, else "lvm2".&lt;BR /&gt;    # format = "lvm1"&lt;BR /&gt;&lt;BR /&gt;    # Location of proc filesystem&lt;BR /&gt;    proc = "/proc"&lt;BR /&gt;&lt;BR /&gt;    # Type of locking to use. 
Defaults to local file-based locking (1).&lt;BR /&gt;    # Turn locking off by setting to 0 (dangerous: risks metadata corruption&lt;BR /&gt;    # if LVM2 commands get run concurrently).&lt;BR /&gt;    # Type 2 uses the external shared library locking_library.&lt;BR /&gt;    # Type 3 uses built-in clustered locking.&lt;BR /&gt;    locking_type = 3&lt;BR /&gt;&lt;BR /&gt;    # If using external locking (type 2) and initialisation fails,&lt;BR /&gt;    # with this set to 1 an attempt will be made to use the built-in&lt;BR /&gt;    # clustered locking.&lt;BR /&gt;    # If you are using a customised locking_library you should set this to 0.&lt;BR /&gt;    fallback_to_clustered_locking = 1&lt;BR /&gt;&lt;BR /&gt;    # If an attempt to initialise type 2 or type 3 locking failed, perhaps&lt;BR /&gt;    # because cluster components such as clvmd are not running, with this set&lt;BR /&gt;    # to 1 an attempt will be made to use local file-based locking (type 1).&lt;BR /&gt;    # If this succeeds, only commands against local volume groups will proceed.&lt;BR /&gt;    # Volume Groups marked as clustered will be ignored.&lt;BR /&gt;    fallback_to_local_locking = 1&lt;BR /&gt;&lt;BR /&gt;    # Local non-LV directory that holds file-based locks while commands are&lt;BR /&gt;    # in progress.  A directory like /tmp that may get wiped on reboot is OK.&lt;BR /&gt;    locking_dir = "/var/lock/lvm"&lt;BR /&gt;&lt;BR /&gt;    # Other entries can go here to allow you to load shared libraries&lt;BR /&gt;    # e.g. 
if support for LVM1 metadata was compiled as a shared library use&lt;BR /&gt;    #   format_libraries = "liblvm2format1.so"&lt;BR /&gt;    # Full pathnames can be given.&lt;BR /&gt;&lt;BR /&gt;    # Search this directory first for shared libraries.&lt;BR /&gt;    #   library_dir = "/lib"&lt;BR /&gt;&lt;BR /&gt;    # The external locking library to load if locking_type is set to 2.&lt;BR /&gt;    #   locking_library = "liblvm2clusterlock.so"&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;activation {&lt;BR /&gt;    # Device used in place of missing stripes if activating incomplete volume.&lt;BR /&gt;    # For now, you need to set this up yourself first (e.g. with 'dmsetup')&lt;BR /&gt;    # For example, you could make it return I/O errors using the 'error'&lt;BR /&gt;    # target or make it return zeros.&lt;BR /&gt;    missing_stripe_filler = "/dev/ioerror"&lt;BR /&gt;&lt;BR /&gt;    # How much stack (in KB) to reserve for use while devices suspended&lt;BR /&gt;    reserved_stack = 256&lt;BR /&gt;&lt;BR /&gt;    # How much memory (in KB) to reserve for use while devices suspended&lt;BR /&gt;    reserved_memory = 8192&lt;BR /&gt;&lt;BR /&gt;    # Nice value used while devices suspended&lt;BR /&gt;    process_priority = -18&lt;BR /&gt;&lt;BR /&gt;    # If volume_list is defined, each LV is only activated if there is a&lt;BR /&gt;    # match against the list.&lt;BR /&gt;    #   "vgname" and "vgname/lvname" are matched exactly.&lt;BR /&gt;    #   "@tag" matches any tag set in the LV or VG.&lt;BR /&gt;    #   "@*" matches if any tag defined on the host is also set in the LV or VG&lt;BR /&gt;    #&lt;BR /&gt;    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]&lt;BR /&gt;&lt;BR /&gt;    # Size (in KB) of each copy operation when mirroring&lt;BR /&gt;    mirror_region_size = 512&lt;BR /&gt;&lt;BR /&gt;    # 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define&lt;BR /&gt;    # how a device failure affecting a mirror is handled.&lt;BR /&gt;    # A mirror is composed of 
mirror images (copies) and a log.&lt;BR /&gt;    # A disk log ensures that a mirror does not need to be re-synced&lt;BR /&gt;    # (all copies made the same) every time a machine reboots or crashes.&lt;BR /&gt;    #&lt;BR /&gt;    # In the event of a failure, the specified policy will be used to&lt;BR /&gt;    # determine what happens:&lt;BR /&gt;    #&lt;BR /&gt;    # "remove" - Simply remove the faulty device and run without it.  If&lt;BR /&gt;    #            the log device fails, the mirror would convert to using&lt;BR /&gt;    #            an in-memory log.  This means the mirror will not&lt;BR /&gt;    #            remember its sync status across crashes/reboots and&lt;BR /&gt;    #            the entire mirror will be re-synced.  If a&lt;BR /&gt;    #            mirror image fails, the mirror will convert to a&lt;BR /&gt;    #            non-mirrored device if there is only one remaining good&lt;BR /&gt;    #            copy.&lt;BR /&gt;    #&lt;BR /&gt;    # "allocate" - Remove the faulty device and try to allocate space on&lt;BR /&gt;    #            a new device to be a replacement for the failed device.&lt;BR /&gt;    #            Using this policy for the log is fast and maintains the&lt;BR /&gt;    #            ability to remember sync state through crashes/reboots.&lt;BR /&gt;    #            Using this policy for a mirror device is slow, as it&lt;BR /&gt;    #            requires the mirror to resynchronize the devices, but it&lt;BR /&gt;    #            will preserve the mirror characteristic of the device.&lt;BR /&gt;    #            This policy acts like "remove" if no suitable device and&lt;BR /&gt;    #            space can be allocated for the replacement.&lt;BR /&gt;    #            Currently this is not implemented properly and behaves&lt;BR /&gt;    #            similarly to:&lt;BR /&gt;    #&lt;BR /&gt;    # "allocate_anywhere" - Operates like "allocate", but it does not&lt;BR /&gt;    #            require that the new space being allocated 
be on a&lt;BR /&gt;    #            device that is not part of the mirror.  For a log device&lt;BR /&gt;    #            failure, this could mean that the log is allocated on&lt;BR /&gt;    #            the same device as a mirror device.  For a mirror&lt;BR /&gt;    #            device, this could mean that the mirror device is&lt;BR /&gt;    #            allocated on the same device as another mirror device.&lt;BR /&gt;    #            This policy would not be wise for mirror devices&lt;BR /&gt;    #            because it would break the redundant nature of the&lt;BR /&gt;    #            mirror.  This policy acts like "remove" if no suitable&lt;BR /&gt;    #            device and space can be allocated for the replacement.&lt;BR /&gt;&lt;BR /&gt;    mirror_log_fault_policy = "allocate"&lt;BR /&gt;    mirror_device_fault_policy = "remove"&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;####################&lt;BR /&gt;# Advanced section #&lt;BR /&gt;####################&lt;BR /&gt;&lt;BR /&gt;# Metadata settings&lt;BR /&gt;#&lt;BR /&gt;# metadata {&lt;BR /&gt;    # Default number of copies of metadata to hold on each PV.  
0, 1 or 2.&lt;BR /&gt;    # You might want to override it from the command line with 0&lt;BR /&gt;    # when running pvcreate on new PVs which are to be added to large VGs.&lt;BR /&gt;&lt;BR /&gt;    # pvmetadatacopies = 1&lt;BR /&gt;&lt;BR /&gt;    # Approximate default size of on-disk metadata areas in sectors.&lt;BR /&gt;    # You should increase this if you have large volume groups or&lt;BR /&gt;    # you want to retain a large on-disk history of your metadata changes.&lt;BR /&gt;&lt;BR /&gt;    # pvmetadatasize = 255&lt;BR /&gt;&lt;BR /&gt;    # List of directories holding live copies of text format metadata.&lt;BR /&gt;    # These directories must not be on logical volumes!&lt;BR /&gt;    # It's possible to use LVM2 with a couple of directories here,&lt;BR /&gt;    # preferably on different (non-LV) filesystems, and with no other&lt;BR /&gt;    # on-disk metadata (pvmetadatacopies = 0). Or this can be in&lt;BR /&gt;    # addition to on-disk metadata areas.&lt;BR /&gt;    # The feature was originally added to simplify testing and is not&lt;BR /&gt;    # supported under low memory situations - the machine could lock up.&lt;BR /&gt;    #&lt;BR /&gt;    # Never edit any files in these directories by hand unless&lt;BR /&gt;    # you are absolutely sure you know what you are doing! Use&lt;BR /&gt;    # the supplied toolset to make changes (e.g. 
vgcfgrestore).&lt;BR /&gt;&lt;BR /&gt;    # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]&lt;BR /&gt;#}&lt;BR /&gt;&lt;BR /&gt;# Event daemon&lt;BR /&gt;#&lt;BR /&gt;# dmeventd {&lt;BR /&gt;    # mirror_library is the library used when monitoring a mirror device.&lt;BR /&gt;    #&lt;BR /&gt;    # "libdevmapper-event-lvm2mirror.so" attempts to recover from failures.&lt;BR /&gt;    # It removes failed devices from a volume group and reconfigures a&lt;BR /&gt;    # mirror as necessary.&lt;BR /&gt;    #&lt;BR /&gt;    # mirror_library = "libdevmapper-event-lvm2mirror.so"&lt;BR /&gt;#}&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 07 May 2008 19:26:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192462#M57852</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-07T19:26:07Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192463#M57853</link>
      <description>Your config file is correct. Can you post the output of "vgs"? I want to know if the "clustered" attribute is enabled on that volume group.</description>
      <pubDate>Wed, 07 May 2008 20:42:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192463#M57853</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-07T20:42:07Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192464#M57854</link>
      <description>They are (see the vgdisplay output above; Clustered: yes)&lt;BR /&gt;&lt;BR /&gt;]# vgs&lt;BR /&gt;  VG             #PV #LV #SN Attr   VSize   VFree&lt;BR /&gt;  vg00             1   8   0 wz--n- 558.50G 532.53G&lt;BR /&gt;  vgec_rde0_lbin   1   1   0 wz--nc  59.97G   1.38G&lt;BR /&gt;  vgec_rde0_ldb    3   8   0 wz--nc 191.91G 928.00M&lt;BR /&gt;  vgec_rde0_ldb2   4   9   0 wz--nc 269.62G  16.62G&lt;BR /&gt;  vgec_rde0_par    2   1   0 wz--nc  16.81G      0&lt;BR /&gt;  vgec_rde0_pdb    4   7   0 wz--nc 269.62G  58.62G&lt;BR /&gt;  vgec_rde0_prd    1   3   0 wz--nc  16.84G   8.72G&lt;BR /&gt;</description>
      <pubDate>Thu, 08 May 2008 09:12:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192464#M57854</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-08T09:12:11Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192465#M57855</link>
      <description>If the system were mine, I would try the following:&lt;BR /&gt;&lt;BR /&gt;lvmconf --disable-cluster&lt;BR /&gt;&lt;BR /&gt;Extend the LUN&lt;BR /&gt;&lt;BR /&gt;lvmconf --enable-cluster&lt;BR /&gt;&lt;BR /&gt;I do this every day to take snapshots of a GFS filesystem. No problems so far, but I doubt it is supported.</description>
      <pubDate>Thu, 08 May 2008 19:59:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192465#M57855</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-08T19:59:33Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192466#M57856</link>
      <description>I don't use GFS, but ext3. Also, I am not clear on the exact limitation that led you to choose this method.</description>
      <pubDate>Fri, 09 May 2008 12:20:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192466#M57856</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2008-05-09T12:20:27Z</dc:date>
    </item>
    <item>
      <title>Re: lvextend error on Redhat-cluster suit 5.1</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192467#M57857</link>
      <description>That was just a comment to let you know that I disable and enable cluster support for LVM every day. As you cannot take snapshots of clustered LVs, I must disable cluster support, take the snapshot, and then enable it again.</description>
      <pubDate>Fri, 09 May 2008 12:23:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvextend-error-on-redhat-cluster-suit-5-1/m-p/4192467#M57857</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-05-09T12:23:29Z</dc:date>
    </item>
  </channel>
</rss>

