<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: vgchange activation error in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282675#M53029</link>
    <description>&amp;gt; volume_list = [ "VolGroup00", "@db01", "datavg1/lvol1" ]&lt;BR /&gt;&lt;BR /&gt;You've now effectively disabled the HA LVM protection: datavg1 can now be activated on this node even if it has a tag that indicates it may currently be active on another node.&lt;BR /&gt;&lt;BR /&gt;If db02 is currently running the service and db01 is rebooted, this change allows db01 to activate the datavg1 at boot time and perhaps perform an automatic filesystem check on datavg1/lvol1... while the filesystem is active on db02. This will *certainly* cause filesystem corruption, because db01's fsck will see db02's on-going operations as "corruption" and will attempt to fix it. &lt;BR /&gt;&lt;BR /&gt;At that point, db02 will see problems like "WTF??? I just changed this directory entry from X to Y, but now it's back at X again?" This will typically cause the filesystem to become read-only at db02.&lt;BR /&gt;&lt;BR /&gt;Let me emphasise: In a HA LVM configuration, it is important that the shared VGs *must not* be activated before the cluster services are started and communicating with the other node(s). The shared VGs *must not* be activated, filesystem-checked nor mounted by the regular start-up procedure: they must be controlled entirely by the cluster mechanisms. &lt;BR /&gt;&lt;BR /&gt;If the shared filesystem is mentioned in /etc/fstab at all (you could omit it completely), it *must* have mount option "noauto" and the filesystem check pass number at the 6th column of fstab set to 0. 
Otherwise your system will fail to boot if the HA LVM locking mechanism works, or may corrupt your shared filesystem if the locking mechanism fails.&lt;BR /&gt;&lt;BR /&gt;If your cluster configuration requires that the shared VG is activated on one or the other node before the cluster daemons are started, then your cluster configuration is misdesigned.&lt;BR /&gt;&lt;BR /&gt;The correct procedure for manually activating an HA LVM-configured shared VG is as follows:&lt;BR /&gt;&lt;BR /&gt;(Note: this procedure is for emergency/maintenance use only. In normal use, the cluster should handle all this automatically - if it doesn't, your cluster may not be able to perform an automatic failover in a real failure situation.)&lt;BR /&gt;&lt;BR /&gt;1.) Use "vgs -o +tags" to see if the VG currently has a tag on it.&lt;BR /&gt;&lt;BR /&gt;2.) If the VG has no tag, or a tag that matches the name of the host you wish to activate the VG on, you can go directly to step 7.&lt;BR /&gt;&lt;BR /&gt;3.) If the VG has a tag that matches the hostname of another node, *you must* first make sure that node does not have the VG currently activated.&lt;BR /&gt;&lt;BR /&gt;4.) When you're sure the VG is not currently active on any node, use "vgchange --deltag" to remove the other node's VG tag:&lt;BR /&gt;&lt;BR /&gt;vgchange --deltag db02 datavg1&lt;BR /&gt;&lt;BR /&gt;5.) At this point, say to yourself: "I am certain this VG is not active on any cluster node, and I understand I will be held responsible for any damage to data if this is not true." You're declaring that you know better than the cluster here. &lt;BR /&gt;&lt;BR /&gt;6.) Then add a new tag that matches the hostname of the node you wish to activate the VG on:&lt;BR /&gt;&lt;BR /&gt;vgchange --addtag db01 datavg1&lt;BR /&gt;&lt;BR /&gt;7.) Activate the VG as normal (note: the argument is the VG name, not the hostname):&lt;BR /&gt;&lt;BR /&gt;vgchange -a y datavg1&lt;BR /&gt;&lt;BR /&gt;8.) 
If applicable, run a filesystem check on the LV(s):&lt;BR /&gt;&lt;BR /&gt;fsck -C0 /dev/mapper/datavg1-lvol1&lt;BR /&gt;&lt;BR /&gt;9.) If applicable, mount the filesystem(s).&lt;BR /&gt;&lt;BR /&gt;If the LV contains a raw database instead of a filesystem, steps 8 and 9 will not be applicable; instead, the database engine may be started at that point.&lt;BR /&gt;&lt;BR /&gt;MK</description>
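The fstab rule described above, as a config fragment (the device name follows the thread's examples; the mountpoint and filesystem type are hypothetical): "noauto" keeps the regular boot sequence from mounting the shared filesystem, and a pass number of 0 in the sixth field keeps boot-time fsck away from it.

```
# /etc/fstab entry for a cluster-managed filesystem (sketch only):
# device                     mountpoint   type  options  dump  pass
/dev/mapper/datavg1-lvol1    /shared/db   ext3  noauto   0     0
```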
    <pubDate>Tue, 24 May 2011 08:05:10 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2011-05-24T08:05:10Z</dc:date>
    <item>
      <title>vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282668#M53022</link>
      <description>When I try to activate a Volume Group on db01 I am receiving this activation filter error….&lt;BR /&gt;&lt;BR /&gt;We have a db02 server that can see and use the same SAN DASD and I can activate it and&lt;BR /&gt;&lt;BR /&gt;mount the file system there ok…&lt;BR /&gt;&lt;BR /&gt;The issue started when db01 crashed and rebooted and must of set some flag some where that&lt;BR /&gt;&lt;BR /&gt;is now preventing it from startin up again….&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;I’ve tried doing a vgexport on db02 and an import on db01 but I still get the same error…&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;Any ideas ??&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]# /sbin/vgchange -a y datavg1&lt;BR /&gt;&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;&lt;BR /&gt;  Not activating datavg1/lvol01 since it does not pass activation filter.&lt;BR /&gt;&lt;BR /&gt;  0 logical volume(s) in volume group "datavg1" now active&lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]#&lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]# lvdisplay -v /dev/datavg1/lvol01&lt;BR /&gt;&lt;BR /&gt;    Using logical volume(s) on command line&lt;BR /&gt;&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;&lt;BR /&gt;  --- Logical volume ---&lt;BR /&gt;&lt;BR /&gt;  LV Name                /dev/datavg1/lvol01&lt;BR /&gt;&lt;BR /&gt;  VG Name                datavg1&lt;BR /&gt;&lt;BR /&gt;  LV UUID                e2DFlG-CweU-zVsV-wzfs-oDwY-IpUF-JKgHKt&lt;BR /&gt;&lt;BR /&gt;  LV Write Access        read/write&lt;BR /&gt;&lt;BR /&gt;  LV Status              NOT available&lt;BR /&gt;&lt;BR /&gt;  LV Size                1000.00 GB&lt;BR /&gt;&lt;BR /&gt;  Current LE             256000&lt;BR /&gt;&lt;BR /&gt;  Segments               4&lt;BR /&gt;&lt;BR /&gt;  Allocation             inherit&lt;BR /&gt;&lt;BR /&gt;  Read ahead sectors     auto&lt;BR /&gt;&lt;BR 
/&gt;[root@awopdb01 ~]#&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]# vgimport datavg1&lt;BR /&gt;&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;&lt;BR /&gt;  Volume group "datavg1" successfully imported&lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]# /sbin/vgchange -a y datavg1&lt;BR /&gt;&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;&lt;BR /&gt;  Not activating datavg1/lvol01 since it does not pass activation filter.&lt;BR /&gt;&lt;BR /&gt;  0 logical volume(s) in volume group "datavg1" now active&lt;BR /&gt;&lt;BR /&gt;[root@db01 ~]#&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;But when I activate it on the second server it works ok:&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;[root@db02 ~]# /sbin/vgchange -a y datavg1&lt;BR /&gt;&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;&lt;BR /&gt;  1 logical volume(s) in volume group "datavg1" now active&lt;BR /&gt;&lt;BR /&gt;[root@db02 ~]#&lt;BR /&gt;&lt;BR /&gt;[root@db02 ~]# ls -al /dev/datavg1&lt;BR /&gt;&lt;BR /&gt;total 0&lt;BR /&gt;&lt;BR /&gt;drwxr-xr-x  2 root root   60 May 18 19:45 .&lt;BR /&gt;&lt;BR /&gt;drwxr-xr-x 17 root root 7280 May 18 19:45 ..&lt;BR /&gt;&lt;BR /&gt;lrwxrwxrwx  1 root root   26 May 18 19:45 lvol01 -&amp;gt; /dev/mapper/datavg1-lvol01&lt;BR /&gt;&lt;BR /&gt;[root@db02 ~]#&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 19 May 2011 14:22:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282668#M53022</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2011-05-19T14:22:27Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282669#M53023</link>
      <description>You seem to have some HP-UX experience. Beware: vgimport and vgexport will not work on Linux at all like you may have used to on HP-UX.&lt;BR /&gt;&lt;BR /&gt;Does this system have any kind of cluster suite installed? (Serviceguard? RedHat Cluster Suite? DB2 cluster? Something else?)&lt;BR /&gt;&lt;BR /&gt;What is the output of these commands:&lt;BR /&gt;&lt;BR /&gt;grep -e filter -e volume_list /etc/lvm/lvm.conf&lt;BR /&gt;vgs -o +tags&lt;BR /&gt;lvs&lt;BR /&gt;pvs&lt;BR /&gt;&lt;BR /&gt;If /etc/lvm/lvm.conf contains an uncommented filter expression that is different from the default value:&lt;BR /&gt;&lt;BR /&gt;filter = [ "a/.*/" ]&lt;BR /&gt;&lt;BR /&gt;... or an uncommented "volume_list" definition, then it's probably been added there for a reason: don't change it until you understand why the current value is there.&lt;BR /&gt;&lt;BR /&gt;The activation filter and/or VG tags are often used as a part of a cluster interlock mechanism that stops the cluster node from activating a VG that is in use by another cluster node. (If the particular VG is supposed to be accessed by more than one node simultaneously, then the lockout is designed to prevent *uncoordinated* access: cluster nodes must be able to communicate with each other to be aware of what the other nodes are doing. The nodes must coordinate their actions so that one node does not accidentally use an stale cached copy of some record when another node has just updated it.)&lt;BR /&gt;&lt;BR /&gt;If this is what is stopping you from activating the VG, it probably means that some sort of cluster infrastructure process did not automatically start up when db01 was rebooted; when you find it and start it, it might automatically fix this problem for you.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 20 May 2011 06:33:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282669#M53023</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2011-05-20T06:33:09Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282670#M53024</link>
      <description>The two servers: db01 and db02, are running in a Red Hat Cluster... &lt;BR /&gt;&lt;BR /&gt;The problem started when db01 failed when it lost contact with the quorum disk...&lt;BR /&gt;&lt;BR /&gt;I believe the cluster tried to start on db02 but failed with the same quorum disk issue, since it lost the connectivity and must of set some kind of lock or tag that is preventing it to activate on db01..&lt;BR /&gt;&lt;BR /&gt;I was able to mount it manually on db02 as the SAN group resolves the issue with the quorum disk..&lt;BR /&gt;&lt;BR /&gt;Just need to figure out what is preventing it from starting up on db01 so we can get the cluster back up and going again...&lt;BR /&gt;&lt;BR /&gt;[root@awopdb01 ~]# grep -e filter -e volume_list /etc/lvm/lvm.conf&lt;BR /&gt;    # A filter that tells LVM2 to only use a restricted set of devices.&lt;BR /&gt;    # The filter consists of an array of regular expressions.  These&lt;BR /&gt;    # Don't have more than one filter line active at once: only one gets used.&lt;BR /&gt;    #filter = [ "a/.*/" ]&lt;BR /&gt;    #filter = [ "r|/dev/sdr/|", "r|/dev/sdi/|" ]&lt;BR /&gt;    #filter = [ "a|/dev/sda.*|", "a|/dev/mpath/.*|", "r/.*/" ]&lt;BR /&gt;    # filter = [ "r|/dev/cdrom|" ]&lt;BR /&gt;    # filter = [ "a/loop/", "r/.*/" ]&lt;BR /&gt;    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]&lt;BR /&gt;    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]&lt;BR /&gt;    # The results of the filtering are cached on disk to avoid&lt;BR /&gt;    # If volume_list is defined, each LV is only activated if there is a&lt;BR /&gt;    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]&lt;BR /&gt;    volume_list = [ "VolGroup00", "@awopdb01" ]&lt;BR /&gt;[root@awopdb01 ~]# vgs -o +tags&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;  VG         #PV #LV #SN Attr   VSize    VFree  VG Tags&lt;BR /&gt;  VolGroup00   2   6   0 wz--n-  680.34G 80.25G&lt;BR /&gt;  
datavg1      4   1   0 wz--n- 1000.00G     0  awopdb02&lt;BR /&gt;[root@awopdb01 ~]# lvs&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;  LV       VG         Attr   LSize    Origin Snap%  Move Log Copy%  Convert&lt;BR /&gt;  LogVol00 VolGroup00 -wi-ao  100.00G&lt;BR /&gt;  LogVol01 VolGroup00 -wi-ao  192.00G&lt;BR /&gt;  LogVol02 VolGroup00 -wi-ao  192.00G&lt;BR /&gt;  lvol1    VolGroup00 -wi-ao   46.09G&lt;BR /&gt;  lvol2    VolGroup00 -wi-ao   50.00G&lt;BR /&gt;  lvol3    VolGroup00 -wi-ao   20.00G&lt;BR /&gt;  lvol01   datavg1    -wi--- 1000.00G&lt;BR /&gt;[root@awopdb01 ~]# pvs&lt;BR /&gt;  Found duplicate PV tbU5yWceVhgPgS6RIvj0M2TxChiLp61b: using /dev/sdr2 not /dev/sdb2&lt;BR /&gt;  PV                  VG         Fmt  Attr PSize   PFree&lt;BR /&gt;  /dev/mpath/mpath11  datavg1    lvm2 a-   250.00G      0&lt;BR /&gt;  /dev/mpath/mpath12  datavg1    lvm2 a-   250.00G      0&lt;BR /&gt;  /dev/mpath/mpath13  datavg1    lvm2 a-   250.00G      0&lt;BR /&gt;  /dev/mpath/mpath14  datavg1    lvm2 a-   250.00G      0&lt;BR /&gt;  /dev/mpath/mpath1p2            lvm2 a-   267.75G 267.75G&lt;BR /&gt;  /dev/sda2           VolGroup00 lvm2 a-   408.09G      0&lt;BR /&gt;  /dev/sda3           VolGroup00 lvm2 a-   272.25G  80.25G&lt;BR /&gt;[root@awopdb01 ~]#</description>
      <pubDate>Fri, 20 May 2011 10:35:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282670#M53024</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2011-05-20T10:35:54Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282671#M53025</link>
      <description>OK, this looks like a HA LVM configuration of RedHat Cluster Suite. (In other words, very much like what later versions of Serviceguard for Linux use as a substitute of HP-UX "vgchange -a e".)&lt;BR /&gt;&lt;BR /&gt;In this configuration, the cluster VG can only be active on one of the cluster nodes at a time, or on none at all: never on two or more nodes simultaneously. The activation of the cluster VG is controlled by the cluster suite.&lt;BR /&gt;&lt;BR /&gt;Which version of RHEL? The RedHat Cluster Suite has changed a lot between versions. &lt;BR /&gt;The instructions below assume RHEL 5, but should be mostly compatible with RHEL 4 or 6 too. The HA LVM mode is available on RHEL 4.5 and newer.&lt;BR /&gt;&lt;BR /&gt;Your datavg1 volume group currently has a tag "awopdb02" on it - meaning the VG is currently in use on awopdb02 (or the cluster suite had it active there when the system crashed). &lt;BR /&gt;&lt;BR /&gt;It is a cluster volume group, so *you should not activate it* manually in a normal situation - the cluster suite will activate it if (and only if) appropriate checks are successful. *It is not an error* that you cannot activate the VG - that is the cluster safety system doing its job.&lt;BR /&gt;&lt;BR /&gt;If both nodes failed to reach the quorum disk, that means both nodes should have noticed they've lost quorum and rebooted - is this what happened? That's what a cluster *should* have done in that situation.&lt;BR /&gt;&lt;BR /&gt;First, you should run "clustat" and "cman_tool status" on both nodes. &lt;BR /&gt;&lt;BR /&gt;- Are both nodes "online" in the clustat listing? (if not, the node that is not "online" should not activate datavg1 unless the cluster daemons have been completely stopped on both nodes AND the sysadmin has verified the other node does not have it active.)&lt;BR /&gt;&lt;BR /&gt;- Does "clustat" say "Member Status: Quorate" on both nodes? 
(If not, the node that is not quorate should not activate datavg1...[see above])&lt;BR /&gt;&lt;BR /&gt;- What's the state of the cluster services in the clustat listing? (If the nodes are online but the services are stopped, then datavg1 should not be activated anywhere.)&lt;BR /&gt;&lt;BR /&gt;- In the "cman_tool status" listings, are the values of "Config Version", "Cluster Id" and "Cluster Generation" the same in both nodes? &lt;BR /&gt;(If not, and both nodes are online as per clustat, then *you're in a split-brain situation*: both nodes are thinking "I'm OK, the other node is not.")&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;My recommendation:&lt;BR /&gt;1.) Undo all your manual activation steps on awopdb02. If you started the database manually, stop it. If you mounted the disks manually, unmount them. Deactivate the VG.&lt;BR /&gt;&lt;BR /&gt;2.) If the cluster services are not running on one or both nodes, start them: qdiskd, cman and rgmanager. If HA LVM-style configuration is used, you shouldn't need clvmd; but starting it too won't hurt anything. The fact that datavg1 is not activated should not prevent starting the cluster daemons.&lt;BR /&gt;&lt;BR /&gt;3.) Make sure both cluster nodes are quorate and communicating with each other (see the "clustat" and "cman_tool status" checks above).&lt;BR /&gt;&lt;BR /&gt;4.) If your database service is configured to start up automatically, rgmanager should start it: if not, use the "clusvcadm -e &lt;SERVICE&gt;" command to start it.&lt;BR /&gt;&lt;BR /&gt;If your cluster is properly configured, this should take care of the VG activation and all the necessary application start-up actions.&lt;BR /&gt;&lt;BR /&gt;If you really need to override the cluster suite's control on VG activation, you should understand how the HA LVM configuration works, and then read the vgchange(8) man page, paying attention to the --addtag and --deltag options. &lt;BR /&gt;&lt;BR /&gt;MK</description>
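The checks and recovery steps in this reply, condensed into a command outline (a sketch for RHEL 5-era Cluster Suite; "dbservice" is a placeholder, not this cluster's actual service name):

```
# On BOTH nodes: verify membership and quorum
clustat                  # both nodes "Online"? "Member Status: Quorate"?
cman_tool status         # same Config Version / Cluster Id / Cluster Generation on both?

# If the cluster daemons are down, start them in order
service qdiskd start
service cman start
service rgmanager start

# Then let rgmanager bring the service (and with it the VG) up
clusvcadm -e dbservice
```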
      <pubDate>Fri, 20 May 2011 13:30:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282671#M53025</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2011-05-20T13:30:01Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282672#M53026</link>
      <description>Post your "filter" line in your /etc/lvm/lvm.conf. It is very possible it was changed "recently" by someone who does not fully understand the implications.&lt;BR /&gt;</description>
      <pubDate>Mon, 23 May 2011 15:30:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282672#M53026</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2011-05-23T15:30:20Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282673#M53027</link>
      <description># grep filter /etc/lvm/lvm.conf&lt;BR /&gt;    # A filter that tells LVM2 to only use a restricted set of devices.&lt;BR /&gt;    # The filter consists of an array of regular expressions.  These&lt;BR /&gt;    # Don't have more than one filter line active at once: only one gets used.&lt;BR /&gt;    filter = [ "a/.*/" ]&lt;BR /&gt;    # filter = [ "r|/dev/cdrom|" ]&lt;BR /&gt;    # filter = [ "a/loop/", "r/.*/" ]&lt;BR /&gt;    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]&lt;BR /&gt;    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]&lt;BR /&gt;    # The results of the filtering are cached on disk to avoid&lt;BR /&gt;#</description>
      <pubDate>Mon, 23 May 2011 15:36:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282673#M53027</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2011-05-23T15:36:33Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282674#M53028</link>
      <description>Was able to resolve issue by changing following in /etc/lvm/lvm.conf&lt;BR /&gt;&lt;BR /&gt;from:&lt;BR /&gt;    volume_list = [ "VolGroup00", "@db01" ]&lt;BR /&gt;&lt;BR /&gt;to:&lt;BR /&gt;    volume_list = [ "VolGroup00", "@db01", "datavg1/lvol1" ]</description>
      <pubDate>Mon, 23 May 2011 19:13:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282674#M53028</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2011-05-23T19:13:09Z</dc:date>
    </item>
    <item>
      <title>Re: vgchange activation error</title>
      <link>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282675#M53029</link>
      <description>&amp;gt; volume_list = [ "VolGroup00", "@db01", "datavg1/lvol1" ]&lt;BR /&gt;&lt;BR /&gt;You've now effectively disabled the HA LVM protection: datavg1 can now be activated on this node even if it has a tag that indicates it may currently be active on another node.&lt;BR /&gt;&lt;BR /&gt;If db02 is currently running the service and db01 is rebooted, this change allows db01 to activate the datavg1 at boot time and perhaps perform an automatic filesystem check on datavg1/lvol1... while the filesystem is active on db02. This will *certainly* cause filesystem corruption, because db01's fsck will see db02's on-going operations as "corruption" and will attempt to fix it. &lt;BR /&gt;&lt;BR /&gt;At that point, db02 will see problems like "WTF??? I just changed this directory entry from X to Y, but now it's back at X again?" This will typically cause the filesystem to become read-only at db02.&lt;BR /&gt;&lt;BR /&gt;Let me emphasise: In a HA LVM configuration, it is important that the shared VGs *must not* be activated before the cluster services are started and communicating with the other node(s). The shared VGs *must not* be activated, filesystem-checked nor mounted by the regular start-up procedure: they must be controlled entirely by the cluster mechanisms. &lt;BR /&gt;&lt;BR /&gt;If the shared filesystem is mentioned in /etc/fstab at all (you could omit it completely), it *must* have mount option "noauto" and the filesystem check pass number at the 6th column of fstab set to 0. 
Otherwise your system will fail to boot if the HA LVM locking mechanism works, or may corrupt your shared filesystem if the locking mechanism fails.&lt;BR /&gt;&lt;BR /&gt;If your cluster configuration requires that the shared VG is activated on one or the other node before the cluster daemons are started, then your cluster configuration is misdesigned.&lt;BR /&gt;&lt;BR /&gt;The correct procedure for manually activating an HA LVM-configured shared VG is as follows:&lt;BR /&gt;&lt;BR /&gt;(Note: this procedure is for emergency/maintenance use only. In normal use, the cluster should handle all this automatically - if it doesn't, your cluster may not be able to perform an automatic failover in a real failure situation.)&lt;BR /&gt;&lt;BR /&gt;1.) Use "vgs -o +tags" to see if the VG currently has a tag on it.&lt;BR /&gt;&lt;BR /&gt;2.) If the VG has no tag, or a tag that matches the name of the host you wish to activate the VG on, you can go directly to step 7.&lt;BR /&gt;&lt;BR /&gt;3.) If the VG has a tag that matches the hostname of another node, *you must* first make sure that node does not have the VG currently activated.&lt;BR /&gt;&lt;BR /&gt;4.) When you're sure the VG is not currently active on any node, use "vgchange --deltag" to remove the other node's VG tag:&lt;BR /&gt;&lt;BR /&gt;vgchange --deltag db02 datavg1&lt;BR /&gt;&lt;BR /&gt;5.) At this point, say to yourself: "I am certain this VG is not active on any cluster node, and I understand I will be held responsible for any damage to data if this is not true." You're declaring that you know better than the cluster here. &lt;BR /&gt;&lt;BR /&gt;6.) Then add a new tag that matches the hostname of the node you wish to activate the VG on:&lt;BR /&gt;&lt;BR /&gt;vgchange --addtag db01 datavg1&lt;BR /&gt;&lt;BR /&gt;7.) Activate the VG as normal (note: the argument is the VG name, not the hostname):&lt;BR /&gt;&lt;BR /&gt;vgchange -a y datavg1&lt;BR /&gt;&lt;BR /&gt;8.) 
If applicable, run a filesystem check on the LV(s):&lt;BR /&gt;&lt;BR /&gt;fsck -C0 /dev/mapper/datavg1-lvol1&lt;BR /&gt;&lt;BR /&gt;9.) If applicable, mount the filesystem(s).&lt;BR /&gt;&lt;BR /&gt;If the LV contains a raw database instead of a filesystem, steps 8 and 9 will not be applicable; instead, the database engine may be started at that point.&lt;BR /&gt;&lt;BR /&gt;MK</description>
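Steps 1-9 above, condensed into a command sequence (emergency/maintenance use only, as the reply stresses; hostnames and VG/LV names follow the examples in the post, and the mountpoint in the last step is a placeholder):

```
vgs -o +tags datavg1                  # steps 1-3: inspect the tag; verify the VG
                                      # is NOT active on any other node first
vgchange --deltag db02 datavg1        # steps 4-5: remove the other node's claim
vgchange --addtag db01 datavg1        # step 6: claim the VG for this node
vgchange -a y datavg1                 # step 7: activate the VG (by VG name)
fsck -C0 /dev/mapper/datavg1-lvol1    # step 8: only if the LV holds a filesystem
mount /dev/mapper/datavg1-lvol1 /shared/db   # step 9: placeholder mountpoint
```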
      <pubDate>Tue, 24 May 2011 08:05:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/vgchange-activation-error/m-p/5282675#M53029</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2011-05-24T08:05:10Z</dc:date>
    </item>
  </channel>
</rss>

