Creation of Logical Volume fails. Lv inactive.
03-12-2009 08:49 AM
These are the commands I execute (pvcreate and vgextend work fine):
Code:
pvcreate /dev/emcpowerh1
vgextend VolGroup00 /dev/emcpowerh1
lvcreate VolGroup00 -n LogVol10 -L 2G
Failed to activate new LV.
This server is part of a two-node cluster with ServiceGuard. Could this be the problem? Do I have to change something in the cluster?
The OS is Red Hat 4.
Thanks
03-12-2009 09:31 AM
Re: Creation of Logical Volume fails. Lv inactive.
03-12-2009 09:34 AM
Re: Creation of Logical Volume fails. Lv inactive.
03-12-2009 09:40 AM
Re: Creation of Logical Volume fails. Lv inactive.
Look in the knowledge base:
http://www13.itrc.hp.com/service/cki/docDisplay.do?docLocale=en&docId=emr_na-c01146263-1
Hope it helps
03-12-2009 10:39 AM
Re: Creation of Logical Volume fails. Lv inactive.
If I have understood it correctly, this parameter is fundamental for the cluster and I can't erase it. Then how can I add another disk to increase the swap?
Thanks.
03-12-2009 11:06 AM
Re: Creation of Logical Volume fails. Lv inactive.
03-12-2009 11:31 AM
Re: Creation of Logical Volume fails. Lv inactive.
Have you tried extending the swap instead?
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/s1-swap-adding.html.
The tags { hosttags = 1 } statement in the /etc/lvm/lvm.conf file is a volume group (VG) activation protection.
This mechanism is required by ServiceGuard for Linux (SGLX) to limit access to LVM logical volumes to only one node in the cluster.
Perhaps you could try disabling it just long enough to do your lvcreate, and then re-enable it right away.
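A minimal sketch of that idea, assuming tags { hosttags = 1 } is set in /etc/lvm/lvm.conf (the tag- and volume_list-based approaches discussed later in this thread are safer on a cluster node):
Code:
# 1. In /etc/lvm/lvm.conf, temporarily change:  tags { hosttags = 1 }  ->  tags { hosttags = 0 }
# 2. With the protection off, create the LV (it should also activate):
lvcreate -n LogVol10 -L 2G VolGroup00
# 3. Restore tags { hosttags = 1 } in /etc/lvm/lvm.conf immediately afterwards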
03-12-2009 11:47 AM
Re: Creation of Logical Volume fails. Lv inactive.
As you have noticed, it causes extra complications when you attempt to create a VG for local use on one of the nodes.
Please see this older thread:
http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1129836
If you are using LVM on your system disk (indicated by the name VolGroup00, which is the default VG name used by the RedHat installer), the situation is extra tricky.
When the system boots, VolGroup00 is activated from the mini-root environment stored in the initrd file. At that point, the system does not yet know it should require a host tag on the VGs, so the activation of VolGroup00 is successful.
After that, the root filesystem is mounted. This includes /etc/lvm/lvm.conf, so after this point the system knows that host tags are required, and will not allow the activation of any more LVs unless the VG has the required tag. This is what prevents you from adding any new LVs to VolGroup00.
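As a quick check (a sketch only, using the same tags output from vgs that appears further down in this thread), you can list each VG together with its tags to see whether it carries the host tag that activation now requires:
Code:
# show every VG with its tags; while hosttags = 1 is in effect,
# a VG without a tag matching this host will not activate
vgs -o vg_name,vg_tags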
MK
03-12-2009 05:36 PM
Re: Creation of Logical Volume fails. Lv inactive.
vgchange --addtag $(uname -n) vgpkgA
vgs -o +tags vgpkgA
vgchange -a y vgpkgA
Similarly, you need to delete the tag after deactivating the VG.
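A rough sketch of that reverse step, before handing the VG over to the other node (same vgpkgA name as above):
Code:
# deactivate the VG first, then remove this node's tag from it
vgchange -a n vgpkgA
vgchange --deltag $(uname -n) vgpkgA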
03-12-2009 11:28 PM
Re: Creation of Logical Volume fails. Lv inactive.
vgdisplay -v VolGroup01
Using volume group(s) on command line
Finding volume group "VolGroup01"
--- Volume group ---
VG Name VolGroup01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 30.00 GB
PE Size 4.00 MB
Total PE 7679
Alloc PE / Size 512 / 2.00 GB
Free PE / Size 7167 / 28.00 GB
VG UUID jeV3Z8-KAWV-3I1O-2ek2-lhca-ANjQ-M3lW3q
--- Logical volume ---
LV Name /dev/VolGroup01/LogVol10
VG Name VolGroup01
LV UUID WQQGex-ptYM-0Cp2-dkZj-VnUz-fvuN-X7Mrky
LV Write Access read/write
LV Status NOT available
LV Size 2.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 0
--- Physical volumes ---
PV Name /dev/emcpowerh1
PV UUID 3JLFnn-OLm6-EJFr-jyNy-YFfw-nXS6-bN1oin
PV Status allocatable
Total PE / Free PE 7679 / 7167
03-13-2009 11:11 AM
Re: Creation of Logical Volume fails. Lv inactive.
I was able to create and activate the volume by running:
lvmconf --disable-cluster
lvcreate...
lvmconf --enable-cluster
03-13-2009 11:23 AM
Re: Creation of Logical Volume fails. Lv inactive.
03-13-2009 12:40 PM
Re: Creation of Logical Volume fails. Lv inactive.
If hosttags are activated, you need to create an additional file, /etc/lvm/lvm_`hostname`.conf, in which you must enter something like the following:
activation { volume_list = ["VolGroup00", "@yourhostname" ] }
Replace yourhostname with your real hostname ;-)
That will let LVM activate VolGroup00 and all the VGs under cluster control that are implicitly tagged with the hostname of the node on which they are supposed to be activated.
Of course, if you need to add local VGs that will not be part of the cluster, let's say VolGroup01, you'll need to add them to the same file with the same syntax as VolGroup00 (without the @).
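For illustration only, reusing the nodo2 hostname and the VolGroup01 name that appear elsewhere in this thread, the resulting /etc/lvm/lvm_nodo2.conf could then contain:
Code:
activation { volume_list = ["VolGroup00", "VolGroup01", "@nodo2"] }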
03-13-2009 02:22 PM
Re: Creation of Logical Volume fails. Lv inactive.
03-16-2009 02:15 AM
Re: Creation of Logical Volume fails. Lv inactive.
nodo2:/etc/lvm> more lvm_nodo2.conf
activation { volume_list=["@nodo2"] }
This node has an Oracle database running. Can I add VolGroup01 with the cluster (and database) running?
Is it necessary to add VolGroup00 (the system volume group) if I don't want to create logical volumes in VolGroup00?
This is the line I would add:
activation { volume_list = ["VolGroup01", "@nodo2" ] }
Is it OK?
Thanks.
03-16-2009 11:44 AM
Re: Creation of Logical Volume fails. Lv inactive.
Also, what version of RHEL 4 are you using? I cannot remember when hosttags support was added to RHEL 4 or when it became fully supported.
03-16-2009 02:09 PM
Re: Creation of Logical Volume fails. Lv inactive.
As long as VolGroup00 (or whatever it is called) is not a clustered VG, and VolGroup00 looks like the default system VG created by the installer, you can add them to the activation list.
Be careful never to add a clustered VG to the activation list, as those are supposed to be managed through tags by ServiceGuard, that is, through the @`hostname` directive.
Tags were added in 4u4, if I remember correctly.
03-17-2009 04:14 AM
Re: Creation of Logical Volume fails. Lv inactive.
The OS is Red Hat Enterprise Linux AS release 4 Update 4.
The VG that I want to activate is on a new 30 GB disk that I have configured with LVM.
pvcreate /dev/emcpowerh1
vgextend VolGroup01 /dev/emcpowerh1
lvcreate VolGroup01 -n LogVol10 -L 2G
If I understand what you are saying, I can add the line
activation { volume_list = ["VolGroup01", "@nodo2" ] }
to the file /etc/lvm/lvm_`hostname`.conf without problems (the disk is new).
Can I change this file with the cluster running, or do I have to stop it? (This is a production system with a working Oracle database, and I have to be very careful.)
Thanks.
03-17-2009 02:25 PM
Solution
The question is:
Is VolGroup01 part of any package hosted by the cluster?
If so, you must not add it this way to lvm_`hostname`.conf.
You have to add it to the package configuration (the control file for a legacy package, the ASCII file for a modular one) and let ServiceGuard manage it.
If the package to which it belongs is already running and you cannot wait, you must manually tag it, activate it and mount it (mount only if you want to use it as a filesystem):
# vgchange --addtag `hostname` VolGroup01
# vgchange -a y VolGroup01
# mkfs.ext3 /dev/VolGroup01/LogVol10
# mount /dev/VolGroup01/LogVol10 /yourmountpoint
If the VG is not part of any package, and therefore not managed by ServiceGuard (local use only), you can add it as mentioned in my previous post (activation { volume_list = ["VolGroup01", "@nodo2" ] }).
Is that clear?
03-18-2009 02:21 AM
Re: Creation of Logical Volume fails. Lv inactive.
Very clear. The VG is not part of any package. This VG will be used for swap on this node; I want to add more swap to what already exists.
Just one important question, because I can't make the change without this confirmation: can I change this file with the cluster running, or do I have to stop it? (This is a production system with a working Oracle database, and I have to be very careful.)
Thanks
03-18-2009 05:30 PM
Re: Creation of Logical Volume fails. Lv inactive.
I did it live on servers that had just received a lot of extra memory, on which I had to add a swap device.
The only difference is that the new disk wasn't a SAN LUN but a VG on local disks.
Yes, you can do it live: modify lvm_`hostname`.conf, and then run vgchange -a y on your volume group.
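Putting the whole live swap-extension step together, a rough sketch (the VG and LV names are the ones used earlier in this thread; verify everything on your own system first):
Code:
# /etc/lvm/lvm_`hostname`.conf must already list the local VG, e.g.
#   activation { volume_list = ["VolGroup01", "@nodo2"] }
vgchange -a y VolGroup01             # activate the VG and its new LV
mkswap /dev/VolGroup01/LogVol10      # initialize the LV as swap space
swapon /dev/VolGroup01/LogVol10      # bring the new swap online
swapon -s                            # confirm it appears in the swap list
# to make it persistent across reboots, add a line to /etc/fstab:
#   /dev/VolGroup01/LogVol10  swap  swap  defaults  0 0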
03-19-2009 01:04 AM
Re: Creation of Logical Volume fails. Lv inactive.
To make yourself more confident about the operation, you could try it on a development/test server first, if you have one.
We tend to always keep dev/test servers with almost the same setup as the production ones, so we can troubleshoot or run upgrades before touching production.
03-23-2009 03:19 AM
Re: Creation of Logical Volume fails. Lv inactive.
I have added VolGroup01 and I can activate the LV. The problem is that all the LVs I have created on nodo2 appear on node1 (inactive). Can I hide the new logical volumes from node1? I want them to be local to the node.
Thanks
03-23-2009 04:25 AM
Re: Creation of Logical Volume fails. Lv inactive.
I want to add swap on both nodes, so I would have to create a VolGroup01 on node1 and a VolGroup02 on node2, with LogVol101 on node1 and LogVol201 on node2.
Is that the only option? Or can I hide the LVs of node1 from node2? I don't want an lvscan on node1 to show the logical volumes of node2.
Thanks.
03-23-2009 10:11 AM
Re: Creation of Logical Volume fails. Lv inactive.
The first problem is that I created one partition on the new disk; then I run ls /dev/emc* and on node1 I have emcpoweri1. If I do the same on node2, I don't have the /dev/emcpoweri1 device, but I do have emcpoweri.
(node1)
root@nodo1:/root> ls /dev/emc*
...
/dev/emcpowerh /dev/emcpoweri
/dev/emcpowerh1 /dev/emcpoweri1
(node2)
root@nodo2:/root> ls /dev/emc*
...
/dev/emcpowerh /dev/emcpoweri
/dev/emcpowerh1
The partition /dev/emcpowerh1 exists on both node1 and node2.
Why can't I see the /dev/emcpoweri1 device on node2, while I can see /dev/emcpowerh1 on node1?
Regards,