error on activation of volume

Occasional Advisor

error on activation of volume

Yesterday, in my lab, I was testing a forced shutdown of one node in a two-node Serviceguard cluster. The package tried to fail over to the surviving node, but it failed to start with this error:

Error vg_files may still be activated on ota2
To correct this situation, logon to "ota2" and
execute the following commands:
vgchange -a n vg_files
vgchange --deltag ota2 vg_files

Once "vg_files" has been deactivated from "ota2",
this package may be restarted via either cmmodpkg(1M)
or cmrunpkg(1M).

In the event that "ota2" is either powered off
or unable to boot, then "vg_files" must be forced
to be activated on this node.

******************* WARNING ***************************

Forcing activation can lead to data corruption if
"ota2" is still running and has "vg_files"
active. It is imperitive to positively determine that
"ota2" is not running prior to performing
this operation.


To force activate "vg_files", execute the following
command on the local system:
vgchange --deltag ota2 vg_files

The package may then be restarted via either
cmmodpkg(1M) or cmrunpkg(1M) commands.


I want to know why I have to do this manually. Shouldn't it be automatic, or is there some configuration I missed? What should I do?


Honored Contributor

Re: error on activation of volume

This sounds like the failover failed at the VG activation phase.

Here are some possibilities:

The package shutdown before the failover may not have completed cleanly. For example, the VG could not be deactivated because some of its LVs did not unmount (maybe some processes were still holding them open).

Confirm this in the package log.
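For example, a quick filter over the package control log usually shows which activation step failed. This is just a sketch: the log path shown in the comment is an assumption (legacy-style packages typically log to something like /etc/cmcluster/&lt;pkg&gt;/&lt;pkg&gt;.cntl.log), so adjust it for your configuration.

```shell
# Sketch: pull activation-related lines out of a package control log.
# The helper takes the log path as an argument so it can be pointed at
# wherever your package actually logs.
show_activation_errors() {
    grep -i -E 'vgchange|error|warning' "$1" | tail -n 20
}

# Hypothetical legacy-package log path -- adjust for your setup:
# show_activation_errors /etc/cmcluster/pkg_files/pkg_files.cntl.log
```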
Occasional Advisor

Re: error on activation of volume

Thanks for the quick response.

Hmm, you are right. I did already check whether it was still in use, and I am confident the volume group was not being used, because the other node, which holds the tag, was shut down.

This problem only occurs if I force the shutdown by pressing the power button directly.

Do you know why? Thanks in advance.


John Bigg
Esteemed Contributor

Re: error on activation of volume

If a node fails, the package should start up automatically on another node without manual intervention. Manual intervention should only be required when we do not know the status of the remote system and so cannot guarantee that the volume group is really no longer in use.

When a package starts, if the volume group carries a tag indicating it may be owned by another node, the package uses cmviewcl to get the status of that other node. If the status shows "failed", then we know the node went down; the tag is cleared, the volume group is activated, and the package starts up.

However, if the status is something else then we do not really know what is going on and you get the message you saw indicating manual intervention.

Therefore I would ask how you failed the server and what cmviewcl showed for the node you failed. My guess is that it does not show failed as in this example here:

# cmviewcl

CLUSTER      STATUS
flowerpot    up

  NODE       STATUS     STATE
  ben        down       failed
  bill       up         running
Could the node have been booting back and automatically re-joining the cluster? You really need to check the cmviewcl node status when the package starts to see what is going wrong here.

It would be good if the Serviceguard package scripts logged the node status when this check fails, as that would help troubleshoot issues like yours. I have hit this problem before, so I have logged an enhancement request, number QXCR1000772515, for when this becomes customer-viewable.

Anyway, try again and check the node status; that should explain why the package does not start up automatically. The node must have status "failed" for the automatic start to happen.
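The decision described above can be sketched roughly like this. This is an illustration only, not the real Serviceguard package control script; the vg_files/ota2 names come from this thread, the sample cmviewcl node line is made up, and the status parsing is simplified:

```shell
# Simplified sketch of the tag-clearing decision: the other node's tag is
# only cleared automatically when cmviewcl reports that node as "failed";
# any other state requires manual intervention.
node="ota2"

# In real life this would come from cmviewcl; here we hard-code a sample
# node line ("NODE  STATUS  STATE") for illustration.
sample_line="ota2    down    failed"
state=$(echo "$sample_line" | awk '{print $3}')

if [ "$state" = "failed" ]; then
    echo "node $node is failed: safe to clear tag and activate"
    # vgchange --deltag "$node" vg_files
    # vgchange -a e vg_files
else
    echo "node $node state is '$state': manual intervention required"
fi
```

With the sample line above, the sketch takes the automatic branch; change "failed" to, say, "unknown" and it falls through to the manual-intervention message, which matches the behaviour discussed in this thread.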

Occasional Advisor

Re: error on activation of volume

OK, thanks. I ran cmviewcl -v that day. Unfortunately (sorry...) I didn't capture the output, but I am sure it showed that the cluster was up while the package was not.

About the deltag you mentioned: that is exactly what I wanted to ask. Does Serviceguard delete the tag of the other node that owns the VG if that node is inaccessible or dead, or do we have to do it manually?

I only know how Serviceguard handles mounting/unmounting during a normal halt/shutdown.
John Bigg
Esteemed Contributor

Re: error on activation of volume

The cluster was up, but the package wasn't. But what about the status of the node you failed? That is the critical bit.

If the node status shows as "failed" then Serviceguard will remove the tag automatically. If it does not show "failed" it will not remove the tag and you will have to do this manually.

This is why we need to know what cmviewcl showed for the node status of the node you force-shut down.
Occasional Advisor

Re: error on activation of volume

OK, then it should happen automatically. The other node you asked about was shut down. I cannot reproduce the error now because my system is in production.

Should the problem reappear, I now know the likely cause: the surviving node did not see a "failed" status for the other node.

Thanks for the comments. I think I will close this thread.