Cluster switching fail
06-01-2009 07:24 AM
In my cluster environment we are trying to switch a package back to the primary server.
The package is currently running on the alternate failover node. Here I'm posting some of the outputs. Can anyone please point me in the right direction?
dev001 root# cmviewcl -v
CLUSTER STATUS
MDDB up
NODE STATUS STATE
dev001 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY down 0/0/0 lan0
STANDBY up 0/1/0 lan1
NODE STATUS STATE
dev002 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0 lan0
STANDBY down 1/2/0 lan1
PACKAGE STATUS STATE AUTO_RUN NODE
MDDB up running enabled dev002
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Subnet up 12.10.10.0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled dev001
Alternate up enabled dev002 (current)
May 29 08:19:20 dev001 CM-CMD[12837]: cmrunpkg -n dev001 MDDB
May 29 08:19:20 dev001 cmcld: Executing '/etc/cmcluster/oracle/control.sh start' for package MDDB, as service PKG*16385.
May 29 08:19:20 dev001 LVM[12851]: vgchange -a n vgapp
May 29 08:19:20 dev001 LVM[12854]: vgchange -a n vgdata1
May 29 08:19:20 dev001 LVM[12857]: vgchange -a n vgdata2
May 29 08:19:20 dev001 LVM[12860]: vgchange -a n vgdata3
May 29 08:19:20 dev001 LVM[12863]: vgchange -a n vgdata4
May 29 08:19:28 dev001 cmcld: Processing exit status for service PKG*16385
May 29 08:19:28 dev001 cmcld: Service PKG*16385 terminated due to an exit(1).
May 29 08:19:28 dev001 cmcld: Package MDDB run script exited with NO_RESTART.
May 29 08:19:28 dev001 cmcld: Examine the file /etc/cmcluster/oracle/control.sh.log for more details.
########### Node "dev001": Starting package at Fri May 29 08:18:02 GMT 2009 ###########
May 29 08:18:02 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.
ERROR: Function activate_volume_group
ERROR: Failed to activate vgapp
May 29 08:18:02 - Node "dev001": Deactivating volume group vgapp
vgchange: Volume group "vgapp" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata1
vgchange: Volume group "vgdata1" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata2
vgchange: Volume group "vgdata2" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata3
vgchange: Volume group "vgdata3" has been successfully changed.
May 29 08:18:02 - Node "dev001": Deactivating volume group vgdata4
vgchange: Volume group "vgdata4" has been successfully changed.
########### Node "dev001": Starting package at Fri May 29 08:19:20 GMT 2009 ###########
May 29 08:19:20 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.
ERROR: Function activate_volume_group
ERROR: Failed to activate vgapp
May 29 08:19:20 - Node "dev001": Deactivating volume group vgapp
vgchange: Volume group "vgapp" has been successfully changed.
May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata1
vgchange: Volume group "vgdata1" has been successfully changed.
May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata2
vgchange: Volume group "vgdata2" has been successfully changed.
May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata3
vgchange: Volume group "vgdata3" has been successfully changed.
May 29 08:19:20 - Node "dev001": Deactivating volume group vgdata4
vgchange: Volume group "vgdata4" has been successfully changed.
########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########
May 29 08:20:36 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.
ERROR: Function activate_volume_group
ERROR: Failed to activate vgapp
May 29 08:20:36 - Node "dev001": Deactivating volume group vgapp
vgchange: Volume group "vgapp" has been successfully changed.
May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata1
vgchange: Volume group "vgdata1" has been successfully changed.
May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata2
vgchange: Volume group "vgdata2" has been successfully changed.
May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata3
vgchange: Volume group "vgdata3" has been successfully changed.
May 29 08:20:36 - Node "dev001": Deactivating volume group vgdata4
vgchange: Volume group "vgdata4" has been successfully changed.
dev001 root#
06-01-2009 07:45 AM
Re: Cluster switching fail
The second node is not permitting volume group activation in exclusive mode.
There may be stale activation state on the second node, or you may wish to run cmhaltnode to bring that node down and then try again on this node.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
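A minimal recovery sketch along these lines (node, package, and VG names are taken from the thread; treat this as an outline to verify against your own environment, not a tested procedure):

```shell
# On dev002, halt the package so it releases its volume groups:
cmhaltpkg MDDB                # stop the package cleanly
# or, more drastically, halt cluster services (and packages) on that node:
cmhaltnode -f dev002

# Still on dev002, make sure the shared VG is really deactivated:
vgchange -a n /dev/vgapp      # deactivate vgapp if anything left it active

# Then, back on dev001:
cmrunpkg -n dev001 MDDB       # start the package on the primary node
```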
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
06-01-2009 08:00 AM
Also, I suggest you investigate your network, as it looks like you have some issues:
NODE STATUS STATE
dev001 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY down 0/0/0 lan0 <<<<<<<
STANDBY up 0/1/0 lan1
NODE STATUS STATE
dev002 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0 lan0
STANDBY down 1/2/0 lan1 <<<<<<<
06-01-2009 08:28 AM
Re: Cluster switching fail
########### Node "dev001": Starting package at Fri May 29 08:20:36 GMT 2009 ###########
May 29 08:20:36 - "dev001": Activating volume group vgapp with exclusive option.
vgchange: Activation of volume group "/dev/vgapp" denied by another node in the cluster.
Request on this system conflicts with Activation Mode on remote system.
>>>>>>>>>> It's already running on the other node. So follow Melvyn's suggestion: run cmhaltpkg on the second node, and then you can run cmmodpkg -e to force it up on the first node. But, just like Melvyn told you, check out your LAN connections, because it looks like you've got issues there.
Rgrds,
Rita
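Sketched out, the sequence suggested here would look like this (package and node names from the thread; an outline under those assumptions, not a verified runbook):

```shell
cmhaltpkg MDDB            # halt the package where it currently runs (dev002)
cmviewcl -v               # confirm the package is down cluster-wide
cmrunpkg -n dev001 MDDB   # bring it up on the primary node
cmmodpkg -e MDDB          # re-enable automatic switching (AUTO_RUN)
```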
06-01-2009 08:04 PM
Re: Cluster switching fail
1. The package shutdown failed on the failover node. Whatever caused the shutdown failure, the VGs remained active there, so you are unable to start the package on the primary.
Try to manually unmount the file systems mounted from lvols in those VGs (killing any processes using them) and do vgchange -a n for each VG on the failover node. If you get errors trying to unmount a file system, you might have to reboot that node.
2. The VG was manually activated on the failover server during an earlier failed failover. This means the package script did not clean up correctly.
Either way, you'll need to unmount the file systems and deactivate the VGs manually.
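A rough sketch of that manual cleanup on the failover node. The lvol and mount point names below are hypothetical; substitute the real ones from /etc/cmcluster/oracle/control.sh. The VG list is taken from the package log in the thread.

```shell
# For each filesystem the package mounts (hypothetical lvol/mount point):
fuser -ku /dev/vgapp/lvol1    # kill processes holding the filesystem open (disruptive!)
umount /app                   # unmount it

# Then deactivate every VG the package uses:
for vg in vgapp vgdata1 vgdata2 vgdata3 vgdata4
do
    vgchange -a n $vg
done
```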
06-01-2009 10:10 PM
Re: Cluster switching fail
Your error message is very clear:
on node dev001,
PRIMARY lan0 is DOWN.
Check your LAN connections.
What do lanscan and ioscan -fnkClan show?
06-02-2009 11:12 AM
Re: Cluster switching fail
I need to do this cluster switching on the weekend.
As you all suggested, here is the output I got from lanscan and ioscan -fnkClan:
dev001 root#lanscan
Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI
Path Address In# State NamePPA ID Type Support Mjr#
0/0/0 0x001083F7B33A 0 UP lan0 snap0 1 ETHER Yes 119
0/1/0 0x001083F7B3BB 1 UP lan1 snap1 2 ETHER Yes 119
dev001 root# ioscan -fnkClan
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
lan 0 0/0/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan0 /dev/ether0 /dev/lan0
lan 1 0/1/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan1 /dev/ether1 /dev/lan1
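Since lanscan shows the card itself as UP while cmviewcl reports the PRIMARY interface down, a link-level check could narrow it down. Something like the following might help (lanadmin and linkloop are standard HP-UX tools; the MAC below is a placeholder, substitute dev002's lan0 station address from its own lanscan output):

```shell
lanadmin -x 0                 # show speed/duplex/link state for PPA 0 (lan0)
linkloop -i 0 0x000000000000  # link-level loopback test from lan0 to the peer MAC (placeholder)
```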