12-09-2021 09:58 PM - last edited on 12-15-2021 08:56 PM by support_s
ServiceGuard Virtual IP binding failed
Hi,
I've run into a problem. When I disable the network card on machine A with the ifdown command, the cluster package fails over to machine B normally. After I re-enable the network card on machine A, all the attributes shown by cmviewcl -v are normal. But when I then disable the network card on machine B, the package is transferred back to machine A, where it ultimately fails while binding the virtual IP. Below is an excerpt from my log. Can you help me with this problem?
Dec 10 13:23:07 root@elcndc2hanfs01 volume_group.sh[27062]: sg_activate_volume_group
Dec 10 13:23:07 root@elcndc2hanfs01 volume_group.sh[27062]: activation_check
Dec 10 13:23:07 root@elcndc2hanfs01 volume_group.sh[27062]: lvm_sanity_check
Dec 10 13:23:08 root@elcndc2hanfs01 volume_group.sh[27062]: vg_tag
Dec 10 13:23:08 root@elcndc2hanfs01 volume_group.sh[27062]: Attempting to addtag to vg hanavg...
Dec 10 13:23:08 root@elcndc2hanfs01 volume_group.sh[27062]: addtag was successful on vg hanavg.
Dec 10 13:23:08 root@elcndc2hanfs01 volume_group.sh[27062]: Activating volume group hanavg .
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: sg_filesystems
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: sg_check_and_mount
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Checking filesystems:
/dev/hanavg/lv_shared
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Number of components: 4
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: sorted indexes: 0
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: processing 0 /hana/shared 1
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Mount order: 1
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Logical Volume is /dev/hanavg/lv_shared
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Mounting /dev/hanavg/lv_shared with option -o rw on /hana/shared
Dec 10 13:23:09 root@elcndc2hanfs01 filesystem.sh[27385]: Mounting /dev/hanavg/lv_shared at /hana/shared
Dec 10 13:23:09 - Node "elcndc2hanfs01": Starting NFS daemons.
rpcbind had been running on this node.
program 100024 version 1 ready and waiting
rpc.statd had been running on this node.
Starting rpc.rquotad daemon
Starting 8 nfsd daemons
rpc.mountd had been running on this node.
Dec 10 13:23:13 - Node "elcndc2hanfs01": Exporting filesystem on /hana/shared
Dec 10 13:23:13 - Node "elcndc2hanfs01": Starting rmtab synchronization process
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: sg_ip_addresses
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: sg_add_ip_addresses
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: create_ip_subnet_list
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: add_to_ip_subnet_array
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: Adding IP address 172.16.13.22 to subnet 172.16.13.0
cmmodnet: IP address 172.16.13.22 is already configured on the subnet.
cmmodnet: Use the "ifconfig" or "ip" command to check the configured IP addresses
for the subnet.
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]:
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: ERROR: Function sg_add_ip_address
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27650]: ERROR: Failed to add IP 172.16.13.22 to subnet 172.16.13.0
Dec 10 13:23:13 root@elcndc2hanfs01 master_control_script.sh[25709]: ##### Failed to start package NFS, rollback steps #####
Dec 10 13:23:13 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/package_ip.sh stop
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27686]: sg_ip_addresses
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27686]: sg_remove_ip_address
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27686]: create_ip_subnet_list
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27686]: add_to_ip_subnet_array
Dec 10 13:23:13 root@elcndc2hanfs01 package_ip.sh[27686]: Removing IP address 172.16.13.22 from subnet 172.16.13.0
Dec 10 13:23:13 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/tkit/nfs/tkit_module.sh stop
Starting rpcbind to ensure the clear shutdown of other daemons
Dec 10 13:23:13 - Node "elcndc2hanfs01": Stopping rmtab synchronization process
Dec 10 13:23:13 - Node "elcndc2hanfs01": Unexporting filesystem on /hana/shared
Dec 10 13:23:13 - Node "elcndc2hanfs01": WARNING: NLM Grace period for NFSv3 is not defined. There are chances that unmount of exported shares
will fail. It is recommended to configure the NLM grace period for NFS V2/V3 server before package startup.
Stopping rpc.rquotad daemon
Restarting statd daemon
Stopping rpc.mountd daemon
Stopping nfsd daemon
Dec 10 13:23:19 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/filesystem.sh stop
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: sg_filesystems
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: sg_umount_fs
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: Number of components: 4
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: sorted indexes: 0
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: processing 0 /hana/shared 1
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: Mount order: 1
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: max mnt order: 1
Dec 10 13:23:19 root@elcndc2hanfs01 filesystem.sh[27849]: Unmounting filesystem on /hana/shared
Dec 10 13:23:19 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/volume_group.sh stop
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: sg_deactivate_volume_group
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: Deactivating volume group hanavg
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: lvm_sanity_check
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: vg_tag
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: Attempting to deltag to vg hanavg...
Dec 10 13:23:19 root@elcndc2hanfs01 volume_group.sh[27907]: deltag was successful on vg hanavg.
Dec 10 13:23:19 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/pr_cntl.sh stop
Dec 10 13:23:20 root@elcndc2hanfs01 pr_util.sh[28237]: sg_deactivate_pr: deactivating PR on /dev/mapper/hana_shared_1
Dec 10 13:23:20 root@elcndc2hanfs01 pr_util.sh[28237]: sg_deactivate_pr: deactivating PR on /dev/mapper/hana_shared_2
Dec 10 13:23:21 root@elcndc2hanfs01 pr_util.sh[28237]: sg_deactivate_pr: deactivating PR on /dev/mapper/hana_shared_3
Dec 10 13:23:21 root@elcndc2hanfs01 pr_util.sh[28237]: sg_deactivate_pr: deactivating PR on /dev/mapper/hana_shared_4
Dec 10 13:23:21 root@elcndc2hanfs01 pr_util.sh[28237]: sg_deactivate_pr: deactivating PR on /dev/mapper/hana_shared_5
Dec 10 13:23:21 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/vmfs.sh stop
Dec 10 13:23:21 root@elcndc2hanfs01 vmfs.sh[28732]: sg_vmfs
Dec 10 13:23:21 root@elcndc2hanfs01 master_control_script.sh[25709]: /opt/cmcluster/conf/scripts/sg/external_pre.sh stop
Dec 10 13:23:21 root@elcndc2hanfs01 external_pre.sh[28826]: sg_external_pre_script
Dec 10 13:23:21 root@elcndc2hanfs01 external_pre.sh[28826]: Stopping External pre Scripts
Dec 10 13:23:21 root@elcndc2hanfs01 master_control_script.sh[25709]: ###### Failed to start package for NFS ######
Thanks.
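The cmmodnet error ("IP address 172.16.13.22 is already configured on the subnet") suggests the virtual IP was left bound on machine A's interface after the earlier ifdown/ifup cycle, so the package cannot add it again. A minimal sketch of the check, following cmmodnet's own hint to verify with the `ip` command; the interface name (eth1) and the /24 prefix are assumptions, since neither appears in the log:

```shell
# Sketch: detect whether a given IP is already configured on the node --
# the condition that makes cmmodnet fail with "already configured on the subnet".
vip_is_bound() {
  # $1 = IP address, $2 = output of `ip -4 addr show`
  printf '%s\n' "$2" | grep -q "inet $1/"
}

# Example with canned output resembling `ip -4 addr show` on the failing node
# (interface name eth1 is hypothetical):
sample='2: eth1    inet 172.16.13.22/24 brd 172.16.13.255 scope global eth1'
if vip_is_bound 172.16.13.22 "$sample"; then
  # A stale VIP would typically be removed before retrying the package, e.g.:
  #   ip addr del 172.16.13.22/24 dev eth1
  echo "stale VIP present"
fi
```

If `ip -4 addr show` on machine A does list 172.16.13.22 while no package is running there, removing the stale address and re-running the package may clear the failure; this is only a diagnostic sketch, not a confirmed fix for this cluster.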
12-15-2021 08:54 PM
Re: ServiceGuard Virtual IP binding failed
Hello @wanghw
As there has been no response to the query yet, I would recommend contacting technical support directly and logging a support case for a quicker resolution. Please refer to the links below for support ticket options:
https://support.hpe.com/help/en/Content/supportAndOtherResources.html
https://www.hpe.com/psnow/doc/A00039121ENW
Thanks,
Parvez_Admin
I work for HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]