Changing NIC card setting

We have two RX8640 systems running a Serviceguard cluster, each with two NICs configured for the private IPs 192.168.2.X and 192.168.3.X. Both cards are connected to an L3 switch, in separate private VLANs. The public IPs are in a separate VLAN.


The switch port speed has been changed to 1000 Mbps FD with autonegotiation OFF. At the server end the NICs are set to 1000 Mbps FD with autonegotiation ON.


We want to set autonegotiation OFF to match the switch port setting. What are the steps to follow? Do we need to stop and restart network services after changing the setting, or does the server need a reboot for the change to take effect?


Recently we have been observing errors in the log file (enclosed). Earlier we had L2 switches connected, on which everything worked fine. Now the NICs are connected to L3 switches.

Honored Contributor

Re: Changing NIC card setting

To understand this issue, it is necessary to know a little bit of history of the Ethernet network technology.


With 10/100 Mbps NICs, switching off the autonegotiation literally meant it: the NIC would stop listening for autonegotiation messages and sending them to the other endpoint.


But when the standard for gigabit Ethernet over copper was created, autonegotiation was made a mandatory part of the standard. It is used to negotiate certain technical parameters of the gigabit link, such as the transmission clock source (the master/slave relationship); thus it is impossible to truly switch off autonegotiation on gigabit Ethernet links.


Unfortunately, the NIC and switch manufacturers did not standardize how this is presented to the user. Some hardware and drivers (e.g. Cisco switches) allow setting autonegotiation to OFF and speed to 1000 Mbps, which causes the hardware to disable all speeds and duplex modes other than the preferred one but leave the autonegotiation mechanism running. In this mode, autonegotiation is used only to negotiate the technical parameters of the 1000 Mbps copper link, which are normally invisible to the user anyway.
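On a Cisco IOS switch, for example, "forcing" the port this way might look like the sketch below (the interface name is illustrative; check your own port numbering):

```
! Illustrative Cisco IOS port configuration, forcing 1000/full.
! On copper gigabit ports this restricts the advertised modes, but the
! autonegotiation mechanism itself keeps running underneath (it is still
! needed to settle master/slave clocking for the 1000BASE-T link).
interface GigabitEthernet0/1
 speed 1000
 duplex full
```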


Other hardware and drivers (e.g. HP-UX) treat disabling autonegotiation the same way as with 10/100 Mbps NICs, totally disabling the autonegotiation mechanism. Since this makes it impossible to run a 1000 Mbps link (as the technical parameters cannot be negotiated), the card will then only allow the selection of 10 Mbps or 100 Mbps.


As far as I know, "1000 Mbps, full duplex, autonegotiation off" is an impossible combination of settings for HP-UX. With gigabit links, it is OK to have HP-UX set to "autonegotiation ON" and the switch set to "1000Mbps FD autonegotiation OFF", because the "autonegotiation OFF" in the switch actually means more like "autonegotiation restricted to single speed and duplex mode only".
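So the recommended combination is to leave autonegotiation ON on the HP-UX side. A sketch of the relevant commands follows; the PPA numbers and the rc.config.d file name depend on your NIC driver (the igelan file name below is an example, not guaranteed for your hardware):

```
# List LAN interfaces and their PPA numbers.
lanscan

# Show current speed/duplex/autonegotiation for PPA 0 (adjust the PPA).
lanadmin -x 0

# Ensure autonegotiation is ON. This takes effect immediately -- no
# reboot and no network services restart needed -- but the link will
# bounce, so on a Serviceguard cluster do it in a maintenance window.
lanadmin -X auto_on 0

# For a setting that survives reboots, edit the driver's config file,
# e.g. for the igelan gigabit driver (file name varies by driver and
# HP-UX release):
#   /etc/rc.config.d/hpigelanconf
#   HP_GELAN_INTERFACE_NAME[0]=lan0
#   HP_GELAN_SPEED[0]=auto_on
```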


To identify NIC-level errors, you should be looking at "netstat -i" and "lanadmin -g" output anyway. High-level error messages in your log only tell you that there are problems somewhere between the two cluster nodes; they don't specify where the problem actually is. Looking at the NIC statistics in HP-UX and the port statistics in the switch might allow you to identify the link that is failing: it might be the link between this system and the switch, or it might be the uplink from this switch to another switch... or it might be something else entirely.
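A sketch of what to collect on each node (PPA 0 is assumed; adjust to your interfaces):

```
# Per-interface packet and error counts; Ierrs/Oerrs should stay near 0.
netstat -i

# Detailed MIB statistics for PPA 0: look for FCS/alignment errors and
# late or excessive collisions (the classic signature of a duplex
# mismatch on one end of the link).
lanadmin -g 0

# Optionally clear the counters, wait a while, and re-read to see
# whether the errors are still accumulating.
lanadmin -c 0
```

Compare these counters against the port statistics on the switch end of the same cable: errors on only one side usually point at that segment.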