changing interface name

 
manuj kumar
Frequent Advisor

changing interface name

Hello All

Oracle is complaining :)
We have a Serviceguard Extension for RAC (SGeRAC) setup on HP-UX 11i v3.

Node 1:
# lanscan
Hardware Station        Crd Hdw   Net-Interface   NM  MAC    HP-DLPI DLPI
Path     Address        In# State NamePPA         ID  Type   Support Mjr#
0/4/2/0  0x001A4B095372 0   UP    lan0 snap0      1   ETHER  Yes     119
0/4/2/1  0x001A4B095373 1   UP    lan1 snap1      2   ETHER  Yes     119
0/5/2/0  0x001F290DF270 3   UP    lan3 snap3      3   ETHER  Yes     119
LinkAgg0 0x000000000000 900 DOWN  lan900 snap900  5   ETHER  Yes     119
LinkAgg1 0x000000000000 901 DOWN  lan901 snap901  6   ETHER  Yes     119
LinkAgg2 0x000000000000 902 DOWN  lan902 snap902  7   ETHER  Yes     119
LinkAgg3 0x000000000000 903 DOWN  lan903 snap903  8   ETHER  Yes     119
LinkAgg4 0x000000000000 904 DOWN  lan904 snap904  9   ETHER  Yes     119


Node 2:
# lanscan
Hardware Station        Crd Hdw   Net-Interface   NM  MAC    HP-DLPI DLPI
Path     Address        In# State NamePPA         ID  Type   Support Mjr#
0/4/2/0  0x001A4B09525A 0   UP    lan0 snap0      1   ETHER  Yes     119
0/4/2/1  0x001A4B09525B 1   UP    lan1 snap1      2   ETHER  Yes     119
0/5/1/0  0x001F290DF27C 2   UP    lan2 snap2      3   ETHER  Yes     119
LinkAgg0 0x000000000000 900 DOWN  lan900 snap900  5   ETHER  Yes     119
LinkAgg1 0x000000000000 901 DOWN  lan901 snap901  6   ETHER  Yes     119
LinkAgg2 0x000000000000 902 DOWN  lan902 snap902  7   ETHER  Yes     119
LinkAgg3 0x000000000000 903 DOWN  lan903 snap903  8   ETHER  Yes     119
LinkAgg4 0x000000000000 904 DOWN  lan904 snap904  9   ETHER  Yes     119

I built the cluster as below.


# cmviewcl -v

CLUSTER STATUS
xcluster up

NODE STATUS STATE
xdb01 up running

Cluster_Lock_LVM:
VOLUME_GROUP PHYSICAL_VOLUME STATUS
/dev/vgora /dev/disk/disk13 up

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/4/2/0 lan0
PRIMARY up 0/5/1/0 lan2

NODE STATUS STATE
xdb02 up running

Cluster_Lock_LVM:
VOLUME_GROUP PHYSICAL_VOLUME STATUS
/dev/vgora /dev/dsk/c10t0d2 up

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/4/2/0 lan0
PRIMARY up 0/5/2/0 lan3

MULTI_NODE_PACKAGES

PACKAGE STATUS STATE AUTO_RUN SYSTEM
xpkg up running enabled no

NODE_NAME STATUS STATE SWITCHING
xdb01 up running enabled

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS INTERCONNECT NAME
Service up 0 0 xpkg-srv

NODE_NAME STATUS STATE SWITCHING
xdb02 up running enabled

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS INTERCONNECT NAME
Service up 0 0 xpkg-srv

Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style modular
Priority no_priority
#


lan0 on both nodes is the public interface; lan2 on node 1 and lan3 on node 2 are the heartbeat (private) interfaces.

Everything is working fine, but Oracle is facing a problem and Oracle Support suggests using the same interface names on both nodes of the cluster.
Unfortunately, we don't have the same names on both nodes:
node1: lan0, lan1, lan2
node2: lan0, lan1, lan3

lan0 and lan1 are on the same 1 Gb card;
lan2 and lan3 are the local built-in interfaces at 100 Mb.
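
If it helps, the port speeds can be double-checked with lanadmin; the PPA numbers below are just the ones from the lanscan output above:

# lanadmin -x 0     <- 1 Gb card port (lan0, same on both nodes)
# lanadmin -x 2     <- built-in port on the node where it is lan2
# lanadmin -x 3     <- built-in port on the node where it is lan3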

Gentlemen, what is your opinion?

Thanks

Re: changing interface name

My opinion is:

i) You don't have enough NICs - for RAC you should have 2 resilient interfaces (i.e. 4 physical NIC connections teamed into 2 aggregates). That gives you a public interface for your VIP and user connections, plus a private interface for the RAC interconnect, CSS heartbeat and Serviceguard heartbeat.

ii) Don't try to use the physical interfaces in Serviceguard - configure your NICs into aggregates using APA. That way you will have a consistent link aggregate name on both nodes (such as lan900). You already have APA installed, but you will need to set it up - see this manual:

http://docs.hp.com/en/J4240-90045/J4240-90045.pdf
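
As a very rough sketch of what the MANUAL-mode APA setup would look like (the variable names here are from memory, so please verify them against that manual before using them), on each node you would put something like this in the rc.config.d files:

# /etc/rc.config.d/hp_apaportconf - put the two 1 Gb ports into manual mode
HP_APAPORT_INTERFACE_NAME[0]=lan0
HP_APAPORT_CONFIG_MODE[0]=MANUAL
HP_APAPORT_INTERFACE_NAME[1]=lan1
HP_APAPORT_CONFIG_MODE[1]=MANUAL

# /etc/rc.config.d/hp_apaconf - build the aggregate lan900 from those ports
HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_MANUAL_LA[0]="lan0,lan1"
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC

Then restart APA (/sbin/init.d/hpapa stop and start, or reboot) and check with lanscan -q that lan900 now contains both ports, before pointing netconf and the cluster configuration at lan900 on both nodes.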

HTH

Duncan

I am an HPE Employee
manuj kumar
Frequent Advisor

Re: changing interface name

Thanks Duncan for your reply,

I used to build Serviceguard on physical NICs, without HP Auto Port Aggregation.
APA is an extra benefit that provides more data bandwidth.
I have already finished my SGeRAC installation, and the Oracle team installed RAC without any problem; then I integrated the UNIX side with Oracle.
The system is up now, but we are facing some problems with Oracle, and the Oracle team is blaming the difference in the private interface NIC names between the two nodes:

Node 1 private interface name: lan2
Node 2 private interface name: lan3

SG and HP-UX don't care about that difference, which is why I went ahead without thinking of playing with the names.
manuj kumar
Frequent Advisor

Re: changing interface name

Any opinion?

Re: changing interface name

You don't mention what version of RAC, so I'll assume 10gR2. The manuals for RAC make this pretty clear:

http://download.oracle.com/docs/cd/B19306_01/install.102/b14202/pre_hpux.htm#sthref389

As in:

"â ¢The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adaptors should be the same on all nodes.

For example: With a two-node cluster, you cannot configure network adapters on node1 with lan0 as the public interface, but on node2 have lan1 as the public interface. Public interface names must be the same, so you must configure lan0 as public on both nodes. You should configure the private interfaces on the same network adapters as well. If lan1 is the private interface for node 1, then lan1 should be the private interface for node 2."

So you can either follow my advice and use APA, or spend some time mucking around with the ioinit command to sort this out (use ioinit to move lan2 and lan3 on each node to lan4 or something like that). There are plenty of examples on the forums of how to use ioinit to do this - just do a Google search for:

ioinit lan site:itrc.hp.com

will throw up plenty of examples...
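
For example (just a sketch from memory - check ioinit(1M) first, the /tmp file name is arbitrary, and note that -r reboots the node), you could give the built-in port the same free instance number, say 4, on both nodes:

# on the node where the built-in port at 0/5/2/0 is lan3:
# cat /tmp/lanmap
0/5/2/0 lan 4
# ioinit -f /tmp/lanmap -r

# on the node where the built-in port at 0/5/1/0 is lan2:
# cat /tmp/lanmap
0/5/1/0 lan 4
# ioinit -f /tmp/lanmap -r

After the reboots both nodes see the private NIC as lan4, and you then have to update /etc/rc.config.d/netconf and re-apply the Serviceguard configuration with the new name.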

HTH

Duncan

I am an HPE Employee
Waelkhalil_1
Frequent Advisor

Re: changing interface name

Hi Sir,
I faced this problem before...

Below are the recommendations from Oracle.

They said:
"According to the documentation the interface names should match. It is highly not recommended to have different interface names for the private interfaces on different nodes."

If you can't change the interface names on node 2, then you have 2 options to configure the private interconnect for the instances:


1. Delete the cluster interconnect interface from the OCR with oifcfg delif -global lan2/xx.xx.xx.xx
and use cluster_interconnects in the spfile/init.ora (see the example after option 2 below).

2. Delete the cluster interconnect interface from the OCR while the DB is up, as the oracle software owner, with
oifcfg delif -global lan2/xxx.xx.x.x
then configure the interfaces in the OCR with

oifcfg setif -node [nodename] if_name/subnet/if_type (please see oifcfg -help)

like:

oifcfg setif -node xxxxxx01 lan2/xxx.xx.x.x:cluster_interconnect

oifcfg setif -node xxxxxx02 lan3/xxx.xx.x.x:cluster_interconnect

The instances need to be restarted after the changes.
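
Just to make it concrete, with a made-up private subnet of 10.0.0.0 (substitute your own subnet, and run this as the oracle software owner), the second option would look roughly like:

$ oifcfg getif                    <- list what is currently registered
$ oifcfg delif -global lan2/10.0.0.0
$ oifcfg setif -node xdb01 lan2/10.0.0.0:cluster_interconnect
$ oifcfg setif -node xdb02 lan3/10.0.0.0:cluster_interconnect
$ oifcfg getif                    <- confirm the per-node entries

And for the first option, the spfile parameter would be set per instance to that node's private IP, something like (the instance names and IPs here are only examples):

SQL> alter system set cluster_interconnects='10.0.0.1' scope=spfile sid='xdb1';
SQL> alter system set cluster_interconnects='10.0.0.2' scope=spfile sid='xdb2';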

I selected the second option and the DB started successfully after this...

I hope this can help...