LAN Routing

Delay in changes to routing

 
Occasional Visitor

Delay in changes to routing

I have two 5400zl chassis that are set up to route traffic (VRRP owner and backup) between 9 different VLANs.

We now have a high-capacity Next Generation Firewall cluster in place and are happy for it to take over routing duties from the switches. When we last attempted to make this change, it took a very long time for the changes to take effect.

- For a given VLAN we removed the VRRP configuration in advance.

- We prepared the firewall cluster configuration and only applied it once we had removed the IP address from the VLAN configuration on the 5400zl.

- We cleared the ARP cache on both 5400zls.

It then took over an hour before the changes properly took effect and the firewall cluster started receiving the expected traffic.
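For reference, the VRRP tear-down step on a 5400zl looks roughly like the following (VLAN 801 and VRID 1 are placeholders; I'm quoting the commands from memory, so verify the exact syntax against your software release's manual):

```
Core_2(config)# vlan 801
Core_2(vlan-801)# no vrrp vrid 1
Core_2(vlan-801)# exit
```

The same removal is then repeated on the VRRP owner.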

Recently I wanted to move routing for one VLAN from one switch to the other, and experienced the same delay again.

Are there other caches or tables that can be reset, without having to restart the switches, that would allow the changes to take effect much more quickly?

Any recommendations would be greatly appreciated.

5 REPLIES
Honored Contributor

Re: Delay in changes to routing

Hello, can you clarify how exactly your two HP 5400zl switches are interconnected (if they are)? Were they deployed using the DT (Distributed Trunking) implementation toward downlink peers (distribution/access switches or hosts concurrently uplinked via standard LACP to both 5400zl switches, while the DT pair uses DT-LACP on its downlinks), or not? In other terms, was the VRRP implementation deployed over two standalone switches with just an interlink used for exchanging VRRP advertisements, or was it deployed along with Distributed Trunking?

There is a mix of things (STP, VRRP, IP routing, DT, etc.) to consider when your plan is to move away from a VRRP implementation, because the routing duty will be migrated to an uplink device (firewall/firewall cluster).

Occasional Visitor

Re: Delay in changes to routing

Hi, the two 5400zl are currently our core. They are interconnected with 2 x 10GbE as a trunk. Core_1 is the owner for every VLAN's VRRP configuration and Core_2 the backup.

Where we have tried to move a VLAN's routing to the firewall cluster, I have first disabled VRRP for that VLAN on Core_2 and then on Core_1, leaving Core_1 with the routing/gateway address in its config for that VLAN:

e.g. vlan 801, ip address 192.168.163.1 255.255.255.0

Core_1 continues to route to other VLANs without any interruption from those changes.

The problem comes when we configure the firewall cluster to take the routing address and remove it from Core_1. The change seems to take a very long time to take effect. It is as though the new router/gateway is unavailable, and then suddenly 'something' learns the change and everything works as expected, but it took over an hour for that to happen. We tried clearing the ARP cache on Core_1, but it made no difference.
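A few standard ProCurve show commands can help watch the transition from the switch side during the changeover (command names as I recall them; verify on your release):

```
Core_1# show arp
Core_1# show ip route
Core_1# show vrrp
```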

When you change which device holds the gateway/routing address of a subnet, is there a way of ensuring that traffic is quickly picked up by the new router?

 

Honored Contributor

Re: Delay in changes to routing

Can you explain how exactly the current VRRP pair is connected to the upstream firewall(s), and how you are planning to connect just one member of that pair (once VRRP is removed and routing falls back to a single switch) to the upstream new/old/rearranged firewall(s)? I mean logically and physically.

As an example, we have an Active/Active firewall cluster where each firewall has its own IP addresses (node IPs, on both the LAN-facing and WAN-facing sides) in addition to shared virtual IP addresses (common IPs, again on both sides)... so our firewall cluster consumes 3 IP addresses per logical interface (and we have many, as you can imagine...).

Our core is single-link uplinked to Firewall Cluster Node 1 and Node 2; we just have a very simple last-resort route (0/0 via next-hop IP) that points to the common IP of the relevant firewall cluster LAN interface... routing changes are rather immediate AFAIK.
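On a ProCurve/Aruba switch, such a last-resort route is a single config line; the next-hop address below is just a made-up example of a firewall cluster's common LAN IP:

```
ip route 0.0.0.0 0.0.0.0 10.27.0.254
```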

Occasional Visitor

Re: Delay in changes to routing

Thanks Grant,

We have a Forcepoint NGFW consisting of 2 x 1105 Appliances.

  • We have aggregated pairs of interfaces on each appliance
  • In each pair one is connected to Core_1 the other to Core_2

 

We have configured Distributed Trunking and are using it on the switch side for the trunks to the firewall cluster.
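For context, a DT-LACP downlink toward the firewall on a 5400zl is configured along these lines (port names invented for illustration, and I'm quoting the commands from memory, so double-check them in the Distributed Trunking chapter of the management guide):

```
trunk A1 trk10 dt-lacp
switch-interconnect B1
```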

 

The two Core switches are 5400zl:

  • Software revision: K.15.10.0022
  • ROM version: K.15.30

  • They are interconnected with 2 x 10GbE set as Trunk (not LACP)

 

Some of our subnets are already routed by the Firewall

e.g. Vlan120 ‘Site_Visitors’

  • no ip address on the switches
  • tagged on the trunks to the appropriate Firewall interface
  • the Firewall interface configured for Vlan120 has the address 10.27.120.1
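So a firewall-routed VLAN is layer-2-only on the switches, something like the following (trunk names assumed for illustration):

```
vlan 120
   name "Site_Visitors"
   tagged Trk45-Trk46
   no ip address
   exit
```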

 

 

Others are routed by Core_1 with Core_2 as a backup, just like the example in my earlier post.

 

 

Now that we have the Firewall Cluster with plenty of capacity and redundancy we would like to migrate all routing to the firewall.

When we tried this after the Firewall Cluster was first installed we did the following:

 

  • Disabled VRRP for the VLAN on Core_2 and then also on Core_1

 

In this case it was our Voice vlan; the resulting configuration looked like this:

 

On Core_1

vlan 10

   name "Voice_vlan"

   untagged E14,J18

   tagged A3-A4,B1-B4,B12,B21,H1,K13,L13,Trk1,Trk45-Trk46

   ip address 10.27.10.1 255.255.254.0

   voice

   exit

 

On Core_2

vlan 10

   name "Voice_vlan"

   untagged J18

   tagged A3,B1-B4,B21,B23-B24,I5,J19,K4-K6,Trk1-Trk2,Trk45-Trk46

   no ip address

   voice

   exit

 

  • We then changed Core_1 vlan 10 to no ip address
  • And then enabled the appropriate Firewall interface configured for Vlan10 with 10.27.10.1
  • Cleared the arp cache on both Core_1 and Core_2
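The cutover itself on the switch side was just the following (commands as I remember them on K.15; note that `clear arp` flushes the switch's own ARP table only, not the hosts' caches):

```
Core_1(config)# vlan 10
Core_1(vlan-10)# no ip address
Core_1(vlan-10)# exit
Core_1(config)# exit
Core_1# clear arp
```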

 

But … the firewall did not seem to pick up the routing. The firewall rules were to allow all traffic between internal subnets. We restored the ip address to the switch and it resumed routing instantly.

 

Is there anything you would recommend doing differently from the point of view of the switches? Is there another cache or table that we need to clear? Ideally, all without having to restart the core switches.

Honored Contributor

Re: Delay in changes to routing

Hi! We too have a cluster of Forcepoint NGFW 11052105; it is connected to a VSF (Virtual Switching Framework) of 2 x Aruba 5400R zl2... no LACP (yet). Let me think about your scenario a little more (ours is simplified by VSF, which lets us avoid VRRP... but we don't use IPv4 routing on the VSF; routing is instead handled by another Aruba 5400R zl2 equipped with dual management modules working in Nonstop Redundancy mode, and that one is connected to our VSF via an LACP LAG... so our scenarios are not directly comparable).

One thing I'm pretty sure of is that each NGFW appliance can form LAGs (Link Aggregation Groups) only on its own ports... so no "multi-chassis LAGs" are possible... meaning each LAG should terminate in a single switch (physical or logical, it doesn't matter). Your DT pair is like (not equal to) our VSF, so it acts as a single logical switch, and the LAG member links coming from each NGFW can/could be spread across both DT members (exactly as they can/could be spread across VSF members)... thus the issue shouldn't be in the way your firewall cluster is physically downlinked to your routing/switching devices (but logically it's another story).
