
Latency difference when comparing Firewall/L3 Switch routing between VLANs

 
parnassus
Honored Contributor


A general question:

When comparing inter-VLAN routing done on a Layer 3 Switch (IP Routing enabled, VLAN A routed to VLAN B and vice versa, default gateways for both VLANs pointing to the Firewall LAN ports, which are connected to separate tagged ports on the Switch) against inter-VLAN routing performed by an external Firewall appliance (the Firewall routes VLAN A and VLAN B together through its separate physical LAN ports using a set of defined static routes, while the Switch only defines the VLANs without performing any IP Routing between them)...

Apart from the obvious differences regarding where the point of failure sits and how relevant it is (if the Firewall goes down... versus... if the Switch goes down), and the intrinsically limited throughput of the Firewall's physical LAN ports (compared to the Switch's port-to-port backplane throughput), given that the Firewall routes traffic back and forth across its VLAN-facing LAN ports... are there other appreciable routing latencies, timings or issues between hosts on VLAN A and hosts on VLAN B when they both need to continuously exchange data (as in the case of a VLAN A subnet used for Clients and a VLAN B subnet used for Servers)?

Is the Layer 3 Switch IP Routing approach commonly the most appropriate one in this and similar cases?
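For concreteness, one way the Layer 3 Switch scenario could be wired is roughly this (Comware-style configuration; here the SVIs act as the hosts' gateways and the Switch holds a default route toward the Firewall, so the VLAN IDs, addresses and firewall next-hop below are only placeholders, and the annotation lines are just notes):

# the two VLANs (IDs are only examples)
vlan 10
 description Clients
vlan 20
 description Servers

# SVIs act as the hosts' gateways; the Switch routes VLAN A <-> VLAN B in hardware
interface Vlan-interface10
 ip address 192.168.10.254 24
interface Vlan-interface20
 ip address 192.168.20.254 24

# anything not local (e.g. Internet-bound traffic) is still handed to the Firewall
ip route-static 0.0.0.0 0 192.168.1.1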


I'm not an HPE Employee
Ian Vaughan
Honored Contributor

Re: Latency difference when comparing Firewall/L3 Switch routing between VLANs

Hello,

Just my opinion, but I would always try to match the right tool to the job, based on the bandwidth requirements and the level of inspection / features I was after. Some firewalls can do parallelised processing, which improves things, but you'll never get 32 ports of non-blocking 100GbE in a firewall the way you can in a switch, thanks to the capabilities of the switching ASIC. Firewalls and routers may have FPGAs which can get close to ASIC performance, but the cost per port leaps up significantly.

Increasingly we see the security layer in the virtual environment exploiting newer capabilities in x86 chipsets, which brings security performance at the right price point. This has enabled "network function virtualization", where network appliances are much more likely to appear in the infrastructure as VMs rather than physical boxes. Likewise, efforts like DPDK and fd.io (Google is your friend) are bringing hardware ASIC-like performance into the NIC under Linux, to get line-rate performance into the hosting server.

Back to the question - In general:

Line rate packet shifting with tons of bandwidth - Switch

Stateful sessions - Firewall

Deep packet inspection / application awareness - Firewall

Anything to do with Crypto - Firewall

etc

The only other thing to note about switches is that, not so much the latency, but certainly the responsiveness to a failure can vary by a considerable degree depending on how you set up the interfaces.

For example, and this is from memory, a "routed" interface will fail over faster than a VLAN or SVI type interface, and I'm sure there was something about LACP LAGs with BFD responding to link loss faster than ECMP.
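As a rough illustration of the two interface styles (Comware-style, on a platform that lets you set an interface's link-mode; interface numbers and addresses are just placeholders):

# option 1: a "routed" Layer 3 physical port
interface GigabitEthernet1/0/1
 port link-mode route
 ip address 10.0.12.1 30

# option 2: the same uplink terminated on a VLAN / SVI instead
vlan 12
interface GigabitEthernet1/0/2
 port link-mode bridge
 port access vlan 12
interface Vlan-interface12
 ip address 10.0.12.5 30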

If you have a low bandwidth requirement, e.g. a branch environment, and you want the lowest-maintenance option, then router-on-a-stick is a good option.
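If that's the route you go, a minimal router-on-a-stick sketch would be something like this (Comware-style dot1q sub-interfaces on the router's single LAN uplink; VLAN IDs and addressing are only examples):

interface GigabitEthernet0/0.10
 vlan-type dot1q vid 10
 ip address 192.168.10.1 24
interface GigabitEthernet0/0.20
 vlan-type dot1q vid 20
 ip address 192.168.20.1 24

The switch end just carries both VLANs tagged on the port facing the router.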

If you need to shift packets about as quickly as possible and don't need stateful / crypto / DPI - Switching all the way.

Thanks

Ian

 

Hope that helps - please click "Thumbs up" for Kudos if it does
## ---------------------------------------------------------------------------##
Which is the only cheese that is made backwards?
Edam!
Tweets: @2techie4me