Telecom IQ

HPE takes Carrier Grade OpenStack to new highs


OpenStack with OVS DPDK and OpenDaylight – performance benchmarking review

Author: Sharon Rozov, Senior Architect at HPE

Communication Service Providers (CSPs) are turning to OpenStack for their virtualization infrastructure. While OpenStack’s known benefits include openness, agility and cost effectiveness, the industry is still seeking proof that OpenStack can meet carrier-grade performance requirements.

HPE is committed to Open Source and encourages CSPs to adopt SDN and NFV on top of OpenStack.

HPE’s OpenStack reference architecture for CSPs is designed to enhance OpenStack’s networking performance. It is built on Helion Carrier Grade (HCG), an OpenStack distribution that is tested, debugged and certified to meet carrier-grade requirements. Packet processing is optimized by introducing a DPDK-accelerated version of Open vSwitch (OVS DPDK) and an OpenDaylight SDN Controller, as a replacement for the vanilla OVS agents.

The Open Source marriage of OpenStack, OVS DPDK and OpenDaylight yields a high-performance architecture that benefits from the optimization gains of its parts: OVS DPDK optimizes NIC (Network Interface Controller) read/write operations by bypassing the Linux kernel, and OVS, configured with OpenFlow rules, makes optimized forwarding/routing decisions that override the Linux networking logic.

To demonstrate the performance improvements, we rolled up our sleeves and built a throughput benchmarking test setup, designed to quantify the throughput gains of an ODL-controlled OVS DPDK in an OpenStack environment.
The benchmarking results of our reference architecture came out surprisingly high, and only after double- and triple-checking our findings do we publish them to the industry.

The test uses two Traffic Generator VMs transmitting small-packet (64-byte) full-duplex traffic across the OpenStack network virtualization realm, measuring maximum Layer 2 forwarding throughput and maximum IPv4 Layer 3 routing throughput.
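To put the 64-byte figure in context, the theoretical line rate of the link follows from simple on-wire frame overhead arithmetic. This small calculation is illustrative only and is not part of the original test harness:

```python
# Theoretical line rate in packets per second for an Ethernet link.
# Each frame occupies an extra 20 bytes on the wire:
# 8-byte preamble + 12-byte inter-frame gap.
def line_rate_pps(link_bps: float, frame_bytes: int) -> float:
    wire_bits = (frame_bytes + 20) * 8
    return link_bps / wire_bits

# A 10 Gbps link carries about 14.88 million 64-byte packets per second
# in each direction -- the classic stress case for a virtual switch.
print(f"{line_rate_pps(10e9, 64):,.0f} pps")
```

Saturating a link with 64-byte frames is the hardest case precisely because the per-packet processing budget is smallest, which is why small-packet throughput is the standard benchmark for a software datapath.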

The test compares OpenStack HCG running OVS DPDK with ODL against OpenStack running OVS DPDK without ODL, the latter on both HCG and Red Hat RDO Mitaka. Results are measured in two configurations: traffic within the same compute node and traffic between two separate compute nodes. The test also compares HCG with OVS DPDK and ODL against Red Hat RDO Mitaka with vanilla OVS and no ODL.

Device-under-test runs on an HP ProLiant DL360 Gen9 Server with an Intel Xeon CPU (E5-2680 v3 @ 2.50GHz). A single CPU socket is in use and hyperthreading is enabled.
Setup configuration includes DPDK 2.2, OpenStack HCG 2.0 and OVS DPDK 2.5, with multiqueue enabled, using 4 queues. 1GB huge pages are applied.
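As a rough sketch, the multiqueue and huge-page settings above map onto Open vSwitch configuration along these lines. The commands below use the database-driven keys of more recent OVS releases; the OVS DPDK 2.5 build used in this test passed its DPDK arguments on the ovs-vswitchd command line instead, and the interface name and core mask here are assumptions, not the test's actual values:

```shell
# Reserve 1GB huge pages at boot (kernel command line parameters):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16

# Initialise the DPDK datapath inside Open vSwitch
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"

# Pin the PMD (poll-mode driver) threads to dedicated cores
# (0x3C = cores 2-5; the mask is an assumption for this sketch)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C

# Enable 4 receive queues on a DPDK-attached port, matching the
# multiqueue setting described above (port name is hypothetical)
ovs-vsctl set Interface dpdk0 options:n_rxq=4
```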

The first set of tests focuses on HPE’s reference architecture: OpenStack HCG, OVS DPDK and ODL.
Figure 1 shows a setup of two VMs running traffic within a single compute node. The performance test results are depicted in figure 2, showing that performance grows linearly on core scaling for both Layer 2 forwarding and Layer 3 routing.


Figure 1  Test setup, single compute node


Figure 2 HCG+OVS DPDK+ODL: Traffic throughput on a single compute node, core scaling


Figure 3 shows a setup of two VMs running traffic between two compute nodes, through 10 Gbps full-duplex tunnels. Figure 4 shows linear growth in performance per core scaling. We also conducted tests with larger packets: throughput reached line rate at a packet size of 256 bytes for Layer 2 traffic and at 512 bytes for Layer 3 traffic.
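The line-rate observation is consistent with simple arithmetic: larger frames need far fewer packets per second to fill a 10 Gbps link, so the same per-packet processing capacity saturates the wire sooner. An illustrative calculation (counting the 20 bytes of preamble and inter-frame gap each frame occupies on the wire):

```python
# Packets per second needed to saturate a 10 Gbps link at a given frame size.
LINK_BPS = 10e9

def saturation_pps(frame_bytes: int) -> float:
    # (frame + preamble + inter-frame gap) bits on the wire per packet
    return LINK_BPS / ((frame_bytes + 20) * 8)

for size in (64, 256, 512):
    print(f"{size:>4} B frames: {saturation_pps(size) / 1e6:.2f} Mpps to fill the link")
```

A 64-byte stream needs about 14.88 Mpps, while 256-byte frames need only about 4.53 Mpps and 512-byte frames about 2.35 Mpps, which is why line rate is reached at the larger frame sizes first.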


Figure 3 Test setup, two compute nodes


Figure 4 HCG+OVS DPDK+ODL: Traffic throughput between two compute nodes, core scaling


The second set of tests benchmarks OpenStack with OVS DPDK, with and without ODL, assessing the contribution of the ODL-controlled OVS model to the throughput gains.

Figure 5 shows the throughput comparison of an OpenStack HCG with OVS DPDK, with and without ODL. This test uses the setup shown in figure 1, where traffic runs within a single compute node.

The next test, whose results appear in figure 6, runs Red Hat RDO Mitaka OpenStack with OVS DPDK (no ODL) and compares it against HCG with OVS DPDK and ODL. Traffic runs within a single compute node.

Figure 7 presents the results of a test running traffic between two compute nodes, as depicted in figure 3, comparing HCG with OVS DPDK and ODL to Red Hat RDO Mitaka with OVS DPDK (no ODL).


Figure 5 HCG+OVS DPDK+ODL vs. HCG+OVS DPDK (no ODL), traffic throughput within a single compute node

Note: Layer 3 performance of the no-ODL model is very low, probably due to Linux qrouter performance issues; hence the alternative architecture, which applies OpenFlow rules to OVS and eliminates the use of the qrouter, demonstrates a dramatic improvement.


Figure 6 RDO Mitaka+OVS DPDK vs. HCG+OVS DPDK+ODL, traffic throughput within a single compute node


Figure 7 RDO Mitaka+OVS DPDK vs. HCG+OVS DPDK+ODL, traffic throughput between two compute nodes

Finally, figure 8 shows the results of the test comparing Red Hat RDO Mitaka with vanilla OVS (no OVS DPDK, no ODL) against HCG with both OVS DPDK and ODL, running traffic within a single compute node.


Figure 8 RDO Mitaka+native OVS vs. HCG+OVS DPDK+ODL, traffic throughput within a single compute node


These results demonstrate significant throughput improvements with OVS on OpenStack, proving that carrier-grade performance can be met without having to bypass Open vSwitch and fall back to direct physical I/O networking.
CSPs can benefit from the elasticity and flexibility of virtualization without compromising their network performance goals.

Want to learn more? Join us at OpenStack Day Israel on June 2nd, visit the HPE website, and follow us on Twitter at @HPE_NFV and @HPE_CSP.
