The need to publish virtual-physical topology metadata to VNF VMs: NFV Nuts and Bolts


By Senthil Kumar Subramaniam

Lead Architect – NFV Solutions

 

Communications service providers (CSPs) want to leverage cloud technology for telecom network applications because the cloud provides enormous flexibility and scalability. But the carrier cloud needs to be designed differently from the IT cloud, since its service level agreements (SLAs) are different and the applications running on it are primarily network-intensive. Every packet and every CPU cycle matters in a carrier cloud.

 

To meet the strict SLAs and low-latency requirements of the carrier cloud, the VIM layer needs to provide several NFV-specific features. Two of them are important in the context of this article:

  • Core pinning of vCPUs
  • Direct attach of vNICs to physical NICs

Core pinning of vCPUs eliminates the context switches that the host OS process scheduler would otherwise introduce. The vCPU processes are pinned to specific cores inside a NUMA cell so that memory proximity can be leveraged and Intel QuickPath Interconnect (QPI) communication avoided.
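In an OpenStack-based VIM, for example, this placement requirement is typically expressed as flavor extra specs. The snippet below is only a minimal sketch of what the orchestrator might request on behalf of the VNF descriptor; the exact set of supported keys depends on the Nova release in use, and the values shown are illustrative:

    # Sketch: Nova flavor extra specs requesting dedicated (pinned) vCPUs
    # spread across two guest NUMA cells, matching the example later in
    # this article. Values are illustrative.
    pinned_flavor_extra_specs = {
        "hw:cpu_policy": "dedicated",  # pin each guest vCPU to a host core
        "hw:numa_nodes": "2",          # expose two guest NUMA cells
    }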

 

Direct attachment of vNICs enables the VNF components to communicate with the provider network while bypassing the host virtualization overhead (OVS, the host TCP/IP stack, etc.).
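In OpenStack terms, direct attachment is usually requested per port rather than per flavor. The sketch below assumes an SR-IOV capable provider network ("provider-net" is a placeholder name) and shows only the attribute that matters here:

    # Sketch: Neutron port attributes requesting a direct (SR-IOV) vNIC
    # attachment that bypasses the host vSwitch. The network name is a
    # placeholder.
    direct_attach_port = {
        "network": "provider-net",
        "binding:vnic_type": "direct",  # attach the vNIC directly to a physical NIC / VF
    }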

 

These requirements are captured in the VNF on-boarding descriptors, and the NFV orchestrator requests the VIM layer to provision the VMs accordingly. Once provisioning is done, the VMs should also pin their internal processes (threads) to guest vCPUs (which in turn are pinned to host cores) in order to make full use of the core pinning feature.
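Inside the guest, the pinning itself is an ordinary Linux CPU-affinity call. A minimal Python sketch of how a VNF worker process could pin itself to a chosen guest vCPU is shown below (the helper name is illustrative):

    import os

    def pin_to_vcpu(vcpu_id):
        # Pin the calling process (or thread) to a single guest vCPU.
        # On Linux, pid 0 in sched_setaffinity means "the caller".
        os.sched_setaffinity(0, {vcpu_id})

    # Example: a packet-processing worker pins itself to guest vCPU 0.
    pin_to_vcpu(0)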

 

For example, consider a VNF component performing packet processing that requires its vCPUs to be pinned to host CPU cores and its vNICs to use the direct attach feature. This application has two internal processes – a receiver and a sender. The NFV orchestrator would request the above configuration from the VIM. The ideal topology, on a host with two NUMA nodes of two cores each, would be as shown below. This mapping is ideal because no QPI communication occurs between the NUMA nodes. The VNF VM can pin its internal application processes to any of the vCPUs without any impact on performance or latency.

 

[Figure: nuts and bolts.png – ideal mapping of guest vCPUs and vNICs to host cores and NICs across the two NUMA nodes]

 

 

But due to resource constraints or limitations in the VIM scheduler (such as OpenStack Nova), the VIM layer may assign the host core pinning differently, as described below.

In this case, the VNF application running in the VM can no longer pin its internal processes arbitrarily. The VNF VM needs to be aware of the virtual-physical topology in order to optimize its guest vCPU pinning. Let's see how the different guest CPU pinning choices affect performance and latency through increased QPI communication.

 

Case 1: Sender process pinned to vCPU0, receiver process pinned to vCPU1, total QPI = 3

 

[Figure: nuts and bolts 1.png – Case 1 pinning, resulting in three QPI crossings]

 

1 - Data moves from NUMA 0 to NUMA 1 so that the receiver can process the packet

2 - Data moves from NUMA 1 to NUMA 0 so that the sender can process the packet

3 - Data moves from NUMA 0 to NUMA 1 so that NIC3 can send the packet

 

As shown above, Case 1 incurs more QPI communication. This could have been avoided if the NFV orchestrator had made the VM aware of the virtual-physical topology as part of the provisioning process.

 

If the topology metadata is available, the VM can pin its internal processes to the optimal guest vCPUs, as shown in Case 2 below. For the above scenario, the metadata published would be:

 

                vCPU0-NUMA0-Core0, vCPU1-NUMA1-Core1

                vNIC0-NUMA0-NIC0, vNIC1-NUMA1-NIC3
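Assuming the metadata reaches the guest as the simple strings shown above (the delivery channel itself, e.g. a config drive or metadata service, is outside the scope of this sketch), the VNF application could parse it into lookup tables before deciding where to pin its processes:

    # Sketch: parse the published topology metadata into vCPU -> NUMA and
    # vNIC -> NUMA lookup tables. The string format follows the example above.
    vcpu_metadata = "vCPU0-NUMA0-Core0, vCPU1-NUMA1-Core1"
    vnic_metadata = "vNIC0-NUMA0-NIC0, vNIC1-NUMA1-NIC3"

    def parse_numa_map(metadata):
        # Return {resource_name: numa_node} from "name-NUMAn-..." entries.
        numa_map = {}
        for entry in metadata.split(","):
            fields = entry.strip().split("-")
            numa_map[fields[0]] = int(fields[1].replace("NUMA", ""))
        return numa_map

    vcpu_numa = parse_numa_map(vcpu_metadata)   # {'vCPU0': 0, 'vCPU1': 1}
    vnic_numa = parse_numa_map(vnic_metadata)   # {'vNIC0': 0, 'vNIC1': 1}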

 

Case 2: Receiver process pinned to vCPU0, sender process pinned to vCPU1, total QPI = 1

 

[Figure: nuts and bolts 2.png – Case 2 pinning, resulting in one QPI crossing]

 

1 - Data moves from NUMA 0 to NUMA 1 so that the sender can process the packet
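The QPI counts in the two cases can be reproduced with a simple back-of-the-envelope model: walk the packet path (ingress NIC, then receiver, then sender, then egress NIC) and count every hop that crosses a NUMA boundary. The sketch below encodes the topology from the figures (vCPU0 and NIC0 on NUMA 0, vCPU1 and NIC3 on NUMA 1); it illustrates the reasoning rather than measuring anything:

    def count_qpi_crossings(path_numa_nodes):
        # Count NUMA-boundary crossings along an ordered packet path.
        return sum(1 for a, b in zip(path_numa_nodes, path_numa_nodes[1:]) if a != b)

    # Topology from the example: NIC0 and vCPU0 on NUMA 0, NIC3 and vCPU1 on NUMA 1.
    NIC0, NIC3, VCPU0, VCPU1 = 0, 1, 0, 1

    # Case 1: sender on vCPU0, receiver on vCPU1 -> 3 crossings
    case1 = count_qpi_crossings([NIC0, VCPU1, VCPU0, NIC3])

    # Case 2: receiver on vCPU0, sender on vCPU1 -> 1 crossing
    case2 = count_qpi_crossings([NIC0, VCPU0, VCPU1, NIC3])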

 

In summary, to make the best use of carrier-grade features like core pinning and direct attach vNICs, the NFV orchestrator needs to be integrated with the VIM so that it can publish the virtual-physical topology mapping to the VNF VMs for their internal process-to-vCPU assignments. For more information on how HP is providing an integrated NFV solution offering, please refer to HP's OpenNFV Architecture.


