BladeSystem - General

NC553i / NC553m and VMDirectPath I/O support (Cisco Palo adapters competition)

 
chuckk281
Trusted Contributor


Rinaldo had a customer issue about supporting QoS on the interconnects in a Request for Proposal (RFP). Let's follow the discussion:

 

************************

 

Hi,

 

We’re under attack on a local account because of the Cisco Palo adapter’s ability to provide 56 vNICs that bypass the ESX hypervisor.

 

The attached document (an Emulex source) states that the BE3 chip, used in the NC553i and NC553m adapters, supports SR-IOV / VMDirectPath I/O, but I can’t find any reference in the VMware HCL or in our documents confirming that the NC553i/m support it, or whether the latest driver release already does.

Do you have more information on this?

 

********************

 

Lionel jumped in:

 

*********************

 

SR-IOV is not supported yet by VMware ESX; it will require a significant kernel change, and the last estimate I heard from VMware was something like 2012… But why on earth would they require so many NICs under VMware? Have they realized that with the Palo adapter the 56 vNICs share a single 2 x 10Gb pipe? Wouldn't 8 FlexNICs per server suit here?

 

If they really want lots of NIC ports, you can propose 3 x dual-port Flex-10 mezzanines in full-height servers (3 x 2 x 4 + 16 = 40 FlexNIC ports). That's a very expensive solution, but at least we can provide a substantial amount of bandwidth that Cisco cannot.
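
Just to make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python using only the figures quoted above (the assumption that the 16 onboard FlexNICs come from four embedded 10Gb ports is mine, not from the thread):

# Rough arithmetic for the two options discussed above (quoted figures, not measurements).

# HP option: 3 dual-port Flex-10 mezzanines in a full-height blade, each 10Gb port
# carved into 4 FlexNICs, plus 16 FlexNICs assumed to come from four embedded 10Gb ports.
mezz_flexnics = 3 * 2 * 4            # 24 FlexNICs from the mezzanine cards
lom_flexnics = 16                    # FlexNICs from the onboard adapters
hp_flexnics = mezz_flexnics + lom_flexnics           # = 40
hp_physical_gb = (3 * 2 + 4) * 10                    # ten 10Gb physical ports behind them

# Cisco option: the Palo VIC presents up to 56 vNICs over one dual-port 10Gb adapter.
palo_vnics = 56
palo_physical_gb = 2 * 10

print(f"HP:    {hp_flexnics} FlexNICs backed by {hp_physical_gb} Gb of physical bandwidth")
print(f"Cisco: {palo_vnics} vNICs backed by {palo_physical_gb} Gb of physical bandwidth")
print(f"Average per Palo vNIC if all are busy: {palo_physical_gb / palo_vnics:.2f} Gb/s")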

 

*****************

 

Now Greg had some info:

 

********************

 

Rinaldo

 

Remember to review the VMware configuration maximums that are attached to this message and available at:

http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf

 

***********************

 

Hi Greg and Lionel,

 

Apparently Cisco strongly influenced the customer's RFP: there's a mandatory requirement for QoS on each vNIC, with minimum bandwidth assignment and hypervisor bypass.

We will point out the pros of the FlexFabric solution (the lack of bandwidth on UCS and the proprietary VN-Link technology are our main attack points in this scenario), but we're also trying to find a way to answer the RFP as written, since it has already been released and can't be changed (public sector).

 

VMDirectPath I/O was one of the ideas off the top of our heads, but consider the configuration limits from that PDF, quoted below:

 

  • VMDirectPath PCI/PCIe devices per host: 8
  • VMDirectPath PCI/PCIe devices per virtual machine: 4

 

It looks like that's not a feasible way to provide the required 56 vNICs on an ESX host.
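
To spell out why, here is a minimal sketch of the arithmetic against those vSphere 4.1 maximums (the two limits are the ones quoted above; the 56-vNIC figure is the RFP requirement):

import math

# vSphere 4.1 configuration maximums quoted above.
VMDIRECTPATH_DEVICES_PER_HOST = 8
VMDIRECTPATH_DEVICES_PER_VM = 4

required_vnics_per_host = 56   # the RFP requirement under discussion

print("Required passthrough devices per host:", required_vnics_per_host)
print("Allowed passthrough devices per host: ", VMDIRECTPATH_DEVICES_PER_HOST)
print("VMDirectPath can cover the requirement:",
      required_vnics_per_host <= VMDIRECTPATH_DEVICES_PER_HOST)

# Even ignoring the per-host cap, at 4 devices per VM you would need at least
# ceil(56 / 4) = 14 virtual machines just to consume 56 passthrough NICs.
print("Minimum VMs needed at 4 devices each:",
      math.ceil(required_vnics_per_host / VMDIRECTPATH_DEVICES_PER_VM))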

 

The customer also released a note stating that only Cisco can provide 56 vNICs on a single blade, and that "other vendors" require a huge amount of hardware to provide the same number of interfaces in a VMware environment. OK, but what about the underlying bandwidth of those 56 vNICs? Pretty insane…

 

Thanks for the support.

 

************************

 

Back to Greg:

 

******************

 

How does Cisco claim to support that number of vNICs when it is clearly a VMware limitation? Is the Cisco RFP not for a VMware environment?

 

***********************

 

And Rinaldo :

 

******************

 

They are using the Palo adapter, which provides up to 56 vNICs without needing SR-IOV; see the following link:

http://bradhedlund.com/2010/12/31/cisco-ucs-criticism-and-fud-answered/

 

6) “Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard”

The Cisco Palo card accomplishes interface virtualization in a way that’s completely transparent to the OS — This is done through simple standards based PCIe.  There’s nothing proprietary happening here at all.  When installed into the server, the Cisco Palo card appears to the system like a PCIe riser hosting multiple standard PCIe adapters.  In other words, Cisco has effectively obsoleted the need for SR-IOV with the design of the Cisco VIC (Palo).  There’s nothing stopping any other vendor from using the same transparent PCIe based approach to interface virtualization.
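
As a side note, the practical consequence of that claim is that each vNIC would simply show up as another PCIe network function to the operating system. A minimal sketch of how you could see that from a Linux system (standard sysfs paths; PCI class 0x02 means "network controller"):

import os

PCI_ROOT = "/sys/bus/pci/devices"

# List every PCI function whose class identifies it as a network controller.
for dev in sorted(os.listdir(PCI_ROOT)):
    with open(os.path.join(PCI_ROOT, dev, "class")) as f:
        pci_class = f.read().strip()      # e.g. "0x020000"
    if pci_class.startswith("0x02"):
        print(dev, pci_class)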

 

It looks like, once again, Cisco is aiming to be proprietary rather than standards-based. That keeps their customers locked in.

 

My understanding is that the memory space is directly tied to the number of underlying 10 GbE ports.

Since the 56 vNICs in the Palo adapter sit on 2 x 10 GbE ports, they don't have any issue with it.

 

But there's no hypervisor bypass like VMDirectPath (a wrong statement by the customer?); it's just a bunch of devices, each with its own MAC address, that actually share just 2 x 10 GbE ports.
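
As a toy illustration of that point (the MAC range and the even split across the two ports are made up for the example, not taken from a real Palo configuration):

# 56 logical devices, each with its own MAC address, multiplexed onto two 10GbE ports.
PHYSICAL_PORTS = {"port-A": 10, "port-B": 10}   # Gb/s per physical port

vnics = [
    {"name": f"vnic{i}",
     "mac": f"00:25:b5:00:00:{i:02x}",           # hypothetical MAC addresses
     "uplink": "port-A" if i % 2 == 0 else "port-B"}
    for i in range(56)
]

# Under full contention, every vNIC on a port shares that port's 10 Gb/s.
for port, speed_gb in PHYSICAL_PORTS.items():
    sharers = [v for v in vnics if v["uplink"] == port]
    print(f"{port}: {len(sharers)} vNICs sharing {speed_gb} Gb/s "
          f"(~{speed_gb / len(sharers):.2f} Gb/s each when all are busy)")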

Maybe the Palo adapters do something at the hardware level to support VN-Link?

 

*********************************

 

Franco joined in:

 

***********************

 

 

VMDirectPath on UCS with the Palo adapter, with reference also to the VN-Link in hardware configuration.

 

http://www.cisco.com/en/US/products/ps10280/products_configuration_example09186a0080b24530.shtml#req

 

Install and configure VMDirectPath in UCS

 

Components Used

The information in this document is based on these software and hardware versions:

  • Cisco UCS Manager 1.1(1)
  • Cisco UCS 5108 Blade Chassis
  • Cisco UCS M81KR VIC (aka PALO)

*******************

 

And Chris had some suggestions:

 

*****************

 

Also keep in mind that when using VMDirectPath there is no VMotion or FT support. So why not position NetIOC as an alternative? SR-IOV products and capabilities will not be available until 2012. We may see some early products this year, but that's highly doubtful. Both VMware and Microsoft have yet to incorporate SR-IOV capabilities into their OS kernels. Just because the hardware vendor has support for it doesn't mean the OS vendor will anytime soon.
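
For reference, this is roughly how NetIOC's proportional shares behave under contention; a minimal sketch with made-up share values (illustrative only, not VMware defaults):

# NetIOC-style shares: each traffic class gets bandwidth in proportion to its shares,
# but only while the uplink is congested; idle classes leave their slice free.
uplink_gbps = 10.0
shares = {
    "virtual-machine": 100,
    "vmotion": 50,
    "ft": 50,
    "iscsi": 50,
    "management": 25,
}

total_shares = sum(shares.values())
for traffic_class, share in shares.items():
    gbps = uplink_gbps * share / total_shares
    print(f"{traffic_class:16s} {share:4d} shares -> ~{gbps:.2f} Gb/s under contention")

The key difference from a fixed carve-up is that the shares only matter when the link is congested; an idle class's bandwidth stays available to the others.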

 

Plus, QoS in UCS is transmit only; they have no receive-side queues, unlike NetIOC, which offers both transmit and receive support. Granted, there is no fine-grained per-VM control, but with FlexNICs you can segregate higher classes of VMs from other VMs at the hardware level.
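
A small sketch of that FlexNIC carve-up (the four-FlexNICs-per-port limit and the 10Gb ceiling reflect Flex-10 behaviour; the example allocation itself is made up):

PORT_SPEED_GB = 10.0
MAX_FLEXNICS_PER_PORT = 4

def validate_flexnic_split(allocations_gb):
    """Check a per-port FlexNIC allocation against the Flex-10 rules above."""
    if len(allocations_gb) > MAX_FLEXNICS_PER_PORT:
        raise ValueError("at most four FlexNICs per 10Gb port")
    if sum(allocations_gb) > PORT_SPEED_GB:
        raise ValueError("FlexNIC speeds cannot exceed the 10Gb port speed")
    return True

# Example: keep the higher class of VMs on a dedicated 6Gb FlexNIC,
# isolated in hardware from the remaining traffic.
split = [6.0, 2.0, 1.0, 1.0]
validate_flexnic_split(split)
print("Valid split:", split, "-> total", sum(split), "Gb of the 10Gb port")

Unlike the shares above, this is a hard partition enforced by the adapter and Virtual Connect, which is what gives the hardware-level segregation mentioned here.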

 

*******************

 

It has always been an argument from Cisco that their QoS implementation is better than the rate limiting of Virtual Connect, and Brad has always bashed the receive side of VC for not doing rate limiting. It appears that Cisco doesn't regulate the receive side for QoS either. I wonder why they hit HP so hard on this point, then? I guess it's a win-at-all-costs mentality.

 

Other thoughts or comments?