Tech Refresh Projects - Keep an Eye on the I/O

 
Tech Refresh with an Eye on I/O – Yes, I/O Matters!

As you look to refresh server and storage technology, keep your eyes on the future with respect to I/O connectivity decisions. Not all I/O adapters leverage the latest features and capabilities that will serve customer requirements today and in the future. This blog examines a few specific areas that you should consider.

Gigabit Network Connections

Many commercial and enterprise customers want to update their infrastructure, but budget constraints limit how far they can go. In many cases, they have funding to update servers but not the network infrastructure, so they connect new state-of-the-art servers to legacy 1Gb Ethernet networks using a 1GbE I/O connection. While this might make sense given the network topology, the 1GbE network keeps the customer from getting the most out of the new servers, and when the network is eventually upgraded to 10GbE, a new I/O adapter will be required.

A better solution is to deploy a 10GBASE-T adapter in the new server from the start. These adapters will auto-negotiate and connect to the legacy 1GbE network, and when they upgrade the network to 10GbE, the adapter will automatically connect at the higher speed. In addition, 10GbE adapters support features and capabilities not found in 1GbE adapters. This includes support for Single Root I/O Virtualization (SR-IOV), stateless TCP/IP offloads, and VXLAN tunnel offloads. The 10GBASE-T adapter supports these capabilities, even while running at 1Gbps bandwidth.
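As a quick sanity check after deployment, it is easy to confirm what speed a 10GBASE-T port actually negotiated and whether the adapter exposes SR-IOV. The sketch below is a minimal example for Linux; the interface name eth0 is a placeholder, and the sysfs paths assume a reasonably current kernel and driver.

```python
#!/usr/bin/env python3
# Minimal sketch (Linux): confirm the negotiated link speed of a 10GBASE-T
# port and check whether the adapter exposes SR-IOV capability.
# "eth0" is a hypothetical interface name; substitute your own.
from pathlib import Path

IFACE = "eth0"  # hypothetical interface name
net = Path("/sys/class/net") / IFACE

# Negotiated link speed in Mb/s: 1000 on a legacy 1GbE switch, 10000 after a
# 10GbE network upgrade (reports -1 if the link is down).
speed = int((net / "speed").read_text().strip())
print(f"{IFACE}: link negotiated at {speed} Mb/s")

# SR-IOV capability is advertised by the PCI device behind the port.
totalvfs = net / "device" / "sriov_totalvfs"
if totalvfs.exists():
    print(f"{IFACE}: SR-IOV capable, up to {totalvfs.read_text().strip()} virtual functions")
else:
    print(f"{IFACE}: no SR-IOV capability exposed")
```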

In addition, advanced 10GbE adapters like the HPE 10Gb Ethernet 530T, 533FLR-T or 536FLR-T adapters can support features like Network Partitioning (NPAR), Storage Offload, DPDK small-packet acceleration, and Remote Direct Memory Access (RDMA) offload when running at 10GbE speeds in the future. Deploying 10GBASE-T adapters today “future-proofs” the server for network upgrades down the road. For a cost difference of $100 to $200 per adapter, the upgrade saves the significant time and resources that would otherwise go into re-configuring the server I/O when the network is upgraded.

Driving the Network Upgrade - 1GbE LOM Today to Become 10GbE Tomorrow

Most servers today still come with two or more 1GbE ports on the motherboard (known as a LOM, or LAN on Motherboard). This is an inexpensive way to provide basic connectivity, but it is not optimal for application performance or virtualization scalability. Many server vendors recognize this and want to standardize on 10GBASE-T LOMs for connectivity on their next-generation servers. This will drive commercial and enterprise customers to upgrade their networks to 10GbE in the very near future, which means the servers being deployed today will need to connect to 10GbE networks tomorrow.

10GBASE-T I/O is again a great way to address this. As noted above, a 10GBASE-T adapter connects seamlessly to both 1GbE and 10GbE networks, so configuring new servers with 10GBASE-T adapters today will save customers time and resources when they upgrade their legacy network in the future.

Virtualization Scalability with NPAR and SR-IOV

Most data centers deploy server virtualization with VMware, Microsoft Hyper-V or another hypervisor technology. The benefits of virtualization are well known, but realizing them requires scalable network connectivity. For example, the best practice for VMware is to deploy a minimum of six independent networks for different tasks such as management, storage traffic, virtual machine migration and more. That means six connections, cables and switch ports per server, and in a high-availability design, twelve connections per server for redundancy.

What if, in the next server technology refresh project, we could reduce the number of physical connections and provide more independent networks at the same time? Some (but not all) intelligent high-performance Ethernet adapters support a feature called Network Partitioning, or NPAR. With NPAR, the adapter virtualizes each physical port into four or eight independent physical functions that are presented to the hypervisor over the PCIe bus. The OS sees a dual-port 10GbE (or 25GbE) adapter as up to sixteen independent network adapters, and an administrator can apply fine-grained bandwidth and quality of service (QoS) controls to each partition. One adapter becomes sixteen!

With NPAR, the VMware example above needs only two physical connections to the network (two rather than one for redundancy) and easily meets the best practice, with network connections and bandwidth to spare. Fewer connections increase reliability and lower overall costs by reducing the number of switch ports required. In addition, the higher 10Gb (or 25Gb) Ethernet bandwidth provides more network performance, allowing more virtual machines to be deployed per server.
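To make the arithmetic concrete, here is an illustrative sketch of how the six VMware networks might be spread across NPAR partitions on two 10GbE ports, each partition getting a minimum-bandwidth weight. The network names and percentages are hypothetical, and actual NPAR settings are configured in the adapter firmware or the server's BIOS/UEFI setup, not from a script like this.

```python
# Illustrative only: map six VMware networks onto NPAR partitions across two
# 10GbE ports, with a minimum-bandwidth weight per partition. The weights on
# each port must not exceed 100% of that port's bandwidth.
PORT_GBPS = 10

# (network, port index, minimum bandwidth as % of the port) -- hypothetical values
partitions = [
    ("Management",   0, 10),
    ("vMotion",      0, 40),
    ("VM traffic A", 0, 50),
    ("Storage",      1, 50),
    ("Fault tol.",   1, 20),
    ("VM traffic B", 1, 30),
]

for port in (0, 1):
    plan = [(name, pct) for name, p, pct in partitions if p == port]
    total = sum(pct for _, pct in plan)
    assert total <= 100, f"port {port} oversubscribed"
    print(f"Port {port} ({PORT_GBPS} Gb/s):")
    for name, pct in plan:
        print(f"  {name:<13} min {pct:>3}%  = {PORT_GBPS * pct / 100:.1f} Gb/s")
```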

Another I/O virtualization technique used in virtual server environments is Single Root I/O Virtualization, or SR-IOV. It reduces VM-to-VM latency by moving virtual network management out of the hypervisor and offloading that work to the adapter. This also reduces server CPU utilization, freeing up CPU resources for application-specific tasks. Combining SR-IOV with NPAR allows the administrator to effectively build a high-performance, scalable network within the server adapter itself.
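On Linux hosts, SR-IOV virtual functions are typically created through sysfs once the feature is enabled on the adapter and in the system BIOS. The sketch below shows that mechanism; eth0 and the requested VF count are placeholders, and the number of VFs actually supported varies by adapter (eight or sixteen for the HPE adapters mentioned below).

```python
#!/usr/bin/env python3
# Minimal sketch (Linux, root required): create SR-IOV virtual functions on an
# adapter port through sysfs. "eth0" and the VF count are placeholders; SR-IOV
# must also be enabled on the adapter and in the server BIOS for this to work.
from pathlib import Path

IFACE = "eth0"       # hypothetical interface name
REQUESTED_VFS = 8    # hypothetical; capped at what the adapter reports

dev = Path("/sys/class/net") / IFACE / "device"
total = int((dev / "sriov_totalvfs").read_text())
print(f"{IFACE} supports up to {total} virtual functions")

# Writing to sriov_numvfs instantiates the VFs; each one appears as its own
# PCIe function that can be handed directly to a virtual machine. If VFs are
# already enabled, the count must be reset to 0 before requesting a new one.
numvfs = dev / "sriov_numvfs"
if int(numvfs.read_text()) != 0:
    numvfs.write_text("0")
numvfs.write_text(str(min(REQUESTED_VFS, total)))
print(f"Enabled {min(REQUESTED_VFS, total)} VFs on {IFACE}")
```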

As you work with your customers on server technology refresh projects, be sure to choose adapters that support both NPAR and SR-IOV concurrently, such as the HPE 10Gb 530, 533, 534 or 536 series Ethernet adapters, which support eight virtual functions per adapter, or the newer HPE 10Gb 521T and 10/25Gb 621SFP28 and 622FLR-SFP28 series adapters, which support up to sixteen virtual functions per adapter.

Reduced Latency and Universal RDMA

Microsoft and Linux operating systems support RDMA for I/O operations to reduce latency. That makes these operating systems well suited for latency-sensitive applications like Microsoft Storage Spaces Direct, Oracle RAC, SAP HANA and others. However, to take advantage of this, the I/O adapters must support RDMA as well.

Commercial and enterprise customers who want to run these latency-sensitive applications can upgrade servers and include RDMA-enabled 10GbE or 25GbE I/O. RDMA reduces latency by bypassing the operating system kernel and allowing I/O transactions to move data directly to and from application memory.

Customers can implement RDMA using either RDMA over Converged Ethernet (RoCE) or Internet Wide Area RDMA Protocol (iWARP). RoCE is well suited for smaller environments and iWARP provides more scalability. Most I/O vendors provide one or the other version of the RDMA protocol. However, HPE and Cavium 10/25GbE technology found in the HPE Ethernet 521T, 621SFP28 and 622FLR-SFP28 adapters can take advantage of Universal RDMA technology that allows each port to run RoCE or iWARP protocols. This eliminates having to choose one protocol over the other, adding more flexibility to the customer environment. When deploying RDMA-enabled adapters, consider recommending adapters that support Universal RDMA for maximum flexibility.  For more on Universal RDMA, check out www.cavium.com/universalRDMA.
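As a quick way to see whether RDMA-capable adapters are present on a Linux host, the kernel registers them under /sys/class/infiniband regardless of whether they run RoCE or iWARP. The sketch below is a simple presence check only; it does not tell you which RDMA protocol is configured, since that is determined by the adapter and driver settings.

```python
#!/usr/bin/env python3
# Minimal sketch (Linux): list RDMA-capable devices registered with the kernel.
# RoCE and iWARP adapters both appear under /sys/class/infiniband with an
# "Ethernet" link layer, so treat this purely as a presence check.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.exists():
    print("No RDMA devices registered")
else:
    for dev in sorted(ib_root.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link = (port / "link_layer").read_text().strip()
            print(f"{dev.name} port {port.name}: link layer {link}")
```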

Support for New Storage Protocols – NVMe-ready

Customers undergoing server refresh projects whose servers connect to shared storage, or who are deploying a software-defined storage solution like VMware vSAN or Microsoft Storage Spaces Direct, have several protocol choices to make. Fibre Channel, iSCSI and FCoE are the most common. These protocols leverage the SCSI command set that has been the standard storage command language for a couple of decades. A new storage language is on the horizon: Non-Volatile Memory Express, or NVMe. This command set is streamlined to communicate with flash and memory-based storage architectures and is much more efficient, with lower latency, than SCSI-based protocols.

If you are upgrading servers today and plan to leverage NVMe later, consider I/O adapters that will support the NVMe protocol. NVMe-ready adapters are available today, such as the HPE Ethernet 521T, 621SFP28 and 622FLR-SFP28 adapters or the HPE StoreFabric SN1100Q and SN1600Q Fibre Channel adapters. When upgrading the server infrastructure, think about storage connectivity and consider deploying an I/O technology that will support the customer's SCSI-based protocols today and NVMe tomorrow.

Summary – Intelligent I/O Matters

Intelligent I/O matters. There are plenty of capabilities in today's I/O technology that you should consider in your technology refresh projects. The good news is that HPE and Cavium provide a complete portfolio of I/O adapters that support most (if not all) of the features described above. Don't be afraid to ask about your customers' I/O needs; it is a great way to add value and protect your customers' investment in their technology refresh projects. If you need help understanding which HPE Ethernet and Fibre Channel adapters have which features, consult our I/O experts on the HPE Team here at Cavium. Their contact information can be found here.

 

Todd Owens
Marvell Field Marketing Manager
Intelligent I/O matters!