BladeSystem Server Blades

2GB memory required for 10/20GB adapters.


Keith had a customer question about using high-bandwidth adapters:




The customer is running on older blades… BL460c G6 with minimal memory – I think 12GB. They just added an additional workload and are working through some issues. They uncovered this note in the BL460c G6 QuickSpecs relating to 10Gb adapters:


NOTE: Each 10 Gigabit Ethernet adapter requires a minimum of 2GB of server memory.



When they run tools like ‘ps aux’ and ‘top’, they don’t see anything specifically for the adapter. Is that memory part of the RHEL kernel when the driver loads? Pretty sure it is; just want to confirm and get some context.
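One way to confirm this on the box: the NIC driver is a kernel module, so its allocations never show up under any PID in `ps` or `top`; they land in the kernel's slab accounting instead. A minimal sketch, assuming a Linux/RHEL system (the driver names grepped for below are just common 10GbE examples, not a statement about this customer's hardware):

```shell
# The NIC driver is a kernel module, so its memory never appears under a
# process in ps/top. Check whether a 10GbE driver module is loaded
# (bnx2/be2net/mlx4 are example driver names; lspci -k shows the real one):
lsmod | grep -E 'bnx2|be2net|mlx4' || true

# Kernel-side allocations, including network buffers, are rolled into the
# slab counters in /proc/meminfo rather than attributed to a process:
grep -E '^(Slab|SUnreclaim)' /proc/meminfo
```

`slabtop` gives a finer per-cache breakdown if you want to watch those numbers move while traffic is flowing.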




Reply from Richard:




Others may have a veto, but indeed, it is memory consumed in the kernel. The 10 Gbit NICs tend to have rx multiqueue enabled (multiple receive queues), and between that, enabling jumbo frames, and perhaps increasing the size of the receive rings (on those NICs where the ring size is tunable at all), you can have a fair bit of memory tied up in receive buffers posted for the NIC to DMA inbound traffic into.
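A back-of-envelope sketch of how that adds up. The numbers below are purely illustrative, not this customer's configuration; the real ring and queue counts come from `ethtool -g` and `ethtool -l` on the actual interface:

```shell
# Hypothetical values -- substitute what ethtool -g / ethtool -l report:
RX_RING=4096   # rx descriptors per queue
QUEUES=16      # rx queues (multiqueue)
BUF_KB=9       # ~9KB buffer per descriptor with jumbo frames (MTU 9000)

# Memory pinned in receive buffers for one port:
TOTAL_MB=$(( RX_RING * QUEUES * BUF_KB / 1024 ))
echo "approx ${TOTAL_MB} MB of receive buffers per port"   # → approx 576 MB
```

With two ports per adapter, numbers like that make the 2GB-per-adapter guidance in the QuickSpecs look a lot less arbitrary.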


Whether adding a 10Gb NIC to an existing system will increase performance will depend of course on whether or not the 1 Gbit NIC was actually a bottleneck... In the broadest handwaving terms, the CPU cycles to send/receive a frame over a 10 GbE network are no different than over a 1 GbE network, so if the 1 GbE link itself wasn't saturated, odds are quite good there won't be a huge improvement. But getting the multiqueue support going can help spread that per-packet CPU cost across cores - assuming the traffic isn't a single stream/flow of small packets...
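To see whether multiqueue is actually in play, each rx/tx queue normally gets its own interrupt vector, so it shows up as multiple lines in `/proc/interrupts` with counts spread across CPUs. A sketch, assuming Linux and using `eth0` as a placeholder interface name:

```shell
# One line per queue vector if multiqueue is active; counts spread across
# CPU columns mean the load is actually being distributed (eth0 is a
# placeholder -- use the real interface name):
grep -i eth0 /proc/interrupts || true

# Queue counts the driver exposes (run manually; needs ethtool and the NIC):
#   ethtool -l eth0
```

If all the interrupt counts pile up in a single CPU column, the traffic is likely a single flow and the extra queues aren't helping.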




Other comments?