
help

 
wangler
New Member

help

Good day.
Could you tell me the maximum number of connections (FC / InfiniBand / iSCSI / etc.) for a c7000 enclosure with 16 BL460c Gen8 blades?

 

2 REPLIES
Matti_Kurkela
Honored Contributor

Re: help

FC:

The maximum number of blade-to-I/O-module connections can be realized with two dual-port FC mezzanine cards per blade, giving you a total of 2 * 2 = 4 FC ports in each blade (for a total of 16 * 4 = 64 ports in the enclosure). There are both 8 Gb and 4 Gb FC mezzanine card models: see below.

 

If you populate the I/O module slots 3, 4, 5 and 6 with 4 Gb FC pass-thru modules, each blade connection comes out as a physical port, giving you 64 * 4 Gb ports (and a job of carefully routing and plugging in a lot of fiber-optic cables in a small space without obstructing the cooling airflow of the enclosure). That would give you 4 * 4 Gb of guaranteed, dedicated physical bandwidth per blade.

 

Alternatively, you can use 8 Gb fabric switches as your I/O modules, giving you a maximum of 4 * 8 = 32 uplink ports at 8 Gb each. This results in the same total bandwidth (64 * 4 Gb vs. 32 * 8 Gb, i.e. 256 Gb either way), but with switches you will have far fewer physical cables, and if all blades are not producing a full I/O load simultaneously, individual blades can reach higher peak I/O bandwidths (up to 4 * 8 Gb for a single blade, as long as there is enough uplink capacity).
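
If it helps to see the arithmetic in one place, here is a rough back-of-envelope comparison of the two FC options (plain Python, just restating the port counts quoted above; double-check the actual counts against the QuickSpecs of the modules you order):

# FC totals for a c7000 with 16 x BL460c Gen8, using the numbers above
blades = 16
fc_ports_per_blade = 2 * 2                    # two dual-port FC mezzanine cards

# Option 1: 4 Gb pass-thru modules in I/O module slots 3-6
passthru_ports = blades * fc_ports_per_blade  # 64 physical uplinks
passthru_bw = passthru_ports * 4              # 256 Gb aggregate

# Option 2: four 8 Gb fabric switches, 8 uplinks each
switch_uplinks = 4 * 8                        # 32 uplinks
switch_bw = switch_uplinks * 8                # 256 Gb aggregate

print(f"Pass-thru: {passthru_ports} x 4 Gb = {passthru_bw} Gb total")
print(f"Switches : {switch_uplinks} x 8 Gb = {switch_bw} Gb total")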

 

 

InfiniBand: (Disclaimer: I haven't used InfiniBand, this is just from reading the docs.)

 

The 40 Gbps QDR InfiniBand switches (QLogic) can only go into I/O module slots 5-8, so only one dual-port IB mezzanine card per blade can be used. Each InfiniBand I/O module is double-wide (it takes two slots) and has 16 internal and 18 external ports. This means a total of 32 blade-side InfiniBand connections (16 blades * 2 ports) and 2 * 18 = 36 external uplink ports on your enclosure, each rated at up to 40 Gbps.

 

There is also a newer Mellanox-based set of InfiniBand modules, which can provide 56 Gbps connectivity, although the maximum number of 56 Gbps ports per enclosure is lower (18 uplink ports at 56 Gbps). Apparently only mezzanine slot 1 in each blade can handle the 56 Gbps speed, so this InfiniBand I/O module must go into I/O module slots 3 & 4.

 

You cannot mix the two InfiniBand sets: you must choose either the QLogic or the Mellanox version.
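
Again, just to make the arithmetic explicit, a small sketch comparing the two InfiniBand options using the port counts quoted above (I have not verified these against the switch documentation, so treat the counts as assumptions):

# InfiniBand options for 16 blades, using the counts quoted in this thread
blades = 16

# QLogic QDR: two double-wide switches in slots 5-8,
# each with 16 internal (blade-facing) and 18 external ports
qdr_blade_ports = blades * 2        # one dual-port IB mezzanine card per blade = 32
qdr_uplinks = 2 * 18                # 36 external ports
qdr_uplink_bw = qdr_uplinks * 40    # Gbps, aggregate uplink

# Mellanox FDR: module in slots 3 & 4, 18 uplinks at 56 Gbps
fdr_uplinks = 18
fdr_uplink_bw = fdr_uplinks * 56    # Gbps, aggregate uplink

print(f"QDR: {qdr_blade_ports} blade ports, {qdr_uplinks} uplinks, {qdr_uplink_bw} Gbps uplink total")
print(f"FDR: {fdr_uplinks} uplinks, {fdr_uplink_bw} Gbps uplink total")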

 

 

iSCSI:

iSCSI is essentially SCSI over TCP/IP (in practice, over Ethernet), so every NIC can also be an iSCSI interface. Some NICs have hardware-level iSCSI support, which allows the system to boot from iSCSI storage and may improve performance somewhat, but since you asked for the maximum number of connections, I'm going to assume that you're using every NIC for iSCSI traffic.

 

For the maximum number of physical connections, you could plug a 4-port 1 Gb NIC mezzanine card into each mezzanine slot. This way, you would get 2 x 10 Gb (integrated NICs) + 8 x 1 Gb (mezzanine cards) ports in each blade. If you populate I/O module slots 1 & 2 with 10 Gb pass-thru modules and the rest with 1 Gb pass-thru modules, you'll get a total of 32 x 10 Gb + 96 x 1 Gb uplinks. (Mezzanine slot 1 only routes two of its four ports to I/O module slots 3 & 4, so 6 of the 8 mezzanine ports in each blade can actually reach a pass-thru uplink: 16 * 6 = 96.)

(A total of 128 network cables: you would have to be very, very neat and methodical in your cable routing, or else this will definitely become a huge mess and interfere with the cooling airflow.)
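
The same kind of back-of-envelope count for this mixed 10 Gb / 1 Gb configuration (again just restating the numbers above; the per-blade split assumes the mezzanine-to-slot mapping described in the previous paragraph):

# Mixed configuration: 10 Gb pass-thru in slots 1 & 2, 1 Gb pass-thru in slots 3-8
blades = 16
ports_10g_per_blade = 2           # integrated 10 Gb NICs -> slots 1 & 2
ports_1g_per_blade = 2 + 4        # mezz 1 (2 of 4 ports usable) + mezz 2 (4 ports)

uplinks_10g = blades * ports_10g_per_blade  # 32 x 10 Gb
uplinks_1g = blades * ports_1g_per_blade    # 96 x 1 Gb
cables = uplinks_10g + uplinks_1g           # 128 cables to route

print(f"{uplinks_10g} x 10 Gb + {uplinks_1g} x 1 Gb = {cables} cables")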

 

If you want all 10 Gb NICs: there are no quad-port 10 Gb mezzanine cards, only dual-port ones. This works out to 96 x 10 Gb network ports in total, and it achieves the maximum bandwidth per blade: 6 x 10 Gb for each blade.
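
And the all-10 Gb alternative, for comparison (dual-port cards only, as noted above):

# All-10 Gb configuration: integrated NICs plus a dual-port 10 Gb card in each mezz slot
blades = 16
ports_per_blade = 2 + 2 + 2               # integrated + mezz 1 + mezz 2
total_ports = blades * ports_per_blade    # 96 x 10 Gb
bw_per_blade = ports_per_blade * 10       # 60 Gb per blade

print(f"{total_ports} x 10 Gb ports, {bw_per_blade} Gb per blade")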

 

You could replace the pass-thru I/O modules with switches, but that would reduce the number of uplinks, limiting the overall bandwidth. And you asked for the maximum number of connections...

MK
wanglervinicius
New Member

Re: help

Hi,

Thank you for the quick response, and sorry for the delay in feedback. Once again, thank you for your help; I have no words to describe how useful this information was.

Grateful.