07-20-2012 06:02 AM
help
Tags: NIC
07-20-2012 02:45 PM
Re: help
FC:
The maximum number of blade-to-I/O-module connections is achieved with two dual-port FC mezzanine cards per blade, giving you 2 * 2 = 4 FC ports in each blade (64 ports across a fully populated enclosure of 16 half-height blades). There are both 8 Gb and 4 Gb FC mezzanine card models: see below.
If you populate I/O module slots 3, 4, 5 and 6 with 4 Gb FC pass-thru modules, each blade FC port gets its own dedicated physical uplink port, giving you 64 * 4 Gb ports (and the job of carefully routing and plugging in a lot of fiber-optic cables in a small space without obstructing the enclosure's cooling airflow). That gives you 4 * 4 Gb of guaranteed, dedicated physical bandwidth per blade.
Alternatively, you can use 8 Gb fabric switches as your I/O modules, giving you a maximum of 4 * 8 = 32 uplink ports at 8 Gb each. That is the same total bandwidth (64 * 4 Gb vs 32 * 8 Gb), but with switches you will have fewer physical cables, and if not all blades are producing a full I/O load simultaneously, individual blades can reach higher peak I/O bandwidths (up to 4 * 8 Gb for a single blade, as long as there is enough uplink capacity).
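Here's a quick back-of-the-envelope sketch of those two FC options (a rough Python calculation, assuming a fully populated enclosure with 16 half-height blades and the per-module port counts mentioned above, so treat it as illustrative rather than authoritative):

# FC connectivity sketch: pass-thru vs. switch option (figures from this post; 16 blades assumed)
blades = 16
fc_ports_per_blade = 2 * 2                    # two dual-port FC mezzanine cards per blade

# Option A: 4 Gb FC pass-thru modules in I/O slots 3-6 (one dedicated uplink per blade port)
passthru_ports = blades * fc_ports_per_blade  # 64 physical uplinks
passthru_total_gb = passthru_ports * 4        # 256 Gb aggregate

# Option B: 8 Gb FC switch modules in I/O slots 3-6 (8 external ports each)
switch_ports = 4 * 8                          # 32 uplinks
switch_total_gb = switch_ports * 8            # 256 Gb aggregate

print(f"Pass-thru: {passthru_ports} x 4 Gb = {passthru_total_gb} Gb aggregate")
print(f"Switches:  {switch_ports} x 8 Gb = {switch_total_gb} Gb aggregate")

Same aggregate bandwidth either way; the switch option just concentrates it into fewer, faster uplinks that the blades share.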
Infiniband: (Disclaimer: I haven't used Infiniband; this is just from reading the docs.)
The 40 Gbps QDR Infiniband switches (QLogic) can only go into I/O module slots 5-8, so only one dual-port IB mezzanine card per blade can be used. Each Infiniband I/O module is double-wide (it takes two slots) and has 16 internal and 18 external ports. This means a total of 32 Infiniband uplinks from your enclosure, each rated at a maximum of 40 Gbps.
There is also a newer Mellanox-based set of Infiniband modules, which can provide 56 Gbps connectivity, although the maximum number of 56 Gbps ports per enclosure is lower (18 uplink ports at 56 Gbps). Apparently only mezzanine slot 1 in each blade can handle the 56 Gbps speed, so this Infiniband I/O module must go into I/O module slots 3 & 4.
You cannot mix the two Infiniband sets: you must choose either the QLogic or the Mellanox version.
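For a rough comparison of the aggregate uplink bandwidth of the two Infiniband options, using only the port counts quoted above (I haven't checked these against the QuickSpecs, so the numbers are illustrative):

# Infiniband aggregate uplink bandwidth, using the figures from this post
qdr_uplinks, qdr_gbps = 32, 40    # QLogic QDR option: 32 uplinks at 40 Gbps
fdr_uplinks, fdr_gbps = 18, 56    # Mellanox option: 18 uplinks at 56 Gbps

print(f"QLogic QDR: {qdr_uplinks} x {qdr_gbps} Gbps = {qdr_uplinks * qdr_gbps} Gbps aggregate")
print(f"Mellanox:   {fdr_uplinks} x {fdr_gbps} Gbps = {fdr_uplinks * fdr_gbps} Gbps aggregate")

So the QDR option offers more total uplink bandwidth out of the enclosure, while the Mellanox option offers more bandwidth per port.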
iSCSI:
iSCSI is essentially SCSI over TCP/IP (typically on Ethernet), so each NIC can also serve as an iSCSI interface. Some NICs have hardware-level iSCSI support, which lets the system boot from iSCSI storage and may give some performance benefit, but since you asked for the maximum number of connections, I'm going to assume that you're using every NIC for iSCSI traffic.
For the maximum number of physical connections, you could plug a quad-port NIC mezzanine card into each mezzanine slot. That way you would get 2 x 10 Gb (integrated NICs) + 8 x 1 Gb (mezzanine cards) connections in each blade. If you populate I/O module slots 1 & 2 with 10 Gb pass-thru modules and the rest with 1 Gb pass-thru modules, you'll get a total of 32 x 10 Gb + 96 x 1 Gb uplinks.
(That is 128 network cables in total: you would have to be very, very neat and methodical in your cable routing, or this will definitely become a huge mess and interfere with the cooling airflow.)
If you want all 10 Gb NICs, there are no quad-port 10 Gb mezzanine cards, only dual-port ones. That works out to 96 x 10 Gb network ports and gives the maximum bandwidth per blade: 6 x 10 Gb each.
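The same kind of sketch for the two pass-thru layouts above (again a rough Python calculation assuming 16 half-height blades; the port counts are the ones quoted in this post):

# iSCSI/Ethernet uplink sketch (figures from this post; 16 half-height blades assumed)
blades = 16

# Option A: integrated 10 Gb NICs + quad-port 1 Gb mezzanine cards, all pass-thru modules
ten_gb_uplinks = blades * 2          # 10 Gb pass-thru in I/O slots 1 & 2 -> 32 ports
one_gb_uplinks = 6 * 16              # 1 Gb pass-thru in I/O slots 3-8 -> 96 ports
cables = ten_gb_uplinks + one_gb_uplinks                   # 128 cables in total
mixed_total_gb = ten_gb_uplinks * 10 + one_gb_uplinks * 1  # 416 Gb aggregate

# Option B: dual-port 10 Gb mezzanine cards only (6 x 10 Gb per blade)
all10_uplinks = blades * 6           # 96 ports
all10_total_gb = all10_uplinks * 10  # 960 Gb aggregate

print(f"Mixed:   {ten_gb_uplinks} x 10 Gb + {one_gb_uplinks} x 1 Gb = {mixed_total_gb} Gb over {cables} cables")
print(f"All-10G: {all10_uplinks} x 10 Gb = {all10_total_gb} Gb over {all10_uplinks} cables")

So the mixed layout maximizes the number of connections, while the all-10 Gb layout has fewer ports but more than twice the aggregate bandwidth.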
You could replace the pass-thru I/O modules with switches, but that would reduce the number of uplinks, limiting the overall bandwidth. And you asked for the maximum number of connections...
07-25-2012 07:11 AM
Re: help
Thank you for the quick response, and sorry for the delay in my feedback. Thank you again for your help; I can't put into words how useful this information was.
Grateful.