BladeSystem - General

Number of NICs available to a blade in a BladeSystem c7000

Glen Collins
Occasional Visitor

Number of NICs available to a blade in a BladeSystem c7000

Hello all. I have a question that I can't seem to find the information on. I have a BladeSystem c7000 with 16 blades. What is the maximum number of NICs available to a single blade? I know from reading that it's all virtual, and I've been told I can only have 4. The enclosure has a total of 32 physical network ports, so that would be 4 network interconnect modules with 8 Gigabit ports each. And we are only using 2 ports on each for the trunks. Any info would be helpful!

Thanks!
4 REPLIES
The Brit
Honored Contributor

Re: Number of NICs available to a blade in a BladeSystem c7000

Hi Glen,
The number of NICs/Server depends on the server.

Basically, half-height servers have two integrated NICs (LOMs) and full-height blades have four.
Half-height servers also have two additional "Mezzanine" slots where extra NIC cards can be installed; full-height blades have three Mezzanine slots.

Although there are a total of 32 "physical" network ports on the blades, there are only two physical paths from each server bay: one path goes to Interconnect Bay 1 and the other to Interconnect Bay 2. (For full-height blades there are four paths.)

On each half-height blade, LOM1 is hardwired to IC Bay 1 and LOM2 is hardwired to IC Bay 2. (On full-height blades, LOM3 also goes to IC Bay 1 and LOM4 goes to IC Bay 2.)
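The hardwired rule above (odd-numbered LOMs to IC Bay 1, even-numbered to IC Bay 2) can be sketched as a tiny Python helper. This is just an illustration of the mapping Dave describes, not an HPE tool; always confirm against the official port-mapping diagrams.

```python
def lom_to_ic_bay(lom_number, full_height=False):
    """Return the interconnect bay a given LOM is hardwired to on a c7000.

    Half-height blades have 2 LOMs, full-height blades have 4 (per the
    thread above). Odd-numbered LOMs go to IC Bay 1, even-numbered to Bay 2.
    """
    max_loms = 4 if full_height else 2
    if not 1 <= lom_number <= max_loms:
        raise ValueError("LOM%d does not exist on this form factor" % lom_number)
    return 1 if lom_number % 2 else 2

# Half-height: LOM1 -> Bay 1, LOM2 -> Bay 2
print(lom_to_ic_bay(1), lom_to_ic_bay(2))
# Full-height adds LOM3 -> Bay 1, LOM4 -> Bay 2
print(lom_to_ic_bay(3, full_height=True), lom_to_ic_bay(4, full_height=True))
```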

If you check the Port Mapping diagrams for full- and half-height blades you will get the idea. (Note: the mappings are slightly different between c3000 and c7000 enclosures.)

HTH

Dave.
Glen Collins
Occasional Visitor

Re: Number of NICs available to a blade in a BladeSystem c7000

Thanks, Dave, for the reply. I have a design question, then, if you can answer it.

I am running a VCS cluster in two blades and I'm using 2 NICs for the private network and 2 NICs for the public. I want to be able to get an additional NIC for backups so I'm not going through my public NICs. Can I even do this? I'm using fibre for backups right now (not in the blade enclosure), but since I cannot do that in the new enclosure, I'm stuck with going over the network.

I'm beginning to wonder now if this will be possible!

Thanks again for the response, Dave!
Steven Clementi
Honored Contributor

Re: Number of NICs available to a blade in a BladeSystem c7000

Glen:

You have a variety of options for additional NICs.

It would be helpful to know which model of blade servers you have, as well as which network interconnects (and which bays they are in).


"I am running a VCS cluster in two blades and I'm using 2 NICs for the private and two NICs for the public."

Are you counting the NICs for both servers, or 4 NICs per blade? (Just trying to understand the environment without knowing the answers to the above request.)

"I want to be able to get an additional NIC for backups so I'm not going through my public NICs. Can I even do this?"

It is likely that you can, though there may be some investment involved.


"I'm using fiber right now for backups(not in the bladecenter) but since I cannot do that in the new bladecenter then I'm stuck with going n the network."

The blade servers and enclosure have Fibre Channel options.

"I'm beginning to wonder now if this will be possible!"

ANYTHING is possible... (almost).


Steven
Steven Clementi
HP Master ASE, Storage and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5)
RHCE
NPP3 (Nutanix Platform Professional)
The Brit
Honored Contributor

Re: Number of NICs available to a blade in a BladeSystem c7000

Glen,
As Steven pointed out above, it would really help to know whether you are using full-height or half-height blades.

In order to get redundancy and failover, it is normal practice to set up the LOMs in teamed pairs. (This is basically why the hard paths from odd-numbered LOMs go to IC Bay 1 and even-numbered LOMs go to IC Bay 2.) So in general, half-height blades provide one (1) teamed pair (LOM1/LOM2), and full-height blades provide two (2) teamed pairs (LOM1/LOM2 and LOM3/LOM4). This provides both redundancy and failover in the event that you lose the Ethernet module in either IC Bay 1 or IC Bay 2.

Additional NIC cards are available (for installation in the Mezz slots). The additional cards always carry an even number of ports, and the mappings to interconnect bays depend on which Mezz slot you use and which type of card you install (2-port or 4-port). The port mappings are ALWAYS such that the odd-numbered ports map to the odd-numbered IC bay and the even-numbered ports map to the even-numbered IC bay.
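That odd/even rule can be sketched as follows. Note this is illustrative only: which *pair* of interconnect bays a given Mezz slot maps to depends on the slot and card type, so the bay pair here is a parameter you must look up in the c7000 port-mapping diagrams, not something this sketch derives.

```python
def mezz_port_to_ic_bay(port, bay_pair):
    """Map a mezzanine card port to an interconnect bay.

    bay_pair is the (odd_bay, even_bay) horizontal pair of IC bays wired to
    this Mezz slot -- taken from the enclosure's port-mapping diagram.
    Odd-numbered ports land on the odd bay, even-numbered ports on the even bay.
    """
    odd_bay, even_bay = bay_pair
    return odd_bay if port % 2 else even_bay

# Example: a 2-port card whose slot happens to map to IC Bays 3 and 4
# (hypothetical pair -- check your enclosure's diagram):
print(mezz_port_to_ic_bay(1, (3, 4)))  # port 1 -> odd bay
print(mezz_port_to_ic_bay(2, (3, 4)))  # port 2 -> even bay
```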

This is why it is recommended that IC modules always be installed in horizontal pairs. If they are not installed as a pair, then you will lose half of the ports on the installed Mezz card.

So to return to the original discussion: to get additional (teamed) NICs, a 2-port Mezz card will get you 1 additional (redundant) teamed NIC (with failover), or 2 NICs if you don't team (not recommended). A 4-port Mezz card will get you 2 teamed NICs (or 4 unteamed, etc.).
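The arithmetic above is simple but worth making explicit: each teamed NIC consumes one odd-bay port and one even-bay port, so a Mezz card yields half its port count in teamed pairs. A minimal sketch:

```python
def teamed_nics_from_mezz(mezz_ports):
    """Number of redundant teamed NICs a Mezz card provides.

    Each team pairs one port wired to the odd IC bay with one wired to the
    even IC bay, so a card with N ports yields N // 2 teamed NICs.
    """
    return mezz_ports // 2

print(teamed_nics_from_mezz(2))  # 2-port Mezz card -> 1 teamed NIC
print(teamed_nics_from_mezz(4))  # 4-port Mezz card -> 2 teamed NICs
```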

Couple of notes:
Regarding teaming: you could depend on VC module failover (i.e. during firmware upgrades, etc.); however, this takes ~30 seconds, and most Windows/Linux-type clusters can't deal with that.
Second: beware, there are two types of Mezz cards, and not all Mezz slots will accept all Mezz cards.
Finally (for now), it is imperative that you understand the mapping diagrams. (I have been running this stuff for several years and I still constantly refer to these maps.) In particular, interconnect modules must be installed so that they connect to the appropriate Mezz cards, i.e. Ethernet -> Ethernet and FC -> FC.

Anyway,

HTH

Dave