BladeSystem - General
Confusion on the network interface aspect of blade servers?

Cajuntank MS
Valued Contributor

Confusion on the network interface aspect of blade servers?

New to blade servers so I bought a C3000 chassis and a couple of bl460 servers. With the chassis came a GbE2c layer 2/3 switch. I know each server shows 2 10Gb NICs available, but how do you map or correspond those NICs to the GbE2c switch?
5 REPLIES
gregersenj
HPE Pro

Re: Confusion on the network interface aspect of blade servers?

This will also be helpful to you:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01508406/c01508406.pdf

The C7000 is designed for high availability.
Port 1 on the embedded NIC (LOM) is routed to Interconnect Bay (IB) 1.
Port 2 on the LOM is routed to IB 2.
That applies to the half-height blades.
The full-height servers have 4 LOMs, so they get 2 connections to each IB.

The C3000 is essentially a C7000 cut in half.
You get the same number of LOMs on the servers, and the same number of ports in the switches, so 2 (HH) / 4 (FH) connections to each interconnect.

Port mapping is the same for all switch models.

Here's a link to the tech brief for the C7000 as well:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00816246/c00816246.pdf

By the way, HP has a lot of technology briefs on the web.
Try googling whatever hardware you want to know about, followed by "technology brief". Don't be too specific.

BR
/jag
The Brit
Honored Contributor

Re: Confusion on the network interface aspect of blade servers?

Hi and Welcome,

You need to be a little careful, since the port mappings on a c3000 are NOT quite the same as on a c7000.

For example, in a c7000, the blade server onboard NICs (LOMs) are mapped to interconnect bays 1 and 2 (odd-numbered NICs to bay 1, even-numbered to bay 2). On a c3000, the LOMs are ALL mapped to Interconnect Bay 1. From this it can be seen that Interconnect Bay 1 MUST contain an Ethernet-capable module of some type.
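The mapping rules above can be sketched in a few lines of code. This is just an illustrative toy (not an HP tool), assuming 1-based LOM port numbering as used in this thread:

```python
def lom_to_interconnect_bay(chassis: str, lom_port: int) -> int:
    """Return the interconnect bay an onboard NIC (LOM) port is
    hard-wired to, per the mapping described in this thread.

    c7000: odd-numbered LOM ports go to bay 1, even-numbered to bay 2.
    c3000: ALL LOM ports are wired to interconnect bay 1.
    """
    if chassis == "c3000":
        return 1
    if chassis == "c7000":
        return 1 if lom_port % 2 == 1 else 2
    raise ValueError(f"unknown chassis: {chassis}")
```

So on a c7000, LOM 1 lands in bay 1 and LOM 2 in bay 2, while on a c3000 both land in bay 1 — which is why bay 1 must hold an Ethernet module and why redundancy needs a mezzanine card.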

In order to get network redundancy (i.e. NIC teaming), you will need to put a second (preferably matching) Ethernet module in Interconnect Bay 2, and an Ethernet mezzanine card in Mezz Slot 1 on all of your blade servers.

Note also, I don't think the GbE2c switch is capable of 10Gb communication, even though your blade NICs are.

The downlinks between the blade and the interconnect bay are hard-wired, and you really don't need to worry too much about them. However, take a look at

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01508406/c01508406.pdf

Figure 11.

Dave.
Cajuntank MS
Valued Contributor

Re: Confusion on the network interface aspect of blade servers?

Ok, I think I am getting a little clearer on what's going on with the port mappings. As a follow-up question, if I have my thinking correct: I have created my two VLANs on this GbE2c switch (one for servers and one for iSCSI). I have assigned the ports to one of those VLANs and am now looking to trunk that switch connection over to my top-of-rack switch. So would it be better to create one 4-port trunk that carries both "server" and "iSCSI" traffic, or to create two 2-port trunks, one for "servers" and the other for "iSCSI"?
I know this might be closer to a networking question than a blade server question, so if I need to post on the networking side, let me know.
Cajuntank MS
Valued Contributor

Re: Confusion on the network interface aspect of blade servers?

Ok, things are a little clearer. I am creating one 4-port trunk for my connection back to my main switch. I will use my pass-through interconnect for my iSCSI connections, as well as investing in some mezzanine cards for each of my servers for those iSCSI connections. If anyone sees something I should do differently, please chime in.
Thanks.
Proliant VMS San Mgrs
Frequent Advisor

Re: Confusion on the network interface aspect of blade servers?

Disclaimer: I don't have a c3000, and as "The Brit" has already pointed out, those are differently mapped than c7000.

To get a clearer picture, log in to the OnBoard Administrator. In the pane on the left, expand interconnect bays; select an interconnect module, and expand to Ports. You will see how each blade is mapped to that interconnect module.

In the pane on the left, expand device bays, select a blade, expand to Ports. You will see how each LOM (and Mezzanine adapters, if installed) is mapped to an interconnect module.

Hope this helps. Virtual Connect can be interesting. Make sure you enable NPIV on each fabric switch port connected to your VCFC.