New to Virtual Connect

 
pnearn
New Member

New to Virtual Connect

Hi there. We're starting to look at blades to replace our existing DLx estate, and I'm interested in understanding a little more about Virtual Connect. Is the main point to effectively abstract network connections so that they can be shared across all blades in an enclosure (as opposed to using pass-through or switch blades)? For example, if we have a 3-blade ESX cluster which needs, say, 6 NICs (on separate VLANs), can we configure VC to pool those NICs so that when we add another blade they are automatically added via some sort of profile? Does this also mean we're not reliant on having 6 physical NICs on each blade, as it's managed at an enclosure level? Further, I assume that VC passes VLAN tags through to ESX to allow vSwitch management, and that the VC NICs are exposed as pNICs to ESX. Or am I way off base with my high-level understanding here?
10 REPLIES
UK-Blr
Frequent Advisor

New to Virtual Connect

Hi, you are exactly right. Virtual Connect (VC) modules act as shared links: all the blades within an enclosure share the uplink ports on the enclosure, although this depends on how you design the VC networking. You also need to consider the type of blade server you purchase and the mezzanine cards in the blades. Regarding VLANs with ESX, you can define the VLANs within the port trunk, so traffic for multiple VLANs will go through a shared uplink. These VLANs are then defined in a server profile, which is linked to a blade so it can carry the multiple-VLAN traffic. Thanks, Uday
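To make the shared-uplink/profile relationship described above concrete, here is a minimal Python sketch of the idea. The uplink set name, port names, VLAN IDs and profile name are invented for illustration; this is not HP's actual configuration syntax.

# Illustrative only: a rough model of a shared uplink set carrying several VLANs
# and a server profile whose connections reference those networks.

shared_uplink_set = {
    "name": "Prod-SUS",                       # hypothetical shared uplink set trunked to the external switch
    "uplink_ports": ["Bay1:X1", "Bay1:X2"],
    "vlans": {10: "Prod-VLAN10", 20: "Prod-VLAN20", 30: "Mgmt-VLAN30"},
}

server_profile = {
    "name": "ESX-Blade-01",
    "connections": [
        # each connection maps a blade NIC to one of the networks defined above
        {"nic": 1, "network": "Prod-VLAN10"},
        {"nic": 2, "network": "Prod-VLAN20"},
    ],
}

print(server_profile["connections"])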
pnearn
New Member

New to Virtual Connect

Thanks. So let's say an ESX blade (i.e. a BL460c in an ESX cluster within a c-Class enclosure) needs access to 6 NICs. Does VC allow me to bypass buying mezzanine cards for each blade and instead use two 1/10Gb Virtual Connect Ethernet interconnects (with 8 NICs), one to connect the internal blades to the enclosure and the second to connect the enclosure (VC) out to external switches? Is this right?
chopper3
Frequent Advisor

New to Virtual Connect

Yes it does. A 460 G6 has dual 10Gbps NICs, each of which can 'flex' to present as 4 NICs, for a total of 8 NICs without the need for mezzanine cards. With a further two dual-port Flex-10 mezzanine cards, a 460 G6 can support 24 NICs, and a 485 G6 can support 32!
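As a quick sanity check on those counts, a back-of-the-envelope Python sketch (purely illustrative):

# Flex-10 splits each 10Gb port into up to 4 FlexNICs.
FLEXNICS_PER_10GB_PORT = 4

onboard_ports = 2                 # dual 10Gb onboard (LOM) ports on the blade
mezz_ports = 2 * 2                # two dual-port Flex-10 mezzanine cards

print("Onboard only:", onboard_ports * FLEXNICS_PER_10GB_PORT)                          # 8
print("With two mezz cards:", (onboard_ports + mezz_ports) * FLEXNICS_PER_10GB_PORT)    # 24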
Adrian Clint
Honored Contributor

New to Virtual Connect

There are a couple of misunderstandings here. 1. You can't pool NICs and let a server pick from a pool. What VC really does is virtualize the MAC addresses and virtualize the connections from the NICs to the ports on the back of the VC modules. 2. You can get away without buying mezzanine cards for your extra NICs, but only with Flex-10, not 1/10Gb Virtual Connect. Flex-10 can turn the two onboard NICs into 8 NICs. Any other Virtual Connect option needs a physical NIC and a physical Virtual Connect module for each connection to a server.
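To illustrate the MAC-virtualization point, here is a rough Python sketch in which addresses come from a VC-managed pool and belong to the server profile rather than the physical blade, so a profile can move to another bay and keep its addresses. The pool prefix and profile name are made-up examples, not actual VC ranges.

from itertools import count

class MacPool:
    """Hypothetical VC-managed MAC address pool (illustrative only)."""
    def __init__(self, prefix="00-17-A4-77-00"):
        self.prefix = prefix
        self._suffix = count(1)

    def next_mac(self):
        return f"{self.prefix}-{next(self._suffix):02X}"

pool = MacPool()
profile = {"name": "ESX-Blade-01", "nic_macs": [pool.next_mac(), pool.next_mac()]}
print(profile)   # the blade's NICs present these profile-owned MACs, not their factory MACs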
pnearn
New Member

New to Virtual Connect

OK, so you would have to decide how many Flex-10 interconnect modules you need to make connections to external switches? Some would be on one VLAN only and some could be managed as trunk ports. You can declare these as shared uplink ports within VC such that they are available to all blades, and you could also duplicate them for resilience? Then you'd need another set of Flex-10 interconnects to connect the blades to the enclosure itself. Doing it this way allows each blade to present its 2 onboard NICs as 8 NICs without the need for additional mezzanine cards? Can you then use ESX to create vSwitches, with ESX seeing these shared uplink ports as pNICs? VC would present these pNICs to each blade and pass through (tunnel) any tagged VLAN traffic? In ESX you'd still have to go and create the IP addresses to bind to each virtual server vNIC as required. Also, do the BLx blades come with 2 x 10Gb onboard NICs as standard?
vcblades
Frequent Advisor

New to Virtual Connect

http://www.hp.com/education/currpath/hp-proliant-essentials.html
Adrian Clint
Honored Contributor

New to Virtual Connect

You would only need more than two Flex-10 modules if the maximum number of NICs you needed on an ESX server was more than 8. If different ESX servers connect to different VLANs, that's what Virtual Connect can manage for you: you create a connection profile and apply it to a specific server. Shared uplinks are uplinks; they have no relation to the NICs on the server, as the links to the servers are downlinks. The 10Gb NIC on a server is presented to Flex-10 as a 10Gb downlink. Flex-10 can then split this 10Gb link into 4 connections. Each of these connections can connect to what HP calls "a network" (like a vSwitch in VMware terms), and these networks then connect to uplinks to switches from the back of the blade chassis (either shared or not).

There is no need/requirement/possibility to connect the blades to the enclosure. The enclosure is a tube of metal and a nearly dumb backplane. NICs connect to Flex-10 or VC 1/10, HBAs connect to Virtual Connect SAN modules, and iLO management ports connect to Onboard Administrators. The Onboard Administrators need to connect to an external switch, and so do the Flex-10s.

G6 blades and, I think, most if not all of the G5 blades have 10Gb NICs which can split into 4 NICs. The other blades with 1Gb NICs will run at 2.5Gb when connected to a Flex-10, but you cannot split them down.
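Here is a minimal Python sketch of the chain described above: one 10Gb server downlink split into FlexNICs, each FlexNIC assigned to a VC "network", and each network carried out of the enclosure by an uplink (shared or dedicated). The bandwidth split, network names and uplink names are assumptions for illustration, not HP defaults.

# One 10Gb downlink divided among up to 4 FlexNICs; the split cannot exceed the link.
downlink_bandwidth_gb = 10
flexnics = [
    {"name": "LOM1:a", "network": "VMotion",     "gb": 2},
    {"name": "LOM1:b", "network": "Prod-VLAN10", "gb": 4},
    {"name": "LOM1:c", "network": "Prod-VLAN20", "gb": 3},
    {"name": "LOM1:d", "network": "Mgmt",        "gb": 1},
]
assert sum(n["gb"] for n in flexnics) <= downlink_bandwidth_gb

# Each VC network is in turn carried by an uplink out of the chassis (or kept internal).
uplinks = {"Prod-VLAN10": "Shared-Uplink-1", "Prod-VLAN20": "Shared-Uplink-1",
           "VMotion": "Internal-only", "Mgmt": "Uplink-X3"}

for n in flexnics:
    print(f'{n["name"]} -> network {n["network"]} -> uplink {uplinks[n["network"]]}')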
pnearn
New Member

New to Virtual Connect

Great, thanks. So if I have 2 Flex-10 interconnects, I have to stack them via, say, a 10Gb crossover cable. If these are Ethernet modules in bays 1 and 2, do I also have to connect these to the OA ports? Or is it the blades that get their iLO ports connected to the OA? Our ESX DL585 cluster currently has 10 NICs per host, some trunks (the 10 includes teaming for some connections). So for Flex-10 I can see us needing at least one interconnect, as we will still need to send those connections down to the blades, and a second mirrored for resilience. Where possible we shall try to aggregate the ports onto single Flex-10 NICs (via server profiles with vcNet aggregation), but we'll still want that teaming exposed at the blade server level, I guess, in case of physical NIC failure. Thus if we have, say, 2 x 10Gb NICs per blade server (i.e. 8 FlexNICs), then we need 2 NICs on the interconnect to downlink each blade, i.e. 3 blades = 6 interconnect NICs + the 10 we need to connect to external switches. So in our case, one VC Flex-10 module with 16 10Gb NICs would equal a 3 x ESX cluster. Am I close? [Updated on 6/19/2009 1:07 AM]
Neal Bowman
Respected Contributor

New to Virtual Connect

As others have stated, you only need additional VC/VC Flex-10 interconnects when you install mezzanine adapters in your blades. You state that your DL585 uses 10 NICs today. Are these for connecting to different VLANs/switches, or do you have that many for required bandwidth or redundancy? With VC, you can use VLAN Tunneling, so that any packet tagged with a VLAN ID is automatically passed through VC straight to the ESX vSwitches, where it is then sent to the correct guests. You can also use VLAN Mapping, where you must define each VLAN that is to be presented to each of the vSwitches. If a packet arrives from a VLAN that is not defined in Virtual Connect, it is dropped and is not sent through the interconnect modules. The number of blades installed in the chassis does not determine the number of interconnects installed. That is determined by the number of NICs present via the embedded system board and any mezzanine adapters. Crossover cables are not required for interconnects that are in adjacent bays (1 & 2, or 5 & 6), but stacking links between the different rows are required.
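A small Python sketch of the tunneling-versus-mapping behaviour described above; the VLAN IDs and mode names are illustrative assumptions, not Virtual Connect syntax.

# VLANs explicitly defined (mapped) in the VC domain, for the sake of the example.
DEFINED_VLANS = {10, 20, 30}

def forward_to_vswitch(vlan_id, mode="mapping"):
    """Return True if a tagged frame is passed through VC to the ESX vSwitch."""
    if mode == "tunneling":
        return True                    # tunneling: any tagged frame is passed straight through
    return vlan_id in DEFINED_VLANS    # mapping: frames on undefined VLANs are dropped at VC

print(forward_to_vswitch(40, mode="tunneling"))  # True  - tunnelled regardless of VLAN ID
print(forward_to_vswitch(40, mode="mapping"))    # False - dropped, never reaches the blade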