BladeSystem Virtual Connect
Confusion : Relation between # pNICs vs # Interconnect Bays Vmware Implementation

artiman
Occasional Advisor

Hi, I'm working on a big VMware implementation and I don't have experience with the Virtual Connect modules; I'm working on the design.

Each BL460p has 8 NICs:
- 2 integrated NICs
- 1 dual-port NC373m
- 1 quad-port NC325m

VMware network configuration:
- 2 for SC/VMotion
- 2 for iSCSI traffic
- 4 for virtual machine traffic

The c7000 chassis interconnect bays have been populated with 8 Virtual Connect modules. My understanding is that there is a one-to-one relationship between each pNIC and one of the interconnect bays, in other words:

vmnic0 --> Virtual Connect switch in module 1
vmnic1 --> Virtual Connect switch in module 2
vmnic2 --> Virtual Connect switch in module 3
vmnic3 --> Virtual Connect switch in module 4
vmnic4 --> Virtual Connect switch in module 5
vmnic5 --> Virtual Connect switch in module 6
vmnic6 --> Virtual Connect switch in module 7
vmnic7 --> Virtual Connect switch in module 8

Is this correct? Does the number of interconnect modules have to be >= the number of pNICs per blade? Or, even if the blade has 8 NICs, can I start with only 4 Virtual Connect modules (2 for iSCSI traffic, 2 for SC/VMotion and VM traffic)? Based on what I read, if I stack all 8 Virtual Connect switches, can I connect any pNIC of any blade to any Virtual Connect switch?

Thanks for your help. Like I said, I've been reading a lot but I'm still new to the Virtual Connect technology.

Thanks,
Artiman
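The 1:1 wiring assumed above can be written down as a tiny lookup table. This is only a sketch of the assumption in this post (the vmnic names and bay numbering here are illustrative; the actual layout is in the c7000 user guide referenced in the reply below):

```python
# Hypothetical sketch: the assumed 1:1 mapping between a half-height
# blade's pNICs (as ESX enumerates them) and c7000 interconnect bays.
VMNIC_TO_BAY = {f"vmnic{i}": i + 1 for i in range(8)}

def bay_for(vmnic):
    """Return the interconnect bay a given vmnic is hard-wired to."""
    return VMNIC_TO_BAY[vmnic]

print(bay_for("vmnic0"))  # bay 1
print(bay_for("vmnic7"))  # bay 8
```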
9 REPLIES
David Billot
Frequent Advisor

Hello Artiman,

There is a picture of how the NICs line up to the interconnect bays for a half-height server on page 46 of the c7000 user guide: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698286/c00698286.pdf

What you stated about each NIC having a 1:1 relationship with an interconnect bay is correct.

Q: Does the number of interconnect modules have to be >= the number of pNICs per blade?
A: Yes, for the half-height server, but only if you need to use each NIC. Think of it this way: if a NIC doesn't have an interconnect installed, that is the same as not having a network cable plugged into a rack-mounted server's NIC port.

Q: Even if the blade has 8 NICs, can I start with only 4 Virtual Connect modules (2 for iSCSI traffic, 2 for SC/VMotion and VM traffic)?
A: Sure. There is no requirement that a NIC must have an interconnect module installed unless you need to use that NIC.

Q: Based on what I read, if I stack all 8 Virtual Connect switches, can I connect any pNIC of any blade to any Virtual Connect switch?
A: No. You cannot change the NIC-to-interconnect mapping; that is predetermined and hard-wired within our enclosure midplane. What you can do, however, is have the traffic for a NIC exit out of any of the Virtual Connect modules, no matter which interconnect module it initially connects to. That is what the stacking does for you.

To clarify, let's take your 4 NICs that are going to support your virtual machine traffic. You don't need multiple uplinks to your data center network from each of the 4 VC-Enet modules that those NICs connect to. Let's say you are using vmNICs 5 thru 8 to support the VM traffic, and let's say you need to provide at least two 4Gb port trunks to support the aggregated traffic from all of the blades (two for redundancy). You'd have a couple of ways to design this; I'll give you two.

You could create a vNet called vmPROD that connects to 6 uplink ports in interconnect bay 7 and 6 uplink ports in interconnect bay 8. That's 2 groups of ports containing 6 ports each (6Gb per group). On the uplink switches you would configure the ports you'd be connecting in an LACP port channel (1 port channel with 6 interface ports per switch in this case). Now you have 2 port trunks, each with 6Gb of bandwidth. One port trunk will be active and one will be in standby mode (this is managed automatically within the Virtual Connect Manager). For this example, let us assume that the active port trunk is on interconnect bay 7.

Next you would assign the vNet vmPROD to each of the 4 NICs via a VC Server Profile. Now when traffic goes through vmNIC 5, that traffic will flow across a stacking link to get from interconnect bay 5 to interconnect bay 7. For vmNIC 6, traffic would flow across the fastest determined path to get to interconnect bay 7 (it could go down to 8 and then over to 7, or over to 5 and then down to 7). For vmNIC 7, it would simply enter interconnect bay 7 and exit the same bay. For vmNIC 8, it would simply flow across from 8 to 7.

Another popular way of doing this same design is to simply substitute the 12 1Gb ports with 2 10Gb ports. You get the same traffic flow but an even greater amount of bandwidth, using only two cables instead of 12.

Hope that helps. In the meantime, if you haven't already, be sure to download the Virtual Connect Ethernet Cookbook found right here on this same community site, as well as the Virtual Connect 201 Lab Guide. Both have sections dedicated to VMware solutions for Virtual Connect.

Thanks,
Dave
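The active/standby trunk choice described above can be sketched as pure logic. This is a hypothetical helper, assuming only what this thread states: VC keeps one trunk active per vNet and prefers the trunk with the greater aggregate bandwidth (ties are resolved internally by VC):

```python
# Hypothetical sketch of the active/standby trunk choice: Virtual Connect
# keeps one uplink port trunk active per vNet; the trunk with the greater
# aggregate bandwidth is preferred (ties are resolved internally by VC).
def pick_active(trunks):
    """trunks: name -> list of per-port bandwidths in Gb; returns active trunk."""
    return max(trunks, key=lambda name: sum(trunks[name]))

# The vmPROD example: two groups of six 1Gb uplinks (bays 7 and 8)...
vmprod = {"bay7": [1] * 6, "bay8": [1] * 6}
print(sum(vmprod["bay7"]))  # 6 (Gb per group)

# Giving one trunk an extra port makes it the deterministic winner:
print(pick_active({"bay7": [1] * 7, "bay8": [1] * 6}))  # bay7
```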
artiman
Occasional Advisor

Dave,

First of all, I really appreciate the time you spent creating such an elaborate answer. It helped me a lot to understand how the HP Virtual Connect switches work, and I feel much better now about my design. I already read the Lab Guides and the Cookbook, and there is definitely a lot of good stuff there. I do have the following extra questions:

1. Another advantage of stacking the Virtual Connect switches would be that all network traffic between the 16 blades stays local within the chassis and never touches the uplink ports, right? (BTW, we are using Cisco 3750 switches as distribution switches.)

2. The ESX farm is going to be 48 blades (3 x c7000 enclosures, fully populated). Do you know if it is possible to stack Virtual Connect switches that belong to different enclosures?

3. Any special considerations for iSCSI traffic? In the past I have used pass-through modules for this type of traffic, but the customer insisted on using Virtual Connect switches. The 3750s are very beefy switches and I'm a little bit concerned that the Virtual Connect switches are going to cause some type of bottleneck; we are expecting heavy amounts of traffic on the iSCSI backbone.

Thanks again for your help.
David Billot
Frequent Advisor

1. Another advantage of stacking the Virtual Connect switches would be that all network traffic between the 16 blades stays local within the chassis and never touches the uplink ports?

Answer: Correct. This is especially useful for intra-network communication such as VMotion.

2. The ESX farm is going to be 48 blades (3 x c7000 enclosures, fully populated). Do you know if it is possible to stack Virtual Connect switches that belong to different enclosures?

Answer: Stacking enclosures isn't supported yet. Since your ESX farm will encompass all three enclosures, you'll likely need to extend your VMotion network to upstream switches in order to allow each server the ability to VMotion across enclosures. Note, however, that even though you have a VMotion vNet with uplinks associated with it, the blade servers within a single enclosure will still be able to perform intra-chassis communication with one another. Thus, if server 2 in enclosure 1 performed a VMotion to enclosure 1 server 8, the VMotion traffic would remain in the chassis. Only when enclosure 1 server 2 VMotions to a server outside of the enclosure would the traffic need to traverse the upstream ports.

3. Any special considerations for iSCSI traffic? We are expecting heavy amounts of traffic on the iSCSI backbone.

Answer: No special concerns, really, with regard to Virtual Connect carrying iSCSI traffic. There are ways to ensure enough bandwidth, especially with the number of Virtual Connect modules you are using. One example would be to dedicate bays 7 and 8 (randomly picked for this example): create a single vNet to carry the iSCSI traffic and include all 8 ports from each module in an LACP port channel. That would give you 8Gb of bandwidth for the iSCSI network (the port channels would be in an Active/Standby configuration).

If you need even more bandwidth, you could split the iSCSI network across two separate vNets. The vNets themselves would carry the same iSCSI network, just down different paths. This allows you to create two LACP port channels (8Gb each in this example) and keep both port channels Active/Active, giving you 16Gb of active bandwidth for your iSCSI traffic. ESX won't know the difference as to whether you have one or two vNets associated with the vSwitch; ESX is simply looking at which NICs to communicate with. So, if you have a vSwitch configured with the default ESX load balancing method of "originating virtual port ID" (transmit load balancing), and you are using 2 NICs to support the iSCSI traffic (per your original email), ESX will load balance the iSCSI traffic from the VMs to both NICs. Virtual Connect will then forward the iSCSI traffic from the NICs to their respective vNets and out the port channels. Load balancing will occur on the Virtual Connect/3750 port channel using an IP SRC+DST hash.

NOTE: Be sure to enable Smart Link on each of these vNets.

Thanks,
Dave
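The IP SRC+DST hash mentioned above can be illustrated with a toy hash. This is not the actual Cisco/Virtual Connect hash function, just a sketch of the idea that each flow's (source IP, destination IP) pair is pinned to one member port of the channel:

```python
# Toy illustration of IP SRC+DST hash load balancing on a port channel:
# a flow's address pair selects one member port. The real hash used by
# the switches differs; only the concept is shown here.
import ipaddress

def member_port(src, dst, n_ports):
    """Pick a port-channel member (0..n_ports-1) for a given IP flow."""
    key = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return key % n_ports

# Two flows from the same host to different iSCSI targets can land on
# different 1Gb members of the same 8-port channel:
print(member_port("10.0.0.5", "10.0.1.10", 8))  # 7
print(member_port("10.0.0.5", "10.0.1.11", 8))  # 6
```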
artiman
Occasional Advisor

Dave,

Thanks again, now everything makes sense to me. I just have a final question related to the VC switches' active/standby configuration. I'm thinking of having two separate vNets using two separate VLANs for the iSCSI traffic; each vmnic is going to be on a separate VLAN (VLAN A and VLAN B). I'm planning to use two Virtual Connect switches to provide the uplink connectivity to the 3750s:

vmnic3 --> Virtual Connect switch in bay 4
vmnic7 --> Virtual Connect switch in bay 8

I want the Virtual Connect switch in bay 4 to be the primary switch for VLAN A and the standby switch for VLAN B, and the reverse for VLAN B: primary switch in bay 8, standby in bay 4. Is that possible if I cross-connect all VC modules with the Cisco switches (each Virtual Connect switch connected to both Cisco switches)? I do not think so, due to the one-to-one mapping between the blade NIC and the Virtual Connect switch. Let's say the Virtual Connect switch in module 4 goes bad: vmnic3 will lose connectivity, right? Or, because all switches are stacked, can vmnic3 use any of the remaining Virtual Connect switches that are still alive for uplink connectivity?

Thanks again. I hope to meet you one day and buy you a good lunch.

Andres
David Billot
Frequent Advisor

Question: I want the Virtual Connect switch in bay 4 to be the primary switch for VLAN A and the standby switch for VLAN B, and the reverse for VLAN B (primary in bay 8, standby in bay 4). Is that possible?

Answer: Yes and no. Today there is no option within Virtual Connect that allows you to designate a preferred path when using redundant port channels; VC decides internally which port channel will be active. Thus, if your intent is to load balance the two vNets across two modules, your results may vary depending on failure scenarios. There are ways, however, to manipulate which port channel will be active. One way would be to include one additional port on the primary channel, e.g.:

- VLAN A = 4 ports on bay 4, 3 ports on bay 8
- VLAN B = 4 ports on bay 8, 3 ports on bay 4

The algorithm used to determine which channel is set to active first looks at which channel has the greatest amount of bandwidth.

Question: Is it possible to cross-connect all VC modules with the Cisco switches (each Virtual Connect switch connected to both Cisco switches)? And if the Virtual Connect switch in module 4 goes bad, will vmnic3 lose connectivity, or can it use any of the remaining Virtual Connect switches for uplink connectivity since they are stacked?

Answer: It would be better to use the NIC teaming feature within ESX to provide redundancy for each of your VLANs. With your proposed configuration, if you lose NIC 3, then VLAN A will be down. The same goes for a whole-module failure in bay 4: VLAN A will go down because NIC 3 won't be able to communicate via bay 4 in order to get to bay 8. A better solution would be to have a single vSwitch that supports both VLAN A and VLAN B. Next, configure the vSwitch port configuration for VLAN A to use NIC 3 as its primary path and NIC 7 as the secondary (Active/Standby). Do the opposite for VLAN B (NIC 7 = primary, NIC 3 = secondary). This way, if you lose a NIC, both VLAN A and VLAN B will still survive. If you lose a Virtual Connect module, the vSwitch will fail over whichever NIC is affected.

With regard to whether or not you should cross-connect each VC module to separate upstream switches: either way would work, so long as you don't split the port channels themselves (every port channel must be connected to only one switch). However, I don't see any advantage in cross-connecting from a failover perspective or otherwise, and your configuration will be simpler if you don't cross-connect.

No worries about the additional questions; that's why we set up this community: to provide our customers with quick answers to things like technical design questions!

Thanks,
Dave
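The reversed Active/Standby layout recommended above can be sketched as follows. This is a hypothetical model (the NIC names mirror Andres's vmnic3/vmnic7 plan); it only illustrates why losing either NIC, or the VC module behind it, leaves both VLANs reachable:

```python
# Sketch: one vSwitch, two port groups with reversed Active/Standby NIC
# order. Each port group uses the first healthy NIC in its own preference
# list, so a single NIC (or VC module) failure never takes down a VLAN.
PORT_GROUPS = {
    "VLAN_A": ["nic3", "nic7"],  # preferred first, standby second
    "VLAN_B": ["nic7", "nic3"],
}

def active_nic(pg, failed):
    """Return the first healthy NIC in the port group's preference order."""
    for nic in PORT_GROUPS[pg]:
        if nic not in failed:
            return nic
    return None  # all paths down

print(active_nic("VLAN_A", set()))       # nic3 (normal operation)
print(active_nic("VLAN_A", {"nic3"}))    # nic7: VLAN A survives the failure
print(active_nic("VLAN_B", {"nic3"}))    # nic7: VLAN B unaffected
```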
pching
Occasional Visitor

Hi Dave,

I am also interested in the scenario Andres describes (reference the first question in your post dated 6/3/2008 12:14AM) for one of my customers. However, each port channel will need to service multiple VLANs, and the blades will run Windows rather than VMware.

1. Have there been any enhancements to date that allow this scenario?
2. Is an active/active configuration using Shared Uplink Sets achievable as an alternative?

Thanks,
Peter
David Billot
Frequent Advisor

Yes, VC firmware 1.3x allows you either to tunnel multiple host-based tagged networks to an upstream .1q-enabled port trunk (non-Shared Uplink Set mode), or to map the individual host-based tagged networks to one or more active ports/port trunks using Shared Uplink Sets.

Some history... In VC firmware releases prior to 1.3x, a Shared Uplink Set (SUS) provided .1q tagging for the incoming and outgoing traffic on the external VC ports. A SUS did not provide the tagging, nor did it understand a tagged frame coming from the blade server NIC side. In order to pass host-based tagged traffic, you had to set up a standard vNet (non-SUS) to tunnel the tagged frames through the VC module and up to a .1q-enabled switch port/port trunk. In addition, with firmware prior to 1.3x, a SUS supported only a single active port or port trunk, and thus all tagged traffic had only one possible active path.

Today... With firmware 1.3x, you can now associate multiple active, distinct ports or port trunks with a single SUS. This allows you to direct tagged traffic to separate upstream network ports. If you require that a specific port trunk be considered the primary active link by VC, then you'll likely want either to provision an additional port on one of the port trunks (aka Link Aggregation Groups, aka LAGs), or otherwise ensure that one LAG has greater bandwidth than the other.

There was a change in the 1.3x firmware to the algorithm that VC uses to decide which LAG is primary versus secondary. Prior to 1.3x, if you had two equivalent redundant LAGs supporting a SUS or simple vNet, VC would first determine whether the LAGs had equal bandwidth, and if so, would break the tie based on the physical MAC address of the VC-Enet modules themselves: the VC module with the lowest MAC address would be chosen as the primary. In the event of a port failure within the active LAG, VC would determine that the standby LAG now had the greater bandwidth and would fail over to it. Now, here is the key point: in the pre-1.3x code, when the degraded LAG came back to full bandwidth, VC would determine that both LAGs were equal and would then fail back to the primary LAG. This was considered an unnecessary failback given that the LAGs were equivalent, so a change was made in the 1.3x code: VC no longer fails back to the primary LAG if both LAGs are equivalent.

What this means, of course, is that with the prior firmware you could predetermine the active LAG simply by ensuring that the preferred LAG was on the VC module with the lowest MAC address. With the 1.3x code this won't work, because VC won't fail back in the case of a tie. This is why I stated above that the best way to ensure a preferred path when equivalent LAGs are created is simply to provision one additional port on your preferred LAG (if that's possible).

With regard to deploying an Active/Active configuration... I'm not sure what Active/Active would buy you here, since I would assume that you want to avoid the secondary LAG whenever possible. However, if Active/Active does benefit your design, you could simply split the traffic among two separate SUSs. Each SUS would carry the same network traffic from whichever NIC is communicating in your NIC teaming driver. You would have, for example, SUS-1a and SUS-1b, each carrying VLANs 10/20/30 to their respective LAG(s). This configuration would support a Transmit Load Balancing (TLB) teaming configuration, or an Active/Standby configuration whereby you designate different preferred-path NICs in your blade servers to balance out the traffic. TLB would be the preferred method, though.

Hope that helps.

Thanks,
Dave
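The pre-1.3x versus 1.3x selection behavior described above can be condensed into a small simulation. This is a sketch of the algorithm as described in this post only (bandwidths and MAC values are made up):

```python
# Sketch of the active-LAG selection described above: highest aggregate
# bandwidth wins; on a bandwidth tie, pre-1.3x picks the VC module with the
# lowest MAC (and fails back after recovery), while 1.3x keeps the currently
# active LAG and does not fail back.
def choose_active(lags, current=None, pre_13x=False):
    """lags: name -> (bandwidth_gb, module_mac). Returns the active LAG name."""
    best = max(lags, key=lambda n: lags[n][0])
    tied = [n for n in lags if lags[n][0] == lags[best][0]]
    if len(tied) > 1:
        if pre_13x:
            return min(tied, key=lambda n: lags[n][1])  # lowest MAC wins
        # 1.3x: no failback on a tie; keep whatever is currently active
        return current if current in tied else min(tied, key=lambda n: lags[n][1])
    return best

lags = {"lagA": (8, 0x02), "lagB": (8, 0x05)}
print(choose_active(lags, pre_13x=True))                  # lagA (lowest MAC)
lags["lagA"] = (7, 0x02)                                  # port fails in lagA
print(choose_active(lags, current="lagA"))                # lagB (more bandwidth)
lags["lagA"] = (8, 0x02)                                  # lagA recovers
print(choose_active(lags, current="lagB"))                # lagB: 1.3x stays put
print(choose_active(lags, current="lagB", pre_13x=True))  # lagA: pre-1.3x fails back
```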
pching
Occasional Visitor

Hello Dave,

Thank you. I now have better insight into the history and function of the differing firmwares. I have two more questions/scenarios to understand:

1. Under 1.3x, is it possible to utilize one .1q port trunk to carry traffic for both tunneled 'and' host-based tagged networks (e.g., using ESX virtual machines), with the ability to fail over to a second .1q port trunk on a separate VC-Enet module?

2. It's not clear to me yet, but is it possible to have the following scenario (assuming firmware 1.32 is installed and VC-Enet modules are installed in bays 1, 2, 5, 6, 7, 8)?

- One .1q port trunk (port trunk A) is configured using ports on the VC-Enet module in bay 1, with a SUS servicing VLANs A, B and C.
- A second .1q port trunk (port trunk B) is configured using ports on the VC-Enet module in bay 6, with a SUS servicing VLANs D, E and F.
- Port trunk A is 'active' for VLANs A, B, C, which are 'standby' on port trunk B.
- Likewise, port trunk B is 'active' for VLANs D, E, F, which are 'standby' on port trunk A.

I've yet to reference the Lab 201 guide you mentioned; however, through trial and error I previously attempted to configure such an arrangement under 1.31 but couldn't identify how it's possible. Is this operation/configuration possible? If so, can I also achieve the same scenario as item 1 above with this specific failover operation?

Thanks,
Peter
David Billot
Frequent Advisor

<Peter> 1. Under 1.3x, is it possible to utilize one .1q port trunk to carry traffic for both tunneled 'and' host-based tagged networks (e.g., using ESX virtual machines), with the ability to fail over to a second .1q port trunk on a separate VC-Enet module?

<Dave> Here are two possibilities to support your traffic; both options support two port trunks, each on a separate VC module, in an active/standby fashion. The first option is a SUS in mapping mode, the second is a simple vNet in tunnel mode:

1) The 1.3x firmware mapping mode allows one non-tagged network to coincide with the tagged networks. So, as long as you only had one untagged network being supported by a given SUS, then yes, you could pass both types of traffic through a single SUS.

2) Since you only require a single port trunk to carry both the tagged and untagged traffic, you could just use a standard vNet. The vNet (non-SUS) would tunnel the traffic (both tagged and untagged) to and from your upstream switch. This is the most common and simplest way to support ESX.

The reason you would typically go with option 1 is when you have multiple networks coming from a single NIC that need to go to different external VC ports (separate physical networks). Otherwise, I'd opt for option 2.

<Peter> 2. Is it possible to have two .1q port trunks on separate modules (bays 1 and 6), each with a SUS servicing a different set of VLANs, where each trunk is 'active' for its own VLANs and 'standby' for the other's?

<Dave> That configuration isn't possible in the way you are trying to define it. However, that's not to say that you cannot accomplish this. I'll make some assumptions here about what you likely want to do within ESX (hopefully I'm close). Let's say you have a vSwitch with two port groups (PG1 and PG2). PG1 is defined to carry VLANs A, B, and C and is configured with two NICs: LOM 1 (which physically maps to VC-Enet 1) and NIC port 2 of a quad-port mezzanine adapter installed in mezzanine slot 2 of a half-height blade (NIC port 2 physically aligns to VC-Enet 6). NIC 1 is configured as the preferred NIC, and NIC 2 is in standby mode. PG2 is defined to carry VLANs D, E, and F. PG2 uses the same NICs as PG1, but with the preferred/standby order reversed: NIC 2 is the preferred active NIC, and NIC 1 is the standby. What you end up with is two NICs that are both transmitting and receiving their respective traffic, and each NIC is able to carry all of the traffic (VLANs A, B, C, D, E, and F) in the event of a NIC failure.

In VC, the simplest thing to do is to create two simple vNets (non-SUS); we'll call them Prod-1A and Prod-1B. Assign however many uplink ports from VC-Enet 1 as you want to use for your port trunk to Prod-1A. Do the same for Prod-1B using only ports from VC-Enet 6. (NOTE: Be sure to enable Smart Link on both vNets.) Now create a VC Server Profile for your blade server(s) and assign Prod-1A to NIC 1 and Prod-1B to NIC 6. Ensure that the upstream switch port trunks connected to your Prod-1A and Prod-1B are enabled for .1q and can support all of the VLANs from A to F. That's it! You have now created an end-to-end Active/Active configuration that is both simple and optimal.

Thanks,
Dave
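The Smart Link NOTE above matters because of how it interacts with the host's NIC teaming. A minimal sketch of the assumed semantics (when every uplink of a vNet is down, Smart Link drops link on the server downlinks mapped to that vNet, so teaming sees the failure and fails over):

```python
# Sketch of the assumed Smart Link semantics: with Smart Link enabled on a
# vNet, losing ALL of that vNet's uplinks also drops link on the mapped
# server NIC, which is what lets NIC teaming detect the failure and fail
# over to the other vNet's NIC. Without it, the NIC keeps link and traffic
# can be silently black-holed.
def server_nic_has_link(vnet_uplinks_up, smart_link):
    if vnet_uplinks_up > 0:
        return True
    # All uplinks down: Smart Link propagates the failure to the downlink.
    return not smart_link

# Prod-1A loses all of its uplinks on VC-Enet 1:
print(server_nic_has_link(0, smart_link=True))   # False -> teaming fails over
print(server_nic_has_link(0, smart_link=False))  # True  -> silent black hole
```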