BladeSystem Virtual Connect

Virtual Connect - best way to force traffic to leave enclosure

Trusted Contributor

Ron relayed a customer question about setting up a test:

I have a new customer that was looking to do functionality and performance tests on things like vMotion from a blade server in Enclosure A to a blade server in Enclosure B (no stacking).  They are using BL460c Gen8s with the 554 FlexLOM and one pair of VC Flex-10/10D interconnects.  However, they now only have one enclosure to use.  They are asking for the easiest/best method to simulate this server-to-server traffic (i.e., traffic that leaves the enclosure for the upstream switch and then returns to the same enclosure).  Is it as simple as using VLAN tunneling instead of having the VC modules handle the mapping?

Input from Cullen:

I think the most straightforward way would be to define two sets of networks, one on the Virtual Connect module in Bay 1 and the other on the module in Bay 2.  Connect one of your servers to just the networks on Bay 1 and the other server to just the networks on Bay 2.  That should force any server-to-server traffic out the uplinks.
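In Virtual Connect Manager CLI terms, the two isolated networks might look roughly like the sketch below. The network names and uplink port IDs are made up for illustration; verify the exact syntax against your VC firmware's CLI reference.

```shell
# Hypothetical VCM CLI sketch -- names and port IDs (enc0:<bay>:<port>)
# are illustrative, not taken from the thread.

# Network carried only on the Bay 1 module's uplink
add network vNet-Bay1
add uplinkport enc0:1:X1 Network=vNet-Bay1

# Network carried only on the Bay 2 module's uplink
add network vNet-Bay2
add uplinkport enc0:2:X1 Network=vNet-Bay2

# Blade A's server profile maps its NIC to vNet-Bay1 only, and
# Blade B's profile maps to vNet-Bay2 only, so traffic between them
# has no internal path and must hairpin through the upstream switch.
```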

Additional advice from Dan:

I agree.  That’s the method I used before to test almost the exact same idea at a customer site.

Use an Active/Active config, but give each profile only one FlexNIC and keep the two profiles on separate sides (different vNets) of the uplink modules.

This does, however, have one downside: you won't be able to test Multi-NIC vMotion, which can speed up vMotions by using both NICs at once. http://kb.vmware.com/kb/2007467
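For reference, the host-side setup for Multi-NIC vMotion is two vMotion vmkernel ports, each pinned to a different NIC. The sketch below shows the general approach with esxcli; the vSwitch, portgroup, vmk, and IP values are invented placeholders, not taken from the KB article or the thread.

```shell
# Hypothetical sketch of Multi-NIC vMotion setup on one ESXi host.
# vSwitch/portgroup/vmk names and IPs are illustrative placeholders.

# Two portgroups on the standard vSwitch
esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion-1
esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion-2

# Pin each portgroup to a different uplink NIC
esxcli network vswitch standard portgroup policy failover set -p vMotion-1 -a vmnic0
esxcli network vswitch standard portgroup policy failover set -p vMotion-2 -a vmnic1

# One vmkernel interface per portgroup, both tagged for vMotion
esxcli network ip interface add -i vmk1 -p vMotion-1
esxcli network ip interface add -i vmk2 -p vMotion-2
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.50.11 -N 255.255.255.0
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.50.12 -N 255.255.255.0
esxcli network ip interface tag add -i vmk1 -t VMotion
esxcli network ip interface tag add -i vmk2 -t VMotion
```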

In order to do that, you would need 4 uplinks, 2 from each VC module.

Then configure 4 different Shared Uplink Sets (SUS) and vNets, each getting its own uplink.  No LACP or port channel.

Then have Blade A use one pair and Blade B use the other pair.

All traffic will be forced out and you can still do Active/Active from each blade to get a more realistic perf test.
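Dan's four-uplink layout could be sketched in VCM CLI terms as follows. All names, port IDs, and the VLAN ID are illustrative assumptions; check the command syntax against your VC CLI reference before using it.

```shell
# Hypothetical VCM CLI sketch of the 4-uplink / 4-SUS layout.
# SUS names, vNet names, port IDs, and VLAN IDs are invented.

# One Shared Uplink Set per uplink, two per VC module, no LACP
add uplinkset SUS-1A
add uplinkport enc0:1:X1 UplinkSet=SUS-1A
add uplinkset SUS-1B
add uplinkport enc0:1:X2 UplinkSet=SUS-1B
add uplinkset SUS-2A
add uplinkport enc0:2:X1 UplinkSet=SUS-2A
add uplinkset SUS-2B
add uplinkport enc0:2:X2 UplinkSet=SUS-2B

# One vNet per SUS, all mapped to the same upstream VLAN so the
# top-of-rack switch can bridge the traffic back into the enclosure
add network vNet-1A UplinkSet=SUS-1A VLanID=100
add network vNet-1B UplinkSet=SUS-1B VLanID=100
add network vNet-2A UplinkSet=SUS-2A VLanID=100
add network vNet-2B UplinkSet=SUS-2B VLanID=100

# Blade A's profile uses vNet-1A + vNet-2A (one FlexNIC per module);
# Blade B's uses vNet-1B + vNet-2B.  Each blade stays Active/Active,
# yet every frame between them must exit via the upstream switch.
```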

Other input for Ron?