08-03-2015 08:49 AM
Configuration changes for production network traffic
Ramu asked for help:
I need your help. Sorry for the long explanation, but it is required to understand the setup.
We are working on a large customer requirement where production network traffic needs to be mirrored to another chassis through an external Cisco switch.
The information below details the customer requirement. The network setup is attached; please provide inputs on what configuration change is required to make this work.
There is a production HP chassis and a UAT HP chassis (2 separate chassis).
Both have Virtual Connect modules in the chassis.
As can be seen in the diagram, the production web-layer traffic has one DMZ port connected from the VC Flex-10 to a Cisco switch on port 19.
Port 19 of the Cisco switch is mirrored internally to port 18 on the same Cisco switch.
Port 18 of the Cisco switch is connected to the VC Flex-10 in the UAT chassis.
The UAT chassis has a blade with VMware ESX installed, with a vSwitch and 3 VMs (1 vCenter + 1 CA + 1 Analyser).
The traffic on port 19 to the production chassis is to be sent from port 18 to the VM running CA in the UAT chassis.
The CA VM in turn will send the data to the Analyser VM on the same ESX host via the vSwitch.
This is the requirement.
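For reference, the mirroring described above is a local SPAN session on the Cisco switch. A minimal sketch follows; the interface names are assumptions, since the actual module/port numbering depends on the switch model:

```
! Mirror all traffic seen on port 19 (source) out of port 18 (destination).
monitor session 1 source interface GigabitEthernet0/19 both
monitor session 1 destination interface GigabitEthernet0/18
```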
The customer is not interested in doing port mirroring at the VC level, as he needs to send the data to a CA VM in a separate chassis.
Since the CA software is heavy, it cannot be installed on a desktop-grade machine connected directly to a mirrored port in VC, so this is ruled out.
Lastly, the application partner from CA has confirmed that he has multiple setups where port mirroring is done at the Cisco switch level and sent to a rack server running VMware with the mentioned VMs, and this setup works fine for analysis.
Since we were not able to implement this on blades, the customer has implemented it on a rack server, and the setup is running with data being sent to the rack server.
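One detail worth noting about the rack-server setup that works: for the CA VM to receive mirrored frames addressed to other MACs, the vSwitch (or the CA VM's port group) must accept promiscuous mode, which ESXi rejects by default. A sketch using esxcli on the ESXi host, where "vSwitch0" is a placeholder name:

```shell
# Allow port groups on this vSwitch to see frames not addressed to them.
# "vSwitch0" is an assumption; use the vSwitch carrying the mirrored uplink.
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true
```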
Vincent replied:
You already have the proper solution: put the analyzer on a rack server, not on a blade connected to VC modules.
VC will never forward Ethernet unicast frames to a blade server when it knows that the destination MAC address in the frame resides somewhere other than that blade server. All you can capture on the blade server is broadcast frames and unicast frames whose destination MAC is either unknown to VC or belongs to the blade server itself.
You just CANNOT put a network analyzer on a blade server connected to VC to capture traffic coming from the outside; there is no configuration that can make it work.
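Vincent's point can be illustrated with a toy model of the forwarding decision a learning switch makes for each unicast frame, which is essentially what a VC module does from the blade's point of view. All port names and MAC addresses below are made up for illustration; this sketches the behaviour, not VC's actual implementation:

```python
# Toy model of a learning switch's forwarding decision, illustrating why
# a blade behind VC never sees mirrored production traffic: once VC has
# learned that a destination MAC lives on the uplink, frames arriving on
# the uplink for that MAC are simply dropped, and known-unicast frames
# for other hosts go only to the port where that MAC was learned.

def forward(mac_table, dst_mac, ingress_port, all_ports):
    """Return the list of ports a frame is forwarded out of."""
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        # Broadcast: flood to every port except the one it arrived on.
        return [p for p in all_ports if p != ingress_port]
    if dst_mac in mac_table:
        egress = mac_table[dst_mac]
        # Known unicast whose owner is on the ingress port: drop it.
        return [] if egress == ingress_port else [egress]
    # Unknown unicast: flood like a broadcast.
    return [p for p in all_ports if p != ingress_port]

ports = ["uplink", "blade1", "blade2"]
# VC has learned the production server's MAC on the uplink and the
# analyzer blade's MAC on blade1 (hypothetical values).
table = {"aa:aa:aa:aa:aa:aa": "uplink", "bb:bb:bb:bb:bb:bb": "blade1"}

# A mirrored frame arrives on the uplink addressed to a MAC that VC
# knows is reachable via the uplink: it is dropped, never reaching blade1.
print(forward(table, "aa:aa:aa:aa:aa:aa", "uplink", ports))  # []
# Only broadcasts and unknown unicasts ever reach the blades.
print(forward(table, "ff:ff:ff:ff:ff:ff", "uplink", ports))  # ['blade1', 'blade2']
print(forward(table, "cc:cc:cc:cc:cc:cc", "uplink", ports))  # ['blade1', 'blade2']
```

This is why a SPAN destination must feed a port that does no MAC-based forwarding of its own, such as a rack server NIC or a pass-thru module.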
Dan had another suggestion:
Or they could add a mezzanine NIC to a particular blade and then add some 10Gb pass-thru modules to that same chassis.
This would allow them to wire 1 or 2 links directly into that host, bypassing VC, while staying on blades.
But based on the cost of the pass-thru modules, I would tend to agree with Vincent and just move this to a DL360/380, which can also be moved around the DC as needed for various projects.