BladeSystem - General
04-10-2009 01:13 PM
Network gotchas for ESX 3.5 & Vconnect
I have a customer implementing ESX 3.5 with Virtual Connect. They connect to the network using Brocade switches, and we are noticing intermittent connectivity. Has anyone experienced this? Does anyone have any suggestions or gotchas they have encountered? Thanks, -Kevin
3 REPLIES
04-17-2009 08:21 AM
Network gotchas for ESX 3.5 & Vconnect
Are you using 10 Gig connections? Are you trunking VLANs (aka VLAN tagging)? If you are using 1 Gig connections, are you using EtherChannel or bonding?
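For anyone comparing notes, here is a rough sketch of what the two usual upstream-port styles look like on a Cisco IOS switch (a 3750 or similar) facing the VC uplinks. Interface numbers, VLAN IDs, and the channel-group number are placeholders, not a tested configuration:

    ! Untagged uplink - one VC network per uplink, no VLAN tagging
    interface GigabitEthernet1/0/1
     switchport mode access
     switchport access vlan 10
     spanning-tree portfast

    ! Tagged, channelled uplinks - 802.1Q trunk plus 802.3ad (LACP) channel
    interface range GigabitEthernet1/0/2 - 3
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 20,30,40
     channel-group 1 mode active

One related gotcha if the blade NICs are cabled straight to the switch rather than through VC uplinks: ESX 3.5 itself does not speak LACP, so a static channel (channel-group ... mode on) with the vSwitch load balancing set to "route based on IP hash" is the combination that works there.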
06-12-2009 04:44 PM
Network gotchas for ESX 3.5 & Vconnect
We just completed a configuration with a c7000, VC-Enet (1/10Gb) and VC-FC (4Gb) modules, and a few BL460c G1 blades. The enclosure has 3 pairs of VC-Enet (1/10Gb) and 1 pair of VC-FC, so each half-height blade gets 2 FC HBA ports and 6 NIC ports. Both the WWPNs and the MAC addresses use VC-assigned ranges, so a blade can be replaced later without any changes to the Ethernet networks or the FC fabric. All of the blades run ESXi from HP USB keys, and each has three networks (via the network profile):

- The first pair of NIC ports (the LOMs) carries VMware VMotion and the management network (the equivalent of the service console in ESX), with no VLAN trunking or port trunking. On the external side (Cisco 3750) they are plain access ports, with DHCP and a PXE helper enabled.
- The second pair of NIC ports (the first two ports of the quad-port card in Mezz 2) carries the virtual machine networks. Its three uplink ports are set up in VC with both VLAN trunking and port trunking (802.1Q and 802.3ad). The VC-Enet and Cisco configuration closely follows the VC Cookbook (scenarios #13 and #14).
- The third pair of NIC ports (the last two ports of the quad-port card in Mezz 2) carries a dedicated Ethernet network for virtual machine backup/restore traffic. It is not a VLAN; it has its own external edge switches, router, backup media server, and tape library on an isolated Ethernet fabric. On the VC side it has a single uplink port set up as an access port.

During the initial configuration we ran into two technical problems, one with the VC-FC modules and one with VC-Enet.

VC-FC: the FC module uplinks connected to our core SAN switches (Brocade) fine, but none of the QLogic HBAs in the BL460c blades could reach the fabric. We got HP technical support involved. After all the routine activities (collecting logs and configuration scripts for both the OA and VC, validating against the firmware matrix, etc.), plus pre-zoning each blade HBA to the shared LUN, trying Emulex HBAs in the blades, and using the recommended supported FOS version on the Brocade switches, we still could not connect a blade to the SAN fabric through the VC-FC. Just before we were ready to escalate to third level, we simply swapped this pair of VC-FCs with another pair of ours from a different enclosure, and everything worked: still QLogic HBAs, no pre-zoning, and FOS still on 5.3.18 (instead of the recommended 6.x). HP support has taken our faulty pair of VC-FCs to their own lab and is trying to work out the root cause now.

On the VC-Enet side, everything worked except that we could not get 802.1Q VLAN tagging working. After a week of routine activities with HP second-level support (checking firmware, collecting configuration scripts from the OA, VC-Enet, and Cisco, and using the HP remote access tool, HP Virtual Room), three different HP support teams (Unix/VMware, Industry Standard Servers for ProLiant and VC, and Cisco networking) ran two three-hour remote sessions before we found the problem: for some reason, the VLAN on the Cisco 3750 was not activated even though it had been specifically set to active. We had to assign a spare port on the 3750 to the same VLAN and connect a network device to it before the VLAN would come up.

Why this happens we don't know yet, but we have already logged a call with Cisco. Both incidents took about five days to resolve, and we could clearly see that HP's existing support mechanism was struggling: not only are there very few field engineers who know this stuff, they are also ill-equipped with some very basic tool sets. The four must-have knowledge areas, ProLiant blade systems, VMware VI3, Cisco, and Brocade, leave the current service teams unable to respond quickly and effectively. We were very lucky that the problem did not happen in our real production enclosures. Just imagine four c7000 enclosures daisy-chained together, each with 16 half-height blades and each blade running 10 virtual machines: that is 640 virtual machines on 64 physical servers. The same powerful VC technology will bring you down even more dramatically.
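For anyone who runs into the same symptom on a 3750, a rough sanity check along these lines would show whether the VLAN is really active, and the last part is essentially the workaround described above. The VLAN ID and port numbers are placeholders:

    ! does the VLAN show as "active", and is it allowed on the VC-facing trunk?
    show vlan brief
    show interfaces trunk

    ! make sure the VLAN itself is defined and set active
    vlan 100
     state active

    ! the workaround we ended up with: put a spare access port in the VLAN
    ! and connect a device to it so the VLAN comes up
    interface GigabitEthernet1/0/48
     switchport mode access
     switchport access vlan 100
     no shutdown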
07-10-2009 10:28 AM
Network gotchas for ESX 3.5 & Vconnect
Hi Kevin, it is sad to hear it took so much time. However, we are running BL680c blades with 3 pairs of VC-Enet and 1 pair of VC-FC on 8 c7000 enclosures with VMware. No issues faced to date; everything has been smooth.