10-15-2010 09:32 AM
ESXi and Virtual Connect questions
Chris had a VMware and Virtual Connect question:
**********************************
History:
We originally configured the VC domain for mapped VLANs, created the networks, and split the two 10 Gb interfaces into eight NICs (really 4 with teaming).
Each VC module had four 1 Gb fiber connections to a different core switch configured with 802.1Q; however, only 2 links were active. I was told they should change to an LACP trunk, and this made all 4 active.
The customer then changed to tunneled VLANs since they had too many VLANs for the ESXi 4.1 servers to connect to. A new uplink set was configured and the onboard NICs were left as a single 10 Gb each; these are teamed together inside the ESXi servers, and a different VM Network is created for each VLAN.
The main issue is that they would like to have all 8 uplinks active, and I am not sure whether this is possible.
Two main questions.
Is it possible to have all uplinks from both core switches that are part of the same uplink group active at the same time?
With ESXi, is it better to split the 10 Gb into multiple NICs and connect them to different networks, or to leave it as two teamed 10 Gb NICs and peel off the VLANs with VM Networks?
Thank you in advance.
**************************
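On the second question, keeping the two teamed 10 Gb NICs and carving the VLANs out as per-VLAN VM Networks (port groups) is the usual pattern when the VC uplink set is tunneled. Purely as an illustration of what that looks like when scripted against the vSphere API, here is a minimal sketch using pyVmomi; the host name, credentials, vSwitch name, and VLAN IDs are placeholders, not values from Chris's environment:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own host and credentials.
si = SmartConnect(host="esxi01.example.com", user="root", pwd="changeme")
try:
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem

    # One VM port group per VLAN on the existing teamed vSwitch
    # (vSwitch0 carrying both 10 Gb uplinks); the VLAN IDs are examples only.
    for vlan_id in (10, 20, 30):
        spec = vim.host.PortGroup.Specification(
            name="VM_VLAN{}".format(vlan_id),
            vlanId=vlan_id,                  # 802.1Q tag applied at the port group
            vswitchName="vSwitch0",
            policy=vim.host.NetworkPolicy()  # inherit teaming/failover from the vSwitch
        )
        net_sys.AddPortGroup(portgrp=spec)
finally:
    Disconnect(si)
```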
Vincent had some answers:
***************************
You can’t have links going to 2 different VC modules in a single LACP group and have them all active. Instead, you should have 2 LACP groups, each with the uplinks from a single module, and do NIC teaming in the servers.
************
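To make that concrete on the ESXi side, Vincent's approach boils down to leaving LACP to the VC uplink sets and simply marking both 10 Gb vmnics as active uplinks on the vSwitch, letting the default "route based on originating virtual port ID" policy spread VMs across them. A rough pyVmomi sketch, again with placeholder host, vSwitch, and vmnic names:

```python
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details and NIC names -- adjust for your environment.
si = SmartConnect(host="esxi01.example.com", user="root", pwd="changeme")
try:
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing vSwitch spec and only adjust the teaming policy:
    # both 10 Gb uplinks active, load balanced by originating port ID
    # (no LACP on the host side -- each uplink hangs off a different VC module).
    vswitch = next(v for v in host.config.network.vswitch if v.name == "vSwitch0")
    spec = vswitch.spec
    spec.policy.nicTeaming.policy = "loadbalance_srcid"
    spec.policy.nicTeaming.nicOrder.activeNic = ["vmnic0", "vmnic1"]
    spec.policy.nicTeaming.nicOrder.standbyNic = []
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
finally:
    Disconnect(si)
```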
Is this how you do LACP groups? Any other advice, or how do you do NIC teaming?