HP 1/10G Virtual Connect Ethernet Module - basics of uplinks, trunking, load balancing, etc
03-23-2010 03:49 PM
I have a c7000 blade enclosure with two HP 1/10G Virtual Connect Ethernet Modules.
I have 16 blades that will run ESX. I am trying to find out the basics of the uplink ports. We have 16 uplink ports that we want all on the same network (accessible from clients on the same subnet).
The hope is that one blade can run a large file server we have and get 4 Gb of throughput. The other servers will run multiple VMware images and may only need 1 Gb in total.
Do I need to have the network admins create a trunk to these VC modules in order to get the 4 Gb of throughput? Or can I just plug in the 16 1 Gb cables, all from the same network, and have VC load-balance across 4 ports to give me that throughput?
The servers all have dual 10 Gb NICs, but we only have 16 Gb of Ethernet uplinks. On paper that is plenty, but it needs to be distributed. Does VC handle all of this, or, since it is essentially a pass-through, does the trunking and so on fall back on the network team and the switches OUTSIDE of the blade enclosure and VC?
03-25-2010 03:49 AM
Solution
Hi,
I'm a bit rusty on this as I don't manage the VC side of things, but from the network perspective we present LACP trunks to the VCs, with the various VLANs for the server networks tagged across them.
The trunks are then assigned to "pipes" (not sure of the exact terminology here) on the VCs depending on their role. For example, VLANs 100 and 101 carry client data and are assigned to the "data pipe", VLAN 200 carries the VMotion traffic and has its own pipe, and the same goes for Service Console traffic, etc.
So I think one answer to your question is that you can create separate trunks for the servers, e.g. a 4 Gb trunk for the file server and a 12 Gb trunk for everything else.
You'll also want to duplicate this on the secondary VC so that you have failover in case of a VC failure.
However, I am sure there is more than one way to do this, as there is a _lot_ of functionality in the VCs and I'm not familiar with all of it.
HTH
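For what it's worth, here is a minimal sketch of what the upstream switch side of such an LACP trunk could look like, using Cisco IOS-style syntax purely as an illustration. The port range, port-channel number, and VLAN IDs (100, 101, 200) are just the examples from this thread, not a recommendation, and the matching shared uplink set / networks still have to be defined on the VC side:

```
! Hypothetical 4-port LACP bundle facing one VC module (illustrative only)
interface range GigabitEthernet1/0/1 - 4
 description Uplinks to VC module, interconnect bay 1
 switchport mode trunk
 switchport trunk allowed vlan 100,101,200
 channel-group 10 mode active    ! "active" = LACP
!
interface Port-channel10
 switchport mode trunk
 switchport trunk allowed vlan 100,101,200
```

A second, independent bundle toward the other VC module would give the failover path mentioned above.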
04-21-2010 05:41 AM
Re: HP 1/10G Virtual Connect Ethernet Module - basics of uplinks, trunking, load balancing, etc
We are using the CX4 10 Gb connections and created a trunk to each ESX server carrying all the VLANs.
We then use a distributed switch in ESX 4.0 and create VLANs for each subnet.
My issue now is that the trunk is only connecting at 1 Gb even though I have it set to 10 Gb; I'm not sure why yet. But even with two connections we run many servers and still have no problems with it currently connecting at 1 Gb on the trunk.
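As a quick sanity check on that 1 Gb link speed, one thing worth looking at (assuming classic ESX 4.0 with a service console; treat this as a sketch) is what the host itself reports for its physical NICs:

```
# List the physical NICs with driver, link state, speed and duplex
# (vmnic names and the NICs present will differ per host)
esxcfg-nics -l
```

If the vmnics themselves report 10000 Mb/s full duplex, the 1 Gb figure is more likely coming from the VC uplink or the upstream switch port than from the server side.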