BladeSystem - General
03-27-2007 06:49 PM
Blade trunking with vmware
Hi guys, we're looking at blades for VMware and I'm curious: can we use the 10Gb ports to trunk chassis together and run VMotion over that link? How have you handled the bandwidth requirements for LAN and SAN I/O? With an N+2 configuration of 14 blades, each running 5 VM guests, you'd have 70 servers running out of the VC module. With trunking etc., how do you know the bandwidth is enough? I hope I'm correct in saying there are only 4 ports (1Gb) for LAN (if using redundancy), so a 4Gb trunk for 70 servers seems small? Am I way off base with this? Thanks, Paul
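(For rough scale, the concern in the question can be sanity-checked with back-of-the-envelope arithmetic. The numbers below simply restate the figures from the post above, 4 x 1Gb uplinks shared by 14 blades running 5 VMs each; actual throughput depends on the traffic mix.)

```python
# Back-of-the-envelope check of the bandwidth concern in the question.
# Figures come from the post (4 x 1Gb LAN ports, 14 blades, 5 VMs each);
# this is average share only, not a measurement of real traffic.
uplink_gbps = 4 * 1            # four 1Gb LAN ports in the redundant trunk
blades = 14
vms_per_blade = 5
vms = blades * vms_per_blade   # 70 virtual servers sharing the trunk

per_vm_mbps = uplink_gbps * 1000 / vms
print(f"{vms} VMs share {uplink_gbps} Gb/s -> ~{per_vm_mbps:.0f} Mb/s each")
```

So on average each of the 70 VMs gets roughly 57 Mb/s, which is the basis of the "seems small" worry, though in practice not all VMs burst at once.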
2 REPLIES
03-28-2007 02:00 PM
Blade trunking with vmware
Paul, I have forwarded your question on to our VC experts! Regards, Mike
03-28-2007 04:00 PM
Blade trunking with vmware
Paul,

For your first question regarding VMware: if you configure a 10Gb link as a data-center uplink on one enclosure and then do the same on the second enclosure, you can connect the two and use that link for this purpose. The net result is an isolated network shared only between those two enclosures.

Regarding bandwidth management for LAN and SAN I/O, the primary benefit of 'stacking' enclosures (not yet available in the current firmware) is cable reduction. We will be providing better ways to simplify VC management of many c-Class enclosures. If you are using VC in a bandwidth-intensive environment, it would not make a lot of sense to 'stack' the enclosures; instead, take Ethernet uplinks directly out of each separate enclosure.

The VC-Enet module has 16Gb of downlinks to servers and 38Gb of uplink and stacking connections, so it is fairly straightforward to create a configuration that provides the appropriate level of over/under-subscription of the uplinks for just about any environment. The FC module will support from 4:1 to 16:1 oversubscription of uplinks with half-height servers (2:1 to 8:1 with full-height servers).

Thanks for your inquiries!
Mike
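(The over-subscription arithmetic in the reply above can be sketched as a one-liner. The helper name and the example uplink sizes below are illustrative, not HPE tooling; only the 16Gb downlink figure comes from the reply.)

```python
# Over-subscription ratio for a Virtual Connect enclosure: server-facing
# downlink bandwidth divided by data-center uplink bandwidth. The 16Gb
# downlink figure is from the reply; uplink sizes here are examples.
def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    """Ratio of server downlink bandwidth to uplink bandwidth."""
    return downlink_gbps / uplink_gbps

# All 16Gb of server downlinks funneled through a 4Gb uplink trunk:
print(f"{oversubscription(16, 4):.0f}:1")   # 4:1
# Dedicating 8Gb of uplinks halves the ratio:
print(f"{oversubscription(16, 8):.0f}:1")   # 2:1
```

Picking the uplink count to hit a target ratio is the "appropriate level of over/under-subscription" the reply describes.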
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP