HPE Community > Servers and Operating Systems > HPE BladeSystem > BladeSystem - General > Re: Blade System Matrix and converged networks
08-26-2009 06:55 AM
Does anyone know if the HP BladeSystem Virtual Connect modules (VC Ethernet and VC Fibre Channel) would work seamlessly with the HP 2408 FCoE Converged Network Switch?
If so, does the HP BladeSystem Matrix support this setup? Essentially bringing it in line with the recent Cisco UCS systems...
ta
08-26-2009 07:37 AM
Re: Blade System Matrix and converged networks
08-26-2009 08:29 AM
08-27-2009 02:18 AM
Re: Blade System Matrix and converged networks
08-27-2009 07:07 AM
Re: Blade System Matrix and converged networks
08-27-2009 07:14 AM
Re: Blade System Matrix and converged networks
I like the sound of the converged networking that the Cisco UCS offers, so naturally I want to see whether the BladeSystem can do something similar.
My main task at the moment is to compare traditional rack-mount vs blades vs Cisco UCS for compute power, energy savings, cable savings, etc.
E.g. at the moment a c7000 with BL495c G6 blades can provide me with 384 virtual CPUs (2 CPUs x 6 cores x 16 blades x 2 for vCPU). If that then amounted to 384 VM guests, would a 10Gb Flex-10 module be enough bandwidth for that many virtual servers?
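The arithmetic in that example can be sanity-checked with a few lines of Python. This is a back-of-the-envelope sketch only: it takes the question's figures at face value (2 sockets, 6 cores, 16 blades, 2 vCPUs per core) and naively divides a single 10Gb link across all the resulting vCPUs, ignoring the multiple uplinks a real Flex-10 module provides.

```python
# Hypothetical back-of-the-envelope check of the c7000/BL495c G6 example.
sockets, cores, blades, vcpus_per_core = 2, 6, 16, 2
vcpus = sockets * cores * blades * vcpus_per_core
print(vcpus)  # 384 virtual CPUs per enclosure

# Naive worst case: every vCPU sharing one 10Gb link equally.
link_gb = 10
per_vcpu_mb = link_gb * 1000 / vcpus
print(round(per_vcpu_mb, 1))  # 26.0 -- about 26Mb per vCPU
```

Even this crude division suggests a single 10Gb link is thin for 384 guests, which is what the replies below work through with the real uplink counts.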
08-27-2009 10:28 AM
Re: Blade System Matrix and converged networks
That works out to 124Gb total, which is roughly 300Mb per vCPU.
The question is: is that enough for you?
And that's "up to" -- you probably would not want to configure it that way, and it would leave the config on a knife edge, with no headroom for a failing VC module.
So more like 100Mb.
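Justin's rule of thumb, as I read it, is total usable uplink bandwidth divided across the vCPUs. A short sketch, assuming his 124Gb "up to" aggregate figure and the 384-vCPU count from the example above (halving for a failed VC module gives the more conservative number):

```python
# Rough per-vCPU bandwidth under Justin's rule of thumb (figures from the post).
total_gb = 124          # "up to" aggregate uplink bandwidth, Gb/s
vcpus = 384             # from the BL495c G6 example above

per_vcpu_mb = total_gb * 1000 / vcpus
print(round(per_vcpu_mb))       # 323 -- i.e. roughly 300Mb per vCPU

# Halve it to survive a failing VC module:
print(round(per_vcpu_mb / 2))   # 161 -- hence "more like 100Mb" with headroom
```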
08-27-2009 03:16 PM
Re: Blade System Matrix and converged networks
The uplinks coming out of VC aren't CEE, so while they would connect at 10GbE to the 2408, you wouldn't get CEE.
Quoting from HP's site:
>> When the CEE standard does emerge, and HP is helping to shape that standard today, you can be assured the Virtual Connect family of products will support it
I think Adrian's configuration is with one (or two) VC Flex-10 modules (am I right?) With more modules (and more mezz cards in the blades), you could boost that bandwidth.
BladeSystem Config: c7000, 6 x VC Flex-10, 16 x BL495 [each w/ 2 dual-port 10GbE mezz]
Uplink bandwidth from enclosure:
6 x (6 x 10Gb) = 360Gb (or 180Gb, redundant)
Using Justin's bandwidth-per-vCPU rule, that's 180Gb/384vm = 468Mb/vm
UCS Config: 5100, 2 x 2100 fabric extender, 8 x B200 servers
Uplink bandwidth:
2 x (4 x 10Gb) = 80Gb (40Gb redundant)
Using quad-core Xeon-based B200 M1 blades, you'd see
2 x 4 cores x 8 blades x 2 for vCPU = 128 vCPUs
40Gb/128vm = 312Mb/vm
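The two configurations above can be put side by side in a few lines. The uplink counts and core counts are the ones given in this post; per-VM figures use the redundant (halved) bandwidth, as in the text.

```python
# Side-by-side of the two configs sketched in the post (figures from the post).
def per_vm_mb(redundant_uplink_gb, vcpus):
    """Redundant uplink bandwidth, in Mb/s, shared equally per vCPU/VM."""
    return redundant_uplink_gb * 1000 / vcpus

# BladeSystem: 6 x VC Flex-10 at 6 x 10Gb uplinks -> 360Gb (180Gb redundant)
blade_vcpus = 2 * 6 * 16 * 2            # 384 vCPUs (BL495c G6, 6-core)
print(round(per_vm_mb(180, blade_vcpus)))   # 469 (the post rounds to 468)

# UCS: 2 x 2100 fabric extenders at 4 x 10Gb -> 80Gb (40Gb redundant)
ucs_vcpus = 2 * 4 * 8 * 2               # 128 vCPUs (quad-core B200 M1)
print(round(per_vm_mb(40, ucs_vcpus)))      # 312 (as in the post)
```

Note the comparison is per enclosure/chassis with quite different blade counts, so per-VM bandwidth is the fairer metric here.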
BTW, the "convergence" of Matrix isn't converged *fabrics*, it's converged resource -- compute, storage, management, etc. At the data center level (especially a new data center -- lucky you, Justin!) whether the connection is FCoE or token-ring probably isn't as critical as whether the bandwidth/compute/storage/whatever can be added when needed, carved up and deployed, and done in the most optimal manner within your budget/SLA.