07-19-2011 06:18 PM
C7000 Networking Help
Hello,
We currently have a single c7000 enclosure (12 of 16 slots filled with BL460c G1s) that houses the majority of our company's servers. We have four Cisco 3020 blade switches in interconnect bays 1-4, and each server has 2 NICs and 2 QLogic iSCSI HBAs.

We now need to expand the networking in the enclosure into the remaining four interconnect bays to take advantage of the 4 additional NICs on our 6 VMware ESXi hosts. We are purchasing a new modular backbone switch to replace our existing pair of HP ProCurve 2824s, and I want to migrate to full pass-through modules: even with the extra cabling to deal with, it's easier to manage a single switch for the entire server room. We have no more than 5 standalone servers outside the blade enclosure, but we do have 2 EqualLogic SANs, and I don't have enough uplinks on the 3020s to connect both the SANs and the standalone servers to them.

For simplicity's sake, I want everything on the single modular switch, where I can bond NICs and segregate the VMware management, SAN, and production networks via VLANs without trunking the existing 3020s to the new switch. I'm trying to eliminate bottlenecks in the network, and since we're a small company, there's no way I could get the money for 10GbE, Virtual Connect, or new Cisco blade switches. I could barely get the pass-through modules approved, and I think that for granular networking control it's a better solution to wire every available NIC to a single modular switch than to keep the 3020s and run an 8-port trunk/EtherChannel to the ProCurve modular.
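To make the VLAN plan concrete, here's roughly the kind of ProCurve-style config I have in mind for the new modular switch. All of the VLAN IDs, port letters, and ranges below are just placeholders, not our actual layout:

```
; Placeholder sketch only -- VLAN IDs and port ranges are made up
trunk A1-A2 trk1 lacp          ; bond a pair of production NICs from one host

vlan 10
   name "VMW-MGMT"
   untagged B1-B6              ; ESXi management NICs
   exit
vlan 20
   name "iSCSI-SAN"
   untagged B7-B18             ; EqualLogic controllers plus iSCSI HBA runs
   exit
vlan 30
   name "PRODUCTION"
   untagged Trk1               ; bonded production pair lands here
   exit
```

The idea is that every NIC coming out of the pass-through modules gets wired straight to a port on this one switch, so the VLAN membership and bonding are all managed in a single place instead of being split across the 3020s and the backbone.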
If anyone could give me some feedback on how this idea strikes them, or if I'm being a complete idiot, I'm all ears. Thank you in advance!