08-03-2010 11:29 AM
HP StorageWorks X9720 Network Storage System uplink to two Virtual Connect networks
Efren was looking for some Virtual Connect advice:
**********************************************************************
I have a customer who wants to uplink a new X9720 system to two different networks in their environment. To do this, I expect to configure the following three networks on the two VC modules:
- Customer Data (X9720 user NFS data) Network
- Customer Management Control Network (to access the X9720 Fusion Manager)
- X9720 Cluster and Management (iLO, OA) network (internal, 172.16.0.0, not uplinked)
We will connect ports X1–X6 on both VC modules to the customer's switches using 10Gb SR SFP+ transceivers.
The question is: the customer wants to create two 6x10Gb LACP uplink trunks, one per VC module. I was expecting that we would instead need two 1x10Gb uplinks for the Customer Management Control Network (say, on X1) and two 5x10Gb LACP trunks (X2–X6) for the Customer Data Network. The customer can do either; they just want to know which is correct and/or recommended by us.
*********************************************************************
Cullen and Vincent were up to the task:
**************************************************************
Vincent replied:
An LACP trunk can only go from one VC module to one external switch. If the data network and the management network are on two different core switches, you have to make two separate LACP trunks. If they are on the same core switch with VLANs, you can either do physical separation by creating two different networks in VC, each with its own uplinks, or put the two networks into a single Shared Uplink Set (SUS) with all the uplinks and use VLAN tagging. I don't think we would systematically recommend one over the other; it depends on the customer. For a management network, 10Gb is probably overkill, so bandwidth might be better utilized with the SUS, but if they truly utilize the full 60Gb, the management traffic could get "drowned" in the absence of QoS.
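For reference, the single-SUS option Vincent describes might look roughly like this in the Virtual Connect CLI. This is an illustrative sketch only: the network names, VLAN IDs, and enclosure/port addressing are assumptions, and exact command syntax varies by VC firmware version.

```
# Single Shared Uplink Set carrying both customer networks (illustrative)
add uplinkset Customer_SUS
add uplinkport enc0:1:X1 UplinkSet=Customer_SUS speed=auto
add uplinkport enc0:1:X2 UplinkSet=Customer_SUS speed=auto
# ...repeat for X3-X6, and mirror on the second module (enc0:2:X1-X6)
add network Customer_Data UplinkSet=Customer_SUS VLanID=100
add network Customer_Mgmt UplinkSet=Customer_SUS VLanID=200
```

With this layout, both networks share the full aggregated bandwidth, which is Vincent's point about better utilization, at the cost of no physical isolation between management and data traffic.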
Cullen added:
In my simplistic way of thinking of it, if all your traffic is going through the same upstream switch:
- Put the VLANs in the same SUS if you want to get the maximum total I/O over the minimum number of uplinks.
- Put the VLANs in separate uplink sets if it's critical that traffic on one VLAN does not cause delays on the second VLAN, and there is a reasonable risk of this occurring.
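Cullen's second option, physical separation, could be sketched along the same lines. Again, this is illustrative: names, VLAN IDs, and port assignments are assumptions, and the exact VC CLI syntax depends on firmware version.

```
# Dedicated vNet for management on X1, LACP trunk for data on X2-X6 (illustrative)
add network Customer_Mgmt
add uplinkport enc0:1:X1 Network=Customer_Mgmt speed=auto
add uplinkset Data_SUS
add uplinkport enc0:1:X2 UplinkSet=Data_SUS speed=auto
# ...repeat for X3-X6, and mirror the layout on the second module
add network Customer_Data UplinkSet=Data_SUS VLanID=100
```

Here the management network keeps a guaranteed 10Gb of its own, so heavy NFS data traffic can never starve access to the Fusion Manager, at the cost of one fewer port in the data trunk.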
**************************************************************************
Good info from Cullen and Vincent. Hope this helps in your configurations as well. Any other advice for Efren?
© Copyright 2021 Hewlett Packard Enterprise Development LP