Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > Networking considerations with 4*1Gb+1*10Gb in Hyp...
07-12-2016 02:54 AM
I have 3 physical nodes with 6 NICs each: four 1Gb NICs and two 10Gb NICs. However, I can use only one of the 10Gb NICs because there are not enough 10Gb ports on my switches. It is a specific, uncommon situation. There are two switches that differ slightly from each other: one has four 10Gb ports and the other has none. They will be stacked, so virtually I will have one switch with 72 1Gb ports and four 10Gb ports (all on one member), and if one switch fails the ports on the other will keep working.
Initially I planned to create:
- two 1Gb ports for VM traffic, with NIC teaming inside the VMs.
- two 1Gb ports for iSCSI traffic for the Hyper-V host (MPIO mode) to reach the VSA storage.
- two 10Gb ports for VSA traffic, teamed within the Hyper-V host and presented to the VSAs as one vNIC.
It is unfortunate that I cannot use 10Gb ports for all the iSCSI traffic, because the deployment guide states that hosts should use MPIO rather than NIC teaming, while the VSA itself does not do bonding. Now that I have only one 10Gb port per node, I would like to create some kind of backup channel for the VSAs using a 1Gb port:
- a NIC team of one 1Gb and one 10Gb port, teamed within the Hyper-V host and presented to the VSAs as one vNIC.
- two 1Gb ports for iSCSI traffic for the Hyper-V host (MPIO), also shared with VM traffic, plus one more 1Gb port for VMs, so VM traffic would have three NICs.
- Is it OK to mix 1Gb and 10Gb ports in a team for the VSA?
- What type of NIC teaming would best suit mixing 1Gb and 10Gb for VSA traffic (switch independent, LACP), and which load-balancing mode should be used?
- If VSA traffic turns out to be less active than I expect, could I use the 1Gb+10Gb team for traffic from VMs that do not support NIC teaming internally (pre-Windows Server 2012)?
- Are there better ways to utilise the NICs?
Solved! Go to Solution.
- Tags:
- Hyper-V
07-12-2016 04:19 AM
Re: Networking considerations with 4*1Gb+1*10Gb in Hyper-V VSA StoreVirtual
I should have mentioned what storage I have, so that the network bandwidth can be estimated.
Every node has 8 SAS 6G drives (expandable to 16) and some SSD caching, so each node delivers around 3500 IOPS and 200 MB/s at 64k in diskspd (formerly SQLIO). As planned, it should scale to at least 6000 IOPS and 400 MB/s per node by adding more SAS or SSD drives.
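As a rough sanity check on those figures (my own back-of-the-envelope arithmetic, not from the thread): steady-state throughput is approximately IOPS times block size, and the quoted numbers are self-consistent at the 64 KiB block size used in diskspd:

```python
# Rough check: throughput ~= IOPS x block size.
# Assumes the 64 KiB block size from the diskspd run above.

def throughput_mib_s(iops, block_kib=64):
    """Approximate MiB/s for a workload at a fixed block size."""
    return iops * block_kib / 1024

print(throughput_mib_s(3500))  # 218.75 -> in line with the ~200 MB/s measured
print(throughput_mib_s(6000))  # 375.0  -> close to the ~400 MB/s scaling target
```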
07-15-2016 10:01 AM - edited 07-15-2016 10:25 AM
Solution
Hi Roman,
That's an interesting situation. The VSA can have a teamed link to the network. The Hyper-V host initiator should use MPIO links, although Microsoft relaxed that requirement in 2012 R2. But you don't have enough 10Gb ports on the switches to make redundant connections for all of your servers, even if the initiators, targets, and applications share capacity on the 10Gb ports.
I cannot recommend bonding 10Gb with 1Gb for anything but an active/standby bond to provide basic redundancy, and even that is a problem. If you have 200MBps of transactional demand and the 10Gb link fails, it will be difficult to even manage the VSA because of massive congestion. The cluster will see it as a node failure and rebalance the iSCSI connections to the other nodes, which is not what you want. An active/standby bond should have enough capacity on the standby link to maintain operations; otherwise it can make a situation harder to troubleshoot and recover.
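To put a number on that congestion point (my arithmetic, not the poster's): a 1Gb standby link carries at most 125 MB/s at line rate, so 200 MB/s of demand oversubscribes it by 1.6x before protocol overhead is even counted:

```python
# Oversubscription of a 1Gb standby link if the 10Gb active leg fails.
demand_mb_s = 200             # transactional demand cited above, in MB/s
standby_mb_s = 1 * 1000 / 8   # a 1Gb/s link moves ~125 MB/s at line rate

oversubscription = demand_mb_s / standby_mb_s
print(f"{oversubscription:.1f}x")  # traffic queues up and iSCSI sessions stall
```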
If you will require 400MBps of throughput, the best solution I see is to add 10Gb switching capacity and run 2x10Gb bonds from each server. At 200 or 300 MBps you could probably make it work by teaming 4x1Gb ports. Don't try to mix 10Gb and 1Gb in an active bond with iSCSI.
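For comparison, the raw headroom of the two options mentioned (line-rate figures only; real iSCSI throughput lands somewhat lower once TCP/iSCSI overhead is subtracted):

```python
def team_capacity_mb_s(ports, gbps_each):
    """Raw aggregate capacity of a team in MB/s, ignoring protocol overhead."""
    return ports * gbps_each * 1000 / 8

required = 400                        # MB/s target from the thread
two_10g = team_capacity_mb_s(2, 10)   # 2500 MB/s -- ample headroom
four_1g = team_capacity_mb_s(4, 1)    # 500 MB/s  -- only 1.25x the target
print(two_10g / required, four_1g / required)
```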
edit: clarification after reading the problem again