Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
06-06-2013 10:58 AM
I've read through the VMware vSphere 5 on Nimble Storage Best Practices guide for configuring VMware standard vSwitches, but we've recently migrated all our networking to Virtual Distributed Switches (vDS) and I'd like to use them for iSCSI traffic as well. I'm wondering if anyone has recommendations for configuring the vDS uplinks, port groups, etc. We're running Cisco UCS B-series blades with two VLANs for iSCSI traffic, and we configured the port groups to use only one vmnic each so that iSCSI traffic for each VLAN stays on separate Fabric Interconnects.
Thanks,
Jacob
06-06-2013 02:17 PM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
You're actually forced to use only one NIC when adding it to the host's HBA. As part of that, you also need to make sure there is only the one NIC (uplink) active in the "Teaming and failover" settings for the port group. That's likely configured correctly if you've already got it working, because I believe you're prevented from binding it to the host HBA otherwise.
I haven't come across anything too special.
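For anyone following along, the binding described above can also be done (and verified) from the ESXi CLI. A minimal sketch, assuming the software iSCSI adapter is vmhba33 and the two vDS vmkernel ports are vmk1 and vmk2 (those names are placeholders; yours will differ):

```shell
# Each port group used for binding must have exactly one active uplink
# and no standby uplinks, or the bind will be rejected as non-compliant.

# List vmkernel interfaces to identify the iSCSI vmks
esxcli network ip interface list

# Bind each compliant vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the bindings took effect
esxcli iscsi networkportal list --adapter vmhba33
```

These commands must run on the ESXi host itself (or via vCLI against it), so treat this as a configuration sketch rather than something to copy blindly.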
11-09-2013 01:56 PM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
If using jumbo frames (9000 MTU), make sure you set it at the vDS level and on each iSCSI vmkernel adapter, and of course make sure it's enabled end-to-end!
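Seconding this: it's worth proving the 9000 MTU works end-to-end from each host before putting iSCSI traffic on it. A quick check from the ESXi shell (the vmk name and array IP below are placeholders for your environment):

```shell
# Confirm the vmkernel ports actually report MTU 9000
esxcli network ip interface list

# Send an 8972-byte payload with the don't-fragment bit set
# (8972 payload + 8 ICMP + 20 IP = 9000 on the wire). If any hop in
# the path is not jumbo-enabled, this fails while a default-size
# vmkping still succeeds.
vmkping -d -s 8972 -I vmk1 192.168.10.50
```

Running the test per vmkernel interface (the -I flag) catches the case where one VLAN/path is jumbo-clean and the other isn't.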
01-16-2014 09:47 AM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
Justin, that's actually not correct: I've got a single iSCSI SW adapter that's using both physical NICs on different VLANs and load-balancing across the two paths. About six months ago I heard the Nimble product marketing team had a best-practices and recommendations document that wasn't yet publicly available; I was hoping they'd decide to share it sometime soon...
01-16-2014 10:02 AM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
Hmm, when I tried to do that it complained at me. What version are you running?
Also, the document is out there, just not in an obvious location: log in to InfoSight, go to Downloads (at the top, near Logout), then select the Best Practices tab and you'll find it there.
I had the same problem locating those docs as well!
01-23-2014 09:43 AM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
How is this working out for you? I have two isolated Nexus switches, which would effectively be two VLANs, and I was wondering if this configuration is viable. One of the questions I had was about the SW initiator: do I use two or one? It sounds like you use one, and bind the vmknics from the different VLANs to the same SW initiator.
" I've got a single iSCSI SW Adapter that's using both the physical NICs on different VLANs and load balancing across the two paths. I had heard that the Nimble Product Marketing team had a document around best practices and recommendations that wasn't available publicly yet about 6 months ago, was hoping maybe they'd decide to share it sometime soon..."
06-17-2014 03:03 PM
Solution
With respect to best practices for vDS, Wen Yu has an excellent blog entry that covers our best practices here: http://www.supersonicdog.com/2013/04/24/lacp/
He also has some excellent entries on Cisco UCS best-practice configuration here: http://www.supersonicdog.com/2013/07/30/ucsandnimble/
Brian - two VLANs has always been a supported configuration, particularly for customers with two non-connected switches as you described. You would simply use one iSCSI SW initiator bound to the two vmknics (which map to two distinct vmnics, one per VLAN). You would expect to see half the number of paths compared to a similar config where the two switches are connected and carry a single VLAN. The Nimble OS 2.x release simplifies path management with a VMware-specific PSP that sets the optimal pathing policy on detected Nimble volumes.
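To sanity-check this on a host, you can confirm which PSP is claiming the Nimble volumes and count the paths per device from the ESXi CLI. A sketch (the naa.* device identifier is illustrative, not a real value):

```shell
# List NMP devices and the Path Selection Policy claiming each;
# Nimble volumes should show the expected PSP once the 2.x
# integration is in place.
esxcli storage nmp device list

# Show every path for one device; with two isolated VLANs you should
# see roughly half the paths of a fully meshed single-VLAN design.
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx
```

If the path count or PSP isn't what you expect, the port-group teaming settings and vmk bindings are the first things to re-check.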
Nimble OS 2.1 adds some VLAN-tagging enhancements that let you define a discovery IP on each of your defined VLANs, as opposed to the single discovery IP in Nimble OS 1.x.
I hope all this information helps.
Eddie
06-19-2014 11:36 AM
Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic
A similar config (two isolated networks on separate physical switches) is working well for us. We have a single SW HBA with two or four vmkernel NICs (10 GbE or 1 GbE), each mapped to a single physical NIC. We use separate subnets to make the connectivity limitations clear.
The one thing we've noticed is that the two networks are treated slightly differently by the Nimble array. If the non-discovery-IP network drops, MPIO handles it transparently to traffic. However, if the discovery-IP network drops and the standby controller's interface on that network comes up slightly (even sub-second) before the active controller's, it will trigger a controller failover. That doesn't appear to interrupt active traffic any more than any other controller failover, though. It's definitely something to watch during planned cabling changes, and it's worth considering how your switches bring ports/cards online after a reboot.