Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

 
SOLVED
Jacob_Wilde
Advisor

Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

I've read through the VMware vSphere 5 on Nimble Storage Best Practices guide for configuring VMware standard vSwitches, but we've recently migrated all our networking to Virtual Distributed Switches (vDS) and I'd like to use them for iSCSI traffic as well. I'm wondering if anyone has recommendations for configuring the vDS uplinks, port groups etc. We're running Cisco UCS B-series blades, two VLANs for iSCSI traffic, and configured the port groups to only use one vmnic to keep iSCSI traffic for each VLAN on separate FI's.

Thanks,

Jacob

7 REPLIES
jrich52352
Trusted Contributor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

You're actually forced to use only one NIC when binding a VMkernel port to the host's iSCSI HBA. As part of that, you need to make sure only that one uplink is active in the "Teaming and failover" settings for the port group. It's likely configured correctly if you've already got it working, because I believe you're prevented from binding the port to the host HBA otherwise.
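For reference, here's a minimal sketch of what that binding looks like from the ESXi CLI. The adapter name (vmhba33) and vmk numbers are placeholders; substitute your own.

```shell
# List VMkernel ports currently bound to iSCSI adapters.
esxcli iscsi networkportal list

# Bind one VMkernel port per iSCSI VLAN to the software iSCSI adapter.
# The bind fails with a compliance error if the backing port group has
# more than one active uplink in its Teaming and failover policy.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Rescan the adapter so the new paths show up.
esxcli storage core adapter rescan --adapter=vmhba33
```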

I haven't come across anything too special.

vdiste57
Occasional Visitor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

If you're using jumbo frames (9000 MTU), make sure you set it at the vDS level and on the VMkernel adapters, and of course make sure it's enabled end-to-end!
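A quick way to sanity-check the end-to-end MTU from an ESXi host (the vmk number and target IP are placeholders for one of your vmkernel ports and one of your array's data interfaces):

```shell
# Confirm the VMkernel adapters report MTU 9000.
esxcli network ip interface list

# Send a do-not-fragment ping with an 8972-byte payload
# (9000 minus the 20-byte IP header and 8-byte ICMP header).
# If any hop in the path lacks jumbo frames, this will fail.
vmkping -d -s 8972 -I vmk1 192.168.10.50
```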

Jacob_Wilde
Advisor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

Justin, that's actually not correct. I've got a single iSCSI software adapter that's using both physical NICs on different VLANs and load balancing across the two paths. About six months ago I heard that the Nimble Product Marketing team had a best-practices document that wasn't available publicly yet; I was hoping they'd decide to share it sometime soon...

jrich52352
Trusted Contributor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

Hmm, when I tried to do that it complained at me. What version are you running?

Also, it is out there, just not in an obvious location: log in to InfoSight, go to Downloads (near Logout at the top), then go to the Best Practices tab and you'll find the docs there.

I had the same problem locating those docs as well!

bbeaulieu106
Occasional Visitor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

How is this working out for you? I have two isolated Nexus switches, which would effectively be two VLANs, and I was wondering if this configuration is viable. One of my questions was about the software initiator: do I use two or one? It sounds like you use one and bind the vmknics from the different VLANs to the same software initiator.

" I've got a single iSCSI SW Adapter that's using both the physical NICs on different VLANs and load balancing across the two paths. I had heard that the Nimble Product Marketing team had a document around best practices and recommendations that wasn't available publicly yet about 6 months ago, was hoping maybe they'd decide to share it sometime soon..."

etang40
Advisor
Solution

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

With respect to best practices for vDS, Wen Yu has an excellent blog entry that covers our best practices here: http://www.supersonicdog.com/2013/04/24/lacp/

He also has some excellent entries on Cisco UCS best-practice configuration here: http://www.supersonicdog.com/2013/07/30/ucsandnimble/

Brian - two VLANs have always been a supported configuration, particularly for customers with two non-connected switches as you described. You would simply use one iSCSI software initiator with port binding to the two vmks (which map to two distinct vmnics, one per VLAN). You would expect to see half the number of paths compared to a similar config with two connected switches and a single VLAN. The Nimble OS 2.x release simplifies path management with a VMware-specific PSP that sets the optimal pathing policy on detected Nimble volumes.
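A sketch of how to confirm the binding and the resulting path count from the ESXi CLI, assuming the setup above (the adapter name and device ID are placeholders):

```shell
# Show the VMkernel ports bound to the software iSCSI adapter;
# with one vmk per VLAN you should see two entries.
esxcli iscsi networkportal list --adapter=vmhba33

# List the paths for a given volume; in the two-isolated-switch
# layout each volume shows half the paths of a single-VLAN,
# cross-connected layout.
esxcli storage core path list --device=naa.xxxxxxxxxxxxxxxx
```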

Nimble OS 2.1 adds enhancements to VLAN tagging that will enable you to define a discovery IP on each of your defined VLANs, as opposed to the single discovery IP in Nimble OS 1.x.


I hope all this information helps.


Eddie

jmooneybethel65
Occasional Visitor

Re: Best Practices for VMware vSphere 5.1 using vDS for iSCSI traffic

A similar config (two isolated networks on separate physical switches) is working well for us. We have a single software HBA with two or four VMkernel NICs (10 GbE or 1 GbE), each mapped to a single physical NIC. We use separate subnets to make the connectivity limitations clear.

The one thing we've noticed is that the Nimble treats the two networks slightly differently. If the non-discovery-IP network drops, MPIO handles it transparently to traffic. However, if the discovery-IP network drops and the standby controller's interface on that network comes up slightly (even sub-second) before the active controller's, it triggers a controller failover. That doesn't appear to interrupt active traffic any more than any other controller failover, though. It's definitely something to watch when doing planned cabling changes, and possibly worth considering how the switches bring ports/cards online after a reboot.
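Before planned cabling work, one way to snapshot session and path state so you can confirm everything recovers afterwards (a rough sketch; the adapter name is a placeholder):

```shell
# Record the active iSCSI sessions before pulling cables.
esxcli iscsi session list --adapter=vmhba33 > /tmp/sessions-before.txt

# Count active paths; re-run after the change and compare the counts.
esxcli storage core path list | grep -c "State: active"
```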