Array Setup and Networking

Array and iSCSI Setup Queries

 
SOLVED
Zinc666
Occasional Contributor

Array and iSCSI Setup Queries

Hi All,

We're looking to make the jump to a CS220 array and I've got a couple of queries on its setup that I'd like to run by the community, if I may:

iSCSI and MPIO

We'll be looking to attach the array to our iSCSI network using four 1GbE MPIO connections from each controller and quad NICs in our clustered Hyper-V hosts. We'll be using two new HP 2920-48G switches in a stacked configuration to join it all together.

I've already planned that data volumes (SQL, Exchange etc., set with the appropriate profiles) will be guest-connected within the VMs using dedicated iSCSI connections through additional quad NICs in each host, set up as virtual switches in the Hyper-V parent. I've read previously that if you share a NIC with the parent in Hyper-V, performance can suffer in the guests, plus we've got plenty of NICs at hand, so why not?

Previously I've been of the mind that each MPIO connection should be within its own subnet, with each subnet set up as a VLAN on the switches themselves; is this still the right way of doing things, as some Nimble documentation I've seen shows the array set up with multiple data IPs all within the same subnet? I've also checked out part 1 of the Nimble OS 2.0 posts dealing with manual vs automatic networking, and if I understand correctly separate VTIPs are needed for each MPIO subnet used, correct? Also, when using automatic networking, are discovery IP addresses still a consideration or do VTIPs replace them completely?


VM boot VHDs and Hyper-V CSVs

I've checked the best practices guide for Hyper-V and it recommends dedicating a CSV volume to each VM boot VHD to take advantage of Zero-Copy Cloning and minimise the space taken up. I will be migrating existing VM VHDs from our current setup to the Nimble and so have no plans to use Zero-Copy Cloning anyway. As such I'm planning on creating a single CSV to store all our boot VHDs - I'm guessing this arrangement won't introduce any performance penalties, will it?

Of course, perfecting the setup will be much easier once I'm in possession of the new kit, but I'm trying to get a basic idea going in my head now of how things should be done, so I'd appreciate any advice anyone can offer.

Many thanks in advance.

7 REPLIES
tmoore106
Trusted Contributor

Re: Array and iSCSI setup queries

Ross

There is no "requirement" to use separate VLANs for MPIO. I don't have a single customer doing that with Nimble. Come to think of it... I don't know anybody doing that. I would personally suggest following the BPG for Hyper-V from Nimble, using one subnet, and leveraging the Windows toolkit for MPIO. If you don't have the BPG, just ask your local friendly Nimble SE or feel free to email me and I will get it to you.
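For anyone wanting a feel for the generic Windows-side plumbing that sits underneath the toolkit, here's a rough sketch (not Nimble-specific; the discovery address is just a placeholder) of enabling MPIO and pointing the Microsoft iSCSI initiator at a single-subnet discovery IP:

```powershell
# Enable the MPIO feature (Windows Server) and let the Microsoft DSM claim iSCSI devices
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Round-robin is a common default load-balance policy for iSCSI MPIO
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Point the initiator at the array's discovery address (placeholder IP)
New-IscsiTargetPortal -TargetPortalAddress "192.168.50.100"
```

The Nimble toolkit then takes care of creating and managing the individual connections, so treat the above as the generic Windows prerequisite rather than the Nimble-specific piece.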

In regards to the boot VHDs, I don't see a problem with it, and I don't recall MS stating you shouldn't do that. Hope this helps.

Zinc666
Occasional Contributor

Re: Array and iSCSI setup queries

Thanks Todd. I see a lot of people using separate subnets (usually with VLANs) for iSCSI traffic on other vendor arrays, so maybe it's a Nimble thing then?

I am struggling to understand the IP addressing on the Hyper-V hosts when using MPIO and a single subnet, though. In the Nimble OS 2.0 Part 1 Manual vs Automatic Networking post it shows an array with four iSCSI connections using 192.168.50.101-104 but only a single connection, 192.168.50.11, on the host. How can this use the full multipathed 4 x 1GbE bandwidth from the host to the switches and then on to the array if there's only one NIC? Are the NICs in a team? Is it a 10GbE connection? Am I being stupid?

Some Nimble documentation does mention the use of multiple subnets but doesn't give any examples of such a setup. I have checked the BPG for Hyper-V (http://www.nimblestorage.com/docs/downloads/bpg_nimble_storage_hyperv.pdf) and whilst it briefly mentions MPIO it doesn't go into any detail on the IP addressing schemes used.

One thing I do like, and am used to when using separate subnets, is that it makes the IPs easier to set up, understand and track, with the last octet of each connection matching that of the machine's LAN IP. For example, I've got a Hyper-V host with its management IP set to 10.0.0.37 and its four iSCSI connections set to 172.16.1.37, 172.16.2.37, 172.16.3.37 and 172.16.4.37. All kit participating in the iSCSI network follows this scheme, with the subnets split over the two switches, perhaps with the odd subnets on one and the evens on the other. I'm just trying to understand how this should all be done properly on a Nimble array.
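To make the scheme concrete, here's a rough sketch of how the host side of that addressing might be applied in PowerShell (the interface aliases are made up for the example; the addresses follow the IPs above):

```powershell
# Hypothetical NIC aliases for the four dedicated iSCSI ports on host 10.0.0.37
$nics = @{
    "iSCSI-1" = "172.16.1.37"
    "iSCSI-2" = "172.16.2.37"
    "iSCSI-3" = "172.16.3.37"
    "iSCSI-4" = "172.16.4.37"
}

foreach ($alias in $nics.Keys) {
    # Assign the per-subnet address to each dedicated iSCSI NIC
    New-NetIPAddress -InterfaceAlias $alias -IPAddress $nics[$alias] -PrefixLength 24

    # Keep the iSCSI NICs out of DNS to avoid stray name registrations
    Set-DnsClient -InterfaceAlias $alias -RegisterThisConnection $false
}
```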

As for the boot VHD query, it was more to confirm the setup from the Nimble point of view, as this is a perfectly legitimate setup in Hyper-V.

Thanks again.

Nick_Dyer
Honored Contributor

Re: Array and iSCSI setup queries

Hi Ross,

In my blog post the example host has two NICs - 50.11 and 50.12. Each server NIC is bound to a different switch for HA. These NICs are not in a team.

On the Nimble array we have 4x 1GbE ports serving data in a single subnet, with two Ethernet ports going to one switch and two to the other. We then have Nimble Connection Manager for Windows (blog post here), which creates and manages the multiple paths on the fly for you across all server and storage array NICs. You would install this on each Hyper-V host and let that tool manage the MPIO for you.

I concur with Todd - it's incredibly rare for customers to use multiple subnets and VLANs for iSCSI data traffic. The only time you would use this is in XenServer, and that's because it's mandatory for the way XenServer works. At most, customers use two subnets for iSCSI data traffic where the iSCSI switches are not stacked.

Finally, your description of simplicity and separation of the network is actually a feature we have in the array called IP Address Zones, which allows you to define all odd IP addresses to one switch (typically switch 1) and all even IP addresses to switch 2. I think there's a blog post coming out about that shortly.

At the end of the day my mantra is KISS - Keep It Simple, Stupid. A single iSCSI subnet makes things a lot easier to deploy, configure and manage on a daily basis rather than being concerned about a subnet per NIC/switch port etc.

Nick Dyer
twitter: @nick_dyer_
Zinc666
Occasional Contributor

Re: Array and iSCSI setup queries

Thanks Nick.

I've had it in mind that we'll always be running 8 x 1GbE iSCSI connections from our hosts (four for host access, four for guests), which has perhaps made it difficult for me to imagine anything else. Our reseller has recommended our switches be stacked, so we'll forget about two subnets, and I do agree that keeping things as simple as possible is the way to go.

So, to sum up, we'll be running two connections per host for the VHD boot images stored in a CSV, and two connections per host for the Hyper-V virtual switches, with VM guests connecting directly to the array for SQL, Exchange volumes, etc.?
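For the guest-facing half of that, a minimal sketch of dedicating the two NICs to Hyper-V virtual switches might look like this (the NIC, switch and VM names are placeholders), with the guests then running their own iSCSI initiators over those switches:

```powershell
# Create external virtual switches bound to the NICs reserved for guest iSCSI.
# -AllowManagementOS $false keeps the parent partition off these NICs.
New-VMSwitch -Name "vSwitch-iSCSI-A" -NetAdapterName "GuestISCSI-1" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch-iSCSI-B" -NetAdapterName "GuestISCSI-2" -AllowManagementOS $false

# Give a guest a vNIC on each iSCSI switch; the guest then connects to the array itself
Add-VMNetworkAdapter -VMName "SQL01" -SwitchName "vSwitch-iSCSI-A"
Add-VMNetworkAdapter -VMName "SQL01" -SwitchName "vSwitch-iSCSI-B"
```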

Thanks again.

Nick_Dyer
Honored Contributor
Solution

Re: Array and iSCSI setup queries

Great to hear, Ross. What you'll end up doing is running all four connections active with MPIO for all volumes presented to the host, which includes your CSV boot volumes AND data drives. The Nimble Connection Manager software will ensure the right number of paths are active for each volume and will automatically create new connections for you (on the fly) if it spots it can get better latency and/or lower disk queues by doing so.

The cool thing is NCM does all the heavy lifting and configuration work for you, which has made life a LOT easier - previously we had to run PowerShell scripts or manually bind connections for each volume, which can be time-consuming and frustrating if a mistake is made!
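For context on what NCM saves you from, a manual per-path binding might look something like the sketch below (the IQN is hypothetical and the addresses are borrowed from the single-subnet example earlier in the thread), repeated for every volume and every initiator/target pairing:

```powershell
# Hypothetical target IQN for one volume
$iqn = "iqn.2007-11.com.nimblestorage:example-volume"

# One explicit connection per initiator-NIC/array-port pairing
Connect-IscsiTarget -NodeAddress $iqn `
    -InitiatorPortalAddress "192.168.50.11" `
    -TargetPortalAddress "192.168.50.101" `
    -IsMultipathEnabled $true -IsPersistent $true

Connect-IscsiTarget -NodeAddress $iqn `
    -InitiatorPortalAddress "192.168.50.12" `
    -TargetPortalAddress "192.168.50.102" `
    -IsMultipathEnabled $true -IsPersistent $true
```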

Nick Dyer
twitter: @nick_dyer_
Not applicable

Re: Array and iSCSI Setup Queries

Another item to note: NCM for Windows will only make connections from Windows host initiator ports to Nimble array target portal IPs which are in the same subnet. You no longer have to worry about accidentally configuring connections and ending up with performance-robbing cross-switch traffic. You can also easily verify that all of your target volume connections are balanced across the host initiator ports, and go to target portal IPs in the same subnet, in the 'Properties' dialog on NCM's Nimble Volumes tab.
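If you'd rather check from PowerShell than from the NCM dialog, a quick sketch of a similar sanity check using the standard Microsoft iSCSI cmdlets (nothing Nimble-specific) would be:

```powershell
# Count active iSCSI connections per host initiator port to spot any imbalance
Get-IscsiSession |
    Get-IscsiConnection |
    Group-Object -Property InitiatorAddress |
    Sort-Object -Property Name |
    Format-Table -Property Name, Count -AutoSize
```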

Zinc666
Occasional Contributor

Re: Array and iSCSI setup queries

Hi Nick,

I'm pleased to say that I've just completed the roll-out of the CS220 and we're very impressed so far. Went with a single subnet in the end with an even/odd IP address zone; much easier, like you say.

I've just read your comment about presenting all network connections to the hosts for both CSV boot volumes and data drives, though; I'm guessing this would utilise pass-through disks to the VMs for the data? I'm interested that you've mentioned this, as pass-throughs aren't mentioned in the Hyper-V best practices doc, which has the guests connecting directly over iSCSI instead. This is what we've done.

Pass-through vs guest connection is probably an apples/oranges thing; I'd probably use pass-throughs if our NIC count was low, giving both hosts and VMs an equal shot at the iSCSI data over all possible paths, but we are able to dedicate four connections to hosts and four to VMs if we wanted to. We currently run two and two for now, but I'm keeping an eye out to see if we ever get close to saturating the bandwidth. I'd be eager to hear if you have any other thoughts on this?
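For comparison, the host-side steps for the pass-through route are pretty minimal; a rough sketch (the disk number and VM name are made up) would be:

```powershell
# A pass-through disk must be offline in the parent before Hyper-V can hand it to a guest
Set-Disk -Number 4 -IsOffline $true

# Attach the physical (host-connected) iSCSI disk directly to the VM's SCSI controller
Add-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -DiskNumber 4
```

With the guest-connected approach the VM runs its own initiator over the dedicated virtual switches instead, which is what the best practices doc describes.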

Thanks.