Array Setup and Networking
MPIO or iSCSI MC/S (MCS)

 
SOLVED
Occasional Advisor

MPIO or iSCSI MC/S (MCS)

Since we will be iSCSI only, do you think MC/S would be a better option to connect our File Server to a nimble volume?

12 Replies
New Member
Solution

Re: MPIO or iSCSI MC/S (MCS)

Sean,

I would think two one-gig connections multipathed to your Nimble could yield some massive I/O if needed.

Don't use MCS; use the Advanced properties of the Windows iSCSI Initiator and multipath away, my friend.

cheers,

-sTeve-
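The multipath-not-MCS approach above can be sketched with Microsoft's iSCSI and MPIO PowerShell cmdlets. This is a hedged sketch, not the poster's actual setup: the portal and initiator IP addresses are placeholders, and your target IQN will differ.

```powershell
# Install and enable MPIO for iSCSI devices (the feature install needs a reboot).
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the Nimble discovery portal (placeholder address).
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.10"

# Create one session per initiator NIC so MPIO has multiple paths to balance.
$iqn = (Get-IscsiTarget | Select-Object -First 1).NodeAddress
foreach ($nic in "10.0.0.21", "10.0.0.22") {
    Connect-IscsiTarget -NodeAddress $iqn `
        -TargetPortalAddress "10.0.0.10" `
        -InitiatorPortalAddress $nic `
        -IsMultipathEnabled $true -IsPersistent $true
}
```

The key detail is one `Connect-IscsiTarget` call per initiator NIC with `-IsMultipathEnabled $true`; that is what gives MPIO multiple sessions to work with instead of MC/S connections inside a single session.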

Honored Contributor

Re: MPIO or iSCSI MC/S (MCS)

Sean - I'd also refer you to the excellent Windows MPIO setup script that Adam Herbert wrote - it will set up all the MPIO settings and paths for you.

Check it out here:

Automate Windows iSCSI Connections

Cheers

Rich

Valued Contributor

Re: MPIO or iSCSI MC/S (MCS)

I too would use MPIO rather than MCS. You must, must make sure that you are multipathing correctly (using all paths). Your switching setup will determine the number of paths.

The Nimble monitoring pages (monitor interfaces, in particular - if you can distinguish between normal activity & normal + SQL activity) can help you from the array side, and good ol' task manager can of course give you a steer.
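On the Windows side, one way to confirm that every path is actually in use is to inspect the iSCSI sessions and the MPIO claim status. A sketch, assuming the standard in-box cmdlets; output will vary with your setup:

```powershell
# You should see one session per initiator NIC per target if multipathing
# is working; IsConnected and IsMultipathEnabled should both be True.
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, IsMultipathEnabled
Get-IscsiConnection | Select-Object InitiatorAddress, TargetAddress

# mpclaim summarizes MPIO-claimed disks and their load-balance policy.
mpclaim.exe -s -d
```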

Occasional Advisor

Re: MPIO or iSCSI MC/S (MCS)

Sean -

This is for our SQL servers specifically but may help you regarding your file servers if bandwidth and performance are critical... we have three 10GbE NIC ports (across different NIC cards) on both of my SQL cluster nodes, going to Arista switches. The connections themselves are split over two Arista switches for additional redundancy. One 10GbE port on each host goes to a different Arista for general network connectivity.

Each Nimble volume we present to our 2-node SQL cluster has one path per portal/NIC IP, so three paths per volume per host and six paths between both nodes per volume.

Once you set up MPIO... at least in our setup, MC/S is also set up for each discovery portal session/NIC IP (see screenshot).

Here is what I configured: under MPIO, make sure you have multiple paths and switch to Least Queue Depth as your load-balance policy, and under MC/S I would switch from Round Robin to Least Queue Depth as well.
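If you would rather not click through each device, the MPIO PowerShell module can set Least Queue Depth as the default policy. A sketch under one assumption: this sets the server-wide default applied to newly claimed devices, which is not exactly the same as the per-device changes described above.

```powershell
# LQD = Least Queue Depth; other values include RR (Round Robin)
# and FOO (Fail Over Only).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD

# Confirm the default policy took effect.
Get-MSDSMGlobalDefaultLoadBalancePolicy
```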

Here are some screenshots: three paths/sessions per volume. The second shot is the Devices tab; you should see three devices, one for each session. The third shot is the Devices tab / MPIO settings. Least Queue Depth should yield your best "load balanced" policy. I also changed the MC/S portal sessions to Least Queue Depth as well (under the MCS setting on the Sessions tab, first screenshot).

Hope this rambling helps in some way. If you have any questions let me know.

FYI, for SQL on Nimble we are seeing anywhere between 24,000 and 50,000 IOPS for sequential writes, compared to our old HP EVA SAN, which yielded about 2,000 - 9,000 IOPS. This setup may be a little overkill for most, but on our OLTP system (AX) redundancy and performance are absolutely critical.

LUN.png

MPIO-2.png

MPIO.png

JD

Occasional Advisor

Re: MPIO or iSCSI MC/S (MCS)

Unfortunately, the script is also connecting the volumes we use to house the VMs. However, I cannot get MPIO to function correctly outside of the script; it only sees a single session over the multiple paths.

Trusted Contributor

Re: MPIO or iSCSI MC/S (MCS)

Sean, you should limit access via initiator groups on the array. That will keep these connections from being made to the wrong volumes.


Occasional Advisor

Re: MPIO or iSCSI MC/S (MCS)

I have done this, but for some reason all of the datastores created to host VMs also show up, even though only the ESXi hosts have access. However, I was able to get only the datastore set up for SQLTest by filtering with: $_.Target -match "com.nimblestorage:sqltest".
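Expanded a little, the same kind of filter can drive the connection step so only the SQLTest target is logged in. A sketch using the in-box cmdlets rather than the script's own objects (which expose a `Target` property; the cmdlets use `NodeAddress`); the `-match` pattern is the one from the post, everything else is illustrative:

```powershell
# Connect only to targets whose IQN matches the SQLTest volume,
# leaving the VM datastore targets alone.
Get-IscsiTarget |
    Where-Object { $_.NodeAddress -match "com.nimblestorage:sqltest" } |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```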

Occasional Advisor

Re: MPIO or iSCSI MC/S (MCS)

Immediately after posting, I realized that I created these datastores using the Nimble vSphere plugin, which does not set up volume restrictions. I have added the ESXi group to the VM datastores.