StoreVirtual Storage

VMware Volumes and Gateway

 
Peter J West
Frequent Advisor

VMware Volumes and Gateway

Hi,

 

Is it normal, when looking at the iSCSI connections for a volume assigned to an ESXi host, to see connections to only one specific node?

 

I've done a little reading online and it seems there's a trade-off between using volumes provisioned through VMware and presenting them directly to the Windows OS and then using the DSM for MPIO.

 

When doing the latter it seems that the host will establish separate paths to each of the nodes (at least on the same site), which to my mind should boost throughput.

 

Is it really the case that using a VMware-based volume is going to cause a bottleneck at the gateway?

 

If that's the case, I think our current configuration probably needs rationalising a little, because we currently use a combination of the above, with something like 40 volumes defined on a six-node setup. From what I've read, such a large number of volumes could indeed prove a bottleneck, and we'd be better off reducing the count to a few large volumes and then using VMware to slice the disks up into the required parts.

 

Any comments or views on this would be appreciated. We don't see massive performance problems on a day-to-day basis, but I did note today that copying data from one volume to another was relatively slow, with transfers peaking at around 18 MB/s.

 

Grateful for any feedback on this.  :)

 

Pete

 

22 REPLIES
Dirk Trilsbeek
Valued Contributor

Re: VMware Volumes and Gateway

As far as I know, all connections to a specific LUN are redirected to a single node, which acts as the gateway for that LUN. But as you probably have more than a single LUN, traffic to and from the other LUNs will be redirected to other nodes, so in a production environment this levels out, as long as you don't have some LUNs with lots of traffic and others with no traffic at all.

5y53ng
Regular Advisor

Re: VMware Volumes and Gateway

Are you using a physical appliance or a VSA?

 

Yes, this is normal in VMware. As Dirk pointed out, since you have multiple volumes, the load-balancing feature of the P4000 will attempt to balance the gateway connections across the nodes in the cluster. You can also implement MPIO in VMware, as well as configure the maximum number of IOPS per path.
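For reference, here's roughly what that looks like with ESXi 5.x esxcli syntax (the naa device ID below is just a placeholder; list your own devices with "esxcli storage nmp device list"):

# set the path selection policy for one device to round robin
esxcli storage nmp device set --device naa.6000eb3000000000 --psp VMW_PSP_RR
# switch paths after a set number of I/Os per path instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --type iops --iops 3 --device naa.6000eb3000000000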

 

Something else I have learned since I started working with the VSA is that you should limit the number of virtual machines per volume to four (per an HP support engineer's recommendation). I was able to confirm with IOmeter that this recommendation is effective.

 

Another trick to boost performance (VSA only) is to put a virtual machine on the same ESX/ESXi host as the gateway connection for its volume. If you do this, the I/O stays on the vSwitch and you'll see a noticeable boost in throughput. Unfortunately you have to set this up manually, but it helps if you have a VM that needs a little better performance.

 

By using MPIO and limiting the IOPS per path in VMware, along with observing the maximum number of virtual machines per volume, you can achieve decent performance.

 

The performance you are seeing when copying between volumes is about the same as what I see in a six-node cluster.

 

 

Peter J West
Frequent Advisor

Re: VMware Volumes and Gateway

Thanks for the comments.

 

We're not using the VSA - we have a total of six P4500 nodes.

 

Pete

 

Emilo
Trusted Contributor

Re: VMware Volumes and Gateway

Is it normal, when looking at the iSCSI connections for a volume assigned to an ESXi host, to see connections to only one specific node?

 

Yes, this is normal behavior: the node that holds the connection is the gateway connection. It serves as the "host" iSCSI connection and is responsible for obtaining the data from the other nodes in the cluster. This architecture has proved to be very efficient and scales well, as it distributes the load. In addition, the more volumes you have, the better the load balances across the cluster, and so the better the aggregate performance. See this document: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03178470&jumpid=em_alerts_us-us_Feb12_xbu_all_all_1639858_105818_storage_routine_006_1

 

You can, as has been mentioned, implement native multipathing (NMP) for VMware.

To enhance the availability of iSCSI storage, implement network multipathing across the two or more physical network adapters in each VMware host that are dedicated to iSCSI traffic. Follow the instructions provided in the VMware document, the iSCSI SAN Configuration Guide; refer to the section entitled “Setting Up Software iSCSI Initiators.”
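As a rough sketch of the port-binding step (assuming ESXi 5.x syntax; vmhba33, vmk1 and vmk2 are placeholders for your software iSCSI adapter and the VMkernel ports dedicated to iSCSI):

# bind each iSCSI-dedicated VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# confirm the bindings
esxcli iscsi networkportal list --adapter vmhba33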

 

 

I've done a little reading online and it seems there's a trade-off between using volumes provisioned through VMware and presenting them directly to the Windows OS and then using the DSM for MPIO.

 

With the P4000, the best-performing and most fault-tolerant iSCSI solution is implementing the MS initiator with the DSM for MPIO. To get the most benefit from this you need a minimum of two NICs on the host iSCSI server, and you should also have three or more P4000 storage nodes. This needs to be a hardware-based MS host or Hyper-V, as there are some incompatibilities with VMware.

 

Hope this helps.

 

 

Peter J West
Frequent Advisor

Re: VMware Volumes and Gateway

Thanks for the feedback - it's all very helpful.

 

As we're running VMware, it seems our best option is not to use MPIO within the guest OS, but instead just use VMDK volumes provisioned directly from within VMware.

 

I read an article recently from 5y53ng about configuring the round-robin path selection policy and reducing the default IOPS value from 1000. However, this in itself raises a few questions:

 

1. Is it necessary to run the esxcli command against every volume on every ESX host? If so, is there any way to script or automate the process?

 

2. Is there any way to automate the process for newly created volumes, or is there a default we can modify to ensure new volumes automatically have these values set?

 

Grateful for any feedback on this one.

 

Regards

 

Pete

 

5y53ng
Regular Advisor

Re: VMware Volumes and Gateway

Hi Peter,

 

You can script the process. In prior versions of ESX/ESXi the IOPS setting was not persistent, so the script was placed in /etc/rc.local so it would persist across reboots. I am running ESXi 5 now, but I still leave it in rc.local.

 

# fix broken RR path selection
# naa.6000eb* matches the P4000 LUNs; grep -v ':[0-9]$' skips partition entries
for i in `ls /vmfs/devices/disks/naa.6000eb* | grep -v ':[0-9]$'`; do
# only set the IOPS limit where the device already uses the round robin PSP
esxcli storage nmp psp roundrobin deviceconfig get --device ${i##*/} 2>/dev/null && \
esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=3 --device ${i##*/}
done
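(One caveat: on newer ESXi releases, 5.1 and later if I remember correctly, /etc/rc.local isn't meant to be edited directly any more; custom startup commands go in /etc/rc.local.d/local.sh instead.)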

 

Giving credit where credit is due, I got this script, which works very well, from "Matt" in the comments section of the article below:

 

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

Peter J West
Frequent Advisor

Re: VMware Volumes and Gateway

Haha, thanks.

 

It seems I already had this in place, as I've got this (similar) line in my rc.local file already:

 

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i; done

 

As far as I can make out it does the same thing, albeit with the volume list being parsed in a different way.

 

One thing I didn't realise is that this file is only executed when the ESX host first boots. Is there any way to schedule the job to run periodically? That would then also help with newly created volumes.

 

Pete

 

5y53ng
Regular Advisor

Re: VMware Volumes and Gateway

Take a look at the directory /vmfs/devices/disks/ and make sure you are not trying to apply the round-robin IOPS setting to any local disks or partitions. Grepping for naa.600 might not weed out the local disks and partitions. I think that is really the only difference between what I posted and what you already have in your rc.local file.

 

You might be able to schedule a task using the vCLI or PowerCLI along with the Task Scheduler in Windows, or maybe using cron in ESXi. I haven't played around with cron in the ESXi environment, so I can't say for sure.
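One other thought on question 2: rather than re-running a script to catch new volumes, you could add an SATP claim rule so that any newly discovered LeftHand LUN defaults to round robin with the IOPS limit. A sketch, assuming ESXi 5.x syntax and the LEFTHAND vendor string reported by the P4000 series (verify the vendor string against your own devices first):

# claim rule: any device reporting vendor LEFTHAND defaults to round robin with iops=1
esxcli storage nmp satp rule add --satp "VMW_SATP_DEFAULT_AA" --psp "VMW_PSP_RR" --psp-option "iops=1" --vendor "LEFTHAND" --description "HP P4000 RR, low IOPS limit"
# the rule only applies to newly discovered devices; existing ones still need the script or a reboot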

dcolpitts
Frequent Advisor

Re: VMware Volumes and Gateway

I realize this is an older thread, but given that the connections are made to a single node, we've found that when we have a node failure, node reboot, etc., the volumes (datastores) gatewayed by that node (and any VMs running on them) obviously go offline.

I'm searching for workarounds to keep those volumes online during a node reboot or node failure. Any thoughts?

dcc