Array Performance and Data Protection

Performance, these numbers seem wrong

Trusted Contributor

Performance, these numbers seem wrong

I have a CS460G-x2; it's got dual 10Gb connections, and both connections are active in the VM environment.

I was spinning up 4 VMs (ESXi 5.1) from a template.

The cache hit was 98%

160 MB/s

2k Read IOPS

3k Write IOPS

I'm fairly certain I should have been able to get better performance than this.

Not really sure how I'd go about diagnosing this; I'm kinda new to iSCSI.

Any thoughts?

Occasional Advisor

Re: Performance, these numbers seem wrong

Hi Justin,

You are right - those numbers are way below the system's capabilities (by over an order of magnitude), which suggests a configuration or test problem. The quickest way to diagnose this would be to call Nimble support.

Trusted Contributor

Re: Performance, these numbers seem wrong

I wouldn't say it was a "test" so much as just normal use, but I'm guessing it's either the network side or the VM side. We don't have jumbo frames on right now because we were told (initially) that it wouldn't make much of a difference at 10Gb.
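For what it's worth, whether jumbo frames are actually in effect end-to-end can be checked from the ESXi host shell; the IP below is a placeholder for your array's discovery address:

```shell
# Show the MTU on each vmkernel interface (9000 = jumbo frames enabled)
esxcli network ip interface list

# Test an end-to-end jumbo path to the array (10.0.0.10 is a placeholder;
# -d sets don't-fragment, -s 8972 = 9000 bytes minus IP/ICMP headers)
vmkping -d -s 8972 10.0.0.10
```

If the vmkping fails with the large size but works without -s, something in the path (vSwitch, physical switch, or array port) is not set for jumbo frames.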

Maybe I'll give them a call tomorrow.


Occasional Advisor

Re: Performance, these numbers seem wrong

Oh, OK. It's possible that the VMs are simply not generating a lot of IO, and as you add more VMs you'll notice system load/IOPS go up proportionally.

If in doubt, support can help check up on the configuration.

Trusted Contributor

Re: Performance, these numbers seem wrong

Justin - do you have two vmkernel ports defined for iSCSI, and bound together in the software iSCSI configuration? Additionally, are all volumes configured with the PSP_RR (round robin) policy? Please do contact our support team so we can take a look at the environment holistically and get to the bottom of this. Check out the following post as well for a quick checklist of VMware + Nimble best practices (the only minor correction to that post is that you only need to set iops=0 for the PSP_RR policy, not both iops and bytes).
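For reference, the PSP_RR settings described above can be checked and applied from the ESXi CLI; the naa. device ID below is a placeholder for an actual volume:

```shell
# Show the current path-selection policy for every device
esxcli storage nmp device list

# Set round robin on one volume (placeholder device ID)
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR

# Apply the iops=0 setting mentioned above
# (only --type=iops needs to be set, not bytes)
esxcli storage nmp psp roundrobin deviceconfig set \
  --device=naa.xxxxxxxx --type=iops --iops=0
```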


Trusted Contributor

Re: Performance, these numbers seem wrong

Ajay, it's a clone, so no, it should be going as quickly as possible.


The vmkernel is part of a dvSwitch.

There are two port groups, iSCSI-A and iSCSI-B, that point to two different NICs (vmnic2 and vmnic3).

Each of those has a VMkernel port (vmk1 and vmk2, respectively).

Those ports are bound to an iSCSI adapter (vmhba32 in this case); the path status is Active for both, and Static Discovery shows that there are 4 paths.

When I look at the datastore and do Manage Paths, I see that the path selection is set to RR and that the 4 paths are listed with a status of Active (I/O) for each.
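(The same setup can be double-checked from the CLI, using the vmhba32/vmk1/vmk2 names above:)

```shell
# Confirm vmk1 and vmk2 are bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter=vmhba32

# List every path with its state; all four should show up as active
esxcli storage core path list
```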

To throw one more wrench into the mix, we are using an HP C7000 chassis, which has Virtual Connect. So vmnic2 and vmnic3 are actually defined at 5Gbps each. The Virtual Connect has 4 physical 10Gbps connections, two on each module, making vmnic2 a path onto the 2x10Gbps uplinks but capped at 5Gbps.
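Given the 5Gbps per-NIC cap, a quick sanity check of the theoretical line-rate ceilings (simple arithmetic, ignoring TCP/iSCSI protocol overhead):

```shell
# Convert link speed to throughput: bits/s -> bytes/s -> MB/s
awk 'BEGIN { printf "5 Gbps  = %.0f MB/s\n", 5e9 / 8 / 1e6 }'
awk 'BEGIN { printf "10 Gbps = %.0f MB/s\n", 10e9 / 8 / 1e6 }'
```

So a single 5Gbps path tops out around 625 MB/s before overhead, which makes 160 MB/s roughly a quarter of one link's ceiling.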

Given all of that, I still feel that 160 MB/s seems slow. I was working with the network guy, who is going to try to set up some more monitoring, but he said he was only seeing traffic from one port, at 600 Mbps.

Still need to map this out a bit more to see what's going on.

This shows how it's set up, with the assumption of iSCSI presented to the VMs (not currently in use). The actual switch side of it is a tad more complex, in the sense that there are two 4500s in a VSS config with cables cross-connecting...


Trusted Contributor

Re: Performance, these numbers seem wrong

So I worked with Bryce LeBlanc this morning, and we found that the VM paths were not fully configured for round robin (this was checked once upon a time, so I guess I overlooked something). We sorted that out and were then able to utilize both paths. At this point I'm still getting only 160 MB/s each way (320 MB/s total), but based on some test results provided to me by Ben Hass, we found that my dual 5Gb links (we have an HP C7000 with Virtual Connect) are being reasonably utilized.

Here are the performance numbers provided:

100% Sequential Write - 256KB block size, queue depth 16: 630 MB/s

100% Sequential Read - 256KB block size, queue depth 16: 1,150 MB/s
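The thread doesn't say which tool produced those numbers, but a roughly equivalent workload can be generated with fio (assuming fio is available on a test VM; the target filename is a placeholder):

```shell
# 100% sequential write, 256KB blocks, queue depth 16 (placeholder target file)
fio --name=seqwrite --rw=write --bs=256k --iodepth=16 --ioengine=libaio \
    --direct=1 --size=4g --filename=/mnt/testvol/fio.dat

# Same profile, 100% sequential read
fio --name=seqread --rw=read --bs=256k --iodepth=16 --ioengine=libaio \
    --direct=1 --size=4g --filename=/mnt/testvol/fio.dat
```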

Doing some research, I've found that the majority of folks are seeing around 400 MB/s over 10Gb, so the 320 MB/s I'm seeing is pretty decent.