Performance, these numbers seem wrong
06-04-2013 04:28 PM
I have a CS460G-x2; it's got dual 10Gb connections, both active in the VM environment.
I was spinning up 4 VMs (ESXi 5.1) from a template.
The cache hit rate was 98%
160 MB/s
2k read IOPS
3k write IOPS
I'm fairly certain I should have been able to get better performance than this.
Not really sure how I'd go about diagnosing this; I'm kind of new to iSCSI.
Any thoughts?
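As a rough sanity check on the numbers above (a sketch only — the ~32 KiB average I/O size is an assumption, not stated in the post):

```python
# Back-of-the-envelope check: combined IOPS times an assumed average I/O size
# should roughly match the observed throughput.
read_iops = 2_000
write_iops = 3_000
io_size_kib = 32  # assumption for illustration; not stated in the thread

throughput_mib_s = (read_iops + write_iops) * io_size_kib / 1024
print(f"~{throughput_mib_s:.0f} MiB/s")  # ~156 MiB/s, close to the observed 160 MB/s
```

If the arithmetic holds, the array is keeping up with the I/O it is being given, and the question becomes why the offered load itself is so low.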
Tags: performance, VMware
06-04-2013 04:36 PM
Re: Performance, these numbers seem wrong
Hi Justin,
You are right - those numbers are way below the system's capabilities (by over an order of magnitude), which may be due to a configuration or test problem. The quickest way to diagnose this would be to call Nimble support.
06-04-2013 04:47 PM
Re: Performance, these numbers seem wrong
I wouldn't say it was a "test" so much as just normal use, but I'm guessing it's either the network side or the VM side. We don't have jumbo frames on right now because we were told (initially) that it wouldn't make much of a difference for 10Gb.
Maybe I'll give them a call tomorrow.
Thanks
06-04-2013 04:50 PM
Re: Performance, these numbers seem wrong
Oh, OK. It's possible that the VMs are simply not generating a lot of I/O, and as you add more VMs you'll notice system load/IOPS go up proportionally.
If in doubt, support can help check up on the configuration.
06-04-2013 11:40 PM
Re: Performance, these numbers seem wrong
Justin - do you have two vmkernel ports defined for iSCSI, and bound together in the sw iSCSI configuration? Additionally, are all volumes configured with the PSP_RR (round robin) policy? Please do contact our support team so we can take a look at the environment holistically and get to the bottom of this. Check out the following post as well for a quick checklist of VMware + Nimble best practices (the only minor correction to that post is that you only need to set iops=0 for the PSP_RR policy, not both iops & bytes).
-wen
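For reference, that checklist roughly maps to the following ESXi 5.x esxcli commands — a sketch only: the adapter name (vmhba32), vmkernel port names, and the device ID are example values, not taken from any particular host.

```python
# Hypothetical helper that prints (rather than runs) the esxcli commands for
# iSCSI port binding and round-robin path policy; they would be run on the
# ESXi host itself.
DEVICE = "naa.example"  # placeholder; substitute the volume's actual device ID

commands = [
    # Bind each iSCSI vmkernel port to the software iSCSI adapter
    "esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk1",
    "esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk2",
    # Set the round-robin path selection policy on the volume
    f"esxcli storage nmp device set --device={DEVICE} --psp=VMW_PSP_RR",
    # Switch paths based on IOPS, with iops=0 (per the correction above)
    f"esxcli storage nmp psp roundrobin deviceconfig set --device={DEVICE} --type=iops --iops=0",
]
for cmd in commands:
    print(cmd)
```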
06-05-2013 11:17 AM
Re: Performance, these numbers seem wrong
Ajay, it's a clone, so no, it should be going as quickly as possible.
Wen,
The vmkernel is part of a DVSwitch.
There are two port groups, ISCSI-A and ISCSI-B, that point to two different NICs (vmnic2 and vmnic3).
Those each have a VMkernel port (vmk1 and vmk2 respectively).
Those ports are bound to an iSCSI adapter (vmhba32 in this case); the path status is Active for both, and Static Discovery shows that there are 4 paths.
When I look at the datastore and do Manage Paths, I see that the path selection is set to RR and that it has the 4 paths listed with a status of Active (I/O) for each.
To throw one more wrench into the mix, we are using an HP C7000 chassis, which has Virtual Connect. So vmnic2 and 3 are actually defined at 5Gbps. The Virtual Connect has 4 physical 10Gbps connections, two on each module, making vmnic2 the path to the 2x10Gbps connections, allowed to take 5Gbps.
Given all of that, I still feel that 160MB/s seems slow. I was working with the network guy, who is going to try to set up some more monitoring, but he said he was only seeing traffic from one port at 600 Mbps.
Still need to map this out a bit more to see what's going on.
This shows how it's set up, with the assumption of iSCSI presented to the VMs (not currently in use); the actual switch side of it is a tad more complex, in the sense that there are two 4500s in a VSS config with cables cross-connecting...
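To put those link speeds in perspective — simple unit conversion, using decimal units and ignoring protocol overhead:

```python
# Theoretical per-NIC ceiling vs. the traffic the network guy observed.
nic_gbps = 5           # each Virtual Connect NIC is capped at 5 Gb/s
ceiling_mb_s = nic_gbps * 1000 / 8
observed_mbps = 600    # traffic seen on one port, in Mb/s
observed_mb_s = observed_mbps / 8

print(f"per-NIC ceiling: {ceiling_mb_s:.0f} MB/s")   # 625 MB/s
print(f"observed:        {observed_mb_s:.0f} MB/s")  # 75 MB/s on that port
```

So a single 5 Gb/s path is nowhere near saturated, and seeing traffic on only one port is consistent with the round-robin paths not actually sharing the load.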
07-23-2013 10:25 AM
Solution
So I worked with Bryce LeBlanc this morning and we found that the VM paths were not fully configured for round robin (however, this was checked once upon a time; guess I overlooked something), so we sorted that out and found that we were able to then utilize both paths. At this point I'm still getting only 160MB/s each way (320MB/s total), but based on some test results provided to me by Ben Hass, we found that my dual 5Gb (we have an HP C7000 with Virtual Connect) is being reasonably utilized.
Here are the performance numbers provided:
100% Sequential Write - 256KB block size, Queue Depth 16: 630 MB/s
100% Sequential Read - 256KB block size, Queue Depth 16: 1,150 MB/s
Doing some research, I've found that the majority of folks are seeing around 400MB/s over 10Gb, so the 320 I'm seeing is pretty decent.
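For context, converting that final throughput back to line rate (decimal units, no protocol overhead accounted for):

```python
# 320 MB/s aggregate across two ~5 Gb/s paths
total_mb_s = 320
total_gbps = total_mb_s * 8 / 1000
per_path_gbps = total_gbps / 2
print(f"{total_gbps:.2f} Gb/s total, ~{per_path_gbps:.2f} Gb/s per path")
# 2.56 Gb/s total, ~1.28 Gb/s per path
```

That works out to roughly a quarter of each 5 Gb/s cap, so there is still link headroom left once both paths are actually in use.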