03-05-2014 02:55 PM
Re: iSCSI Latency is Not an Issue for Nimble Storage
Our latency is troublesome across the board. We have Brocade VDX 10 GbE switches connected to HP DL360 G8 servers and a Nimble 460G array, and we're still seeing up to 7000 ms latency from VMware virtual machines running SQL and the like. No idea what is going on with it. I'm finally in the process of moving VMs back to 1 vCPU instead of what was happening before (folks see a VM running slow, so they add resources thinking it will speed things up, but the extra vCPUs actually slow it down further with CPU ready/wait time), so I'm interested to see what happens once I'm done with all 176 machines.
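The CPU ready/wait effect described here can be sanity-checked from vCenter's realtime stats, where CPU ready is reported as milliseconds accumulated per 20-second sample. A minimal sketch for converting that to a percentage (the per-vCPU normalisation and the 20 s interval are assumptions about a default realtime chart, not anything from this thread):

```python
def cpu_ready_pct(ready_ms, interval_s=20, vcpus=1):
    """Convert a vCenter 'CPU ready' summation (ms per sample)
    into a percentage of the sample interval, per vCPU."""
    return ready_ms / (interval_s * 1000 * vcpus) * 100

# e.g. 1000 ms of ready time in a 20 s sample on a 1-vCPU VM = 5%,
# which is commonly treated as the point where guests feel sluggish
print(cpu_ready_pct(1000))          # 5.0
print(cpu_ready_pct(2000, vcpus=2)) # 5.0 spread across 2 vCPUs
```

This is why trimming oversized VMs back to 1 vCPU can help: fewer vCPUs that must be co-scheduled means less ready time per vCPU.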
03-06-2014 06:18 AM
Re: iSCSI Latency is Not an Issue for Nimble Storage
Andrew, my gut tells me that your issue is likely somewhere in your switch stack. Are you running standard MTU or jumbo frames? I often see this when jumbo frames are intended to be deployed but a single device somewhere along the network path is not configured correctly. You need to make sure that every interface is set to the same MTU (host drivers, switches, array).

If that's not it, what about flow control? Is flow control enabled (hopefully bi-directionally) on all of your switching elements? Have you done any analysis to see if you are getting excessive retransmits on any of your switch ports? How about CPU levels on the switching elements? A few other things: try turning off STP and unicast storm control on your switches if you can. The last thing (or maybe the first thing to check) is the quality of the cables. From time to time I see problems with some manufacturers' 10Gb cables (far fewer issues using fiber SFP+ adapters as provided with the Nimble 10Gb ports).
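One quick way to catch the "single device with the wrong MTU" failure mode is a don't-fragment ping at full jumbo payload from the host to the array. A sketch assuming Linux ping syntax and a hypothetical array data IP (192.168.1.50); substitute your own addresses:

```shell
# A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header)
# = 8972 bytes of payload. Send it with Don't Fragment set:
# if any hop along the path is still at MTU 1500, the ping fails.
payload=$((9000 - 20 - 8))
ping -M do -s "$payload" -c 3 192.168.1.50

# On an ESXi host the equivalent check is:
#   vmkping -d -s 8972 192.168.1.50
```

If the jumbo-sized ping fails but a plain `ping 192.168.1.50` works, some interface in the path is dropping or fragmenting large frames.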
Rocky, in general the network latency of a 10Gb fabric should be about 0.1 ms, and on a 1Gb network about 1 ms. If you compare the stats in the array UI with the same stats in your host monitoring tools, you should see roughly those deltas between the two reports when everything is working correctly. If you see more than 2x those numbers (so 0.2 ms on 10Gb and 2 ms on 1Gb), something is misconfigured or performing badly, in my experience.
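Those rules of thumb can be captured in a small helper. The thresholds below are just the 2x figures from this post, and the function name is made up for illustration:

```python
def network_latency_ok(host_ms, array_ms, link_gbps=10):
    """Compare host-reported vs array-reported latency.

    The delta between the two is the network's contribution.
    Expected fabric latency (per the post): ~0.1 ms at 10Gb,
    ~1 ms at 1Gb; more than 2x that suggests a misconfiguration.
    """
    expected = 0.1 if link_gbps >= 10 else 1.0
    delta = host_ms - array_ms
    return delta <= 2 * expected, delta

# Host sees 1.3 ms, array reports 1.25 ms on a 10Gb fabric:
# a 0.05 ms delta is well within the 0.2 ms budget.
ok, delta = network_latency_ok(1.3, 1.25)
```

The same comparison on a 1Gb link would tolerate up to a 2 ms delta before pointing the finger at the network.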
03-13-2014 07:30 AM
Re: iSCSI Latency is Not an Issue for Nimble Storage
Andrew, iSCSI latency is only about 70 microseconds (0.07 ms) higher than FC or FCoE, so something in the network is hitting you, as Mitch was trying to point out. Could you share your network diagram with us?
One more question: is the connectivity 10GbE end to end, or is it 1GbE from the VMware server into the Brocade and then 10GbE to the Nimble? Are you using the ISL link port on the switch to connect to the Nimble?
Please help us with these questions so that we have a better understanding of your environment and can truly help you out.
03-13-2014 11:42 PM
Re: iSCSI Latency is Not an Issue for Nimble Storage
Viral,
I don't think this is networking related. Write performance is fine, so we can also rule out CPU. Read latency is somewhat high, which can be explained by a low cache hit rate on random I/O (around 60-80%). It would be interesting to see the InfoSight statistics for this array. My guess, based on these screenshots, is that your cache utilisation is (too?) high.
My advice is to take this up with support, they can do a thorough scan of your environment.
Cheers,
Arne
03-18-2014 04:41 PM
Re: iSCSI Latency is Not an Issue for Nimble Storage
Your latency peaks just as your cache hit ratio takes a dive.
At 9am (far right) you're at about 75% cache hits, then it dives to about 45%.
Reads go off to the rotational drives at that point = explanation for the increased latency?
Or is it not so much a 1:1 ratio?
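It's generally not 1:1: a back-of-the-envelope weighted average shows why a hit-ratio drop moves latency disproportionately. The SSD and spinning-disk service times below are illustrative assumptions, not Nimble figures:

```python
def avg_read_latency_ms(hit_ratio, ssd_ms=0.2, hdd_ms=8.0):
    """Blended read latency: cache hits served from SSD,
    misses from rotational disk (assumed service times)."""
    return hit_ratio * ssd_ms + (1 - hit_ratio) * hdd_ms

# 75% hits -> 0.75*0.2 + 0.25*8.0 = 2.15 ms
# 45% hits -> 0.45*0.2 + 0.55*8.0 = 4.49 ms
print(avg_read_latency_ms(0.75))  # 2.15
print(avg_read_latency_ms(0.45))  # ~4.49
```

So a 30-point drop in hit ratio roughly doubles the blended latency here, because every extra miss costs the full rotational penalty.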
Hey Nimble folks: we need more command-line tools. I want something similar to Sun's ARC/L2ARC cache stats, which show a breakdown of cache hits/misses, and something like iostat and/or gstat from FreeBSD, which would show when my rotational disks are getting used and my SSDs are not.
Pretty graphs are nice, but give me black-and-green PuTTY and root any day of the week.
04-21-2014 02:36 PM
Re: iSCSI Latency
I wish we had jumbo frames running on the network switch stack, but unfortunately selling that idea and redoing our entire switch stack has been a rough go, so we're sticking with 1500 MTU. I'll have to check on flow control.