05-18-2023 04:50 AM - last edited on 05-21-2023 09:28 PM by support_s
High latency issue - nimble storage - brocade fc switches
Dear all,
I would like to bring to your attention an issue that we have been experiencing since Monday, 8th May 2023. We have observed significant latency on all of our hosts, with delays reaching up to 300 seconds. This problem is affecting all six of our ESXi hosts and three Windows servers.
Our hosts are connected to a mirrored Nimble storage via Fibre Channel (FC). We have four Brocade FC switches in total, organized into two separate fabrics that are not interconnected. As a result, the fabrics cannot communicate with each other.
Despite our efforts, the HP support team has been unable to provide a solution thus far. We have involved Nimble, Brocade, and VMware, but each party claims that their respective components are functioning correctly, and they have been unable to identify the source of the problem. This situation is hindering our ability to work effectively on our systems.
Therefore, I am reaching out to this community in the hope that someone may have an idea or suggestion regarding the root cause of this issue. Any assistance or guidance would be greatly appreciated as we strive to resolve this problem and restore normal operations.
To start, the Fibre Channel (FC) team has confirmed that the SAN switches are not logging any errors that could potentially help us pinpoint the issue. Additionally, the Nimble support team has assured us that the internal latency of the Nimble storage system itself is within acceptable limits, and the system is operating healthily.
Here are some important details about our environment:
- We have a mirrored Nimble storage configuration (6.1.1.200-1020304-opt).
- We are utilizing four Brocade SAN switches with the following specifications:
- Fabric OS: v8.2.3b
- Type: 118.1
- Model: 650
- Manufacturer serial number: ***confidential info erased***
- Our infrastructure consists of six ESXi hosts (ProLiant DL380 Gen10) and three Windows server hosts
Based on the confirmation from the FC team and the Nimble team that the systems are functioning properly, we have undertaken the following measures in an attempt to resolve the problem:
- Daily sharing of current log files from the SAN switches and ESXi hosts with the support teams.
- Updating the ESXi hosts, SAN switches, vCenter, and Nimble storage to their latest versions.
- Investigating slow-drain devices, high values of tim_txcrd_z counters, and errors in the SAN log files (the switch commands we use for this are sketched right after this list).
- Replacing faulty FC cables and transceivers with errors or low power/voltage.
- Trying different connections between the switches within each fabric, including switching from multimode to single-mode fiber.
- Adding two additional lines (E-ports) between the switches in each fabric.
- Changing the line and ports for the Nimble sync (network connection).
- Performing a handover from one Nimble controller to the other.
- Migrating VMs to different hosts and LUNs.
- Switching between the redundant Nimble controllers.
- Shutting down the ESXi hosts one by one to observe any impact on latency.
- Rebooting the SAN switches.
- Temporarily shutting down servers (VMs and Windows hosts) with the highest IOPS.
- Monitoring the Nimble health, where the CPU utilization is around 50 to 60 percent, and the average read and write latencies are within acceptable limits (2.26 ms and 1.25 ms, respectively).
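For anyone who wants to follow along on the switch side, these are roughly the Fabric OS commands we run when we check for slow-drain symptoms and port errors (the port number below is just a placeholder; we repeat the per-port commands for the ISL and Nimble-facing ports):

# Error counter summary for all ports (CRC, enc out, disc c3, link failures, ...)
porterrshow

# Detailed counters for a single port; tim_txcrd_z is the time the port spent
# at zero buffer-to-buffer credits, the classic slow-drain indicator
portstatsshow 4

# Switch event log, in case something was recorded that the counters do not show
errdump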
To monitor the response time and latency, we have been using esxtop with the DAVG/cmd metric, IOmeter, and the log files from our ESXi hosts. These monitoring methods have allowed us to observe latency spikes of up to 300 seconds. However, for the majority of the time, we are observing latency ranging from 2 to 20 seconds.
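For reference, this is how we read those values on a host (standard ESXi tooling; the device name below is just one of ours, taken from the log excerpt further down):

# Interactive esxtop: press 'u' for the per-device view; DAVG/cmd is the device
# (fabric + array) latency, KAVG/cmd the kernel overhead, GAVG/cmd the total
esxtop

# Map a device name from the logs to its paths and LUN number
esxcli storage core path list -d eui.d78e4f372a9ae94e6c9ce9001e4dc482

# Multipathing (SATP/PSP) configuration of the same device
esxcli storage nmp device list -d eui.d78e4f372a9ae94e6c9ce9001e4dc482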
Additionally, our vCenter is generating event logs that may be relevant to this issue. The specific events logged by vCenter are as follows:
- Volume 61126d65-c752f006-a5cc-9440c918333c (vsphere-LUN30-RZ1-RZ2) can no longer be accessed due to connectivity issues. An attempt is made to perform a recovery. The result will be available soon.
- Access to volume 5e186201-114c2458-a1b3-9440c9183ae6 (vsphere-LUN00-RZ1-RZ2) was restored after connectivity issues.
In the log files of our ESXi hosts (vmkwarning.log), I can see that the high latencies go back to 04/11/2023.
2023-04-11T21:30:37.507Z cpu0:2097963)WARNING: ScsiDeviceIO: 1498: Device eui.d78e4f372a9ae94e6c9ce9001e4dc482 performance has deteriorated. I/O latency increased from average value of 4071 microseconds to 1799631 microseconds.
We have discovered that older logs on the host were automatically deleted, preventing us from identifying the high latency issue at an earlier stage. While we were aware of general performance issues, we did not realize the extent of the latency problem until it escalated rapidly on 8th May 2023. This revelation suggests that the high latency may have been affecting our systems for an extended period without our awareness.
vmkernel.log:
2023-05-10T16:06:44.435Z cpu2:9730466)HBX: 5760: Reclaiming HB at 4030464 on vol 'vsphere-LUN02-RZ1-RZ2' replayHostHB: 0 replayHostHBgen: 0 replayHostUUID: (00000000-00000000-0000-000000000000).
2023-05-10T16:06:44.436Z cpu2:9730466)HBX: 294: 'vsphere-LUN02-RZ1-RZ2': HB at offset 4030464 - Reclaimed heartbeat [Timeout]:
2023-05-10T16:06:44.436Z cpu2:9730466) [HB state abcdef02 offset 4030464 gen 4443 stampUS 2452223657109 uuid 6436560b-b270609a-f6dc-48df37a25880 jrnl <FB 50331649> drv 24.82 lockImpl 4 ip 172.20.13.213]
2023-05-10T16:06:44.458Z cpu21:2097914)NMP: nmp_ThrottleLogForDevice:3867: Cmd 0x89 (0x45d95204a1c8, 9821137) to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" on path "vmhba1:C0:T1:L30" Failed:
2023-05-10T16:06:44.458Z cpu21:2097914)NMP: nmp_ThrottleLogForDevice:3875: H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0. Act:NONE. cmdId.initiator=0x4308172ebf80 CmdSN 0x6181588
2023-05-10T16:06:44.458Z cpu21:2097914)ScsiDeviceIO: 4161: Cmd(0x45d95204a1c8) 0x89, CmdSN 0x6181588 from world 9821137 to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.458Z cpu13:2097919)ScsiDeviceIO: 4161: Cmd(0x45b970e2b888) 0x89, CmdSN 0x618158c from world 9821139 to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.458Z cpu1:2097913)ScsiDeviceIO: 4161: Cmd(0x45b97862f848) 0x89, CmdSN 0x618158d from world 9819807 to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.458Z cpu13:2097919)ScsiDeviceIO: 4161: Cmd(0x45b97da343c8) 0x89, CmdSN 0x618158a from world 9821134 to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.460Z cpu13:2097919)ScsiDeviceIO: 4161: Cmd(0x45b970ec3688) 0x89, CmdSN 0x618158f from world 9821134 to dev "eui.64f3f777d4d6828b6c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.462Z cpu21:2097914)NMP: nmp_ThrottleLogForDevice:3815: last error status from device eui.6ad5bbb3629a2ec66c9ce9001e4dc482 repeated 4 times
2023-05-10T16:06:44.462Z cpu21:2097914)NMP: nmp_ThrottleLogForDevice:3867: Cmd 0x89 (0x45d954bad288, 9821136) to dev "eui.6ad5bbb3629a2ec66c9ce9001e4dc482" on path "vmhba1:C0:T1:L2" Failed:
2023-05-10T16:06:44.462Z cpu21:2097914)NMP: nmp_ThrottleLogForDevice:3875: H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0. Act:NONE. cmdId.initiator=0x4308177c7780 CmdSN 0xc89af8
2023-05-10T16:06:44.462Z cpu21:2097914)ScsiDeviceIO: 4161: Cmd(0x45d954bad288) 0x89, CmdSN 0xc89af8 from world 9821136 to dev "eui.6ad5bbb3629a2ec66c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.462Z cpu20:2097920)ScsiDeviceIO: 4161: Cmd(0x45d954bfbe88) 0x89, CmdSN 0xc89af5 from world 9821140 to dev "eui.6ad5bbb3629a2ec66c9ce9001e4dc482" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xe 0x1d 0x0
2023-05-10T16:06:44.466Z cpu2:9730466)HBX: 5760: Reclaiming HB at 4030464 on vol 'vsphere-LUN00-RZ1-RZ2' replayHostHB: 0 replayHostHBgen: 0 replayHostUUID: (00000000-00000000-0000-000000000000).
2023-05-10T16:06:44.467Z cpu2:9730466)HBX: 294: 'vsphere-LUN00-RZ1-RZ2': HB at offset 4030464 - Reclaimed heartbeat [Timeout]: 2023-05-10T16:06:44.467Z cpu2:9730466) [HB state abcdef02 offset 4030464 gen 3189 stampUS 2452223688123 uuid 6436560b-b270609a-f6dc-48df37a25880 jrnl <FB 16777217> drv 24.82 lockImpl 4 ip 172.20.13.213]
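In case anyone wants to reproduce our analysis, these are the quick one-liners we use on a host to summarize such entries (busybox tools on the ESXi shell; the awk field positions assume exactly the message layout shown above):

# How often has the latency warning fired, and for which devices?
grep "performance has deteriorated" /var/log/vmkwarning.log | awk '{print $6}' | sort | uniq -c

# Failed SCSI commands per device in vmkernel.log
grep ScsiDeviceIO /var/log/vmkernel.log | grep "failed H:" | awk -F'"' '{print $2}' | sort | uniq -c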
I appreciate any suggestions or insights from the community regarding the next steps we can take to address the issue. Thank you in advance for your assistance!
05-18-2023 11:51 AM
Re: High latency issue - nimble storage - brocade fc switches
That's incredibly thorough! Not too many stones left unturned here.
Here's my only thought/question: What version of ESXi and what model Fibre Channel HBA? Some 2-3 years ago the qlnativefc driver+firmware was a tragicomedy. They (Marvell?) eventually got it sorted. It's been quite some time since all this happened so I'm guessing your shop is running newer code by now...
05-18-2023 12:33 PM - edited 05-18-2023 10:48 PM
Re: High latency issue - nimble storage - brocade fc switches
Yes, that is correct. Currently, we don't have many ideas on how to resolve the problem. I should mention that we are not using the latest version on the SAN switches. We have updated the Brocade switches to version 8.2.3b. However, there seems to be a slightly newer version available. We have already requested it from Brocade, but we still haven't received it. This is really frustrating.
The ESXi hosts are running VMware ESXi 7.0.3 Build-21424296 (Update 3, Patch 85). The installed HBA is the HPE SN1100Q 16Gb 2P FC HBA (Part Number 853011-001) with firmware version 2.00.01. The identical HBA card with the same firmware version is also installed in the Windows hosts.
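For completeness, this is what we run on a host to double-check which HBAs and which driver the FC ports are actually using, in case anyone wants to compare versions:

# FC adapter and port details as seen by the host
esxcli storage san fc list

# Installed qlnativefc driver package and its version
esxcli software vib list | grep -i qlnativefc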
We are getting minimal to no response from the support team at the moment. We have to chase them for a reply every day, and we feel they have left us alone with the problem. Despite repeated requests, we have not been able to arrange remote assistance so that someone can assess and troubleshoot the system; currently we are limited to sending log files to the support teams. On top of that, we are finding it difficult to get the different teams involved to collaborate effectively.
05-19-2023 07:21 AM
Re: High latency issue - nimble storage - brocade fc switches
Regarding the switches, are the error counts zero or very close to zero on all ports, not just the ports carrying your VMware traffic? I had one a few years back where a failing optic module was impacting the entire storage fabric. It sounds like both fabrics are affected so there'd need to be two modules failing in a similar fashion. Seems rather unlikely...
Beyond the ESXi + Windows hosts and the Nimble devices, are there any other devices on the storage network? Tape drives, backup appliances, other arrays? Other hosts which aren't impacted by this?
Is the Fibre Channel zoning soft (WWN-based) or hard (physical ports)?
Has any attempt been made to try to reproduce the problem on-demand? I'm thinking along the lines of stress tests and/or benchmarks running on multiple hosts. If you can reliably reproduce the problem then you can start flipping switches and turning knobs and checking for an immediate result.
Is this an environment that can be taken entirely offline at night or on weekends?
05-19-2023 07:53 AM
Re: High latency issue - nimble storage - brocade fc switches
Fibre Channel zoning side note:
WWN-based zoning has been hard since back in FOS v2(?) days.
The only time a zone is soft (enforced in firmware) is when a zone has both WWNs and ports. As long as a zone is either all WWN or all port, that zone is hard.
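A quick way for the original poster to confirm that no zone mixes WWN and port members is simply to dump the zoning database and the effective configuration and look at the members of each zone:

# Zoning database and the currently effective (active) configuration
zoneshow
cfgactvshow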
Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company
05-19-2023 09:08 AM
Re: High latency issue - nimble storage - brocade fc switches
Sheldon: interesting to note. With that in mind, will WWN-based zoning completely contain an ill-behaved host? My understanding (probably outdated) is that soft zones were enforced entirely by protocol, so a malfunctioning host could theoretically communicate with (and disrupt) devices outside of the zone. But it sounds like that's only a concern with mixed zones? Or no longer a concern at all?
The context here is whether or not OP might have something outside of his/her VMware+Nimble zones with a bad actor HBA.
05-19-2023 11:35 AM
Re: High latency issue - nimble storage - brocade fc switches
@GianlucaKern very sorry to hear your systems have been plagued with problems. Would you be able to reflect on what changed on or before Monday, 8th May 2023? Anything, however significant or insignificant it may seem, could help point in the right direction. How long were your storage arrays and ESXi servers happy and in a stable state before Monday, 8th May 2023?
Thanks for all the details and actions performed along with Nimble support. Did VMware support get involved at any point and suggest anything?
--m
05-19-2023 03:31 PM
Re: High latency issue - nimble storage - brocade fc switches
You mentioned that you have a mirrored Nimble storage configuration. Do you mean it is a Peer Persistence configuration?
For the latency observed, is it for read, write or both?
Thomas Lam - Global Storage Field CTO
I work for HPE

05-20-2023 08:51 AM - edited 05-20-2023 08:52 AM
Re: High latency issue - nimble storage - brocade fc switches
It's interesting to note that there are periods where the latencies fluctuate between 1 and 200 ms. These quiet phases typically last between 30 minutes and two hours. I can't see that we have less traffic on our systems during such a phase. However, the latencies eventually increase to a level that makes productive work on the systems impossible.
During one of these quiet phases, I conducted benchmarks using six IOmeter instances. The Nimble storage system was operating at its maximum capacity in terms of IOPS, but this did not cause a significant increase in latencies. When examining the statistics of our systems, there was no unusually high amount of traffic recorded during the periods of high latencies.
I cleared all statistics on the switches yesterday at 10 PM. I have now (about 18 hours later) checked for errors (using the "porterrshow" command), and there isn't a single error reported. I will continue monitoring the situation.
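For reference, the sequence on each switch was simply the following (run from an admin session; porterrshow then reports only what has accumulated since the clear):

# Clear the accumulated port statistics and error counters
statsclear

# Roughly 18 hours later: any errors since the clear?
porterrshow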
In the vSphere statistics, I can go back to January 1, 2022. From April 3, 2023, there is a clear trend of increasing latencies until it escalated. Before that date, the latencies were consistently low (between 2 to 10 ms). This indicates that the issue started on that specific date. The statistics from the ESXi hosts confirm the same pattern, with all hosts being consistently affected.
At that time, there were no changes made to our system, such as adding new hosts or similar modifications.
Unfortunately, only one ESXi host can be shut down at a time, because the systems need to stay online. However, we conducted individual shutdown tests on all hosts, including the Windows Server hosts. We also shut down our Veeam backup server and disconnected it from the switch, as well as the tape library. Unfortunately, there was no improvement in latencies. All hosts are affected, not just the ESXi hosts.
We have engaged with VMware support, and they have mentioned that the latest OS version is already installed. They believe that since all hosts are experiencing the issue, it cannot be attributed to the ESXi hosts.
We performed a Nimble update on March 20, 2023, to version 5.2.1.1000, and on April 28, 2023, to version 6.1.1.200. It is a Peer Persistence configuration.
05-21-2023 06:25 AM
Re: High latency issue - nimble storage - brocade fc switches
Have you tried rolling back any of your ESXi hosts to 7.0.3k? I feel like 7.0.3l is suspect.
05-29-2023 07:21 PM
Re: High latency issue - nimble storage - brocade fc switches
Since your environment is Peer Persistence, reads should be fast, as they are served locally by the array without traversing the replication links. For writes, a little extra latency is expected, as every write request has to traverse to the remote array for a two-phase commit before the host write is acknowledged.
Therefore, if the latency is only write-related, check the replication link latency first.
If the latency is on reads, it might be the CPU being busy... Just one question: before the testing was run, did you make any changes to the synchronously replicated volume collection, such as adding or removing volumes?
Thomas Lam - Global Storage Field CTO
I work for HPE

05-31-2023 05:09 AM
Re: High latency issue - nimble storage - brocade fc switches
Hello
We had a similar issue, although in a much simpler setup with only five ESXi hosts and two Brocade FC switches.
I can see you have already replaced some transceiver modules. Does "porterrshow" indicate any c3timeout tx/rx? We ended up replacing multiple SFP+ modules, which solved the problem.
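In case it is useful to the original poster, the readings that pointed us to the bad modules came from the per-port optic diagnostics (the port number is a placeholder):

# TX/RX power, temperature and current of the optic in a given port; RX or TX power
# well below that of comparable ports is a good hint that the module or cable is failing
sfpshow 4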
Regards,
Michael
06-06-2023 10:58 AM
Re: High latency issue - nimble storage - brocade fc switches
Is SIOC enabled on the datastores?
07-17-2023 09:05 AM
Re: High latency issue - nimble storage - brocade fc switches
The new Nimble OS release has also changed how it talks to the ESXi hosts: where it was once active/standby, it is now active/active on all paths to the controllers. I'm trying to work out myself whether this new behaviour is going to put undue load on our fibre switches.
From Nimble support: The 8 paths are based on our setup, 4 to one controller and 4 to the other.
Yes, starting with Nimble OS 6.1.x, hosts are able to send data across Fibre Channel connections to both the active and the standby controllers. From the host perspective, it's 8 paths to the storage. From the array perspective, connections to the active controller show up as active paths and connections to the standby controller show up as active non-optimized paths.
On the ESXi host, it will not show you which paths are optimized and which are not, so you have to dig to determine which one is which, but it also doesn't seem to matter, as traffic is being sent down all of them. So I have to wonder if what you are seeing is due to a doubling of the traffic over your fibre switches compared to what was there before. But this is just speculation, as I'm still trying to determine in our own environment whether this new Nimble OS version will increase our fibre switch load.
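In case anyone wants to compare on their own hosts, a rough way to count the paths per volume and see their reported state is below; how much optimized/non-optimized detail you get depends on which SATP/PSP the Nimble volumes are claimed by, so treat it as a sketch:

# One State line per path, grouped by device; with Nimble OS 6.1.x you should see
# eight paths per volume (four per controller) reported as active
esxcli storage core path list | grep -E "Device:|State:"

# Path selection policy and working paths per device
esxcli storage nmp device list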
07-17-2023 09:30 AM
Re: High latency issue - nimble storage - brocade fc switches
"NOTE: When you use FC in a VMware environment, it is a good practice to install the HPE Storage Connection Manager for VMware. HPE Storage Connection Manager automatically performs some advanced configurations, selects the optimal path and evenly balances each I/O request."
-- VMware Integration Guide, VMware Fibre Channel Configuration
Have you installed the SCM on each of the hosts?
Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company
07-17-2023 10:05 AM
Re: High latency issue - nimble storage - brocade fc switches
Yes, the Nimble connection software is on all hosts. As I said, I'm still reviewing the operation. The Nimble in question is an HF40 we newly purchased that came with 6.1.1.200; our older HF40 Nimble is not on that OS yet. That is why we questioned the active/active paths we saw, as we were not expecting this. Our older HF40 still shows active/standby with the older OS.
12-08-2023 02:14 AM
Re: High latency issue - nimble storage - brocade fc switches
Hello
@GianlucaKern I would be interested to know what the outcome of the issue was and whether you were able to find a solution.
We are currently facing latency issues with a similar setup (Nimble storage - Brocade SAN switches on FC - ESXi hosts).
On our side we are observing a kind of hourly spike pattern, but we are unable to understand what change could have caused it to appear.
A lot of testing at all levels has been done to try to find the source of the problem, and we have engaged both Nimble and VMware support.
Thanks
12-08-2023 04:09 AM - edited 12-08-2023 04:14 AM
Re: High latency issue - nimble storage - brocade fc switches
After many meetings with HPE support, we still have no idea what the cause was. Before the high latencies started, we had created a clone from a Nimble snapshot and connected the resulting volume to a Windows host.
We have been told by Nimble support that a cloned volume always references the original snapshot. Previously we assumed that the clone would become an independent volume. Support also told us that, even though the clone references the snapshot, this cannot lead to such high latencies.
But after we deleted this cloned volume, we no longer had high latencies. That is all I can tell you. I can't say for sure whether this was the cause; it is just a guess, and the clarification is still open in our case.
Apparently there will be an update in the future that will create an independent volume from a clone.
In your case, I would also check whether the hourly spikes coincide with the creation of the Nimble snapshots (Protection Schedules).
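A quick way to check that correlation from the host side is to bucket the latency warnings by hour of day and compare the busiest hours against the snapshot schedule (this assumes the warnings are still in the current, unrotated vmkwarning.log):

# Count of 'performance has deteriorated' warnings per hour of day
grep "performance has deteriorated" /var/log/vmkwarning.log | cut -c12-13 | sort | uniq -c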