01-23-2019 05:26 PM
Virtual Connect Latency to EMC SAN
We've been seeing high disk latency between c7000 blade enclosures hosting BL460c blades and an EMC SAN, using hardware iSCSI through Virtual Connect on VMware ESXi. The issue appears on ESXi 6.0, 6.5, and 6.7, so we've eliminated the VMware version as the problem. The latency goes away between a DL360 and the EMC, so it seems isolated to c-Class using Virtual Connect. Wondering if anyone has seen this or has ideas on how to fix it?
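For anyone wanting to reproduce the observation, one common way to watch this kind of latency on an affected ESXi host (assuming shell access; this is a general technique, not specific to our setup) is esxtop's disk device view, which splits latency per LUN into device (DAVG/cmd) and kernel (KAVG/cmd) components:
# Interactive: press 'u' for the disk device view and watch DAVG/cmd and KAVG/cmd
esxtop
# Batch mode: capture twelve 5-second samples for offline review
esxtop -b -d 5 -n 12 > /tmp/esxtop-latency.csv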
01-28-2019 01:47 AM
Re: Virtual Connect Latency to EMC SAN
Hello there,
There is no detailed information provided; we need to understand the topology of your environment.
We also need to understand where the packet loss is happening.
Since there are multiple technologies involved and no straightforward answer to your query, I would suggest logging a support ticket against a valid serial number to investigate the issue further.
I am an HPE Employee.
Thanks,
Siddhartha_M
01-29-2019 08:30 AM
Re: Virtual Connect Latency to EMC SAN
I'll open a case. For the sake of argument, here's the scenario: I have all the equipment above in place, with the exception of the HP c-Class chassis, and the ESX hosts running on Cisco Unity blades. When I hooked up the c-Class, the latency started. If I shut down the blades on the c-Class, the latency stops; boot a blade back up, and the latency returns. I think this eliminates most of the equipment above.
01-29-2019 09:16 AM
Re: Virtual Connect Latency to EMC SAN
Did you install the latest firmware and drivers? There have been issues like that in previous versions as well.
Make sure you are at least on VC 4.50 firmware.
01-29-2019 09:19 AM
Re: Virtual Connect Latency to EMC SAN
We're already on Firmware version 4.62.
Thanks for the suggestion though.
01-31-2019 03:14 AM
Re: Virtual Connect Latency to EMC SAN
This is a very pressing issue; the thread already has 400+ views but no useful resolution has been recorded. Can someone share what they have learned or a resolution for this?
01-31-2019 03:19 AM
Re: Virtual Connect Latency to EMC SAN
Log a case with HPE support.
01-31-2019 05:13 AM
Re: Virtual Connect Latency to EMC SAN
Not sure what you mean by HBA firmware. I've updated all associated firmware to the latest released versions.
01-31-2019 05:14 AM
Re: Virtual Connect Latency to EMC SAN
I have opened a case with HPE support and am currently talking to L2 Virtual Connect support. I will update you all with the conclusion of that case.
01-31-2019 05:25 AM
Re: Virtual Connect Latency to EMC SAN
Cisco Nexus
01-31-2019 05:57 AM
Re: Virtual Connect Latency to EMC SAN
Something like that?
https://www.cisco.com/c/dam/en/us/products/collateral/storage-networking/mds-9700-series-multilayer-directors/whitepaper-c11-737315.pdf
02-07-2019 05:19 AM - edited 02-07-2019 05:24 AM
Re: Virtual Connect Latency to EMC SAN
Which generation BL460c? What FLB adapter/mezzanine are you using in the BL460c? Which VC module(s) are you using in the c7000, and are the enclosures Gen1, Gen2, or Gen3? Did you get anywhere with HPE technical support?
We're using BL460c Gen9 with 650FLB adapters and VCFF 20/40 F8 modules (v4.63) in c7000 Gen3 enclosures, but I'm using FCoE/FC to connect to our Nexus 5672UP-16G TOR switches and upstream to our arrays.
02-07-2019 05:38 AM
Re: Virtual Connect Latency to EMC SAN
Hopefully I catch all of these questions. First, this is happening on BL460c Gen8 through Gen10; the generation doesn't seem to matter. We're using c7000 chassis. They're less than two years old, so I would assume the latest generation, although I don't remember there being generations of c7000s; one of them is literally brand new. The VC modules are all the same model, HP VC Flex-10/10D. I'm guessing they're different hardware revisions, but they're all the same model. The FLB adapter is an Emulex, but beyond that I'm not sure. We've tried various firmware on them as well.
My current support case with HPE has been escalated to Level 3 (engineering). No resolution so far; we're waiting for them to go through diagnostics.
05-21-2019 01:17 AM - edited 05-21-2019 01:23 AM
Re: Virtual Connect Latency to EMC SAN
Does anyone have any update on this? We are seeing an issue which seems to be similar - may or may not be the same thing.
EDIT for clarity:
We are running C7K chassis with Virtual Connect Flex-10 into a Cisco Nexus 5K stack. We have SANs from multiple vendors, delivered as both NFS and iSCSI.
We are consistently seeing read latency well above expected levels, while write latency is very low, as expected.
05-21-2019 04:26 AM
Re: Virtual Connect Latency to EMC SAN
HPE never responded to this thread, and we are still seeing this issue. I've opened cases with HPE, Cisco, and EMC and have been troubleshooting for months, but I'm at a point where what HPE is asking me to do is unreasonable and I'm not able to go forward. Per a suggestion from Reddit, I've swapped out some blades for a few DL360s, but the problem persists. We are unable to determine whether the issue is caused by the blades/c-Class/Virtual Connect and is spilling over to the DL series, or whether there is a problem elsewhere.
Another curiosity: we installed a c-Class in a different datacenter that was running all Cisco blades. As soon as we connected the blades to the SAN from within the OS, the latency appeared. To clarify: same SAN, networking, LUNs, and OS; the only difference was HP blades instead of Cisco. Something is definitely wrong here, but it's very hard to pinpoint. I'm leaning heavily towards the c-Class/Virtual Connect as the issue. Let me know your outcome if you determine a solution.
05-21-2019 04:39 AM
Re: Virtual Connect Latency to EMC SAN
What is the blade type, including the HBA?
What is the Virtual Connect type?
If you have the same issue with a rack server, I can say that it is not an issue with the blade architecture.
05-21-2019 04:55 AM
Re: Virtual Connect Latency to EMC SAN
We went over this earlier in the thread. Additionally, it most certainly can be a Virtual Connect issue if the Virtual Connect is causing a problem on the network and/or SAN that is being seen elsewhere.
05-21-2019 07:17 AM
Re: Virtual Connect Latency to EMC SAN
If you have some blades running and then add a DL, do you get the same poor performance on the DL as well, or does the poor performance and latency exist only on the blades? I get that the read latency reports poorly on the array, but if, for example, you run a copy operation from a VM on the DL, do you see the same performance as on the BLs, or is performance much better?
Another question: how does your write latency look versus your read latency?
The reason I ask is that I am considering doing the same thing. We have a situation that sounds very similar to yours:
HP BLs (Gen 8/9), C7K, VC Flex 10, Nexus 5K, and WDC Tegile SAN (NFS).
What we are seeing is excellent network latency (sub 1ms), excellent write latency to storage (sub 5ms), but read latency fluctuates anywhere upwards of 50 or even 100ms at times, causing obvious performance issues.
The storage vendor has been troubleshooting this for an extended period but cannot get to the bottom of it. Additionally, we have had our networking team review the Nexus and confirm that everything appears as it should be.
My next step tomorrow was to introduce a DL to see if it suffers from the same poor read performance.
05-22-2019 08:54 AM
Re: Virtual Connect Latency to EMC SAN
When we introduced the DL series into the mix, we really didn't see much of a difference in latency. We are seeing minimal write latency as well; nearly all the latency is on reads, although we do occasionally see write latency too. We spent about three hours on the phone with VMware yesterday; one thing we are trying at this point is uninstalling the amsd vib. For background: we use the HPE custom VMware image, and this vib, which is apparently a monitoring component, is packaged inside it. Something we are trying to rule out is whether the amsd service is polling from multiple hosts on a regular basis and causing this latency. That would fit with our theory that the more hosts we add to the clusters, the more latency we see.
Let me know if you are seeing similar results with your DL series testing.
05-23-2019 09:16 AM
Re: Virtual Connect Latency to EMC SAN
For whoever is still following this: after two years of dealing with and troubleshooting this issue, I can say with some degree of certainty that we have determined the cause.
Short version: remove the amsd service from VMware ESXi by running the command esxcli software vib remove -n amsd
Long version: HPE packages the Agentless Management Service in their custom ESXi image. More information can be found here: https://buy.hpe.com/b2c/us/en/enterprise-software/server-management-software/server-ilo-management/ilo-management-engine/hpe-agentless-management/p/5219980
It's largely useless unless you specifically need it for some reason, and ultimately, from what VMware told me, it causes more trouble than it helps. The service does some sort of polling in which it hits every datastore connected to the host, and when this occurs it drives latency through the roof. The problem is exacerbated by having multiple hosts in the environment; each polls at a different time and impacts all the hosts in the environment. Upon removing this vib, and consequently the service, the problem went completely away: our datastore read/write latency dropped from hundreds of milliseconds to under ~10 ms. I would HIGHLY recommend that anyone running iSCSI in VMware on HPE hardware with the HPE custom image remove this service immediately.
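For anyone who wants to do the same, here is a rough sketch of the check-and-remove sequence from an SSH session on an affected host; the exact vib name (amsd vs. amshelper) and whether a reboot is required depend on which HPE image version you are running, so treat this as a starting point rather than a definitive procedure:
# See which AMS-related vib the image actually ships (name varies by HPE image version)
esxcli software vib list | grep -i ams
# Put the host into maintenance mode first (assumes running VMs can be evacuated)
esxcli system maintenanceMode set --enable true
# Remove the vib; substitute amshelper if that is what the list command showed
esxcli software vib remove -n amsd
# Reboot if the removal output says one is required, then take the host out of maintenance mode
reboot
esxcli system maintenanceMode set --enable false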
05-26-2019 01:19 AM
Re: Virtual Connect Latency to EMC SAN
Does this vib have a different name when it appears in the list? We are running the HP VMware image, and the list of HPE vibs I get is below; is it one of these?
esxcli software vib list | grep HPE
amshelper                650.10.6.0-24.4240417            HPE   PartnerSupported   2018-11-09
conrep                   6.0.0.01-02.00.1.2494585         HPE   PartnerSupported   2018-11-09
hpbootcfg                6.0.0.02-02.00.6.2494585         HPE   PartnerSupported   2018-11-09
hpe-build                650.U2.9.6.7-4240417             HPE   PartnerSupported   2018-11-09
hpe-cru                  650.6.5.8.24-1.4240417           HPE   PartnerSupported   2018-11-09
hpe-esxi-fc-enablement   650.2.6.10-4240417               HPE   PartnerSupported   2018-11-09
hpe-ilo                  650.10.0.2-2.4240417             HPE   PartnerSupported   2018-11-09
hpe-nmi                  600.2.4.16-2494575               HPE   PartnerSupported   2018-11-09
hpe-smx-provider         650.03.11.00.17-4240417          HPE   VMwareAccepted     2018-11-09
hponcfg                  6.0.0.4.4-2.4.2494585            HPE   PartnerSupported   2018-11-09
hptestevent              6.0.0.01-01.00.5.2494585         HPE   PartnerSupported   2018-11-09
scsi-hpdsa               5.5.0.54-1OEM.550.0.0.1331820    HPE   PartnerSupported   2018-11-09
scsi-hpsa                5.5.0.124-1OEM.550.0.0.1331820   HPE   VMwareCertified    2018-11-09
scsi-hpvsa               5.5.0.102-1OEM.550.0.0.1331820   HPE   PartnerSupported   2018-11-09
ssacli                   2.60.18.0-6.0.0.2494585          HPE   PartnerSupported   2018-11-09
05-28-2019 08:21 PM
Re: Virtual Connect Latency to EMC SAN
I'm nearly positive it's "amshelper" in your instance. They did tell me it has different names depending on the ISO used, but it should always be "AMS"-something.
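If amshelper does turn out to be the AMS component on your image, the same removal approach should apply, just with that vib name; a sketch, not verified against your particular build:
# Remove the older AMS vib name reported on this image, then reboot if the output asks for it
esxcli software vib remove -n amshelper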