StoreVirtual Storage

StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

 
david11
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

Thanks for letting me know; it looks like this driver is very unstable for the moment. I'm looking at the VMware compatibility matrix and it lists the MEM driver as OK for use under ESXi 6.0 with LeftHand OS 12.0. I wonder if my problem is that I am running LeftHand OS 12.5 and the driver is simply unstable with this version.

 

I also notice a new software version for 12.5 came out today or sometime recently, which I am updating to now. I'm debating trying to put the MEM driver back on with VAAI disabled just to get out of my latency issue.
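If I do try the VAAI-disabled route, the block primitives on ESXi 5.5 can be checked and turned off per host with the advanced settings below. This is just a sketch, so confirm the option names against VMware's documentation for your build before changing anything.

esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove     # check current value (1 = enabled)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0     # disable XCOPY (full copy) offload
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0     # disable WRITE_SAME (block zeroing) offload
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0      # disable ATS locking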

 

I am running on four ProCurve 2810s, which I thought would be good enough, and they seem to be when the LeftHand MEM driver is splitting the load evenly across multiple NIC ports. However, after removing the LeftHand MEM driver and going back to the default VMware pathing, the traffic seems to be too much for a single port on these switches. They are not well suited for iSCSI traffic because of the small packet buffer; I think it's only around 750 KB shared across the entire switch.

 

However, I am only a 2-host, 26-VM shop with very little load on each server; things are just split up into separate services, one task per server.

 

I am looking at replacing the switches with Cisco 4900M switches, simply because I was able to get a great price on some refurbished ones with 24 ports of 10 Gb. Can anyone who has used LeftHand P4500 G2 arrays with ProCurve 2810 switches chime in and confirm whether these switches are probably my main problem with bad latency and constant congestion? I can confirm that shutting down half my VMs seems to relieve most issues, and I never see the 1 Gb ports at full throughput, so I'm thinking the issue is simply packet rate and traffic dropped because of low buffers during bursty iSCSI traffic. A quick way to check for drops on the host side is sketched below.
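Before swapping switches, the host-side counters are worth a look to see whether drops show up on the ESXi uplinks at all; the vmnic name below is just an example, so map your iSCSI vmkernel ports to the right uplinks first.

esxcli network nic list                    # find which vmnic numbers carry the iSCSI vmkernel ports
esxcli network nic stats get -n vmnic2     # look at the receive/transmit error and drop counters

esxtop (press n for the network view) also shows %DRPTX / %DRPRX per port; drops inside the ProCurves still have to be read from the switch counters themselves.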

 

If anyone else finds a stable way to run the LeftHand MEM driver, please respond with which version of the HP MEM driver you are using and whether or not it's paired with HP's newest customized ESXi 5.5 U3 image, as that's what I'm using.

david11
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

So I had VMware support digging through logs when I had my horrible outage caused by this driver instability.  This kind of confirms the driver issue for me without a doubt.

 

Hopefully HP sees this and it's helpful. Also, anyone experiencing the same thing can check their logs for the same types of messages, because even if you think it's stable now it can randomly just stop working; what finally causes it to do so is still unknown to me.
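A quick way to look for the same signatures over SSH (these are the default ESXi 5.5 log locations; rotated copies live under /var/run/log):

grep -i satp_lhn_pathFailure /var/log/vmkernel.log
grep -i iscsi.target.connect.error /var/log/vobd.log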

 

I know it's a lot below, but I cut a lot out to shorten it, because these errors repeated for millions of lines since I have multiple VMs, as I'm sure most of you do. VMware's recommendation and the KB article explaining their findings are at the bottom. They confirmed I had all the newest drivers for my platform and recommended I contact the storage vendor, HP, to find out why their driver is randomly reporting APD (all paths down), which is what causes the inaccessible message while still showing the path as up and OK. Hope this helps HP find a resolution for the MEM driver so they can fix it and give us all the performance we want with stability. Good luck all!

 

from vmware support:

 

 

Hello,
  

Greetings!! 

  

I have analyzed the logs. Please find below the log snippet:

  

ESX build 

========== 

VMware ESXi 5.5.0 build-3029944 

VMware ESXi 5.5.0 Update 3 

  

Host Hardware 

=============== 

ProLiant DL360p Gen8 

  

Hostname 

================ 

vnm00002.amer.dmai.net 

  

VOBD.log 

======== 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.port.hooked); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.port.hooked); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.audit.net.firewall.port.hooked); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.004Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.problem.storage.iscsi.target.connect.error); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.clear.coredump.configured2); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.problem.scratch.partition.unconfigured); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.audit.net.firewall.config.changed); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.audit.dcui.enabled); 2 failures so far. 

2015-10-09T15:40:01.005Z: Failed to send event (esx.audit.ssh.enabled); 2 failures so far. 

2015-10-09T15:40:02.498Z: [iscsiCorrelator] 202292139us: [vob.iscsi.target.connect.error] vmhba34 @ vmk2 failed to login to iqn.2003-10.com.lefthandnetworks:vmware-generic:328:vnmwhatsup because of a network connection failure. 

2015-10-09T15:40:02.498Z: [iscsiCorrelator] 202292585us: [esx.problem.storage.iscsi.target.connect.error] Login to iSCSI target iqn.2003-10.com.lefthandnetworks:vmware-generic:328:vnmwhatsup on vmhba34 @ vmk2 failed. The iSCSI initiator could not establish a network connection to the target. 

2015-10-09T15:40:02.498Z: An event (esx.problem.storage.iscsi.target.connect.error) could not be sent immediately to hostd; queueing for retry. 

2015-10-09T15:40:02.499Z: [iscsiCorrelator] 202293340us: [vob.iscsi.target.connect.error] vmhba34 @ vmk3 failed to login to iqn.2003-10.com.lefthandnetworks:vmware-generic:328:vnmwhatsup because of a network connection failure. 

2015-10-09T15:40:02.499Z: [iscsiCorrelator] 202293667us: [esx.problem.storage.iscsi.target.connect.error] Login to iSCSI target iqn.2003-10.com.lefthandnetworks:vmware-generic:328:vnmwhatsup on vmhba34 @ vmk3 failed. The iSCSI initiator could not establish a network connection to the target. 

2015-10-09T15:40:02.500Z: An event (esx.problem.storage.iscsi.target.connect.error) could not be sent immediately to hostd; queueing for retry. 

2015-10-09T15:40:20.156Z: [netCorrelator] 219950263us: [vob.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set vpxHeartbeats succeeded. 

2015-10-09T15:40:20.157Z: [netCorrelator] 219950834us: [esx.audit.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set vpxHeartbeats succeeded. 

2015-10-09T15:40:20.157Z: An event (esx.audit.net.firewall.config.changed) could not be sent immediately to hostd; queueing for retry. 

2015-10-09T15:40:22.931Z: [netCorrelator] 222725294us: [vob.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set CIMHttpServer succeeded. 

2015-10-09T15:40:22.932Z: [netCorrelator] 222725833us: [esx.audit.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set CIMHttpServer succeeded. 

2015-10-09T15:40:23.442Z: [netCorrelator] 223235987us: [vob.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set CIMHttpsServer succeeded. 

2015-10-09T15:40:23.442Z: [netCorrelator] 223236349us: [esx.audit.net.firewall.config.changed] Firewall configuration has changed. Operation 'enable' for rule set CIMHttpsServer succeeded. 

2015-10-09T15:40:36.194Z: [GenericCorrelator] 235988300us: [vob.user.host.boot] Host has booted. 

2015-10-09T15:40:36.194Z: [UserLevelCorrelator] 235988300us: [vob.user.host.boot] Host has booted. 

2015-10-09T15:40:36.195Z: [UserLevelCorrelator] 235988750us: [esx.audit.host.boot] Host has booted. 

2015-10-09T15:40:36.352Z: [GenericCorrelator] 236146246us: [vob.user.coredump.configured2] At least one coredump target is enabled. 

  

  

vmkernel.log 

============= 

2015-10-09T15:38:59.948Z cpu6:33374)VAAI_FILTER: VaaiFilterClaimDevice:270: Attached vaai filter (vaaip:VMW_VAAIP_LHN) to logical device 'naa.6000eb359ec2cd670000000000000209' 

2015-10-09T15:38:59.968Z cpu6:33374)FSS: 5099: No FS driver claimed device 'naa.6000eb359ec2cd670000000000000209:1': Not supported 

2015-10-09T15:38:59.968Z cpu6:33374)ScsiDevice: 3445: Successfully registered device "naa.6000eb359ec2cd670000000000000209" from plugin "NMP" of type 0 

2015-10-09T15:38:59.970Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:424: In satp_lhn_updatePath setting path state to OK. vmhba34:C0:T7:L0 

2015-10-09T15:38:59.970Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:508: In satp_lhn_updatePath not calling psp_LHPathBack - first time path is being set! 

2015-10-09T15:38:59.971Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:424: In satp_lhn_updatePath setting path state to OK. vmhba34:C1:T7:L0 

2015-10-09T15:38:59.971Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:508: In satp_lhn_updatePath not calling psp_LHPathBack - first time path is being set! 

2015-10-09T15:38:59.971Z cpu6:33374)StorageApdHandler: 698: APD Handle  Created with lock[StorageApd0x41093e] 

2015-10-09T15:38:59.971Z cpu6:33374)ScsiEvents: 501: Event Subsystem: Device Events, Created! 

2015-10-09T15:38:59.971Z cpu6:33374)VMWARE SCSI Id: Id for vmhba34:C0:T7:L0 

0x60 0x00 0xeb 0x35 0x9e 0xc2 0xcd 0x67 0x00 0x00 0x00 0x00 0x00 0x00 0x01 0x24 0x69 0x53 0x43 0x53 0x49 0x44 

2015-10-09T15:38:59.972Z cpu6:33374)VMWARE SCSI Id: Id for vmhba34:C1:T7:L0 

0x60 0x00 0xeb 0x35 0x9e 0xc2 0xcd 0x67 0x00 0x00 0x00 0x00 0x00 0x00 0x01 0x24 0x69 0x53 0x43 0x53 0x49 0x44 

2015-10-09T15:38:59.972Z cpu6:33374)ScsiDeviceIO: 7493: Get VPD 86 Inquiry for device "naa.6000eb359ec2cd670000000000000124" from Plugin "NMP" failed. Not supported 

2015-10-09T15:38:59.972Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.972Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_isManagement:843: In satp_lhn_isManagement returning FALSE. 

2015-10-09T15:38:59.972Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.972Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:985: In satp_lhn_pathFailure status = 5 sense key = 24 and sense code = 0. path vmhba34:C0:T7:L0 

2015-10-09T15:38:59.972Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:986: path=vmhba34:C0:T7:L0 cmd[0]=12 cmdid=465 

2015-10-09T15:38:59.972Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:1132: In satp_lhn_pathFailure unknown failure. 

2015-10-09T15:38:59.972Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.972Z cpu6:33374)ScsiDeviceIO: 6213: QErr is correctly set to 0x0 for device naa.6000eb359ec2cd670000000000000124. 

2015-10-09T15:38:59.972Z cpu6:33374)ScsiDeviceIO: 6724: Sitpua was correctly set to 1 for device naa.6000eb359ec2cd670000000000000124. 

2015-10-09T15:38:59.973Z cpu6:33374)VAAI_FILTER: VaaiFilterClaimDevice:270: Attached vaai filter (vaaip:VMW_VAAIP_LHN) to logical device 'naa.6000eb359ec2cd670000000000000124' 

2015-10-09T15:38:59.992Z cpu6:33374)FSS: 5099: No FS driver claimed device 'naa.6000eb359ec2cd670000000000000124:1': Not supported 

2015-10-09T15:38:59.992Z cpu6:33374)ScsiDevice: 3445: Successfully registered device "naa.6000eb359ec2cd670000000000000124" from plugin "NMP" of type 0 

2015-10-09T15:38:59.993Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:424: In satp_lhn_updatePath setting path state to OK. vmhba34:C0:T2:L0 

2015-10-09T15:38:59.993Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:508: In satp_lhn_updatePath not calling psp_LHPathBack - first time path is being set! 

2015-10-09T15:38:59.994Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:424: In satp_lhn_updatePath setting path state to OK. vmhba34:C1:T2:L0 

2015-10-09T15:38:59.994Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_updatePath:508: In satp_lhn_updatePath not calling psp_LHPathBack - first time path is being set! 

2015-10-09T15:38:59.994Z cpu6:33374)StorageApdHandler: 698: APD Handle  Created with lock[StorageApd0x41093e] 

2015-10-09T15:38:59.994Z cpu6:33374)ScsiEvents: 501: Event Subsystem: Device Events, Created! 

2015-10-09T15:38:59.994Z cpu6:33374)VMWARE SCSI Id: Id for vmhba34:C0:T2:L0 

0x60 0x00 0xeb 0x35 0x9e 0xc2 0xcd 0x67 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xed 0x69 0x53 0x43 0x53 0x49 0x44 

2015-10-09T15:38:59.995Z cpu6:33374)VMWARE SCSI Id: Id for vmhba34:C1:T2:L0 

0x60 0x00 0xeb 0x35 0x9e 0xc2 0xcd 0x67 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xed 0x69 0x53 0x43 0x53 0x49 0x44 

2015-10-09T15:38:59.995Z cpu6:33374)ScsiDeviceIO: 7493: Get VPD 86 Inquiry for device "naa.6000eb359ec2cd6700000000000000ed" from Plugin "NMP" failed. Not supported 

2015-10-09T15:38:59.995Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.995Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_isManagement:843: In satp_lhn_isManagement returning FALSE. 

2015-10-09T15:38:59.995Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.995Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:985: In satp_lhn_pathFailure status = 5 sense key = 24 and sense code = 0. path vmhba34:C0:T2:L0 

2015-10-09T15:38:59.995Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:986: path=vmhba34:C0:T2:L0 cmd[0]=12 cmdid=482 

2015-10-09T15:38:59.995Z cpu4:33106)WARNING: HP_SATP_LH: satp_lhn_pathFailure:1132: In satp_lhn_pathFailure unknown failure. 

2015-10-09T15:38:59.995Z cpu6:33374)WARNING: HP_SATP_LH: satp_lhn_getBoolAttr:879: In satp_lhn_getBoolAttr. 

2015-10-09T15:38:59.995Z cpu6:33374)ScsiDeviceIO: 6213: QErr is correctly set to 0x0 for device naa.6000eb359ec2cd6700000000000000ed. 

2015-10-09T15:38:59.995Z cpu6:33374)ScsiDeviceIO: 6724: Sitpua was correctly set to 1 for device naa.6000eb359ec2cd6700000000000000ed. 

2015-10-09T15:38:59.996Z cpu6:33374)VAAI_FILTER: VaaiFilterClaimDevice:270: Attached vaai filter (vaaip:VMW_VAAIP_LHN) to logical device 'naa.6000eb359ec2cd6700000000000000ed' 

2015-10-09T15:39:00.018Z cpu6:33374)FSS: 5099: No FS driver claimed device 'naa.6000eb359ec2cd6700000000000000ed:1': Not supported 

  

Analysis: 

========= 

We have checked and found that there was a network connection failure and an APD issue reported during that time stamp. 

We have verified the drivers and they are up to date. 

  

Recommendation: 

=============== 

Please contact your storage vendor to find out the cause for APD. 

  

Reference KB article: 

  

http://kb.vmware.com/kb/2004684 

  

Please let me know if you have any clarifications. 

 

david11
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

Also, please note I have a separate physical network just for iSCSI traffic, and it was working fine for the hosts that had the generic VMware driver; only the hosts with the HP LeftHand MEM driver had these issues.

 

VMware even states in their KB article that when this happens there is no clean way to reset it, as the host will lock up trying to reconnect forever; this is why, when you force it down or manage to reboot it through the console, it will drag for something like 30 minutes coming back up. For the fastest recovery, I recommend opening an SSH session to your host and uninstalling the LeftHand MEM driver first, if possible, before forcing it down, so that when it boots it uses the generic VMware driver. It will still be a long boot while the ESXi host clears all the errors it experienced.
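For anyone stuck in that state, the removal over SSH looks roughly like this; the exact VIB name varies by release, so list it first rather than trusting my example placeholder.

esxcli software vib list | grep -i -e lefthand -e lhn -e multipath     # find the MEM VIB name
esxcli software vib remove -n <vib-name-from-the-list-above>
reboot

If the host is too far gone for esxcli to respond, a hard reset and removing the VIB afterwards may be the only option.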

 

YMMV.  Hope this is helpful.

miki777
Visitor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

Yes, this driver is a total disaster. I've had a total crash of all servers and all virtual machines (30+ VMs), a very stressful event indeed. I'm surprised that HP still hasn't solved this problem, as many people are having obvious problems with it, but it seems that they are the only ones not having this kind of problem with it :D

slymsoft
Occasional Visitor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

My advice is the same as most of the previous posts: DO NOT USE HP MEM IN PRODUCTION OR YOU WILL REGRET IT!

I installed the latest MEM module shipping with LH 12.5 (HP_StoreVirtual_Multipathing_Extension_Module_for_Vmware_vSphere_5.1_AT004-10523.vib) on 4 ESXi 5.1 hosts. The storage cluster was an 8-node P4730 with LeftHand OS 12.0. I did not disable VAAI ATS (as it was not mentioned anywhere in HP's documentation).
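For anyone wondering whether VAAI is actually engaged on their LeftHand volumes, the per-device status can be checked like this (the device ID is one of the examples from the logs earlier in this thread; substitute your own):

esxcli storage core device vaai status get -d naa.6000eb359ec2cd670000000000000124
esxcli storage core device vaai status get     # all devices at once

The output lists the ATS, Clone, Zero and Delete status per device; whether disabling ATS would have changed anything with the MEM module is not something I can confirm.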

The day after I installed the MEM module, I went through an upgrade from LH 12.0 to 12.5. After a few node reboots to apply patches, the 4 ESXi servers became unresponsive. Everything was just fine in the CMC; the volumes were always online.

It was extremely unstable. Some ESXi hosts were hanging, then working for a few minutes, then hanging again; one did a PSOD, and another was so unstable I could not use 90% of the CLI commands on it, not even a /sbin/services.sh restart or an esxtop :-/

I checked the VMkernel logs of the 4 ESXi hosts and there was a sh*t load of "satp_lhn_pathFailure" messages.

It took us a day to get back to normal. We had to uninstall the MEM module and go back to the good old VMware Round Robin @ 1 IOPS (which has worked great for years!).
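For reference, Round Robin with an IOPS limit of 1 is usually set along these lines; the device ID is an example from the logs earlier in the thread, and the LEFTHAND vendor string in the optional claim rule is an assumption you should verify against esxcli storage core device list.

esxcli storage nmp device set -d naa.6000eb359ec2cd670000000000000124 -P VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set -d naa.6000eb359ec2cd670000000000000124 -t iops -I 1

esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR -O iops=1 -V LEFTHAND -e "LeftHand RR iops=1"     # optional: make new volumes default to RR with iops=1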

HP Team: Please remove the MEM module from official downloads until it is stable.

Sorry for the extended use of red + big font size, but I think this issue deserves it, and every person reading this post should be scared to use this piece of software. This is exactly how you lose the trust of your clients/partners.

Princes
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

I've got it working on a 2 node VSA Cluster v12.5 with ESXi 5.5u3 and it seems stable.

david11
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

I have the exact same setup; let us know how it goes after a few weeks. Like most of us have said in our posts, it always starts out seemingly stable and then goes south fast for unknown reasons.

Princes
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

For the record, I manually installed HP_StoreVirtual_Multipathing_Extension_Module_for_Vmware_vSphere_5.5_AT004-10524.vib rather than using Update Manager. The build of ESXi is 5.5 U3, build 3116895; the VSAs are v12.5.00.0563.0.
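For anyone doing the same, a manual install of that VIB over SSH generally looks like this (the upload path is just an example; maintenance mode beforehand and a reboot afterwards are the usual precautions):

# copy the .vib onto the host first (scp or the datastore browser), then:
esxcli software vib install -v /tmp/HP_StoreVirtual_Multipathing_Extension_Module_for_Vmware_vSphere_5.5_AT004-10524.vib
reboot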

david11
Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

On my install we used the manual method as well, through the direct command line, on each of our ESXi hosts.

The AT004-10524.vib LeftHand driver was used (same as yours).

HP LeftHand OS 12.5.00.0563.0 (same as yours)

We ran this under ESXi 5.5 U2 (don't remember which build, but we tried to go to U3 hoping it would help with the issues).

Currently at ESXi 5.5 U3 build 3142196 (newer than your build).

With the above setup it always ran fine for a couple of weeks before it just blew up. I really hope this does not happen to you, but for us it caused major downtime during the day and a long road of hours, working with VMware support, to get the hosts back online and remove the driver.

My hardware is as follows (please let me know if yours is different):

DL360p G8 Servers

Array: HP P4500 G2 LeftHand arrays, using 1 Gb iSCSI ports.

Best of luck. Please report back in a month if it is still running OK, and be sure to post if you run into the same issues we all have; the more people reporting it on this thread, the better the chance of HP acknowledging and fixing this problem.

 

Thanks,

David

 

rossiauj
Occasional Advisor

Re: StoreVirtual Multipathing Extension Module for vSphere 5.5 missing VMFS datastores

Hi,

I did the same, and the manual install did not make a difference. We ran smoothly for weeks/months and then it suddenly bit us in the proverbial behind.

So proceed carefully and beware of the dog.

Kind regards,

Jos