Snapshot leads to guest OS becoming unresponsive
04-22-2015 07:27 AM
We've been seeing an issue where snapshots periodically fail for the same two or three VMs. Two of them are nearly retired Windows Server 2003 boxes, but the third is a 2008 R2 web application server. On all three, when a snapshot fails the VM locks up to the point where I either have to do a hard reset or HA takes care of it because the heartbeat is lost. Once I moved the VM off the protected LUN, the lockups stopped.
Has anybody else seen this, or have any pointers? I'm running fully patched vSphere 5.5 (both ESXi and vCenter) with a CS-300 and 10GbE copper connectivity.
Solved!
04-22-2015 08:34 AM
Re: Snapshot leads to guest OS becoming unresponsive
Disable the VMware Tools VSS writers. Application-consistent quiescing is only really required for applications such as Exchange and SQL (and, to a degree, file servers); for everything else, hardware snapshots suffice. You can disable the VSS writers via vmbackup.conf: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1031200
The issue usually happens when there is a conflict between writers, or during consolidation of the snapshot. Also make sure you have at least 10% free space in the datastores.
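For reference, a common way to apply the change the linked KB describes is to disable application quiescing in the guest's VMware Tools configuration. This is an illustrative sketch only; the exact file path and option name can vary by VMware Tools version, so check the KB article for your release:

```ini
; Windows guest: C:\ProgramData\VMware\VMware Tools\tools.conf
; (create the file if it does not exist)
; Disables VSS application quiescing, so snapshots fall back to
; file-system-consistent quiescing instead of engaging app writers.
[vmbackup]
vss.disableAppQuiescing = true
```

Restart the VMware Tools service inside the guest afterwards for the setting to take effect.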
04-22-2015 09:45 AM
Re: Snapshot leads to guest OS becoming unresponsive
One of the servers having issues is a SQL Server, so right off the bat that isn't helpful. Furthermore, Nimble isn't the only thing hitting these servers with VSS requests: Veeam does so every day as well, and disabling the writers would kill a large number of useful features in that product (file-level restore, application-level restore, etc.). Neither Veeam-created nor natively created snapshots cause any issues on these servers, only the Nimble-created ones.
04-22-2015 10:48 AM
Re: Snapshot leads to guest OS becoming unresponsive
Veeam does parallel processing as well, limited by the number of available cores on the server doing the processing. Not that I'm opposed to moving them to a different volume collection, but is there a way to tweak how many jobs the array processes at a time? I only have a single volume in each collection, but each has a sizable number of VMs per volume (more than 10).