Failed to Delete vCenter Snapshot
07-16-2014 11:33 AM
I've been getting a warning on both our dev and prod arrays about not being able to delete a vCenter snapshot:
Failed to delete vCenter snapshot associated with volume collection ESX-Collection schedule ESX-Daily since the vCenter virtual machine snapshot tasks have not yet completed.
I found another post here about having too many VMs on a volume, with a recommendation of around 10-12 VMs per volume; the discussion went on to cover queue depth, etc. I'm guessing that's my issue, since I run many more VMs per volume.
This is our first time with storage-level snaps; on our previous EMC array we didn't pay for them. I don't replicate the Nimble snaps, I'm just taking them once a day because I can.
In prod I have about 220 VMs, so I would need about 22 volumes to keep around 10 VMs per volume... That's a lot more to manage.
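For anyone wanting to see their current spread, here's a minimal Python sketch using pyVmomi to count VMs per datastore (the vCenter hostname and credentials are placeholders, not from this thread):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()  # lab convenience only; verify certs in prod
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every datastore and report how many VMs live on it.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in sorted(view.view, key=lambda d: d.name):
        print(f"{ds.name}: {len(ds.vm)} VMs")
    view.Destroy()
finally:
    Disconnect(si)
```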
jb
07-17-2014 12:23 AM
Solution
Hi Jason,
Yes indeed, this error can be caused by having too many VMs in a datastore. I think this may be a good point to re-evaluate your storage snapshot strategy: it may be that only a percentage (say 20% of the 220) really requires consistent snapshots, whereas the rest may be perfectly fine with non-consistent snaps.
Doing that would let you create two tiers of datastore, placed in two separate Volume Collections (one with synchronised snapshots, one without). You then wouldn't necessarily need to follow the 10-12 VMs per datastore best practice for the majority, which would be happy doing what you're doing today.
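To put rough numbers on that split, here's a back-of-the-envelope Python sketch (the 20% share is just the illustrative figure above, not a measurement):

```python
import math

total_vms = 220            # prod VM count from the thread
consistent_share = 0.20    # illustrative share of VMs needing consistent snaps
max_per_datastore = 12     # upper end of the 10-12 VM guidance

consistent_vms = math.ceil(total_vms * consistent_share)               # 44 VMs
consistent_datastores = math.ceil(consistent_vms / max_per_datastore)  # 4 datastores

print(f"{consistent_vms} VMs need consistent snaps -> "
      f"{consistent_datastores} small datastores; the rest can stay put")
```

So instead of ~22 tightly packed volumes, you'd only be managing a handful of small ones for the consistent tier.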
Also the "10-12 VMs Per Datastore" comment isn't just for snapshots, it's an overall Best Practice for ultimate performance & latency. Here's a great blog post by Jason Boche who really went into detail about why this is: VAAI and the Unlimited VMs per Datastore Urban Myth ┬╗ boche.net тАУ VMware vEvangelist
twitter: @nick_dyer_
07-22-2014 11:14 AM
Re: Failed to Delete vCenter Snapshot
I don't need SAN-level snapshots right now, so I just turned off the vCenter integration. I'll take a look at the article by Jason Boche.
I don't want to have to manage 20 volumes... Right now I just make each volume 2 TB and fill it to about 75% before adding another one.
jb