[Theorycrafting: VMware + Nimble] Ignoring data management concerns, what are the technical ramifications of 40 very specific-purpose volumes vs 8 more general volumes?
03-24-2017 04:03 PM
For example, do 40 volumes have a noticeable impact on multipath I/O failover versus 8 volumes (40 initiators hitting the iSCSI target all at once versus 8 initiators)?
Does a 4 TB volume with high activity during a snapshot cause a noticeable impact versus the same data spread across 8 x 500 GB volumes in a group, all snapping at the same time?
I'm not looking for answers I need commitment on, so if you're into theorycrafting, this is your thread.
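To put rough numbers on the failover side of the question, here is a minimal back-of-the-envelope sketch; the paths-per-volume and host counts are hypothetical example values, not anything Nimble- or VMware-specific.

```python
# Back-of-the-envelope: how the number of iSCSI sessions that must be
# re-established on a failover scales with volume count.
# paths_per_volume and hosts below are hypothetical example values.

def iscsi_sessions(volumes: int, paths_per_volume: int, hosts: int) -> int:
    """Total iSCSI sessions across the cluster (re-established on failover)."""
    return volumes * paths_per_volume * hosts

for volumes in (8, 40):
    total = iscsi_sessions(volumes, paths_per_volume=2, hosts=4)
    print(f"{volumes:>2} volumes -> {total} sessions to re-establish")
```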
03-31-2017 04:58 AM
Re: [Theorycrafting: VMware + Nimble] Ignoring data management concerns, what are the technical ramifications of 40 very specific-purpose volumes vs 8 more general volumes?
I know it's not directly what you're asking, but I feel like there is some relevance. Check out this thread (How many VMs per datastore) with links to another discussion and an external blog. In my experience the Nimble snapshot mechanism hasn't been fazed by any number of simultaneous storage snapshots. Simultaneous VMware snapshots are where I've seen issues, so I tend to be more aware of that while setting up protection schedules.
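If it helps make that concrete, here is a small sketch of the kind of staggering I mean when laying out protection schedules; the volume names, start time, and spacing are made up for illustration, and this is plain Python, not a Nimble or VMware API.

```python
from datetime import datetime, timedelta

# Sketch: stagger protection-schedule start times so VM-consistent snapshots
# on different datastores don't all fire at the same moment.
# Volume names, start time, and spacing are hypothetical.

def staggered_starts(volumes, first_start, spacing_minutes):
    """Map each volume to a snapshot start time, spaced apart."""
    return {
        vol: first_start + timedelta(minutes=i * spacing_minutes)
        for i, vol in enumerate(volumes)
    }

volumes = [f"datastore-{n:02d}" for n in range(1, 9)]
for vol, start in staggered_starts(volumes, datetime(2017, 3, 31, 22, 0), 15).items():
    print(vol, start.strftime("%H:%M"))
```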
03-31-2017 06:05 AM (Solution)
Good questions:
"For example, do 40 volumes have a noticeable impact on multipath I/O failover versus 8 volumes (40 initiators hitting the iSCSI target all at once versus 8 initiators)?"
There are customers running four CS7000s in a scale-out pool with 10k volumes and two paths per volume, and they survive failovers without an issue. Every model has platform limits, but at the scale of 8 versus 40 volumes it will not make any difference. On the lowest-end model (CS2x) the concern would only arise around the 250-volume mark.
"Does a 4 TB volume with high activity during a snapshot cause a noticeable impact versus the same data spread across 8 x 500 GB volumes in a group, all snapping at the same time?"
Excellent question! The I/O pause is the result of replaying the snapshot back to the parent; a single volume would carry more change (a larger delta), so its impact would be greater than that of 8 smaller volumes, and it is at the final commit that the I/O pause occurs. Then again, 8 snapshot consolidation jobs on a single host are more host-CPU and storage-I/O intensive than a single volume's, and the backend snapshot mechanism is limited to 4 VMs per host in parallel at any one time.
I would go for 8 volumes: it is less likely to trip application timeouts, it is more granular so you get a better picture of performance distribution, and it makes space easier to reclaim.
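As a rough illustration of that 4-VMs-per-host parallel limit, this tiny sketch counts how many sequential consolidation "waves" a host would work through; the VM counts are invented for the example.

```python
import math

# Illustration of the 4-VMs-per-host parallel consolidation limit noted above.
# VM counts are hypothetical example values.
PARALLEL_LIMIT = 4

def consolidation_waves(vms_on_host: int, limit: int = PARALLEL_LIMIT) -> int:
    """Sequential waves needed to consolidate snapshots for all VMs on one host."""
    return math.ceil(vms_on_host / limit)

for vms in (8, 16, 40):
    print(f"{vms} VMs on a host -> {consolidation_waves(vms)} consolidation waves")
```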
On the same subject, there are vast improvements in VMware 6 with the introduction of mirror drivers. Also, VMware Tools is pretty clunky and inefficient; Veeam-initiated snapshots are way better.
Cheers,
Chris