HPE SimpliVity: Questions about the difference betw...
04-07-2024 10:32 PM - last edited on 04-10-2024 12:16 PM by support_s
Hello, I have a question about the capacity calculation method of SimpliVity.
We are currently using a 3-node SimpliVity (SVT) cluster.
node1 and node3 are at 88% and 90% capacity, respectively, so a capacity warning is being raised.
Output of the `dsv-balance-show` command (calculated used / estimated remaining):
- node1: 88% / 1 TB
- node2: 68% / 3.1 TB
- node3: 90% / 900 GB
The total remaining capacity of the 3-node cluster's datastore is approximately 5 TB.
If we create a new 4 TB VM that does not deduplicate, the NFS datastore would still show 5 TB - 4 TB = 1 TB of free capacity, but no individual node (node1, node2, or node3) has 4 TB of free physical capacity.
In terms of total logical capacity, creating the VM appears possible, but in terms of the nodes' physical capacity, it seems impossible.
In this case, how is the capacity calculated?
I read in the documentation that when a node's physical capacity exceeds 97%, the node goes into read-only mode and VMs stop working... This is a very important issue for me.
Please understand that I used a translator.
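To make the arithmetic in my question concrete, here is a minimal sketch of the logical-vs-physical gap I am asking about. The numbers come from the `dsv-balance-show` output above; the variable names and the single-node placement check are my own illustration, not an actual SimpliVity API or its real placement algorithm.

```python
# Illustrative model only: per-node free space vs. datastore-level free space.
node_free_tb = {"node1": 1.0, "node2": 3.1, "node3": 0.9}  # from dsv-balance-show
datastore_free_tb = sum(node_free_tb.values())             # ~5 TB logical free

vm_size_tb = 4.0  # new, non-deduplicating VM

# The datastore (logical view) appears to have room...
fits_logically = vm_size_tb <= datastore_free_tb
# ...but no single node (physical view) can hold the VM.
fits_physically = any(free >= vm_size_tb for free in node_free_tb.values())

print(fits_logically)   # True  (4.0 <= 5.0)
print(fits_physically)  # False (largest single-node free space is 3.1 TB)
```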
04-09-2024 12:31 AM
Solution

Hi @Jh9,
In SimpliVity, a datastore is a logical construct. Users can create multiple datastores, each larger than the size of a node or even the cluster.
If a SimpliVity node has enough physical space to accommodate the VM, you will be able to create it; if no node has enough space, VM creation/provisioning will fail. It does not work such that a user can create the VM in logical space but not in physical space.
Yes, once index utilization or HDD/SSD utilization reaches 97%, the OVC will not serve any IOPS until it has room to process them. The system reserves some space for internal use, which is used for moving the secondary copy and other internal operations, because if the system consumed 100% of its space, support/customers would not be able to perform migrations.
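As a rough sketch of the threshold behavior described above: the 97% figure comes from this thread, but the function name and return values below are illustrative only, not part of any SimpliVity tooling.

```python
# Illustrative sketch of the high-utilization safety threshold (97% per this
# thread). Not a real SimpliVity API; names and values are assumptions.
READ_ONLY_THRESHOLD = 0.97

def io_mode(used_fraction: float) -> str:
    """Return the node's effective I/O mode given its physical utilization."""
    return "read-only" if used_fraction >= READ_ONLY_THRESHOLD else "read-write"

print(io_mode(0.90))  # read-write (e.g. node3 at 90%)
print(io_mode(0.97))  # read-only  (threshold reached)
```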
Hope it helps.
Regards,
Jaipal
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]