HPE SimpliVity

Physical Capacity on Large Size Models not correct

 
SOLVED
ICPMuenchen
Valued Contributor

Physical Capacity on Large Size Models not correct

Hey folks,

our new SimpliVity nodes, which we ordered a few weeks ago, are large size models, but they do not have the physical capacity of 15 TB as expected and as stated in all the documents. Usually a large size model should have 15 TB per node, or 30 TB in a two-node cluster.

The two new nodes (380 Gen10 Plus) are each configured with a 6 x 3.84 TB SSD kit. According to the Deployment Guide 5.0, you can use FTT1 with 15 TB or FTT2 with 11.5 TB. We used FTT1 in our deployment, but after everything was set up, the SimpliVity vCenter plugin only shows 27.72 TB in the cluster, or 13.86 TB per node.

Our old medium size cluster, which is supposed to have 10 TB per node, shows 19.84 TB cluster capacity, or 9.92 TB per node.

Why don't we have ~30 TB cluster capacity, or ~15 TB capacity per node, on the new large model?

Regards

Peter

 

7 REPLIES
NeetSat
HPE Pro

Re: Physical Capacity on Large Size Models not correct

Hi Peter,
Good day. 
Could you please share a screenshot of the details listed under the SimpliVity vCenter plugin?
Also, could you please log in to one of the OVCs and run the commands below?

1. Log in to the OVC using SSH.
2. Elevate to root and load the dsv commands:

            # sudo su
            # source /var/tmp/build/bin/appsetup

3. Check the current storage utilization:

            # dsv-balance-show --showNodeIp

Please paste both outputs so we have exact details to work with.
There is usually a backend calculation involved in getting to the exact capacity of these nodes, so I would request you to help me with the details.



ICPMuenchen
Valued Contributor

Re: Physical Capacity on Large Size Models not correct

@NeetSat Hello Neetesh,

Screenshots below:

[Attached: OVC_Storage- view.png, SVT-Storage-vCenter Plugin.png]

Regards

Peter

NeetSat
HPE Pro

Re: Physical Capacity on Large Size Models not correct

Hi Peter,
Thank you for providing the details.
SVTFS is an object store, and we deduplicate and compress all data at ingest before it reaches the object store, so it is close to impossible to calculate the exact capacity that will be seen from vCenter.
The deduplication and compression, combined with the HA configuration and the VM mix on each host, create a highly variable mix of 8k objects. We give rough approximations in the QuickSpecs, but the actual capacity seen will vary. The variance can even be seen between nodes within the same cluster, because of the different mixes of VMs and SimpliVity backups that make up the 8k objects within the SVTFS on each host.

Having said that, I was trying to get to a rough estimation. Since you have a large node and chose FTT1, you are essentially losing 1 disk out of the 6 x 3.84 TB, because FTT1 uses a RAID 10 plus RAID 5 configuration with a tolerance of one drive failure.
With the remaining 5 disks, you will see a further reduction in usable capacity due to the configuration, and what is finally left over is roughly the storage equivalent of ~14-15 TB. That is approximately the usable capacity one would get, which can be a bit more or less than what is mentioned in the documents.
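
To make that estimate concrete, here is a minimal sketch in Python (the internal layout is not published, so the RAID 5 step below is an assumption based on my description above, not an official formula):

    # Rough, unofficial sketch of the FTT1 capacity estimate for a
    # large node with 6 x 3.84 TB SSDs.
    disk_size_tb = 3.84
    disks = 6

    after_ftt1 = (disks - 1) * disk_size_tb   # lose one disk: 19.20 TB
    after_raid5 = after_ftt1 * 4 / 5          # one parity share of five

    print(f"After FTT1:   {after_ftt1:.2f} TB")    # 19.20 TB
    print(f"After RAID 5: {after_raid5:.2f} TB")   # 15.36 TB; further
                                                   # system overheads bring
                                                   # this to roughly 14-15 TB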

To check further, you can always reach out to the Solution Architect who helped you with this solution sizing, in order to get more information.
Here is a link for Disk Resiliency and the information related to it: 
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00004276en_us&docLocale=en_US&page=GUID-AEA7C2E4-91BC-4F86-8F87-58AA4EB3D7CF.html

Hope this gives you some idea of how this actually works and gets configured on the backend.



ICPMuenchen
Valued Contributor

Re: Physical Capacity on Large Size Models not correct

@NeetSat 

Thank you for clarifying. I know the structure with 8k objects etc.; nevertheless, our medium size cluster nearly reaches the 10 TB as expected.

I don't understand what you mean by RAID 10 - there are not enough disks for a RAID 10.

Using FTT1, surely 1 disk is reserved for drive fail tolerance. That leaves 5 disks; assuming a RAID 5 is built with these, 1 drive's worth goes to parity and 4 to capacity, so 4 x 3.84 = 15.36 TB. That is roughly the size I would have expected from this disk configuration, not 13.86 TB, which is a loss of 10% and does not fit any RAID configuration.
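
To spell out my arithmetic, a minimal sketch in Python (the RAID 5 layout is my assumption, not a confirmed internal one):

    # Expected usable capacity per node vs. what the vCenter plugin shows.
    disk_size_tb = 3.84
    data_disks = 4                  # 6 disks - 1 fail tolerance - 1 parity

    expected_tb = data_disks * disk_size_tb   # 15.36 TB
    observed_tb = 13.86                       # reported by the plugin

    gap = expected_tb - observed_tb
    print(f"Expected {expected_tb:.2f} TB, observed {observed_tb:.2f} TB")
    print(f"Gap: {gap:.2f} TB ({gap / expected_tb:.1%})")   # 1.50 TB, ~9.8%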

Please also check the QuickSpecs sheets of the SimpliVity servers, because the capacity options are always labeled as "usable" capacity!

"Usable Capacity
Notes: Usable capacity is the capacity available after RAID and system overheads have been removed, before deduplication
and compression is applied. In the case of Dual Disk Resiliency the usable capacity will be different"

Hence, something must have gone wrong in the configuration/deployment.

NeetSat
HPE Pro

Re: Physical Capacity on Large Size Models not correct

Hi Peter,
Good day. 

The RAID 10 I am talking about is a portion, usually around 20% or so, that is set aside for the write cache, with about 80% set aside for the RAID 5 capacity storage.
As for the deployment, it is orchestrated, so any misconfiguration of this storage would result in a failure during the deployment process, as we have seen in the past. What gets configured is exactly what is intended for the model and the number of drives available in it.
So I would not doubt the configuration of the nodes during the deployment.
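
Purely as an illustration of that split (only the ~20/80 figure comes from this thread; everything else below, including applying the split to the raw capacity of all six disks, is assumed):

    # Hypothetical sketch of a ~20% RAID 10 cache / ~80% RAID 5 split.
    raw_tb = 6 * 3.84                      # 23.04 TB raw

    cache_usable = 0.20 * raw_tb / 2       # RAID 10 mirroring halves it
    data_usable = 0.80 * raw_tb * 4 / 5    # RAID 5: one parity share of five

    print(f"Write cache (RAID 10):  ~{cache_usable:.2f} TB")   # ~2.30 TB
    print(f"Capacity tier (RAID 5): ~{data_usable:.2f} TB")    # ~14.75 TB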

The storage displayed as usable capacity is always an approximate value. I would suggest you check once with the Solution Architect who designed the configuration of these SimpliVity nodes for you, since they should be able to help you with the calculations of how much storage is exactly used per system.



ICPMuenchen
Valued Contributor
Solution

Re: Physical Capacity on Large Size Models not correct

I'm happy to let you know that we've finally sorted out the issue. After a productive meeting with our partner and two HPE specialists, we found out that the behavior we were seeing is pretty typical for SimpliVity systems without accelerator cards.

It turns out that the OVCs need extra storage, about 1.08 TB per node. This information is also in the SimpliVity Sizing Tool. The older deployment guides still show the old capacity figures; they are no longer listed in the current 5.1.0 version because of the change in accelerator cards.
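
For reference, the numbers reconcile once this reservation is subtracted; a quick check in Python, using the figures from this thread:

    # Documented capacity minus the OVC reservation vs. plugin output.
    nominal_per_node_tb = 15.0     # large model, FTT1, per the documents
    ovc_overhead_tb = 1.08         # per-node storage needed by the OVC
    nodes = 2

    per_node = nominal_per_node_tb - ovc_overhead_tb
    print(f"Per node: ~{per_node:.2f} TB")           # ~13.92 TB
    print(f"Cluster:  ~{per_node * nodes:.2f} TB")   # ~27.84 TB
    # Close to the 13.86 TB / 27.72 TB that the vCenter plugin shows.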

Since we were among the first customers of our partner to get a new SimpliVity system without an accelerator card, we assume this information was simply too new to be widely known.

Sunitha_Mod
Moderator

Re: Physical Capacity on Large Size Models not correct

Hello @ICPMuenchen,

That's awesome!

We are delighted to hear that you were able to find the solution, and we appreciate you keeping us updated.



Thanks,
Sunitha G