HPE SimpliVity
SOLVED
guan8
Advisor

Change Simplivity physical disk capacity warning limit

Hello,

We are running a little low on physical space, and have been sitting between 10% and 20% available physical disk capacity for a long time.

This causes the alarm SimpliVity OmniCube Available Physical Capacity 20 Percent or Less (com.simplivity.event.control.phys.capacity.node.warning) to activate, triggering constant warning messages in the event log (every 120 seconds) and a warning symbol on the hosts, which might obscure other, more pressing matters.

We are aware of the low disk capacity, but are unable at the moment to free up disk space, so we would like to lower the warning limit from 20% to 15%. Is that possible?

Thanks in advance.

//Gustav

10 REPLIES
DeclanOR
Respected Contributor

Re: Change Simplivity physical disk capacity warning limit

Hi @guan8 

Unfortunately we can't suppress those alarms.

I would be more interested in ensuring that you really cannot free up space. There are many steps we can take to ensure you are optimizing capacity usage. Non-VM hives (ISOs, for example), VMs running snapshots, orphaned trees, and so on all use up space if they haven't been properly cleaned up.

I would recommend you create a support case so we can help ensure that as much capacity as possible is freed up. I would be surprised if there were truly nothing more that could be done to free up some space.

Thanks,

DeclanOR




guan8
Advisor

Re: Change Simplivity physical disk capacity warning limit

Hello,

Thank you for your suggestion.

We only have 3 ISOs and no VM snapshots. Orphaned trees, however, I'm not sure about. I'll take your advice and open a case.

We think SimpliVity is having a tough time deduplicating our Microsoft SQL Server, which has 5 databases of around 100 GB of data each. Much of the data is indexed, and data is being written to the databases throughout the day. We think this reindexing is what causes the data to change and prevents it from being deduplicated very efficiently.

We back this machine up once every day and keep the backups for 10 days. We regularly calculate the unique backup size, and it is around 40-70 GB of unique data per backup. Nowhere near that amount is being written to the server each day; it's more like 5-10 GB.

//Gustav

DeclanOR
Respected Contributor

Re: Change Simplivity physical disk capacity warning limit

Hi @guan8 

Thanks for responding. Busy SQL databases will indeed generate quite a lot of unique data, which impacts our ability to deduplicate. This is not a SimpliVity-specific deduplication issue as such; it is simply how deduplication works in general, and busy SQL databases generate a lot of unique, non-deduplicable data.

See how the capacity optimization case goes. Hopefully we can free up some additional space for you. 

Thanks,

DeclanOR




guan8
Advisor

Re: Change Simplivity physical disk capacity warning limit

Hello,

I just finished my session with HPE SimpliVity support and we ran a few commands, such as:

dsv-balance-show --shownodeip
dsv-cfgdb-get-sync-status
dsv-tree-find-orphaned
dsv-balance-manual -q

and everything looked fine according to the HPE support engineer. No orphaned trees, and the storage was balanced between the nodes. He said the only option left for us is to delete VMs and adjust our backup policies. But our backup policy only backs up our 100 VMs once per day and keeps the backups for 9 days.

That's actually way shorter retention than we would prefer. We would like to keep our backups for a couple of weeks, or even months. We recently reduced our backup retention from 10 days to 9 days because we kept hitting the 90% occupied space mark and had to delete backups manually.

/Gustav

DeclanOR
Respected Contributor

Re: Change Simplivity physical disk capacity warning limit

Hi @guan8 

Thanks for the update. Was a script run at any stage on each node to clean up orphans, or was only the dsv-tree-find-orphaned command run? There is a script that can be run which will delete orphaned trees and remove expired but undeleted backups if they exist. There are also other steps that can be taken to clean up potentially unnecessary OSC backups.

This should be done on every node in the cluster. Non-VM hives and VMs running snapshots should also be checked for.

Once all of this is done, the final step that can be carried out is the use of SDelete. The -z flag will zero out free space, which at times can provide substantial space gains. I would mention this to the engineer handling your case.
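
For anyone following along, a rough sketch of what that looks like from a PowerShell prompt inside a Windows guest (the drive letters are placeholders; confirm the switches against the Sysinternals SDelete documentation for your version):

# -z zeroes out free space on the volume; on deduplicating storage the zeroed
# blocks take up (almost) no physical space, which is where the gains come from.
.\sdelete64.exe -z C:
.\sdelete64.exe -z D: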

Hope this helps.

DeclanOR




guan8
Advisor

Re: Change Simplivity physical disk capacity warning limit

Hello,

First of all, thanks for all your help.

Below is the complete output from the session. The part with "-----------> Command to check Balancing & the GC Value every 5sec" is kind of funny. I asked the support engineer if he really wanted me to run that as a command. He said yes. That resulted in a syntax error.

Anyway, it looks like there are no orphaned trees to delete, right?

By non-VM hives, are you only referring to ISOs? We only have 3 ISOs, which account for about 10 GB of data. Otherwise we do not have anything in our datastore but VMs.

We do not have any VM snapshots.

But I will open up another case and refer him to this forum post. We'll see how that goes.

/Gustav

 

administrator@vsphere@omnicube-ip162-161:~$ sudo su
root@omnicube-ip162-161:/home/administrator@vsphere# source /var/tmp/build/bin/appsetup
root@omnicube-ip162-161:/home/administrator@vsphere# svt-federation-show
.----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------.
| Federation |
+-------------------------------+------------+------------+--------+--------------+---------------------------+-------+---------------+-----------------+-----------------+-------------------+---------+--------------------------------+-----------+
| HMS | Datacenter | Cluster | Zone | Host | OVC | State | Mgmt IP | Fed IP | Stor IP | Version | Family | Model | Arbiter |
+-------------------------------+------------+------------+--------+--------------+---------------------------+-------+---------------+-----------------+-----------------+-------------------+---------+--------------------------------+-----------+
| SVT-vCenter01.local.advoco.se | AdvocoDC | Backup | (none) | 10.10.100.63 | OmniStackVC-10-10-100-163 | Alive | 10.10.100.163 | 10.10.102.163 | 10.10.101.163 | Release 3.7.9.279 | vSphere | HPE SimpliVity 380 Series 4000 | Connected |
| | | Production | (none) | 10.0.162.61 | OmniStackVC-10-0-162-161 | Alive | 10.0.162.161 | 192.168.102.161 | 192.168.101.161 | Release 3.7.9.279 | vSphere | HPE SimpliVity 380 Series 4000 | Connected |
| | | | (none) | 10.0.162.62 | OmniStackVC-10-0-162-162 | Alive | 10.0.162.162 | 192.168.102.162 | 192.168.101.162 | Release 3.7.9.279 | vSphere | HPE SimpliVity 380 Series 4000 | Connected |
'-------------------------------+------------+------------+--------+--------------+---------------------------+-------+---------------+-----------------+-----------------+-------------------+---------+--------------------------------+-----------'
root@omnicube-ip162-161:/home/administrator@vsphere# dsv-balance-show --shownodeip
.---------------------------------------------------------------------------------------------------------------------.
| Recorded storage utilization for time period 2020-Jul-23 20:04:44 UTC to 2020-Jul-23 20:14:44 UTC |
| |
| Leader 10.0.162.161 (local), Last Updated 2020-Mar-15 01:25:04 UTC |
+----------------+--------------------------------------+------------------------------+------------------------------+
| | | Calculated Used | Estimated Remaining |
| OmniStack host | Node GUID | Space I/O ( Read / Write ) | Space I/O |
+----------------+--------------------------------------+------------------------------+------------------------------+
| 10.0.162.162 | 1a6f2e42-704b-c007-842f-9f47dd71dd6a | 79% 3% ( 391 / 957 ) | 1.15TB ( 22325 / 14187 ) |
| 10.0.162.161 | 9c072e42-9298-00e6-e8a4-a5c46dc2633f | 79% 3% ( 391 / 957 ) | 1.15TB ( 22325 / 14187 ) |
'----------------+--------------------------------------+------------------------------+------------------------------'
root@omnicube-ip162-161:/home/administrator@vsphere# dsv-cfgdb-get-sync-status
.----------------------------------------------------------------------------------------------------------------------------------------.
| Node Sync Status |
+--------------------------------------+---------------------------+-----------------+----------------------+------------+---------------+
| Node ID | Node Name | Node State | Last Transaction Log | Send delta | Receive delta |
+--------------------------------------+---------------------------+-----------------+----------------------+------------+---------------+
| 9c072e42-9298-00e6-e8a4-a5c46dc2633f | OmniStackVC-10-0-162-161 | Ready | 2001959 | 0 | 0 |
| 1a6f2e42-704b-c007-842f-9f47dd71dd6a | OmniStackVC-10-0-162-162 | NodeOnlineReady | 381690 | 9 | 0 |
| 6b622e42-fea2-2f13-be7b-e413739487d9 | OmniStackVC-10-10-100-163 | NodeOnlineReady | 1867004 | 937 | 0 |
'--------------------------------------+---------------------------+-----------------+----------------------+------------+---------------'
* Send and receive delta are estimates and calculated on resync which can take up to an hour
root@omnicube-ip162-161:/home/administrator@vsphere# Every 5.0s: svt-federation-show >/dev/null 2>&1; dsv-balance-show --shownodeip; dsv-mem-show |egrep -B11 -A1 "defrag\-data"; dsv-counter-show |egrep "bitset|gc-objects-collected|os\-free|\-load" -----------> Command to check Balancing & the GC Value every 5sec
.---------------------------------------------------------------------------------------------------------------------.
| Recorded storage utilization for time period 2020-Jul-23 20:14:55 UTC to 2020-Jul-23 20:24:55 UTC |
| |
| Leader 10.0.162.161 (local), Last Updated 2020-Mar-15 01:25:04 UTC |
+----------------+--------------------------------------+------------------------------+------------------------------+
| | | Calculated Used | Estimated Remaining |
| OmniStack host | Node GUID | Space I/O ( Read / Write ) | Space I/O |
+----------------+--------------------------------------+------------------------------+------------------------------+
| 10.0.162.162 | 1a6f2e42-704b-c007-842f-9f47dd71dd6a | 79% 3% ( 394 / 958 ) | 1.15TB ( 22322 / 14186 ) |
| 10.0.162.161 | 9c072e42-9298-00e6-e8a4-a5c46dc2633f | 79% 3% ( 394 / 958 ) | 1.15TB ( 22322 / 14186 ) |
'----------------+--------------------------------------+------------------------------+------------------------------'
.-----------------------------------------------------------------------------------.
| defrag |
| size: 54525952 |
| chunk size: 4194304 |
| chunk count: 13 |
| used: 6 |
| max: 10 |
+-------------+----------+------------+-------------+---------+-----+--------+------+
| name | size | block size | block count | current | max | failed | rate |
+-------------+----------+------------+-------------+---------+-----+--------+------+
| defrag-meta | 7864320 | 262144 | 30 | 0 | 58 | 0 | 5 |
| defrag-data | 15728640 | 262144 | 60 | 17 | 120 | 0 | 22 |
'-------------+----------+------------+-------------+---------+-----+--------+------'
[1] 19956
egrep: unrecognized option '-----------'
Usage: egrep [OPTION]... PATTERN [FILE]...
Try 'egrep --help' for more information.
The program 'the' is currently not installed. You can install it by typing:
apt-get install the
root@omnicube-ip162-161:/home/administrator@vsphere# dsv-tree-find-orphaned
Orphaned Trees:
[1]+ Exit 2 dsv-counter-show | egrep --color=auto "bitset|gc-objects-collected|os\-free|\-load" ----------- to check Balancing > Command
root@omnicube-ip162-161:/home/administrator@vsphere# dsv-tree-find-orphaned
Orphaned Trees:
root@omnicube-ip162-161:/home/administrator@vsphere#
dsvroot@omnicube-ip162-161:/home/administrator@vsphere# dsv-balance-manual -q

Acquiring federation information...

# Datacenter / Cluster
== ========== / =======
1 AdvocoDC / Backup
2 AdvocoDC / Production

Enter the number of the datacenter to work with [default = none]: 1

Analyzing guest virtual machine information...
1/17 virtual machines processed
2/17 virtual machines processed
3/17 virtual machines processed
4/17 virtual machines processed
5/17 virtual machines processed
6/17 virtual machines processed
7/17 virtual machines processed
8/17 virtual machines processed
9/17 virtual machines processed
10/17 virtual machines processed
11/17 virtual machines processed
12/17 virtual machines processed
13/17 virtual machines processed
14/17 virtual machines processed
15/17 virtual machines processed
16/17 virtual machines processed
17/17 virtual machines processed

Guest VM OWNER NODE 1 MvBkups BKUPS NATIVE IO-R IO-W SZ(G) Name
1 [ 1] p 9 100% 0 0 29.1 !CDRMonsterLinux
2 [ 1] p 9 100% 0 1 523.8 !CDRMonsterWin
3 [ 1] p 9 100% 0 1 63.1 !CallMonitor
4 [ 1] p 9 100% 0 1 22.1 !HQ-PRTG
5 [ 1] p 9 100% 0 2 21.3 !HQ-Utilities01
6 [ 1] p 9 100% 0 0 17.4 !LaHoWin10_2
7 [ 1] p 9 100% 0 3 211.4 !PROD-PYRAMID01
8 [ 1] p 10 100% 0 1 2.1 !PoCSBC01
9 [ 1] p 10 100% 0 0 0.6426 !PoCSBC02
10 [ 1] p 9 100% 0 1 25.3 !Pyramid-Jumphost
11 [ 1] p 8 100% 0 0 47.4 !SESTOWS001
12 [ 1] p 9 100% 0 2 33.4 !STAGING-TMi1
13 [ 1] p 9 100% 0 4 27.2 !UniFiController01
14 [ 1] p 0 0 0 0 64.5 !gustav_hq1
15 [ 1] p 9 100% 0 2 34.7 !mipctest1
16 [ 1] p 0 0 0 0 18.8 !w2k19test01-2019-18-02-18h57m51s
17 [ 1] p 6 100% 0 0 78.3 !x_STAGING-TMi01-old

IOPS(R) 0
IOPS(W) 18
SIZE(G) 1220.5
AVAIL(G) 1218.6

Datacenter is AdvocoDC<->Backup

index Node IP Pri Sec Total
node 1 10.10.100.163 17 0 17


NOTE: HA
Virtual machine guest names with (!) indicate virtual machines which
are not in proper HA state. Re-balance action will only be initiated
on elements in HA state. Re-balance action itself causes the virtual
machine to be out of HA momentarily

But if required HA Non Compliant hives which are in DEGRADED/SYNCING state
can be moved by providing --include-non-ha-hives option

NOTE: PLACEMENT OF OWNERSHIP
Highest efficiency is gained if the primary replica is located on the node
which hosts the guest virtual machine. This node is indicated in the square
brackets next to each virtual machine name. A 0 value indicates foreign
host (legacy) or unavailable data, in which case the vsphere client can
be used to determine this information. A star (*) within these brackets
indicates a non-optimal primary distribution and a tilda (~) indicates
shadow hive.

NOTE: POWER
Virtual machine hive information may be collected while virtual machine
power state is either ON or OFF. If virtual machine power is OFF, the
designation 'p' and 's' is arbitrary until the virtual machine is powered
on. As long as one of the replica pair is on the virtual machines cpu
host node, the hive ownership 'p' will automatically migrate to the cpu
host. Thus, when editing the redistribution csv file when virtual
machines are powered off, ensure only the node hosting the virtual
machine has a 'p' or 's' mark.


File /tmp/balance/replica_distribution_file_AdvocoDC.csv
is now available for update to rebalance the hive replicas among
nodes as required. When the file is set as desired, call this
script again with the updated file to initiate the updates.

Example:
dsv-balance-manual --csvfile /tmp/balance/replica_distribution_file_AdvocoDC.csv

root@omnicube-ip162-161:/home/administrator@vsphere# dsv-balance-manual -q

Acquiring federation information...

# Datacenter / Cluster
== ========== / =======
1 AdvocoDC / Backup
2 AdvocoDC / Production

Enter the number of the datacenter to work with [default = none]: 2

Analyzing guest virtual machine information...
1/101 virtual machines processed
2/101 virtual machines processed
3/101 virtual machines processed
4/101 virtual machines processed
5/101 virtual machines processed
6/101 virtual machines processed
7/101 virtual machines processed
8/101 virtual machines processed
9/101 virtual machines processed
10/101 virtual machines processed
11/101 virtual machines processed
12/101 virtual machines processed
13/101 virtual machines processed
14/101 virtual machines processed
15/101 virtual machines processed
16/101 virtual machines processed
17/101 virtual machines processed
18/101 virtual machines processed
19/101 virtual machines processed
20/101 virtual machines processed
21/101 virtual machines processed
22/101 virtual machines processed
23/101 virtual machines processed
24/101 virtual machines processed
25/101 virtual machines processed
26/101 virtual machines processed
27/101 virtual machines processed
28/101 virtual machines processed
29/101 virtual machines processed
30/101 virtual machines processed
31/101 virtual machines processed
32/101 virtual machines processed
33/101 virtual machines processed
34/101 virtual machines processed
35/101 virtual machines processed
36/101 virtual machines processed
37/101 virtual machines processed
38/101 virtual machines processed
39/101 virtual machines processed
40/101 virtual machines processed
41/101 virtual machines processed
42/101 virtual machines processed
43/101 virtual machines processed
44/101 virtual machines processed
45/101 virtual machines processed
46/101 virtual machines processed
47/101 virtual machines processed
48/101 virtual machines processed
49/101 virtual machines processed
50/101 virtual machines processed
51/101 virtual machines processed
52/101 virtual machines processed
53/101 virtual machines processed
54/101 virtual machines processed
55/101 virtual machines processed
56/101 virtual machines processed
57/101 virtual machines processed
58/101 virtual machines processed
59/101 virtual machines processed
60/101 virtual machines processed
61/101 virtual machines processed
62/101 virtual machines processed
63/101 virtual machines processed
64/101 virtual machines processed
65/101 virtual machines processed
66/101 virtual machines processed
67/101 virtual machines processed
68/101 virtual machines processed
69/101 virtual machines processed
70/101 virtual machines processed
71/101 virtual machines processed
72/101 virtual machines processed
73/101 virtual machines processed
74/101 virtual machines processed
75/101 virtual machines processed
76/101 virtual machines processed
77/101 virtual machines processed
78/101 virtual machines processed
79/101 virtual machines processed
80/101 virtual machines processed
81/101 virtual machines processed
82/101 virtual machines processed
83/101 virtual machines processed
84/101 virtual machines processed
85/101 virtual machines processed
86/101 virtual machines processed
87/101 virtual machines processed
88/101 virtual machines processed
89/101 virtual machines processed
90/101 virtual machines processed
91/101 virtual machines processed
92/101 virtual machines processed
93/101 virtual machines processed
94/101 virtual machines processed
95/101 virtual machines processed
96/101 virtual machines processed
97/101 virtual machines processed
98/101 virtual machines processed
99/101 virtual machines processed
100/101 virtual machines processed
101/101 virtual machines processed

Guest VM OWNER NODE 1 NODE 2 MvBkups BKUPS NATIVE IO-R IO-W SZ(G) Name
1 [ 1] p s 9 100% 0 0 96.1 BuilderSQL2012
2 [ 2] s p 9 100% 0 2 23.6 CC-STAGING01-TEMP
3 [ 1] p s 9 100% 0 5 88.9 CC-STAGING01-old
4 [ 1] p s 9 100% 0 0 63.1 DB-test01
5 [ 1] p s 9 100% 0 2 59.3 ENG-AA1
6 [ 1] p s 9 100% 0 295 25.3 Elastic-RMQ01
7 [ 2] s p 9 100% 0 3 24.8 Elastic-RMQ02
8 [ 2] s p 3 100% 0 1 276.6 FileServer1
9 [ 1] p s 9 100% 0 3 336.9 Fullstorage01
10 [ 2] s p 9 100% 0 0 11.5 GetXML01
11 [ 2] s p 9 100% 0 0 2.3 INT-DNS1-Debian
12 [ 1] p s 9 100% 0 0 2.3 INT-DNS2-Debian
13 [ 1] p s 9 100% 0 0 20.7 JumpHost
14 [ 2] s p 9 100% 0 4 22.9 LAB-Mongo01
15 [ 1] p s 9 100% 0 4 22.8 LAB-Mongo02
16 [ 1] p s 9 100% 0 1 22.0 LAB-RDS01
17 [ 1] p s 9 100% 0 3 23.4 LAB-RMQ01
18 [ 2] s p 9 100% 0 3 23.5 LAB-RMQ02
19 [ 2] s p 9 100% 0 5 30.7 LAB-WEB01
20 [ 1] p s 9 100% 0 0 24.7 MS-Template
21 [ 1] p s 9 100% 0 0 24.7 MS-temp-org
22 [ 2] s p 9 100% 0 0 20.5 MongoTemplate1
23 [ 2] s p 9 100% 2 2 153.7 NowInteract1
24 [ 1] p s 9 100% 0 1 18.4 PROD-CBridge01
25 [ 1] p s 0 0 0 0 25.5 PROD-CBridge01-ny
26 [ 1] p s 9 100% 0 4 35.6 PROD-CDR01
27 [ 1] p s 8 100% 29 12 648.6 PROD-CDR01-old
28 [ 2] s p 9 100% 0 1 31.6 PROD-CDR01-test
29 [ 2] s p 9 100% 0 4 28.3 PROD-Mongo01
30 [ 1] p s 9 100% 0 4 24.1 PROD-Mongo02
31 [ 2] s p 9 100% 0 3 23.9 PROD-Mongo03
32 [ 1] p s 9 100% 0 2 85.3 PROD-OTRS01
33 [ 2] s p 9 100% 0 13 79.3 PROD-PRTG
34 [ 1] p s 9 100% 5 2 42.6 PROD-RDS01
35 [ 2] s p 9 100% 0 1 24.7 PROD-RDS02
36 [ 2] s p 9 100% 0 4 23.1 PROD-RMQ01
37 [ 2] s p 9 100% 0 3 23.4 PROD-RMQ02
38 [ 1] p s 9 100% 0 0 25.8 PROD-TMi01-new
39 [ 1] p s 9 100% 0 1 122.3 PROD-Tmi01
40 [ 1] p s 9 100% 0 3 39.3 PROD-WEB01
41 [ 2] s p 9 100% 0 1 23.1 PROD-WEB02
42 [ 2] s p 9 100% 0 1 17.3 PUB-DNS1
43 [ 1] p s 9 100% 0 1 17.2 PUB-DNS2
44 [* 2] p s 9 100% 0 0 8.4 Template1
45 [ 2] s p 9 100% 0 0 13.3 Template2
46 [ 2] s p 9 100% 0 0 15.4 Template3
47 [ 2] s p 9 100% 0 0 24.0 Template6
48 [ 2] s p 9 100% 0 0 25.4 Template7
49 [ 2] s p 9 100% 0 0 14.5 Template_1_1_1
50 [ 2] s p 9 100% 0 0 29.5 Template_2_2_2
51 [* 2] p s 9 100% 0 0 272.3 Template_3_1_1
52 [ 1] p s 9 100% 0 1 29.9 WS-STAGING01-temp
53 [ 1] p s 9 100% 0 0 24.3 WS-Template
54 [ 1] p s 9 100% 0 0 24.3 WS-temp-org
55 [ 1] p s 9 100% 0 1 51.9 X_ms.d.com-old
56 [ 1] p s 9 100% 0 2 44.5 X_ms.d.net-old
57 [ 2] s p 9 100% 0 1 61.3 X_ms.d.se-old
58 [ 2] s p 9 100% 0 0 16.7 bugtrack
59 [ 2] s p 9 100% 33 19 305.4 cc.a2.net
60 [ 1] p s 9 100% 101 31 726.4 cc.d.com
61 [ 1] p s 9 100% 100 26 767.7 cc.d.net
62 [ 2] s p 9 100% 66 18 485.9 cc.d.se
63 [ 2] s p 9 100% 0 2 314.8 cc.n4.net
64 [ 1] p s 9 100% 0 7 158.8 cc.n5.net
65 [ 2] s p 0 0 0 1 22.3 fisk1
66 [ 2] s p 0 0 0 0 24.2 fisk2
67 [ 2] s p 9 100% 0 1 24.0 hMail01
68 [ 2] s p 9 100% 0 0 25.8 hMail02-d-com
69 [ 2] s p 9 100% 0 0 25.6 hMail03
70 [ 1] p s 9 100% 0 3 25.8 ms-staging01
71 [ 2] s p 9 100% 0 1 41.1 ms.a2.net
72 [ 1] p s 9 100% 0 4 51.5 ms.a2.net-ny
73 [ 1] p s 9 100% 0 5 38.2 ms.d.com
74 [ 1] p s 9 100% 0 4 47.9 ms.d.net
75 [ 1] p s 9 100% 0 5 54.5 ms.d.se
76 [ 1] p s 9 100% 0 4 27.5 ms.n5.net
77 [ 2] s p 9 100% 0 17 105.4 svt-vcenter01.local.advoco.se
78 [ 2] s p 9 100% 0 0 312.4 virCorona
79 [ 1] p s 9 100% 0 0 17.2 virHeineken
80 [ 1] p s 9 100% 0 4 41.6 virKaltenberg
81 [* 1] s p 9 100% 0 0 85.2 virKeHa
82 [ 1] p s 9 100% 0 24 371.2 virkaltenberg-old
83 [ 1] p s 0 0 0 2 26.0 w2k19test01
84 [ 1] p s 9 100% 0 1 26.2 ws-staging01
85 [ 1] p s 9 100% 2 2 26.8 ws.a2.net
86 [ 1] p s 9 100% 0 1 27.7 ws.d.com
87 [ 1] p s 9 100% 0 2 29.0 ws.d.net
88 [ 1] p s 9 100% 0 1 28.5 ws.d.se
89 [ 1] p s 9 100% 0 1 26.4 ws.n5.net
90 [ 1] p s 6 100% 0 0 15.9 x_INT-DNS1-old
91 [ 1] p s 6 100% 0 0 55.9 x_LAB-RDS01-old
92 [ 1] p s 6 100% 0 0 43.0 x_LAB-WEB01-old
93 [ 1] p s 6 100% 0 2 30.9 x_MS-STAGING01-old
94 [ 2] s p 6 100% 0 0 74.8 x_PROD-RDS01-old
95 [ 2] s p 6 100% 0 0 18.6 x_PROD-RMQ01-old
96 [ 1] p s 6 100% 0 0 18.3 x_PROD-RMQ02-old
97 [ 2] s p 6 100% 0 0 69.6 x_PROD-WEB01-old
98 [ 1] p s 6 100% 0 0 27.8 x_PROD-WEB02-old
99 [ 1] p s 6 100% 0 8 65.0 x_PRTG_old
100 [* 1] s p 6 100% 0 0 19.2 x_PUB-DNS1-old
101 [ 2] s p 6 100% 0 1 31.9 x_ms.n5.net-old

IOPS(R) 338 338
IOPS(W) 596 596
SIZE(G) 8178.0 8178.0
AVAIL(G) 1179.8 1179.9

Datacenter is AdvocoDC<->Production

index Node IP Pri Sec Total
node 1 10.0.162.161 57 44 101
node 2 10.0.162.162 44 57 101


NOTE: HA
Virtual machine guest names with (!) indicate virtual machines which
are not in proper HA state. Re-balance action will only be initiated
on elements in HA state. Re-balance action itself causes the virtual
machine to be out of HA momentarily

But if required HA Non Compliant hives which are in DEGRADED/SYNCING state
can be moved by providing --include-non-ha-hives option

NOTE: PLACEMENT OF OWNERSHIP
Highest efficiency is gained if the primary replica is located on the node
which hosts the guest virtual machine. This node is indicated in the square
brackets next to each virtual machine name. A 0 value indicates foreign
host (legacy) or unavailable data, in which case the vsphere client can
be used to determine this information. A star (*) within these brackets
indicates a non-optimal primary distribution and a tilda (~) indicates
shadow hive.

NOTE: POWER
Virtual machine hive information may be collected while virtual machine
power state is either ON or OFF. If virtual machine power is OFF, the
designation 'p' and 's' is arbitrary until the virtual machine is powered
on. As long as one of the replica pair is on the virtual machines cpu
host node, the hive ownership 'p' will automatically migrate to the cpu
host. Thus, when editing the redistribution csv file when virtual
machines are powered off, ensure only the node hosting the virtual
machine has a 'p' or 's' mark.


File /tmp/balance/replica_distribution_file_AdvocoDC.csv
is now available for update to rebalance the hive replicas among
nodes as required. When the file is set as desired, call this
script again with the updated file to initiate the updates.

Example:
dsv-balance-manual --csvfile /tmp/balance/replica_distribution_file_AdvocoDC.csv

root@omnicube-ip162-161:/home/administrator@vsphere#

DeclanOR
Respected Contributor

Re: Change Simplivity physical disk capacity warning limit

Hi @guan8 

Thanks for the detail. Maybe I should have asked how many nodes were in the cluster first.

This is a two-node cluster running 101 VMs, many of them close to 1 TB in size, with each VM backed up daily under a 9-day retention policy, and some of them hosting busy SQL DBs. You have all of this and still 20% capacity remaining! That actually sounds fine. There is probably not a whole lot of cleanup that can be done in this two-node scenario.

Remember, everything in a two-node cluster (and larger) is replicated, so you also have a copy of every VM and every backup for protection, and all of it is stored on those two nodes. I would expect to see the alerts you are seeing, and there probably isn't a whole lot we can do in terms of freeing up space. You could try running SDelete on some of the larger VMs, but it may not bring a whole lot of benefit. Looking at the output, I believe you really are just running hot.

My personal opinion would be to add an additional node to the cluster if it were feasible. Given that you are only at 79% capacity, you could probably also afford an additional day or two of retention on your backups and likely still not hit 90%.

Apologies if we have come right back to where we started. I did want to ensure you were doing all that is possible to free space, and "how many nodes are in the cluster" should probably have been my first question.

Thanks,

DeclanOR




guan8
Advisor

Re: Change Simplivity physical disk capacity warning limit

No apologies needed!

I just ran /calculate_unique_size through the API on every backup and summed up the unique_size_bytes values. All of our backups (local and remote) take up a total of 1589 GB (or 1.6 TB) of unique space.
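
In case it helps anyone, roughly what that loop looks like with curl and jq against the OVC REST API. The OAuth client, the pagination limit, and the exact endpoint paths below are assumptions to verify against the REST API reference for your OmniStack version; the credentials are placeholders.

OVC="10.0.162.161"                       # management IP of one OmniStack Virtual Controller
# Request an OAuth token; the 'simplivity' client and password grant are assumptions
# based on the documented REST API -- substitute your own vCenter credentials.
TOKEN=$(curl -sk -u simplivity: "https://$OVC/api/oauth/token" \
  -d grant_type=password -d username='administrator@vsphere.local' -d password='********' \
  | jq -r .access_token)

# Trigger a unique-size calculation for every backup (unique_size_bytes is only
# populated once these tasks have finished).
curl -sk -H "Authorization: Bearer $TOKEN" "https://$OVC/api/backups?limit=5000" \
  | jq -r '.backups[].id' \
  | while read -r id; do
      curl -sk -X POST -H "Authorization: Bearer $TOKEN" \
        "https://$OVC/api/backups/$id/calculate_unique_size" >/dev/null
    done

# Then sum the unique sizes across all backups (result in GB).
curl -sk -H "Authorization: Bearer $TOKEN" "https://$OVC/api/backups?limit=5000" \
  | jq '[.backups[].unique_size_bytes | select(. != null)] | add / 1e9'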

At the same time, if I look at the HPE SimpliVity Storage Efficiency tab for the Production cluster in vCenter, I can see what is supposedly taking up space, and I'm a bit confused.

For the logical space, I see that we have:
132.14 TB local backups
18.72 TB remote backups
16.37 TB virtual machines
= 167.23 TB logical data

That means backups take up 132.14 + 18.72 = 150.86 TB out of a total of 167.23 TB, i.e. backups are consuming 90% of our currently occupied storage.

We have 11.09 TB total physical storage and we're occupying 8.68 TB currently. If backups consumed 90% of our occupied storage, that would mean backups consume 8.68 * 0.9 = 7.8 TB.

This math doesn't add up. All our backups are supposed to consume a total of 1.6 TB of unique space as I mentioned before. Shouldn't everything that is not unique space be deduplicated?

Am I misunderstanding something?

/Gustav

storage1.PNG

guan8
Advisor
Solution

Re: Change Simplivity physical disk capacity warning limit

Hello,

After running SDelete on a couple of servers, our available physical storage increased from 22% to 32%. We have only run it on old, already shut-down servers so far, and we plan to run it on the rest of our virtual environment, with some caution and with backups taken beforehand, to release even more storage.

We simply downloaded the SDelete binary from https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete and ran ./sdelete64.exe -z D: where D: is the drive letter, and repeated this for all the drives on the servers.

So this is ultimately what helped us, in case anyone faces the same issue with low available physical storage.

/Gustav