MSA Storage

SOLVED
techdaw
Advisor

MSA 2050: 100% disk usage and slow VMs on some nodes of a 3-node Hyper-V cluster

Hi, I have a problem with an MSA 2050.

Three Hyper-V 2022 nodes in a cluster with CSV; all ports are connected to a Cisco NX9000 FC switch.

On one node, disk usage inside the Hyper-V VMs is 1-30%, but on the other nodes the same VMs (or a second disk) sit at 100% disk usage with R/W latency around 3,000 ms. I can't find the problem.

I have checked everything: latest firmware on the MSA 2050, on the Cisco switch, etc.

The VMs run Exchange 2019, and the problems affect DAG creation, database operations, etc.

Thanks for suggesting which commands to use to identify the problem.
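For comparison between nodes, this is the kind of sampling I can run on each host (a minimal sketch, assuming Python plus typeperf, which ships with Windows; the counters are the standard PhysicalDisk set):

import subprocess

# Latency counters report seconds, so 3,000 ms shows up as ~3.0.
COUNTERS = [
    r"\PhysicalDisk(*)\Avg. Disk sec/Read",
    r"\PhysicalDisk(*)\Avg. Disk sec/Write",
    r"\PhysicalDisk(*)\Current Disk Queue Length",
]

def sample_latency(samples=5, interval_s=2):
    # -sc = number of samples, -si = sampling interval in seconds
    cmd = ["typeperf", *COUNTERS, "-sc", str(samples), "-si", str(interval_s)]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(sample_latency())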

 

 

12 REPLIES
techdaw
Advisor


I changed all the volume mappings and changed the tier affinity to Performance; the problem still exists, but it is smaller.

R/W latency in the VMs is now 1-2,500 ms, and there is less lag in the VMs.

On the Hyper-V cluster this warning appears:

Cluster Shared Volume 'Volume1' ('Dysk klastrowy 2') has entered a paused state because of 'STATUS_IO_TIMEOUT(c00000b5)'. All I/O will temporarily be queued until a path to the volume is reestablished.

There is also a problem with Veeam: timeouts when creating snapshots.
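To see how often this happens on each node, a minimal sketch (assuming Python; wevtutil ships with Windows, and event ID 5120 is the CSV paused-state event quoted above):

import subprocess

# Event 5120 = "Cluster Shared Volume ... has entered a paused state".
QUERY = "*[System[(EventID=5120)]]"

def recent_csv_pauses(count=20):
    cmd = [
        "wevtutil", "qe", "System",
        f"/q:{QUERY}",   # filter on event ID 5120
        f"/c:{count}",   # newest <count> events only
        "/rd:true",      # reverse direction: newest first
        "/f:text",       # plain-text output
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(recent_csv_pauses())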

[Four screenshots attached, dated 2022-10-09.]

The logs from the nodes are shown in the attached screenshots.

ArunKKR
HPE Pro


Hi,

The issue description is not quite clear. Please provide the following information:

1. Are you facing the performance issue on VMs hosted on all MSA volumes? (Identify whether the issue is on Pool A volumes or Pool B volumes.)
2. Are you facing the latency only when the load/VM is on nodes 2 and 3? I believe any VM hosted on node 1 performs better compared to VMs hosted on nodes 2 and 3.
3. Have you checked the SAN switch logs for any port errors or low RX/TX power on the SFP modules? (Example commands follow this list.)
4. Have you tried shutting down one storage controller at a time to check whether the issue follows a specific controller?
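For point 3, commands along these lines on a Cisco MDS/Nexus FC switch would be a starting point (please verify the exact syntax for your NX-OS release):

show interface fc1/1 counters
show interface fc1/1 transceiver details
show logging log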

I would suggest getting an HPE support case logged to rule out any issues from the MSA storage end. These types of issues require extensive log analysis, and a support case would help.



techdaw
Advisor


Ad 1. The performance issue is now only on VMs hosted on Pool A volumes.

Ad 2. All nodes.

Ad 3. What is a normal RX/TX power value?

Ad 4. How do I shut down a controller?

ArunKKR
HPE Pro


Hi,

 

Optimal RX/TX power values should be above 400 µW.

Launch an SSH session to the MSA controller IP and share the output of the following commands:

 

show system

show pools

show disk-groups

show ports

show host-port-statistics (execute this command 3 or 5 times during a period when latency is observed)

show disk-statistics
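To capture show host-port-statistics several times at a fixed interval, a small sketch along these lines could help (Python with the paramiko library assumed; the host name and credentials below are placeholders):

import time
import paramiko  # assumed installed: pip install paramiko

MSA_HOST = "msa-a.example.local"  # placeholder: controller management IP
USERNAME = "manage"               # placeholder
PASSWORD = "..."                  # placeholder

def capture(samples=5, interval_s=60):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(MSA_HOST, username=USERNAME, password=PASSWORD)
    try:
        for i in range(samples):
            # If the MSA CLI rejects one-shot commands, drive an
            # interactive session via client.invoke_shell() instead.
            _, stdout, _ = client.exec_command("show host-port-statistics")
            print(f"--- sample {i + 1} at {time.strftime('%H:%M:%S')} ---")
            print(stdout.read().decode())
            time.sleep(interval_s)
    finally:
        client.close()

if __name__ == "__main__":
    capture()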



techdaw
Advisor


Transceiver power readings from the switch (each line shows the measured value followed by the low-alarm, low-warning, high-warning, and high-alarm thresholds):

FC1

fc1/1 lane 1, Unknown: 4 3.2901 VDC 2.969999 3.135 3.465 3.63
fc1/1 lane 1, Current 7.396 mA 2.5 2.5 10.5 10.5
fc1/1 lane 1, Temperature 37.039064 C -5.0 0.0 70.0 75.0
fc1/1 lane 1, Rx Power -4.616776 dBm -15.900669 -11.897675 0.0 3.000082
fc1/1 lane 1, Tx Power -2.398792 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/2 lane 1, Unknown: 4 3.2968 VDC 2.969999 3.135 3.465 3.63
fc1/2 lane 1, Current 6.386 mA 2.5 2.5 10.5 10.5
fc1/2 lane 1, Temperature 38.343752 C -5.0 0.0 70.0 75.0
fc1/2 lane 1, Rx Power -2.469534 dBm -15.900669 -11.897675 0.0 3.000082
fc1/2 lane 1, Tx Power -2.121146 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/3 lane 1, Unknown: 4 3.3372 VDC 2.969999 3.135 3.465 3.63
fc1/3 lane 1, Current 8.142 mA 4.0 5.0 10.8 11.8
fc1/3 lane 1, Temperature 36.523436 C -5.0 0.0 70.0 75.0
fc1/3 lane 1, Rx Power -3.330142 dBm -15.900669 -11.897675 0.0 3.000082
fc1/3 lane 1, Tx Power -7.652297 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/4 lane 1, Unknown: 4 3.3328 VDC 2.969999 3.135 3.465 3.63
fc1/4 lane 1, Current 8.134 mA 4.0 5.0 10.8 11.8
fc1/4 lane 1, Temperature 38.703124 C -5.0 0.0 70.0 75.0
fc1/4 lane 1, Rx Power -3.694704 dBm -15.900669 -11.897675 0.0 3.000082
fc1/4 lane 1, Tx Power -2.850837 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/5 lane 1, Unknown: 4 3.3299 VDC 2.969999 3.135 3.465 3.63
fc1/5 lane 1, Current 8.188 mA 4.0 5.0 10.8 11.8
fc1/5 lane 1, Temperature 40.304688 C -5.0 0.0 70.0 75.0
fc1/5 lane 1, Rx Power -3.433269 dBm -15.900669 -11.897675 0.0 3.000082
fc1/5 lane 1, Tx Power -2.677671 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/6 lane 1, Unknown: 4 3.2595 VDC 2.969999 3.135 3.465 3.63
fc1/6 lane 1, Current 6.16 mA 1.0 2.5 10.5 12.0
fc1/6 lane 1, Temperature 40.375 C -5.0 0.0 70.0 75.0
fc1/6 lane 1, Rx Power -3.098039 dBm -15.900669 -11.897675 0.0 3.000082
fc1/6 lane 1, Tx Power -2.754783 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/7 lane 1, Unknown: 4 3.3559 VDC 2.969999 3.135 3.465 3.63
fc1/7 lane 1, Current 8.146 mA 4.0 5.0 10.8 11.8
fc1/7 lane 1, Temperature 36.917968 C -5.0 0.0 70.0 75.0
fc1/7 lane 1, Rx Power -3.036436 dBm -15.900669 -11.897675 0.0 3.000082
fc1/7 lane 1, Tx Power -2.722961 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/8 lane 1, Unknown: 4 3.3495 VDC 2.969999 3.135 3.465 3.63
fc1/8 lane 1, Current 8.136 mA 4.0 5.0 10.8 11.8
fc1/8 lane 1, Temperature 36.929688 C -5.0 0.0 70.0 75.0
fc1/8 lane 1, Rx Power -2.705922 dBm -15.900669 -11.897675 0.0 3.000082
fc1/8 lane 1, Tx Power -2.846653 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/9 lane 1, Unknown: 4 3.301199 VDC 2.969999 3.135 3.465 3.63
fc1/9 lane 1, Current 6.364 mA 2.0 2.0 10.5 10.5
fc1/9 lane 1, Temperature 36.839844 C -5.0 0.0 70.0 75.0
fc1/9 lane 1, Rx Power -2.881093 dBm -17.30487 -13.297542 0.0 3.000082
fc1/9 lane 1, Tx Power -2.621714 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/10 lane 1, Unknown: 4 3.301199 VDC 2.969999 3.135 3.465 3.63
fc1/10 lane 1, Current 5.982 mA 2.0 2.0 10.5 10.5
fc1/10 lane 1, Temperature 38.406248 C -5.0 0.0 70.0 75.0
fc1/10 lane 1, Rx Power -3.910463 dBm -17.30487 -13.297542 0.0 3.000082
fc1/10 lane 1, Tx Power -2.546128 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/11 lane 1, Unknown: 4 3.3 VDC 2.969999 3.135 3.465 3.63
fc1/11 lane 1, Current 5.866 mA 2.0 2.0 10.5 10.5
fc1/11 lane 1, Temperature 34.875 C -5.0 0.0 70.0 75.0
fc1/11 lane 1, Rx Power -3.649185 dBm -17.30487 -13.297542 0.0 3.000082
fc1/11 lane 1, Tx Power -2.550814 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/12 lane 1, Unknown: 4 3.2947 VDC 2.969999 3.135 3.465 3.63
fc1/12 lane 1, Current 6.666 mA 2.0 2.0 10.5 10.5
fc1/12 lane 1, Temperature 37.722656 C -5.0 0.0 70.0 75.0
fc1/12 lane 1, Rx Power -2.161678 dBm -17.30487 -13.297542 0.0 3.000082
fc1/12 lane 1, Tx Power -2.559418 dBm -14.001169 -10.0 -1.30006 1.699975

FC2
fc1/1 lane 1, Unknown: 4 3.2738 VDC 2.969999 3.135 3.465 3.63
fc1/1 lane 1, Current 6.14 mA 1.0 2.5 10.5 12.0
fc1/1 lane 1, Temperature 39.25 C -5.0 0.0 70.0 75.0
fc1/1 lane 1, Rx Power -2.24608 dBm -15.900669 -11.897675 0.0 3.000082
fc1/1 lane 1, Tx Power -2.395021 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/2 lane 1, Unknown: 4 3.2572 VDC 2.969999 3.135 3.465 3.63
fc1/2 lane 1, Current 6.16 mA 1.0 2.5 10.5 12.0
fc1/2 lane 1, Temperature 40.625 C -5.0 0.0 70.0 75.0
fc1/2 lane 1, Rx Power -3.035562 dBm -15.900669 -11.897675 0.0 3.000082
fc1/2 lane 1, Tx Power -2.219211 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/3 lane 1, Unknown: 4 3.2 VDC 2.969999 3.135 3.465 3.63
fc1/3 lane 1, Current 6.16 mA 1.0 2.5 10.5 12.0
fc1/3 lane 1, Temperature 41.5 C -5.0 0.0 70.0 75.0
fc1/3 lane 1, Rx Power -2.860066 dBm -15.900669 -11.897675 0.0 3.000082
fc1/3 lane 1, Tx Power -2.826632 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/4 lane 1, Unknown: 4 3.2143 VDC 2.969999 3.135 3.465 3.63
fc1/4 lane 1, Current 6.2 mA 1.0 2.5 10.5 12.0
fc1/4 lane 1, Temperature 42.0 C -5.0 0.0 70.0 75.0
fc1/4 lane 1, Rx Power -3.083881 dBm -15.900669 -11.897675 0.0 3.000082
fc1/4 lane 1, Tx Power -2.229358 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/5 lane 1, Unknown: 4 3.2309 VDC 2.969999 3.135 3.465 3.63
fc1/5 lane 1, Current 6.18 mA 1.0 2.5 10.5 12.0
fc1/5 lane 1, Temperature 40.75 C -5.0 0.0 70.0 75.0
fc1/5 lane 1, Rx Power -3.295686 dBm -15.900669 -11.897675 0.0 3.000082
fc1/5 lane 1, Tx Power -2.322506 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/6 lane 1, Unknown: 4 3.2643 VDC 2.969999 3.135 3.465 3.63
fc1/6 lane 1, Current 6.18 mA 1.0 2.5 10.5 12.0
fc1/6 lane 1, Temperature 41.625 C -5.0 0.0 70.0 75.0
fc1/6 lane 1, Rx Power -2.521223 dBm -15.900669 -11.897675 0.0 3.000082
fc1/6 lane 1, Tx Power -2.917491 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/7 lane 1, Unknown: 4 3.3366 VDC 2.969999 3.135 3.465 3.63
fc1/7 lane 1, Current 8.208 mA 4.0 5.0 10.8 11.8
fc1/7 lane 1, Temperature 39.058592 C -5.0 0.0 70.0 75.0
fc1/7 lane 1, Rx Power -6.029294 dBm -15.900669 -11.897675 0.0 3.000082
fc1/7 lane 1, Tx Power -2.67928 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/8 lane 1, Unknown: 4 3.3023 VDC 2.969999 3.135 3.465 3.63
fc1/8 lane 1, Current 8.108 mA 4.0 5.0 10.8 11.8
fc1/8 lane 1, Temperature 43.76172 C -5.0 0.0 70.0 75.0
fc1/8 lane 1, Rx Power -3.087416 dBm -17.30487 -13.297542 0.0 3.000082
fc1/8 lane 1, Tx Power -3.296613 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/9 lane 1, Unknown: 4 3.254799 VDC 2.969999 3.135 3.465 3.63
fc1/9 lane 1, Current 6.14 mA 1.0 2.5 10.5 12.0
fc1/9 lane 1, Temperature 39.25 C -5.0 0.0 70.0 75.0
fc1/9 lane 1, Rx Power -3.36959 dBm -15.900669 -11.897675 0.0 3.000082
fc1/9 lane 1, Tx Power -2.446588 dBm -13.001623 -8.999743 -1.30006 1.699975
fc1/10 lane 1, Unknown: 4 3.289999 VDC 2.969999 3.135 3.465 3.63
fc1/10 lane 1, Current 6.928 mA 4.0 5.0 10.8 11.8
fc1/10 lane 1, Temperature 39.878908 C -5.0 0.0 70.0 75.0
fc1/10 lane 1, Rx Power -2.088007 dBm -17.30487 -13.297542 0.0 3.000082
fc1/10 lane 1, Tx Power -3.309622 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/11 lane 1, Unknown: 4 3.2893 VDC 2.969999 3.135 3.465 3.63
fc1/11 lane 1, Current 6.966 mA 2.0 2.0 10.5 10.5
fc1/11 lane 1, Temperature 36.644532 C -5.0 0.0 70.0 75.0
fc1/11 lane 1, Rx Power -3.359224 dBm -17.30487 -13.297542 0.0 3.000082
fc1/11 lane 1, Tx Power -2.656803 dBm -14.001169 -10.0 -1.30006 1.699975
fc1/12 lane 1, Unknown: 4 3.294799 VDC 2.969999 3.135 3.465 3.63
fc1/12 lane 1, Current 6.038 mA 2.0 2.0 10.5 10.5
fc1/12 lane 1, Temperature 38.582032 C -5.0 0.0 70.0 75.0
fc1/12 lane 1, Rx Power -3.529106 dBm -17.30487 -13.297542 0.0 3.000082
fc1/12 lane 1, Tx Power -2.442775 dBm -14.001169 -10.0 -1.30006 1.699975

show system:

HPE MSA Storage MSA 2050 SAN
System Name: BaradasMSA2050
System Location: Olesno
Version: VL270P007
# show system
System Information
------------------
System Name: BaradasMSA2050
System Information: Storage
Midplane Serial Number: Confidential info erased
Vendor Name: HPE
Product ID: MSA 2050 SAN
Product Brand: MSA Storage
SCSI Vendor ID: HPE
SCSI Product ID: MSA 2050 SAN
Enclosure Count: 1
Health: OK
Health Reason:
Other MC Status: Operational
PFU Status: Idle

show pools:

Name  Serial Number                     Blocksize  Total Size  Avail     Snap Size  OverCommit  Disk Groups  Volumes  Low Thresh  Mid Thresh  High Thresh  Sec Fmt  Health
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A     00c0ff3b90ec00001266dc5b01000000  512        8389.3GB    5177.3GB  0B         Enabled     1            3        50.00 %     75.00 %     97.44 %      512n     OK
B     00c0ff3b917e00004c4cfb5a01000000  512        3594.9GB    3245.7GB  0B         Enabled     1            1        50.00 %     75.00 %     94.02 %      512e     OK

show ports

Ports  Media  Target ID         Status  Speed(A)  Speed(C)  Topo(C)  Health
------------------------------------------------------------------------------
A1     FC(P)  207000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
A2     FC(P)  217000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
A3     FC(P)  227000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
A4     FC(P)  237000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
B1     FC(P)  247000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
B2     FC(P)  257000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
B3     FC(P)  267000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
B4     FC(P)  277000c0ff3b8c20  Up      8Gb       Auto      PTP      OK
------------------------------------------------------------------------------
Success: Command completed successfully. (2022-10-13 10:04:09)

show host-port-statistics (first sample)

Durable ID   Bps       IOPS  Reads    Writes   Data Read  Data Written  Queue Depth  I/O Resp Time  Read Resp Time  Write Resp Time  Reset Time
------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1  372.2KB   27    4815185  3543006  187.9GB    121.1GB       3            281            575             90               2022-10-07 22:13:41
hostport_A2  668.6KB   18    6132168  3185323  196.3GB    116.6GB       0            128            92              147              2022-10-07 22:13:41
hostport_A3  320.0KB   22    4764121  3543161  185.9GB    121.1GB       0            83             47              99               2022-10-07 22:13:41
hostport_A4  285.1KB   21    6119033  3184643  194.0GB    118.4GB       0            82             52              97               2022-10-07 22:13:41
hostport_B1  2638.8KB  25    1302098  112229   53.6GB     6215.3MB      0            2730           3476            255              2022-10-07 22:25:27
hostport_B2  2590.2KB  22    1246425  112107   64.1GB     6329.4MB      0            3756           4887            163              2022-10-07 22:25:27
hostport_B3  2440.7KB  24    1848683  154736   80.4GB     8943.9MB      0            2986           3742            184              2022-10-07 22:25:27
hostport_B4  2657.2KB  23    1797949  154826   91.1GB     9.0GB         0            2868           3656            182              2022-10-07 22:25:27

show host-port-statistics (second sample)

Durable ID   Bps      IOPS  Reads    Writes   Data Read  Data Written  Queue Depth  I/O Resp Time  Read Resp Time  Write Resp Time  Reset Time
------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1  375.8KB  20    4815670  3543399  187.9GB    121.1GB       8            118            128             107              2022-10-07 22:13:41
hostport_A2  392.7KB  20    6132643  3185728  196.3GB    116.6GB       0            153            158             146              2022-10-07 22:13:41
hostport_A3  498.6KB  18    4764511  3543553  185.9GB    121.1GB       0            174            174             173              2022-10-07 22:13:41
hostport_A4  416.7KB  18    6119429  3185040  194.0GB    118.5GB       0            145            139             151              2022-10-07 22:13:41
hostport_B1  243.2KB  6     1302338  112248   53.6GB     6215.7MB      0            316            332             116              2022-10-07 22:25:27
hostport_B2  227.8KB  5     1246639  112131   64.1GB     6330.3MB      0            482            519             149              2022-10-07 22:25:27
hostport_B3  253.4KB  6     1848919  154760   80.4GB     8944.5MB      0            470            505             128              2022-10-07 22:25:27
hostport_B4  271.8KB  5     1798170  154850   91.1GB     9.0GB         0            542            585             145              2022-10-07 22:25:27

 

show disk-statistics


Location Serial Number Pwr Hrs Bps IOPS Reads Writes Data Read Data Written Lifetime Read Lifetime Written Reset Time
----------------------------------------------------------------------------------------------------------------------------------------------------------
1.1 WAF0HV87 39937 6769.6KB 28 12195277 1170683 3106.1GB 47.2GB 0B 0B 2022-10-07 22:14:33
1.2 WAF0GWE4 39937 6789.1KB 28 12273375 1239508 3112.0GB 52.9GB 0B 0B 2022-10-07 22:14:33
1.3 WAF0GW3E 39937 6789.1KB 28 12301686 1274618 3112.2GB 53.2GB 0B 0B 2022-10-07 22:14:33
1.4 WAF0HRN1 39937 6786.0KB 28 12343640 1317301 3111.8GB 52.6GB 0B 0B 2022-10-07 22:14:33
1.5 WAF15MFJ 31055 6785.0KB 28 12347783 1322349 3111.1GB 52.1GB 0B 0B 2022-10-07 22:14:33
1.6 WAF0GWXB 39937 6786.5KB 28 12354007 1326429 3112.0GB 52.8GB 0B 0B 2022-10-07 22:14:33
1.7 WAF0HR6C 39937 6783.4KB 28 12308522 1281867 3111.3GB 52.4GB 0B 0B 2022-10-07 22:14:33
1.8 WAF0GX9H 39937 6760.4KB 27 12147550 1120193 3104.2GB 45.3GB 0B 0B 2022-10-07 22:14:33
1.9 WAF0HRMF 39937 6775.2KB 28 12217816 1187218 3109.3GB 50.1GB 0B 0B 2022-10-07 22:14:33
1.10 WAF0GWJT 39937 6777.3KB 28 12255147 1223088 3110.1GB 50.9GB 0B 0B 2022-10-07 22:14:33
1.11 0XKLWUXP 36096 6774.7KB 28 12236516 1206162 3109.8GB 50.8GB 0B 0B 2022-10-07 22:14:33
1.12 0XKX5ASP 36095 6775.2KB 28 12242149 1212606 3110.0GB 50.8GB 0B 0B 2022-10-07 22:14:33
1.13 0XKX8RMP 36095 6782.9KB 28 12283339 1256016 3111.2GB 51.9GB 0B 0B 2022-10-07 22:14:33
1.14 WAF0SKHV 35279 6784.5KB 28 12268708 1242856 3111.2GB 52.3GB 0B 0B 2022-10-07 22:14:33
1.15 WAF0SKEE 35279 6767.1KB 27 12184168 1156028 3106.1GB 47.1GB 0B 0B 2022-10-07 22:14:33
1.16 WAF0SKHJ 35279 6767.1KB 28 12190695 1166726 3106.0GB 46.9GB 0B 0B 2022-10-07 22:14:33
1.17 S3Z0GD4V 53151 19.7MB 19 9218061 98306 9.4TB 13.0GB 0B 0B 2022-10-07 22:26:19
1.18 S3Z0GFJD 53151 19.7MB 19 9216201 98303 9.4TB 13.0GB 0B 0B 2022-10-07 22:26:19
1.19 S3Z0G9YG 53134 19.7MB 19 9232102 89559 9.4TB 12.9GB 0B 0B 2022-10-07 22:26:19
1.20 S3Z0EYCV 53134 19.7MB 19 9234168 89578 9.4TB 12.9GB 0B 0B 2022-10-07 22:26:19

 

ArunKKR
HPE Pro


Hi,

The majority of the RX/TX power values seem to be within the normal range.

I would suggest engaging the Cisco FC switch team for their input on the ports where values are below -5 dBm.

Reference: https://www.asc.ohio-state.edu/smith.2341/dbm2power.php
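For a quick sense of scale (my own arithmetic, not an HPE figure), microwatts follow from dBm as P(uW) = 1000 * 10^(dBm/10), so the 400 µW guideline corresponds to roughly -4 dBm:

def dbm_to_uw(dbm):
    # 0 dBm = 1 mW = 1000 uW, so P(uW) = 1000 * 10**(dBm / 10)
    return 1000.0 * 10.0 ** (dbm / 10.0)

for dbm in (-2.4, -4.0, -5.0, -7.65):
    print(f"{dbm:6.2f} dBm = {dbm_to_uw(dbm):6.1f} uW")

# -4 dBm is ~398 uW (about the 400 uW guideline); -5 dBm is ~316 uW;
# the fc1/3 Tx reading of -7.65 dBm is only ~172 uW.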

 

The MSA is not reporting any issues. There does not seem to be much I/O on either Pool A or Pool B volumes, and no latency is being reported in the output shared.

 

The command to shut down controller A would be:

shutdown a

You could restart it after the testing using the command below:

restart sc a
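A reasonable test sequence would be the following (a sketch; please verify the commands against the MSA 2050 CLI Reference Guide for your firmware):

shutdown a
show controllers    (confirm controller A is down and hosts still have paths via B)
restart sc a
show controllers    (confirm both controllers are back up and healthy)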

 

You would need to log a support case to review the store logs/debug logs further.
techdaw
Advisor


Durable ID   Bps       IOPS  Reads    Writes   Data Read  Data Written  Queue Depth  I/O Resp Time  Read Resp Time  Write Resp Time  Reset Time
------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1  1574.9KB  40    4882906  3578909  188.9GB    122.5GB       8            557            662             110              2022-10-07 22:13:41
hostport_A2  1199.1KB  44    6200421  3221410  197.3GB    117.9GB       0            815            866             522              2022-10-07 22:13:41
hostport_A3  1439.2KB  43    4829284  3579001  186.9GB    122.5GB       0            685            705             559              2022-10-07 22:13:41
hostport_A4  1503.2KB  44    6183791  3220659  195.0GB    119.8GB       0            571            564

Are these response times and queue depths not too high?

 

If I restart controller A, do I need to close all connections to the storage, or can I restart it online?

Yes, I have opened a support case.

Thanks for the answers and suggestions.

 

ArunKKR
HPE Pro


Hi, the entire load will be on a single controller while the other controller is restarted. It can be performed online; however, it is recommended to restart during a period of low I/O activity.



techdaw
Advisor


hostport_B3 199.1KB 4 1874489 157278 81.5GB 9.0GB 0 2340 8929 (is this response time not too high?)

hostport_B4 488.9KB 5 1824857 157382 92.1GB 9.1GB 0 1959 4758

ArunKKR
HPE Pro


Hi, those are read response times in microseconds; anything below 20,000 (i.e. 20 ms) should be fine.



techdaw
Advisor
Solution


The problem is resolved. The problem was with the interface between the storage and the switch.
Thank you for the assistance and for pinpointing the problem, Arun Kumar.

[Screenshot attached, dated 2022-10-18.]

Sunitha_Mod
Moderator


Hello @techdaw

Excellent! We are glad to know the problem has been resolved. 



Thanks,
Sunitha G