MSA Storage

Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

 
TigerRool10-1
Advisor

Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Hi There,

Is anyone else out there using an HPE c7000 and MSA 2040 in a software iSCSI implementation? (I have two HPE 5700 switches providing the storage network.) There is one 10Gb connection (with a standby) to the storage using SFP+ DAC cables. I'm using VMware 6.7. The VMware datastore is on 10K SAS disks in a RAID 6 configuration. The blades are BL460c Gen9 (power is set to MAX in the BIOS).

I've built a test VM: a Windows Server 2012 machine with 1 CPU, 4GB RAM, and one 40GB disk.

I then run this test:

DiskSpd.exe -c15G -d300 -r -w40 -t8 -o32 -b64K -Sh -L c:\temp\testfile.dat

What performance are you getting? I'm not sure what is good for an MSA 2040.

While running the test above I also run Performance Monitor and specifically watch the counter "Avg. Disk sec/Transfer". The average is always around 0.262, when Microsoft says it should be around 0.005. That's what's really bugging me.

I'd appreciate it if you could run the test too (as close to the config above as possible would be great).

fresh
18 REPLIES
TigerRool10-1
Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

My results

 


Command Line: DiskSpd.exe -c15G -d300 -r -w40 -t8 -o32 -b64K -Sh -L c:\temp\testfile.dat

Input parameters:

timespan: 1
-------------
duration: 300s
warm up time: 5s
cool down time: 0s
measuring latency
random seed: 0
path: 'c:\temp\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 60/40)
block size: 65536
using random I/O (alignment: 65536)
number of outstanding I/O operations: 32
thread stride size: 0
threads per file: 8
using I/O Completion Ports
IO priority: normal

System information:

computer name:
start time: 2019/02/20 10:26:53 UTC

Results for timespan 1:
*******************************************************************************

actual test time: 300.00s
thread count: 8
proc count: 1

CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 3.91%| 0.74%| 3.17%| 96.09%
-------------------------------------------
avg.| 3.91%| 0.74%| 3.17%| 96.09%

Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 2384723968 | 36388 | 7.58 | 121.29 | 263.452 | 96.438 | c:\temp\testfile.dat (15GiB)
1 | 2387214336 | 36426 | 7.59 | 121.42 | 263.080 | 97.048 | c:\temp\testfile.dat (15GiB)
2 | 2381905920 | 36345 | 7.57 | 121.15 | 263.822 | 97.947 | c:\temp\testfile.dat (15GiB)
3 | 2390556672 | 36477 | 7.60 | 121.59 | 262.768 | 96.274 | c:\temp\testfile.dat (15GiB)
4 | 2384592896 | 36386 | 7.58 | 121.29 | 263.433 | 97.782 | c:\temp\testfile.dat (15GiB)
5 | 2380398592 | 36322 | 7.57 | 121.07 | 263.954 | 97.094 | c:\temp\testfile.dat (15GiB)
6 | 2386821120 | 36420 | 7.59 | 121.40 | 263.202 | 97.589 | c:\temp\testfile.dat (15GiB)
7 | 2378235904 | 36289 | 7.56 | 120.96 | 264.204 | 97.470 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 19074449408 | 291053 | 60.64 | 970.17 | 263.489 | 97.208

Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1430781952 | 21832 | 4.55 | 72.77 | 248.969 | 90.696 | c:\temp\testfile.dat (15GiB)
1 | 1434255360 | 21885 | 4.56 | 72.95 | 248.666 | 93.539 | c:\temp\testfile.dat (15GiB)
2 | 1427243008 | 21778 | 4.54 | 72.59 | 248.341 | 92.967 | c:\temp\testfile.dat (15GiB)
3 | 1431175168 | 21838 | 4.55 | 72.79 | 248.458 | 92.297 | c:\temp\testfile.dat (15GiB)
4 | 1436876800 | 21925 | 4.57 | 73.08 | 248.435 | 93.917 | c:\temp\testfile.dat (15GiB)
5 | 1425080320 | 21745 | 4.53 | 72.48 | 248.747 | 91.595 | c:\temp\testfile.dat (15GiB)
6 | 1426587648 | 21768 | 4.53 | 72.56 | 248.319 | 94.191 | c:\temp\testfile.dat (15GiB)
7 | 1420689408 | 21678 | 4.52 | 72.26 | 248.630 | 90.786 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 11432689664 | 174449 | 36.34 | 581.49 | 248.571 | 92.510

Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 953942016 | 14556 | 3.03 | 48.52 | 285.175 | 100.626 | c:\temp\testfile.dat (15GiB)
1 | 952958976 | 14541 | 3.03 | 48.47 | 284.774 | 98.193 | c:\temp\testfile.dat (15GiB)
2 | 954662912 | 14567 | 3.03 | 48.56 | 286.967 | 100.603 | c:\temp\testfile.dat (15GiB)
3 | 959381504 | 14639 | 3.05 | 48.80 | 284.116 | 98.112 | c:\temp\testfile.dat (15GiB)
4 | 947716096 | 14461 | 3.01 | 48.20 | 286.173 | 99.128 | c:\temp\testfile.dat (15GiB)
5 | 955318272 | 14577 | 3.04 | 48.59 | 286.639 | 100.576 | c:\temp\testfile.dat (15GiB)
6 | 960233472 | 14652 | 3.05 | 48.84 | 285.314 | 98.356 | c:\temp\testfile.dat (15GiB)
7 | 957546496 | 14611 | 3.04 | 48.70 | 287.309 | 102.341 | c:\temp\testfile.dat (15GiB)
-----------------------------------------------------------------------------------------------------
total: 7641759744 | 116604 | 24.29 | 388.68 | 285.808 | 99.758

 

total:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 3.384 | 2.193 | 2.193
25th | 204.069 | 255.489 | 214.836
50th | 230.440 | 281.170 | 254.376
75th | 269.826 | 314.423 | 295.324
90th | 338.094 | 352.563 | 347.093
95th | 408.343 | 386.474 | 398.219
99th | 622.458 | 750.348 | 654.924
3-nines | 925.564 | 1079.021 | 1047.529
4-nines | 1345.373 | 1172.517 | 1276.759
5-nines | 1696.298 | 1193.568 | 1695.573
6-nines | 1822.909 | 1218.004 | 1822.909
7-nines | 1822.909 | 1218.004 | 1822.909
8-nines | 1822.909 | 1218.004 | 1822.909
9-nines | 1822.909 | 1218.004 | 1822.909
max | 1822.909 | 1218.004 | 1822.909
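A quick sanity check on these numbers using Little's law (in-flight I/Os = IOPS × average latency): with -t8 -o32, DiskSpd keeps 8 × 32 = 256 I/Os outstanding against the datastore.

```python
# Sanity check on the DiskSpd results above using Little's law:
# in-flight I/Os = IOPS x average latency.
threads = 8                   # -t8
outstanding_per_thread = 32   # -o32
queue_depth = threads * outstanding_per_thread   # 256 I/Os in flight

total_iops = 970.17                    # "I/O per s" total from the run above
avg_latency_s = 263.489 / 1000.0       # AvgLat (ms) converted to seconds

implied_in_flight = total_iops * avg_latency_s
print(f"requested queue depth:  {queue_depth}")
print(f"implied in-flight I/Os: {implied_in_flight:.0f}")
```

At roughly 970 IOPS, a 256-deep queue must average roughly 0.26 s per I/O, which matches both the AvgLat above and the 0.262 "Avg. Disk sec/Transfer" seen in Performance Monitor, so the latency likely reflects the queue depth as much as raw disk speed.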

fresh

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

@TigerRool10-1, I checked your query and test output, but this test was run at the VM level with the DiskSpd tool, which is Microsoft-specific. This forum is exclusively for MSA queries, so performance for the MSA needs to be checked at the block level.

Troubleshooting a performance issue involves many factors and is not a straightforward task. Some best practices to follow: no hardware issue should exist, the MSA firmware needs to be up to date, and connected systems such as servers and SAN switches all need to be up to date with drivers/firmware as well.

Check what block size is set at the host; depending on that, decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. The corollary, however, is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decrease but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
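This trade-off can be sketched with a toy model: an array that is IOPS-bound for small I/Os and bandwidth-bound for large ones. The 20,000 IOPS and 800 MB/s limits below are illustrative placeholders, not MSA 2040 specifications.

```python
# Toy model of the IOPS-vs-throughput trade-off described above.
# MAX_IOPS and MAX_MBPS are hypothetical limits, not MSA 2040 specs.
MAX_IOPS = 20_000     # small-I/O ceiling (IOPS-bound)
MAX_MBPS = 800.0      # large-I/O ceiling (bandwidth-bound)

def achievable(block_kib: float) -> tuple[float, float]:
    """Return (IOPS, MB/s) achievable at a given block size in KiB."""
    mb_per_io = block_kib / 1024.0
    iops = min(MAX_IOPS, MAX_MBPS / mb_per_io)  # whichever limit binds first
    return iops, iops * mb_per_io

for bs in (4, 8, 64, 256, 1024):
    iops, mbps = achievable(bs)
    print(f"{bs:>5} KiB: {iops:>8.0f} IOPS, {mbps:>6.1f} MB/s")
```

Small blocks hit the IOPS ceiling at low MB/s; large blocks saturate bandwidth while IOPS fall away, which is exactly the behaviour described above.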

Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and of sequential and random I/O.
For example, a Microsoft® SQL Server instance running an OLTP-type workload might see disk I/O that is 8 KB in size, 80 percent read, and 100 percent random.
A disk backup target, on the other hand, might see disk I/O that is 64 KB or 256 KB in size, with 90 percent writes and 100 percent sequential.

The type of workload will affect the results of the performance measurement.

Check the Customer Advisory below and disable "In-band SES":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564

You can check this Customer Advisory as well; in many situations it has helped to improve performance:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698

If you have a specific requirement and want only SSD pages to handle your I/O, then use 'Tier Affinity' on the particular volume.

If you still face a performance issue, then while the issue is occurring capture the outputs below at least 10 to 15 times, collect the MSA logs, and log an HPE support case. They will help you.

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
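Capturing those outputs repeatedly can be scripted. The sketch below assumes SSH access to the MSA CLI; the management address is a placeholder, and by default it only prints what it would run (dry run), so adapt it to however you reach your array.

```python
# Sketch: collect the MSA statistics commands above at intervals.
# HOST is a hypothetical placeholder; dry_run=True avoids touching any array.
import subprocess
import time

HOST = "manage@msa2040.example.local"  # placeholder management address
COMMANDS = [
    "show controller-statistics",
    "show disk-statistics",
    "show host-port-statistics",
    "show vdisk-statistics",
    "show volume-statistics",
]

def capture(samples: int = 15, interval_s: int = 60,
            dry_run: bool = True) -> list[str]:
    """Run each statistics command `samples` times, `interval_s` apart."""
    collected = []
    for i in range(samples):
        for cmd in COMMANDS:
            full = ["ssh", HOST, cmd]
            if dry_run:
                collected.append(" ".join(full))      # record, don't execute
            else:
                result = subprocess.run(full, capture_output=True, text=True)
                collected.append(result.stdout)
        if not dry_run and i < samples - 1:
            time.sleep(interval_s)                    # space out the samples
    return collected

print(len(capture(samples=15)), "command invocations planned")
```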

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************

 


TigerRool10-1
Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Hi Subhajit

Very kind of you to respond. The issue is this: I don't know whether I have a performance issue. The MSA is not in production yet and I'm simply stress-testing it myself.

However, you have seen my other postings about extending the size of a pool (and extending a volume) and the problems it is causing me. You and a colleague kindly pointed out that the MSA 2040 firmware is not supported with VMware 6.7.

I'm following this up with my vendor and VMware (very, very slow).

If they do not get back to me in 24 hours, I'll downgrade to ESXi 6.5 (which is supported) and run all my tests again (extending a pool, extending a volume, and performance).

ta,

P.S. Please note my MSA 2040 was purchased with only 10 disks (6 SAS, 4 SSD), so it will no doubt be extended in the future.

fresh

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

@TigerRool10-1, yes, you are correct: as of now the MSA 2040 has not yet been tested with ESXi 6.7, so any outcome will be unpredictable.

I understand your stress testing, but in this forum we analyse only the MSA itself. For application-level or OS-level testing, only a Microsoft expert can help you.

If you downgrade to ESXi 6.5 and use it with the MSA 2040, then kindly follow the instructions I have suggested; this will certainly help when you put this MSA into production.

If you are still looking for more information, kindly mention it so that the MSA experts can help you; otherwise you can close this thread for now.

 

Hope this helps!
Regards
Subhajit


 

 


Brice13
Occasional Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

I see that vSphere 6.5 U2 is not supported right now by the MSA 2040; is that correct?

What about compatibility with the new firmware GL225P001? (6.5 U2 and 6.7 U1?)

Thanks

Brice

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Yes, ESXi 6.5 U2 has not yet been tested by HPE with the MSA 2040.

In fact, even the VMware website shows the MSA 2040 supported only up to ESXi 6.5 U1.

Attached are the screenshots for your reference.

Yes, GL225R003 is supported with the MSA 2040 and ESXi 6.5 U1.

There is no firmware called GL225P001.

Please use the link below to check all firmware details:

https://h41111.www4.hpe.com/storage/msafirmware.html

 

Hope this helps!
Regards
Subhajit



Brice13
Occasional Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Thanks, 

In SPOCK I found this firmware, which seems not to be available right now:

https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=16122&typeId=1&lang=en&cc=us&hpappid=117135_SPOCK_PRO_HPE

 

I think it's the next release.

TigerRool10-1
Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Hi Brice13

I can see that this link is referring to FC (Fibre Channel).

I've gone through SPOCK and cannot even find the link you sent! Would you know if there is a firmware update for the 2040 with iSCSI controllers?

 

 

fresh
Brice13
Occasional Advisor

Re: Disk Performance - VMWARE 6.7, MSA2040 SAS and software iSCSI

Sorry, I thought the firmware was the same for the iSCSI and FC MSAs.

https://h20272.www2.hpe.com/SPOCK/Content/ComponentDetail.aspx?CompId=16122

Is this the next release for the MSA 2040 FC?