ProLiant Servers (ML,DL,SL)
 
Passeri
Occasional Visitor

ML350 Gen10 slower read performance with BBU

Hello,

I've configured a new ML350 Gen10. The server was updated with the latest HPE updates and is running Windows Server 2016. Drivers were installed from the Service Pack for ProLiant (SPP) ISO.

I have tested HDD performance with the ATTO benchmark. At some transfer sizes, the tool shows slower read performance with the BBU cache enabled.

For example:

2 GB BBU cache disabled
2 GB BBU cache enabled, 50% read / 50% write:
https://i.ibb.co/JnsSPkS/ATTO-First-Run-LW-C-Cache-50-50.png
I did not see such fluctuations with the Dell or Fujitsu servers.
Can these values be trusted?

HPE Smart Array P408i-a SR Gen10 + 2GB BBU
Size 279.37 GiB (299.97 GB)
RAID 1 > 2x 300 GB SAS 10K
Legacy Disk Geometry (C/H/S) 65535 / 255 / 32
Strip Size / Full Stripe Size 256 KiB / 256 KiB

1 REPLY
AshutoshM
HPE Pro

Re: ML350 Gen10 slower read performance with BBU

Short answer: no, these numbers cannot be taken as conclusive; more detailed testing needs to be done.

Is this testing being done to optimize performance, or to compare performance with other hardware?

 

In both cases, the test parameters have to be identical in terms of the following (a sketch of a fixed parameter set follows this list):

  • whether the read/write operations are sequential or random
  • percentage of read operations to write operations
  • size and type of data
  • test duration (run time + ramp-up time)
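
To keep runs comparable, it helps to pin all of these down in one place and reuse them unchanged on every system under test. Below is a minimal Python sketch of such a fixed profile; it is only an illustration (not part of ATTO or Iometer), and every name and value in it is hypothetical:

from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkProfile:
    """One fixed set of test parameters, reused verbatim on every system under test."""
    access_pattern: str   # "sequential" or "random"
    read_pct: int         # percentage of reads; the remainder are writes
    block_size_kib: int   # I/O transfer size
    working_set_mib: int  # total amount of data touched
    ramp_up_s: int        # warm-up period excluded from the measurement
    run_time_s: int       # measured run time

# Hypothetical example; 256 KiB matches the strip size reported above.
MIXED_RANDOM = BenchmarkProfile("random", 70, 256, 2048, 30, 120)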

 

Optimization parameters that can be tested at different values (a sweep sketch follows this list):

  • Stripe size
  • Stripe width/array type
  • Cache read-to-write ratio
  • HPE SSD Smart Path / HPE SmartCache
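
If several of these are in play at once, sweeping every combination systematically avoids missing or double-testing a configuration. A small illustrative Python sketch follows; the candidate values are made up, so substitute the values your controller actually supports:

from itertools import product

strip_sizes_kib = [64, 128, 256]                               # strip size per drive
array_types = ["RAID 1", "RAID 10"]                            # stripe width / array type
cache_ratios = ["100/0", "75/25", "50/50", "25/75", "0/100"]   # read/write cache split

# Enumerate every combination so each configuration is tested exactly once.
for strip, array, ratio in product(strip_sizes_kib, array_types, cache_ratios):
    print(f"run benchmark with: strip={strip} KiB, array={array}, cache ratio={ratio}")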

 

If you are comparing different hardware, run the same set of test parameters on each system; that should give you a fair idea of the performance differences.

Sequential read or sequential write, with the transfer size matching the stripe size and with the cache configured at 100% read or 100% write respectively, should provide the best numbers.

Random reads mixed with 30% to 50% random writes, with the read cache set to between 10% and 50%, should provide close-to-real-world performance.
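
To make the sequential-versus-random distinction concrete, here is a minimal Python sketch that times reads at sequential or random offsets. It is only an illustration: it goes through the OS page cache, so unlike ATTO or Iometer it measures cache-warmed behaviour, and the file name and sizes in the usage comment are hypothetical.

import os
import random
import time

def time_reads(path, block_size, count, sequential):
    """Time `count` reads of `block_size` bytes at sequential or random offsets."""
    max_off = max(os.path.getsize(path) - block_size, 0)
    with open(path, "rb", buffering=0) as f:   # unbuffered at the Python level
        start = time.perf_counter()
        for i in range(count):
            if sequential:
                off = (i * block_size) % (max_off + 1)
            else:
                off = random.randrange(0, max_off + 1)
            f.seek(off)
            f.read(block_size)
        return time.perf_counter() - start

# Hypothetical usage against a pre-created 2 GiB file:
# time_reads("testfile.bin", 256 * 1024, 4096, sequential=True)
# time_reads("testfile.bin", 256 * 1024, 4096, sequential=False)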

 

Note: The numbers are indicative and not definitive.

 

If you are looking to optimize performance, the starting point would be to identify the kind of data used in production operation: mostly read or mostly write, random or sequential, large files or small files.

 

Then use something like Iometer to generate a representative load and test at different values of the optimization parameters.
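
As a rough stand-in for what an Iometer access specification describes, a mixed random read/write load can be sketched in Python as below. The read percentage and block size are placeholders to be replaced with your production profile, and a fixed seed keeps the offset sequence repeatable across systems:

import os
import random
import time

def mixed_random_load(path, block_size, ops, read_pct, seed=0):
    """Issue `ops` random I/Os, read_pct% reads and the rest writes."""
    rng = random.Random(seed)          # fixed seed -> identical sequence on every run
    max_off = max(os.path.getsize(path) - block_size, 0)
    payload = os.urandom(block_size)   # data for the write operations
    start = time.perf_counter()
    with open(path, "r+b", buffering=0) as f:
        for _ in range(ops):
            f.seek(rng.randrange(0, max_off + 1))
            if rng.randrange(100) < read_pct:
                f.read(block_size)
            else:
                f.write(payload)
    return time.perf_counter() - start

# Hypothetical 70% read / 30% write mix at 256 KiB:
# mixed_random_load("testfile.bin", 256 * 1024, 4096, read_pct=70)

Again, this is indicative only; for real measurements stay with Iometer, where caching and queue depth are controlled properly.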

 

 

You can find some HPE test data, organized by drive model, that can be used for reference:

https://h20195.www2.hpe.com/v2/getpdf.aspx/a00001287enw.pdf

 

More detailed information on testing with Iometer:

https://sourceforge.net/p/iometer/svn/HEAD/tree/trunk/IOmeter/Docs/Iometer.pdf?format=raw

 

 

Link to the current SPP version (in case it is newer than what was applied):

https://techlibrary.hpe.com/us/en/enterprise/servers/products/service_pack/spp/index.aspx

I am an HPE Employee