10-23-2019 07:26 AM
ML350 Gen10 slower read performance with BBU
Hello,
I've configured a new ML350 Gen10. The server has been updated with the latest HPE updates and is running Windows Server 2016; drivers were installed from the Service Pack for ProLiant (SPP) ISO.
I have tested HDD performance with the ATTO benchmark. At some file sizes, the tool shows slower read performance with the BBU cache enabled.
For example:
- 2 GB BBU cache disabled
- 2 GB BBU cache enabled, 50% read / 50% write
I did not see such fluctuations with Dell or Fujitsu servers.
Can these values be trusted?
HPE Smart Array P408i-a SR Gen10 + 2GB BBU
Size 279.37 GiB (299.97 GB)
RAID 1 > 2x 300 GB SAS 10K
Legacy Disk Geometry (C/H/S) 65535 / 255 / 32
Strip Size / Full Stripe Size 256 KiB / 256 KiB
10-24-2019 01:07 AM
Re: ML350 Gen10 slower read performance with BBU
Short answer: no, these numbers cannot be taken as conclusive; more detailed testing needs to be done.
Is this testing being done to optimize performance, or to compare performance with other hardware?
In both cases the test parameters have to be identical in terms of:
- whether the read/write operations are sequential or random
- percentage of read operations to write operations
- size and type of data
- time to test (run time + ramp-up time)
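To see why those parameters must be held constant, here is a minimal, illustrative Python sketch (not ATTO or Iometer) that times the same file read with the same block size in sequential versus shuffled order for a fixed duration. It is a toy: it goes through the OS page cache rather than raw disk, so the absolute numbers say little about the array; the file name, 64 MB size, and 2-second duration are arbitrary assumptions. The 256 KiB request size matches the strip size quoted in the question.

```python
import os
import random
import tempfile
import time

BLOCK = 256 * 1024  # 256 KiB requests, matching the array's strip size above


def make_test_file(path, size_mb=64):
    # Write a file of known size so both runs read identical data.
    with open(path, "wb") as f:
        for _ in range((size_mb * 1024 * 1024) // BLOCK):
            f.write(os.urandom(BLOCK))


def read_throughput(path, sequential=True, duration=2.0):
    # Read the file in BLOCK-sized requests, in file order or shuffled order,
    # for at most `duration` seconds of wall-clock time; return MiB/s.
    size = os.path.getsize(path)
    offsets = list(range(0, size, BLOCK))
    if not sequential:
        random.shuffle(offsets)
    done = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
            done += BLOCK
            if time.perf_counter() - start >= duration:
                break
    return done / (time.perf_counter() - start) / (1024 * 1024)


path = os.path.join(tempfile.gettempdir(), "bbu_bench_demo.dat")
make_test_file(path)
seq_mibs = read_throughput(path, sequential=True)
rnd_mibs = read_throughput(path, sequential=False)
print(f"sequential: {seq_mibs:.1f} MiB/s  random: {rnd_mibs:.1f} MiB/s")
os.remove(path)
```

Changing any one knob (block size, access pattern, duration) changes the result, which is why two runs are only comparable when every parameter matches.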
Optimization parameters that can be tested at different values:
- Stripe size
- Stripe width/array type
- Cache read-to-write percentage
- HPE SSD Smart Path / HPE SSD Smart Cache
If you are comparing different hardware, then work through the different test parameters; that should give you a fair idea of the performance differences.
Sequential reads or sequential writes, with the data size matching the stripe size and with caching configured appropriately at 100% read or 100% write, should produce the best numbers.
Random reads mixed with 30% to 50% random writes, with the read cache set between 10% and 50%, should come close to real-world performance.
Note: The numbers are indicative and not definitive.
If you are looking to optimize performance, the starting point is to identify the kind of data used in production operation:
mostly read or mostly write, random or sequential, large files or small files.
Then use a tool such as Iometer to generate a representative load and test at different values of the optimization parameters.
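As a rough sketch of what such a representative load looks like, the toy Python loop below issues random 64 KiB requests against a scratch file with a 70% read / 30% write mix (inside the 30-50% write range suggested above). The block size, file size, mix, and duration are all assumptions you would replace with values measured from your production workload; a real run would use Iometer or a similar tool with direct I/O rather than cached Python file operations.

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024   # 64 KiB requests (assumed; match your production I/O size)
READ_PCT = 70       # 70% reads / 30% writes
FILE_MB = 32        # small scratch file for the demo
DURATION = 2.0      # seconds of load generation

path = os.path.join(tempfile.gettempdir(), "mixed_load_demo.dat")
with open(path, "wb") as f:
    f.write(os.urandom(FILE_MB * 1024 * 1024))

size = os.path.getsize(path)
reads = writes = 0
start = time.perf_counter()
with open(path, "r+b") as f:
    while time.perf_counter() - start < DURATION:
        off = random.randrange(0, size - BLOCK)  # random offset into the file
        f.seek(off)
        if random.randrange(100) < READ_PCT:
            f.read(BLOCK)
            reads += 1
        else:
            f.write(os.urandom(BLOCK))
            writes += 1

iops = (reads + writes) / DURATION
print(f"{reads} reads / {writes} writes  (~{iops:.0f} IOPS, cached)")
os.remove(path)
```

Sweeping the controller-side optimization parameters (strip size, cache ratio) while replaying the same mix is what turns this kind of load into a tuning experiment.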
You can find some HPE test data, organized by drive model, that can be used for reference:
https://h20195.www2.hpe.com/v2/getpdf.aspx/a00001287enw.pdf
More detailed information on testing with Iometer:
https://sourceforge.net/p/iometer/svn/HEAD/tree/trunk/IOmeter/Docs/Iometer.pdf?format=raw
Link to current SPP version (in case newer than what was applied):
https://techlibrary.hpe.com/us/en/enterprise/servers/products/service_pack/spp/index.aspx