Write performance issues with SSD RAID 10 on P840/...
04-18-2017 02:24 AM - edited 04-18-2017 06:09 AM
Hi!
I have a new system with 4x 512 GB Samsung SSD 850 drives in a RAID 10 array on a P840 RAID controller with 4 GB FBWC. When I download a file over the network (10 Gbit/s), the write speed is not that good. I tried downloading a single 1000 MB file using wget:
Saving to: ‘1000mb.bin’
1000mb.bin 100%[=========================================================================================================>] 1000M 383MB/s in 2.6s

Write speed: 383 MB/s.
EDIT: When downloading the same file to /dev/null, I get the full 10 Gbit/s. The use case is downloading and storing files of this size.
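For reference, a minimal way to time the network path and the storage path separately (the URL and paths below are placeholders, not from the original post):

# network only: discard the payload
wget -O /dev/null http://server.example/1000mb.bin

# network plus storage: write to the SSD array
wget -O /data/1000mb.bin http://server.example/1000mb.bin

If the first command runs at line rate and the second does not, the bottleneck is in the write path rather than the network.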
Also, when I write a file using dd with a 512-byte block size, the speed is about the same:
dd if=/dev/zero of=bench.bin bs=512 count=10000K
10240000+0 records in
10240000+0 records out
5242880000 bytes (5.2 GB) copied, 14.6632 s, 358 MB/s
However, a 4 KB block size gives much better performance:
dd if=/dev/zero of=bench.bin bs=4k count=1000K
1024000+0 records in
1024000+0 records out
4194304000 bytes (4.2 GB) copied, 3.02447 s, 1.4 GB/s
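Note that dd without special flags writes through the Linux page cache, so the 1.4 GB/s figure may partly reflect RAM rather than the array. A sketch for measuring the array itself, assuming GNU dd (the file name is arbitrary):

# bypass the page cache entirely
dd if=/dev/zero of=bench.bin bs=1M count=8192 oflag=direct

# or include the final flush in the timing
dd if=/dev/zero of=bench.bin bs=1M count=8192 conv=fdatasync

Both variants keep buffered RAM out of the reported throughput.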
So I tried various cache, SSD Smart Path, and other settings on the RAID controller, but I didn't see much difference. Any ideas how to increase write speed?
Current settings for the controller:
Smart Array P840 in Slot 1
Bus Interface: PCI
Slot: 1
Serial Number:
Cache Serial Number:
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 4.52
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 4.0 GB
Total Cache Memory Available: 3.8 GB
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 44
Cache Module Temperature (C): 37
Number of Ports: 2 Internal only
Encryption: Disabled
Express Local Encryption: False
Driver Name: hpsa
Driver Version: 3.4.4
Driver Supports HP SSD Smart Path: True
PCI Address (Domain:Bus:Device.Function): 0000:06:00.0
Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
Controller Mode: RAID
Controller Mode Reboot: Not Required
Latency Scheduler Setting: Disabled
Current Power Mode: MaxPerformance
Host Serial Number:
Sanitize Erase Supported: False
Primary Boot Volume: logicaldrive 1
Secondary Boot Volume: logicaldrive 2
Physical Drives
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, Solid State SATA, 512.1 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, Solid State SATA, 512.1 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, Solid State SATA, 512.1 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, Solid State SATA, 512.1 GB, OK)
None attached

Array: A
Interface Type: Solid State SATA
Unused Space: 0 MB (0.0%)
Used Space: 1.9 TB (100.0%)
Status: OK
MultiDomain Status: OK
Array Type: Data
HP SSD Smart Path: disable

Logical Drive: 1
Size: 953.8 GB
Fault Tolerance: 1+0
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 512 KB
Status: OK
MultiDomain Status: OK
Caching: Enabled
Unique Identifier:
Disk Name: /dev/sda
Mount Points: /boot 487 MB Partition Number 2, / 14.0 GB Partition Number 7
OS Status: LOCKED
Logical Drive Label:
Mirror Group 1:
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, Solid State SATA, 512.1 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, Solid State SATA, 512.1 GB, OK)
Mirror Group 2:
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, Solid State SATA, 512.1 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, Solid State SATA, 512.1 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
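For reference, the settings above are the ones typically toggled with ssacli/hpssacli when chasing write throughput; a sketch, assuming the tool is installed and the controller is in slot 1 as shown (re-test after each change):

# toggle HP SSD Smart Path on the array
ssacli ctrl slot=1 array A modify ssdsmartpath=enable

# enable the controller cache on the logical drive
ssacli ctrl slot=1 ld 1 modify arrayaccelerator=enable

# shift the FBWC ratio toward writes
ssacli ctrl slot=1 modify cacheratio=10/90

# enable the drives' own volatile write cache (data-loss risk on power failure)
ssacli ctrl slot=1 modify dwc=enable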
Any help appreciated :)
05-17-2017 05:26 AM
Solution

Hi!
I was in a similar position a couple of years back.
I'd start by finding and installing the latest controller firmware and any backplane firmware/drivers for your server.
At that time I used "sob" as a filesystem test program instead of "dd". Normally I am in favour of the dd command.
Do the I/O bandwidth testing with files substantially larger than the server's memory, to avoid caching effects. Also, run several sob processes simultaneously to verify the result and rule out single-process limitations. The same trick, simultaneously running processes, works with dd too and shows whether a single dd is the bottleneck; see the sketch below.
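A minimal sketch of that parallel approach with dd, assuming a total write size well above installed RAM (file names and counts are placeholders):

# four concurrent writers, direct I/O to keep the page cache out of it
for i in 1 2 3 4; do
  dd if=/dev/zero of=bench.$i.bin bs=1M count=16384 oflag=direct &
done
wait

Summing the per-process MB/s figures gives the aggregate; if it clearly exceeds a single dd, the single process was the limit, not the array.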
Adjusting queue_depth and max_sectors didn't work any miracles on my systems.
Good luck!