09-07-2010 05:26 AM
DL380G7: Abysmal RAID6 ADG Performance in RHEL 5.5 x64
Hi. I have a strange problem with my new Splunk server: the storage performance.

DL380 G7, 2x 6-core CPUs, 36 GB RAM, 2x 120 GB SSD, 14x 300 GB SAS, RHEL 5.5 x64

1. I've configured the 2x 120 GB SSDs as RAID1 for the hot and warm data pools, plus the OS.
2. I've configured the 14x 300 GB SAS drives as RAID6 (ADG) + 2 hot spares for the cold data pools.
3. Using two LVM VGs with the underlying LVs.
4. Using ext with no extra options.

With our license model, hot and warm are roughly 95% reads / 5% writes, and cold is 100% reads.
I'm using randomio to check the IO on the filesystems.
[root@splunk randomio-1.4]# ./randomio /dev/VolGroup00/LogVol01 8 0.05 1 4096 10
total | read: latency (ms) | write: latency (ms)
iops | iops min avg max sdev | iops min avg max sdev
--------+-----------------------------------+----------------------------------
7488.0 | 7103.3 0.0 1.1 202.2 3.3 | 384.7 0.0 0.1 0.5 0.0
7416.7 | 7050.4 0.0 1.1 263.9 4.7 | 366.4 0.0 0.1 0.6 0.0
7544.9 | 7165.8 0.0 1.1 204.5 3.8 | 379.1 0.0 0.1 0.3 0.0
7343.9 | 6971.2 0.0 1.1 160.7 3.6 | 372.8 0.0 0.1 0.2 0.0
7419.3 | 7050.2 0.0 1.1 226.0 4.3 | 369.2 0.0 0.1 0.6 0.0
8104.9 | 7698.0 0.0 1.0 131.1 2.2 | 406.9 0.0 0.1 0.3 0.0
[root@splunk randomio-1.4]# ./randomio /dev/VolGroup01/lv_cold 8 0.0 1 4096 10
total | read: latency (ms) | write: latency (ms)
iops | iops min avg max sdev | iops min avg max sdev
--------+-----------------------------------+----------------------------------
921.5 | 921.5 0.0 8.7 52.4 5.8 | 0.0 inf nan 0.0 nan
941.7 | 941.7 0.0 8.5 49.8 5.6 | 0.0 inf nan 0.0 nan
895.1 | 895.1 0.0 8.9 136.4 7.4 | 0.0 inf nan 0.0 nan
928.2 | 928.2 0.0 8.6 62.9 5.8 | 0.0 inf nan 0.0 nan
a. The random IO is, from my point of view, very disappointing for both the SSDs and the SAS RAID. Do these numbers look wrong to you?
b. Is there anything I can add to the mount options to increase performance? I tried noatime and the performance plummeted from ~900 IOPS to ~100 IOPS.
c. I've tried setting blockdev --setra, to no avail.
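For reference, this is roughly what I did for (b) and (c) — the mount point and the readahead value below are just examples from my setup, not recommendations:

```shell
# (b) remount the cold filesystem with noatime
#     (the mount point /splunk/cold is an example; substitute your own)
mount -o remount,noatime /dev/VolGroup01/lv_cold /splunk/cold

# confirm the active mount options
mount | grep lv_cold

# (c) set readahead on the cold LV; --setra takes 512-byte sectors,
#     so 4096 sectors = 2 MB, then verify it took effect
blockdev --setra 4096 /dev/VolGroup01/lv_cold
blockdev --getra /dev/VolGroup01/lv_cold
```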
Appreciate any information people can provide!
br TE