
DL380 G7: Abysmal RAID6 ADG Performance in RHEL 5.5 x64

dragmore

Hi. I have a strange problem with my new Splunk server: the storage performance.

DL380 G7, 2x 6-core CPUs, 36GB RAM, 2x 120GB SSD, 14x 300GB SAS, RHEL 5.5 x64

1. I've configured the 2x 120GB SSDs as RAID1 for the HOT and WARM datapools + OS.

2. I've configured the 14x 300GB SAS drives as RAID6 ADG + 2 hot spares for the COLD datapools.

3. Using 2 LVM VGs with the underlying LVs.

4. Using EXT with no extra options.

Hot and Warm is roughly 95% reads and 5% writes with our license model, and Cold is 100% reads.
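For reference, the cold-pool part of the layout above can be sketched roughly like this. The device name /dev/cciss/c0d1 is an assumption (a typical Smart Array logical drive under the cciss driver on a G7), not taken from the actual box:

```shell
# Hypothetical sketch of the cold-pool LVM setup described above.
# /dev/cciss/c0d1 is an assumed Smart Array logical drive name.
pvcreate /dev/cciss/c0d1
vgcreate VolGroup01 /dev/cciss/c0d1
lvcreate -n lv_cold -l 100%FREE VolGroup01
mkfs.ext3 /dev/VolGroup01/lv_cold    # "EXT with no extra options"
```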

 

Using randomio to check the random IO on the filesystems:

[root@splunk randomio-1.4]# ./randomio /dev/VolGroup00/LogVol01 8 0.05 1 4096 10
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
 7488.0 | 7103.3   0.0    1.1  202.2    3.3 |  384.7   0.0    0.1    0.5    0.0
 7416.7 | 7050.4   0.0    1.1  263.9    4.7 |  366.4   0.0    0.1    0.6    0.0
 7544.9 | 7165.8   0.0    1.1  204.5    3.8 |  379.1   0.0    0.1    0.3    0.0
 7343.9 | 6971.2   0.0    1.1  160.7    3.6 |  372.8   0.0    0.1    0.2    0.0
 7419.3 | 7050.2   0.0    1.1  226.0    4.3 |  369.2   0.0    0.1    0.6    0.0
 8104.9 | 7698.0   0.0    1.0  131.1    2.2 |  406.9   0.0    0.1    0.3    0.0

 

[root@splunk randomio-1.4]# ./randomio /dev/VolGroup01/lv_cold 8 0.0 1 4096 10   
  total |  read:         latency (ms)       |  write:        latency (ms)
   iops |   iops   min    avg    max   sdev |   iops   min    avg    max   sdev
--------+-----------------------------------+----------------------------------
  921.5 |  921.5   0.0    8.7   52.4    5.8 |    0.0   inf    nan    0.0    nan
  941.7 |  941.7   0.0    8.5   49.8    5.6 |    0.0   inf    nan    0.0    nan
  895.1 |  895.1   0.0    8.9  136.4    7.4 |    0.0   inf    nan    0.0    nan
  928.2 |  928.2   0.0    8.6   62.9    5.8 |    0.0   inf    nan    0.0    nan
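As a sanity check, here is a back-of-envelope estimate of what the RAID6 set could deliver on pure random reads. Both numbers are assumptions, not measurements: 12 active spindles (14 drives minus 2 hot spares) and roughly 150 random IOPS per 10k RPM SAS disk.

```shell
# Rough expectation for random reads across the RAID6 set.
# Assumptions: 12 active spindles, ~150 random IOPS per spindle.
disks=12
iops_per_disk=150
echo "expected random read IOPS: $(( disks * iops_per_disk ))"
```

If that ~1800 IOPS ballpark is anywhere near right, the measured ~920 IOPS does look low for an all-read workload, which could point at controller settings (cache ratio, array accelerator) rather than the disks themselves.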

 

a. The random IO is, from my point of view, very disappointing both for the SSDs and the SAS RAID. Do these numbers look wrong in your view?

b. Is there anything I can add to the mount options to increase performance? I tried noatime and the performance plummeted from 900 IOPS to 100 IOPS.
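For what it's worth, noatime normally shouldn't hurt a read-heavy workload, so it may be worth re-testing it with a live remount and verifying the options actually took effect. The mount point /cold is an assumption here:

```shell
# Try noatime/nodiratime without rebooting; /cold is an assumed mount point.
mount -o remount,noatime,nodiratime /cold
grep cold /proc/mounts    # verify the active mount options
```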

c. I've tried tuning read-ahead with blockdev --setra, to no avail.
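On the read-ahead experiments, one thing worth checking is the value actually in effect on the LV itself (not just the underlying device). A sketch, using the cold LV from the post; the 4096 value (512-byte sectors, i.e. 2MB) is just an example to experiment with:

```shell
# Inspect and bump read-ahead on the cold LV (value in 512-byte sectors).
blockdev --getra /dev/VolGroup01/lv_cold
blockdev --setra 4096 /dev/VolGroup01/lv_cold
```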

 

Appreciate any information people can provide! :smileytongue:

 

br TE