
stripe with 7 disks

 
SOLVED
M.Doets
New Member


I have a system running HP-UX 11i on an rp7410 with 4 CPUs and 6 GB of memory, with an extent-based stripe over 7 disks. The lvols in this stripe are mirrored on seven other disks. The 7 mirror disks and the 7 source disks are on separate SCSI controllers (Ultra160). All 14 disks are ST336753LC drives, 15,000 rpm, 36.4 GB.

The Unix buffer cache is dynamic and at the moment 2.5 GB is allocated (also the max).

If I do a lot of IO, for example a tar (without any compression) of a couple of 2 GB datafiles, I only get an IO rate of 10K blocks/second/disk. So 10K blocks * 0.5 kB/block * 7 disks = 35 MB/second. I don't think this is what it should be. On my own PC, with only two 7200 rpm disks in a stripe, I can do more IO in the same time.

At first I suspected tar, but with a copy of a 2 GB file I got the same IO rates.
The disks are not 100% busy (average is 15%-20% busy).

Who can tell me what's wrong? What is the bottleneck in this case?
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: stripe with 7 disks

Often when people turn to extent-based striping they do so expecting great performance benefits; however, these benefits are very seldom realized. The reason is that the smallest possible PE (1 MB) is still much too large to be a good stripe size. The ideal stripe size will vary, but something in the range of 64 KB to 256 KB is generally optimal, and it's typically 64 KB. At that stripe size, the IO is spread very nicely in a round-robin way and the disks are almost continuously busy. With 1 MB extents, I suspect you can actually see the disks being hit sequentially, one after another.

Also, you need to get the disk (or tape) read IO out of the picture when you do this sort of testing, so use /dev/zero as the input device. In general tar, being single-threaded, is not going to give you peak IO for backups. You need something like fbackup with multiple readers.
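A minimal sketch of the kind of write-only test being suggested (path and size here are placeholders; in practice the target file would sit on the striped lvol):

```shell
# Write-only throughput test: with /dev/zero as the input device, the
# read side is out of the picture entirely, so the timing reflects the
# write path alone. Path and size are placeholders.
F=/tmp/stripe_write_test.dat
time dd if=/dev/zero of="$F" bs=1024k count=64   # 64 MB sequential write
rm -f "$F"
```

Divide the byte count by the elapsed time to get the write rate; repeat with larger counts so the buffer cache does not absorb the whole run.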
If it ain't broke, I can fix that.
Todd McDaniel_1
Honored Contributor

Re: stripe with 7 disks

I have all mine striped 8-way with h/w mirroring and a 128 KB stripe size...

How did you create your stripe?


Here is an example of my setup. I also have these on an EMC frame, with caching redirected to the frame. These are the mount options I use, followed by the lvdisplay:

delaylog,largefiles,nodatainlog,mincache=direct,convosync=direct

# lvdisplay /dev/vgoradt13/oradt_fs102
--- Logical volumes ---
LV Name /dev/vgoradt13/oradt_fs102
VG Name /dev/vgoradt13
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 69024
Current LE 17256
Allocated PE 17256
Stripes 8
Stripe Size (Kbytes) 128
Bad block NONE
Allocation strict
IO Timeout (Seconds) default
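For reference, an lvol like the one above would be created with something along these lines (names and size taken from the lvdisplay; this is a sketch from memory of the HP-UX lvcreate options, not the exact command used):

```
# Sketch: 8-way LVM stripe with a 128 KB stripe size, 69024 MB
#   -i  number of stripes (disks)
#   -I  stripe size in KB
lvcreate -i 8 -I 128 -L 69024 -n oradt_fs102 /dev/vgoradt13
```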
Unix, the other white meat.
Tim D Fulford
Honored Contributor

Re: stripe with 7 disks

Hi

OK, basic calculation here.
A 15k rpm disk has about a 6 ms service time. If you transfer 8 kB per IO at 100% utilization, you get about 1.3 MB/s from each disk; with 7 disks that's about 9 MB/s. This is reasonable for a truly random stripe.

I'm slightly confused: you say you do 10,000 BLOCKS per second per disk — is this IO/s or 0.5 kB disk blocks? I will assume it is disk blocks and that the OS is writing 8 kB IOs (I assume you are using VxFS), so 625 IO/s per disk. This is VERY good, as it implies a 1.6 ms service time IF THE DISKS ARE FLAT OUT (which you say they are not). If they are really only 20% busy, that works out to about 0.32 ms per IO — less than even the 0.4 ms single-track seek the Seagate site quotes — which is implausibly fast for the disks alone.
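The arithmetic above can be checked mechanically (numbers taken from the thread; the 8 kB IO size is an assumption):

```shell
# Back-of-envelope check of the per-disk numbers quoted above:
# 10,000 half-kB blocks/s per disk, assumed 8 kB filesystem IOs, 7 disks.
awk -v blocks=10000 -v io_kb=8 -v disks=7 'BEGIN {
    mb  = blocks * 0.5 / 1000      # MB/s per disk
    ios = blocks * 0.5 / io_kb     # 8 kB IOs per second per disk
    printf "per disk: %.1f MB/s, %.0f IO/s, %.2f ms per IO at 100%% busy\n",
           mb, ios, 1000 / ios
    printf "total:    %.0f MB/s over %d disks\n", mb * disks, disks
}'
# per disk: 5.0 MB/s, 625 IO/s, 1.60 ms per IO at 100% busy
# total:    35 MB/s over 7 disks
```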

Basically the 35 MB/s figure seems fine to me. I am suspicious that the disks are only running at 15-20%. My guesses:
o Buffer cache: 2.5 GB is enormous. If you are doing a read test, the system will scan the buffer cache before going to disk. 400 MB is usually ample.
o The SCSI interface: you say it is U160, but check the syslog etc. and make sure it has not re-negotiated down to a lower level.
o I assume you are connecting to 2x DS2300s. If only 1x DS2300, is it in full-bus mode or split-bus mode?

anyway good luck

Tim

http://www.seagate.com/support/disc/specs/scsi/st336753lc.html

SPINDLE SPEED (RPM) ______________15k
AVERAGE LATENCY (mSEC) ___________2.0
INTERFACE ________________________Ultra-SCSI Wide
Low Voltage Differential ______Ultra3-SCSI Wide
AVERAGE ACCESS (ms read/write)____3.6/4.0
Drive level without controller overhead
SINGLE TRACK SEEK (ms read/write)_0.20/0.40
MAX FULL SEEK (ms read/write) ____6.5/6.9
-
M.Doets
New Member

Re: stripe with 7 disks

Thanks for the suggestions!!

I have tried some different ways to do IO. I also used dd to read a 3.5 GB file and write it to /dev/null. It is faster, but not much. This way I only do reads, so that was expected.
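For the record, a read-only test like the dd run described above looks roughly like this (the scratch file is a placeholder for the real 3.5 GB datafile, and is kept small for the sketch):

```shell
# Read-only test: pull a large file through the filesystem and discard it.
# A freshly written scratch file stands in for the real datafile here.
F=/tmp/read_test.dat
dd if=/dev/zero of="$F" bs=1024k count=64 2>/dev/null   # 64 MB scratch file
time dd if="$F" of=/dev/null bs=1024k                   # sequential read, discarded
rm -f "$F"
```

Note that a file this small will be served almost entirely from the buffer cache; the real test needs a file larger than the cache.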

So maybe Stephenson is right, and the extent-based stripe is the bottleneck. Because of the size of the filesystems I use a 4 MB extent size, and I configured the database (block size and multiblock read count) so that every IO is 1 MB.

Then I looked at the SCSI controllers. I have 2 DS2300s in full-bus mode. I do not know how to check whether the SCSI adapters are really running at 160 MB/s, but because I only have the DS2300 disks on these strings, I think it's OK. In the test below you see that I can read at 120 MB/s, so that's also OK.

I have also done some testing with a real stripe with a 16 KB stripe size over three disks. I see that actions that read and write on the stripe are a little faster than on the extent-based stripe.
If I only do reads from the stripe to /dev/null, the real stripe is more than 2 times as fast: I can read 120 MB/s from a stripe over three disks, versus 55 MB/s from the extent-based stripe.

So, because reading and writing at the same time is the most production-like case, I am still a little disappointed. But I haven't had any complaints…
Todd McDaniel_1
Honored Contributor

Re: stripe with 7 disks

M.Doets,

I can't stress this enough...

If this is for Oracle or a similar DB on attached disks, then I would definitely use the mount options I suggested.

Using the Unix buffer cache for DB IO will greatly reduce the performance of your OS and also your app...
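A sketch of what those options look like in /etc/fstab (the device is taken from the lvdisplay earlier in the thread; the mount point and trailing fields are illustrative):

```
# Illustrative VxFS fstab entry: mincache=direct and convosync=direct
# bypass the Unix buffer cache so the database does its own caching.
/dev/vgoradt13/oradt_fs102 /oradata vxfs delaylog,largefiles,nodatainlog,mincache=direct,convosync=direct 0 2
```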
Unix, the other white meat.