Databases

Any ideas on improving I/O performance?

SOLVED

I have an Oracle DB across 4 FC60 LUNs (data, indexes, rollback segments, etc. balanced across all the LUNs).

Each LUN has 4 disks in RAID 5, but I'm planning to change to RAID 0/1 (as soon as possible).

Any ideas about the FC60 stripe size of the LUNs? It is currently 8 KB.

Any ideas about the filesystem block size? It is currently 8 KB.

Thanks in advance.
11 REPLIES

Re: Any ideas on improving I/O performance?

Hi Rafael,

According to the documentation, "The optimum stripe segment size is the smallest size that will rarely force I/Os to a second stripe." Suppose you use 4 disks to create RAID 0/1 striping, i.e. 2 disks for the original data and 2 disks to hold the mirror. If all I/Os are 8 KB, then a stripe size of 4 KB would be the optimum. But if the VxFS driver groups multiple 8 KB I/Os into one 64 KB I/O, then the optimum would be 32 KB, because all stripes needed for the I/O request can then be read (or written) simultaneously. Please note that the largest block size VxFS supports is 8 KB.
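Edgar's rule of thumb can be sketched as a quick calculation (a throwaway sketch; the 8 KB I/O size and the 2-data-disk RAID 0/1 layout are the assumptions from this thread, not a general rule):

```shell
# Optimum stripe segment size = typical I/O size / number of data disks,
# so that one I/O spans all data disks in a single stripe.
io_kb=8        # typical Oracle I/O size in KB (matches the 8 KB VxFS block size)
data_disks=2   # RAID 0/1 on 4 disks: 2 data disks + 2 mirror disks
echo "optimal stripe: $((io_kb / data_disks)) KB"
```

With the thread's numbers this prints "optimal stripe: 4 KB", matching Edgar's answer.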

HTH, cu l8r, Edgar.
Alexander M. Ermes
Honored Contributor

Re: Any ideas on improving I/O performance?

Hi there.
You talk about LUNs. If you have several controller interfaces, spread the traffic through the different interfaces as well. Otherwise your controller will be the bottleneck.
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"

Re: Any ideas on improving I/O performance?

Hi Alexander.

I use 2 controllers.

Thanks.

Re: Any ideas on improving I/O performance?

Hi Edgar.

Do I understand correctly that ...?

1. The FC60 stripe_size must be block_size / (number of data disks).

2. VxFS has an 8 KB block size. I think the Online JFS block size may be larger than 8 KB; I read somewhere that "vxfs (with Online JFS) likes a block size of 64 KB".

In any case I don't have Online JFS.

Thanks.
Bill McNAMARA_1
Honored Contributor

Re: Any ideas on improving I/O performance?

Use two controllers to access the LUNs,
i.e. don't put all data I/O traffic down just one FC60 controller.
When creating a LUN, make sure that you create it across enclosures (or split buses).

Split-bus mode will be faster, with only 5 disks per bus.
Full-bus mode will have 10 disks per bus.

What does amdsp -a show?

Is the FC60 the only thing on the FC loop?
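To see how the LUNs are spread over the controllers, HP-UX's standard hardware scanner can help (a sketch, assuming stock HP-UX tooling; it only lists devices, it changes nothing):

```shell
# List all disk-class devices with their hardware paths and device files.
# LUNs reached through different FC60 controllers appear under different
# hardware paths, so you can check that the I/O load is split between them.
ioscan -fnC disk
```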

Later,
Bill
It works for me (tm)
MARTINACHE
Respected Contributor

Re: Any ideas on improving I/O performance?

Hi,

If you want to improve I/O performance, you should purchase Online JFS.
With this software, you can mount Oracle filesystems with the "-mincache=direct" option.
This will give performance comparable to raw-device datafiles.
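The mount invocation would look something like this (a sketch; the volume group, lvol, and mount point names are hypothetical, and the options require Online JFS):

```shell
# Bypass the buffer cache for Oracle datafiles (Online JFS required).
# mincache=direct : reads and writes go straight to disk, like a raw device.
# convosync=direct: gives O_SYNC writes the same direct-I/O treatment.
mount -F vxfs -o mincache=direct,convosync=direct \
      /dev/vgora/lvoldata /u01/oradata    # hypothetical names
```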

Regards,

Patrice.
Patrice MARTINACHE

Re: Any ideas on improving I/O performance?

Thanks Patrice.

I'm not sure I have understood everything, sorry, so I'm going to give you more information.

There are 5 HP 9000 servers attached to 1 FC60.

In the FC60 there are 52 disks; 16 of them are used by the machine in question (Oracle, etc.).

These 16 are spread across the 6 SCSI channels:
scsi 1 - 2 disks
scsi 2 - 2 disks
scsi 3 - 4 disks
scsi 4 - 4 disks
scsi 5 - 2 disks
scsi 6 - 2 disks

Every LUN was created across different SCSI channels.

Is that enough, or do you need more information?

Thanks.
Phil Miesle
Occasional Contributor
Solution

Re: Any ideas on improving I/O performance?

Hi there,

Oracle has a methodology called "SAME" (Stripe and Mirror Everything). You may be able to find the whitepaper on this (it was presented at OpenWorld last November); if not, let me know and I'll post a copy.

Basically, you should create a single LUN which is striped and mirrored; in your case, that will be 8 disks plus 8 disks. Keeping the controllers balanced, it will be the disks on channels 1-3 mirrored to the disks on channels 4-6.

You asked about stripe depth. The SAME methodology suggests as large a stripe depth as possible (up to 1 MB) in order to optimise multiblock operations.
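With HP-UX LVM, a large stripe depth across many disks is requested at lvol creation time; a sketch with hypothetical volume group, lvol name, and sizes (check lvcreate(1M) on your release, as older LVM versions cap the -I stripe size below 1 MB):

```shell
# Stripe a 20 GB logical volume across 8 physical volumes with a 1 MB stripe.
# -i 8    : number of stripes (disks/LUNs to stripe across)
# -I 1024 : stripe size in KB (SAME suggests as large as possible, up to 1 MB)
# -L 20480: logical volume size in MB
lvcreate -i 8 -I 1024 -L 20480 -n lvoldata vgora   # hypothetical vg/lvol names
```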

I will suggest that if you archive, you should create two LUNs: one for archives and one for the rest of the database. It would be sufficient to allocate 2 of the 16 disks to your archives (I'd mirror them); this depends, of course, on how big your archive area needs to be compared with your disk size.

This configuration should allow for good performance (90-95% of optimal in most situations). In order to squeeze the last 5-10% out of the system, you will need to I/O-profile your application, but you'll be limited by having only 16 disks.

You can make things a lot more complicated and try to fine-tune the performance, but unfortunately as the nature of your application usage changes over time it is likely your disk configuration would need to change as well.

Using the SAME methodology eliminates a lot of the administrative nonsense that you're trying to go through to achieve good performance.

hth,
:-Phil
Stefan Farrelly
Honored Contributor

Re: Any ideas on improving I/O performance?


We've recently been doing the same tests on an FC60 with 3 SC10s for an Oracle DB.

After much testing, the fastest throughput we got was 166 MB/s.

This was using RAID 1 on the FC60 (not RAID 0/1) with a 4 KB stripe size, and using LVM striping with a 64 KB block size. Both stripe sizes are the optimum. The FC60 manual quotes 170 MB/s as the max throughput, so we're getting very close!

Once you set up your lvols this way, test them with:

time dd if=/dev/vgXX/rlvolYY of=/dev/null bs=1024k count=50

and see if you can get the time down to 0.3 sec (50 MB / 0.3 s = 166 MB/s). For some as-yet-unknown reason, using RAID 0/1 was a bit slower (just over 120 MB/s).
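The arithmetic in parentheses can be checked quickly (a throwaway sketch; the 50 MB and 0.3 s figures are simply the numbers from the dd test above):

```shell
# Throughput in MB/s = megabytes transferred / elapsed seconds.
awk -v mb=50 -v secs=0.3 'BEGIN { printf "%d MB/s\n", mb / secs }'
```

This prints "166 MB/s", matching the figure quoted in the reply.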
I'm from Palmerston North, New Zealand, but somehow ended up in London...

Re: Any ideas on improving I/O performance?

Hi Phil.

First, thank you.

Second, I was looking for the SAME methodology at the Oracle site but I couldn't find anything (what a "same" ...). Please send me the whitepaper if you can.

Thanks.
Phil Miesle
Occasional Contributor

Re: Any ideas on improving I/O performance?

Rafael (et al.),

Attached is the SAME methodology paper presented at OpenWorld. I hope you find it useful!

:-Phil