
db block size, file system block size, default I/O block and LVM

 
Jeong Jin-wook
Contributor

How are the DB block size (default 8k), the file system block size (8k with VxFS), the default I/O block (512k) and the LVM stripe size related?

For instance:
1. db block size = 8k
2. file system block size = 8k
3. default I/O block = 512k
4. lvm stripe size = 64k across 4 disks

If Oracle wants to write 1 MB of data to lvol1, how many I/Os occur?
One pass across the stripe set writes 64k * 4 disks = 256k, so I think it takes 4 passes. Am I right?
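As a back-of-the-envelope sketch of that arithmetic (my own illustration: it assumes the write is aligned and issued as full-stripe passes, which LVM does not guarantee):

# Stripe arithmetic sketch; assumes aligned, full-stripe transfers.
def stripe_passes(io_kb, stripe_kb, n_disks):
    width_kb = stripe_kb * n_disks    # one full pass across the stripe set
    passes = -(-io_kb // width_kb)    # ceiling division
    return width_kb, passes, passes * n_disks

print(stripe_passes(1024, 64, 4))     # (256, 4, 16): 4 full-stripe passes,
                                      # i.e. 16 individual 64k writes (4 per disk)

So "4 times" holds if you count full-stripe passes; at the disk level it is 16 separate 64k transfers.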

If I am right, what does the default I/O block do?

And another question.

I read the doc below:
http://wtec.cup.hp.com/~perf/related_links/oracle/oracle_same.pdf

It says that an LVM stripe size of 1 MB is recommended.
If Oracle needs to read only 256k at a time, is 1 MB too big?
Because a 1 MB stripe across 4 disks would mean 4 MB read at a time.
Am I right?
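(Reusing the sketch above for the 1 MB case — again my own illustration, not from the doc:)

print(stripe_passes(256, 1024, 4))    # (4096, 1, 4): a full pass would be 4096k wide
# But a single aligned 256k read fits inside one 1024k stripe unit,
# so LVM can service it from one disk; striping does not force 4 MB reads.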

I know you are busy, but could you explain?
Frank de Vries
Respected Contributor

Re: db block size, file system block size, default I/O block and LVM

Interesting topic.

1) First of all, Oracle will read multiple blocks in one read action, regardless of whether it needs one record or many records out of that read. See these parameters:
db_file_multiblock_read_count
sort_multiblock_read_count
hash_multiblock_io_count
Use these to align Oracle with your striping, or the other way around (see the sketch below).
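For example, with an 8k DB block and the 64k x 4 layout from the question, a quick calculation (the arithmetic is my own illustration; db_file_multiblock_read_count is the parameter from the list above) gives the multiblock count that lines one multiblock read up with one full stripe width:

# Align one Oracle multiblock read with one full LVM stripe width.
db_block_kb = 8
stripe_kb, n_disks = 64, 4
full_stripe_kb = stripe_kb * n_disks     # 256k per full-stripe pass
print(full_stripe_kb // db_block_kb)     # 32 -> db_file_multiblock_read_count = 32
                                         # (32 blocks * 8k = 256k per read)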

2) It is more important for Oracle that data is contiguous (and this goes well with striping) than how many I/O actions there are.
Don't forget that striping works out better mathematically over a series of I/Os, especially when there are a lot of them.

3) The best way to tune I/O is to actually write SQL code that uses less I/O.
Clever use of indexes and the data model is worth 10x more than the best configuration you can ever do at the UNIX/disk level.

So, to come back to your question.
The default I/O block is the physical unit in the Oracle datafile that holds records up to a certain fill level (around 40%).
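To make that concrete (a rough sketch; the 40% fill level comes from the sentence above, and the average row size is a hypothetical figure of my own):

# Rough rows-per-block estimate for an 8k Oracle block.
block_bytes = 8 * 1024
fill_fraction = 0.40          # fill level quoted above
avg_row_bytes = 100           # hypothetical average row size
print(int(block_bytes * fill_fraction / avg_row_bytes))   # ~32 rows per block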

On an OLTP system large blocks are not beneficial, as you need many short hits.
However, on a data warehouse, where you have a few long transaction queries, a large block size will pay off. But again,
this benefit is only 10% of the total performance picture. Good coding and a well-normalised data model count for 70%,
and avoiding contention on your SGA/PGA and archive logs counts for about 20%.


Of course, that doesn't mean you should configure your disk layout and block sizes badly, but the performance penalties there are minor compared to the other areas.

Look before you leap