db_block_size and db_file_multiblock_read_count and VxFS

Occasional Contributor

db_block_size and db_file_multiblock_read_count and VxFS

The VxFS stripe unit's size is 64k by default, and db_file_multiblock_read_count is 8 by default. Should I set db_block_size to at least 8k (for OLTP), or 16k to 32k for DSS, to increase performance? Is the VxFS stripe unit's size the same as one I/O size? Thank you!!!
Marcelo De Florio
Frequent Advisor

Re: db_block_size and db_file_multiblock_read_count and VxFS

You can create a logical volume with the -I option and specify the stripe size for that logical volume; valid stripe sizes range from 4k to 32768k.
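For example (a sketch using HP-UX LVM syntax; the volume group, disk count, volume name, and size here are hypothetical, so adjust them to your own layout):

```shell
# Create a 1 GB logical volume striped across 4 disks with a
# 64 KB stripe size (-I is in KB, range 4..32768; -L is in MB):
lvcreate -i 4 -I 64 -L 1024 -n lv_oradata /dev/vg01

# Then build a VxFS file system on the raw device:
newfs -F vxfs /dev/vg01/rlv_oradata
```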

Volker Borowski
Honored Contributor

Re: db_block_size and db_file_multiblock_read_count and VxFS


This parameter influences Oracle's optimizer decision for full scans. It represents the number of database blocks to be delivered by the OS in a single I/O call.
So if, for example, you set this to 1,000,000, Oracle will think "wow, one million blocks with a single I/O... -> a full scan will give me the entire table in one I/O!".
So I think you have to adjust this one mostly according to the size of your database, the size of the most frequently used objects, and the amount of memory you have for your db buffer cache.
In SAP Oracle DBs they mostly reduce the default of 32 to 8, and I have seen a few SAP EarlyWatch reports suggesting 5!
It may be right, from an I/O point of view, to raise this one, but if you destroy your db buffer cache all the time by doing sequential scans, you will get better I/O access but worse buffer quality.

Do not know if this helps
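To make Volker's point concrete, here is a rough sketch in shell arithmetic (the 100,000-block table is a hypothetical figure) of why a larger db_file_multiblock_read_count makes a full scan look cheaper to the optimizer:

```shell
# Hypothetical table of 100,000 Oracle blocks.
TABLE_BLOCKS=100000

# Multiblock I/O calls needed for a full scan, roughly
# ceil(table blocks / db_file_multiblock_read_count):
MBRC=8
echo $(( (TABLE_BLOCKS + MBRC - 1) / MBRC ))    # 12500 read calls

MBRC=32
echo $(( (TABLE_BLOCKS + MBRC - 1) / MBRC ))    # 3125 read calls
```

Fewer apparent read calls lowers the optimizer's full-scan cost, which is why lowering the value (as in the SAP recommendation above) biases the optimizer back toward index access.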
Steve Slade
Frequent Advisor

Re: db_block_size and db_file_multiblock_read_count and VxFS

I think you need to look at what the application will do; as you say, what might be right for OLTP may not be right for DSS.

The db_block_size parameter simply states how big an Oracle data block is. It is usually set to the same size as an OS block. It is set at database creation and can only be changed by a full export, re-create database, import cycle. The data buffers segment of the SGA, used for caching, is sized in db_blocks, so this parameter affects memory usage too.

The db_file_multiblock_read_count states how many blocks are read into the data buffers, within Oracle, on a full table scan. Also, it limits how many of the data buffers are affected by a table scan - the buffers are there for performance, you do not want a table scan to wipe them completely with the contents of one table.

Therefore, with an 8k block size and a db_file_multiblock_read_count of 8, each logical read takes 8 * 8k = 64k, re-using the same 8 data buffers for each read.
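That arithmetic can be checked directly (shell, using the default values mentioned in this thread):

```shell
DB_BLOCK_SIZE_K=8      # db_block_size, in KB
MBRC=8                 # db_file_multiblock_read_count
STRIPE_UNIT_K=64       # default VxFS stripe unit, in KB

READ_SIZE_K=$(( DB_BLOCK_SIZE_K * MBRC ))
echo "$READ_SIZE_K"    # 64 -> one multiblock read matches one stripe unit
```

With these defaults a single multiblock read lines up exactly with one stripe unit, which is the alignment the original question is asking about.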

If your system is such that it has to scan a lot of tables, you need to increase the size of your db_buffers in the Oracle SGA and increase the db_file_multiblock_read_count parameter. Think also about other Oracle performance features such as partitioning and Parallel Query.

If your app is OLTP, then it may be configured such that table scans are quite rare and only on small tables, and that the majority of the work is via index retrieval, in which case the db_file_multiblock_read_count will not make any difference.

I've rambled on too long, hope this helps. I have forgotten the question.
If at first you do not succeed. Destroy all evidence that you even attempted.