Operating System - HP-UX

Re: Selecting Oracle db_block_size with VA7110 array

 
twang
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

Guidelines for DB_BLOCK_SIZE:
=============================

The setting of the parameter DB_BLOCK_SIZE depends on the type of data you are
working with, the application, and the implementation.

The main considerations are as follows:

1. Balance with Operating System Block Size

For good performance, the Oracle block size should be equal to, or a multiple
of, the operating system block size. If it is not, the operating system may
have to perform extra reads and writes to process Oracle blocks, which is
inefficient and wastes CPU cycles.

If your rows are small and you use a large blocksize, when you fetch a block
you may get lots of rows that you are (probably) not interested in. The
operating system has to do more work to get the row(s) that you are
interested in.

On the other hand, if you are interested in the extra rows, then although the
initial fetch may waste some CPU, you have already loaded the block containing
those rows into the buffer cache.

Larger blocks also mean more rows cached for the same number of
DB_BLOCK_BUFFERS (at the cost of more memory). The sketch after this list
shows a quick way to check the current setting.


2. Index Branches

Larger Oracle block sizes may give better index balancing, since each branch
block holds more keys (the sketch after this list shows how to inspect an
index's height).


3. Locking/Block Contention

Remember that with more processes looking at the same block you are more
likely to get block contention.


4. Row Length

If your rows are comparatively large, then you may need a large block size to
(possibly) prevent chaining; the sketch below shows one way to spot it.
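
As a minimal sketch in SQL*Plus (the V$ views need DBA-style privileges, and
how high the chaining counter may rise before it matters depends on your
workload), you can check these points like so:

REM 1. Current database block size:
SHOW PARAMETER db_block_size

REM 2. Index height and branch fan-out (populates INDEX_STATS):
ANALYZE INDEX t_idx VALIDATE STRUCTURE;
SELECT height, br_blks, lf_rows FROM index_stats;

REM 4. Chained/migrated rows: this counter rising fast relative to
REM    'table fetch by rowid' suggests chaining or migration:
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('table fetch continued row', 'table fetch by rowid');

(The index name t_idx is only an illustration.)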

Yogeeraj_1
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

hi,

allow me to post my findings:

Here are a couple of pros and cons (some of them will be both pros and cons -- depends on your perspective)....

going from 2k to Xk where X > 2...

o the amount of block overhead in your database will decrease. There is a fixed amount of overhead per block. If you have a block that is 4 times LARGER you will have less than 1/4 the block overhead after you change over.

o you will have less row chaining and perhaps less row migration. Row chaining happens when a row is too large to fit on one contiguous block -- this is much less likely in an Xk block than in a 2k block. Row migration happens when we attempt to update a row but discover there is insufficient space on the block to do so. We leave the "head" of the row on the original block (we cannot change the rowid) and put the row itself on another block entirely. Going from 2k with a 10% pctfree (default) to an Xk block with 10% free means we have perhaps more free space on each block to accommodate this (but we also have more rows per block, so it might not matter all of the time). A way to find the affected rows is sketched after this list.

o you will have more people contending for the same block perhaps. We do lots of stuff at the block level -- you may notice more contention in this area.

o you will have the same amount of data cached (volume) but the granularity of the data will be N times larger. For every row you want, you might get N times as many rows pulled along. This can be good -- this can be bad. It really depends on what you are doing. In a heavy-duty OLTP system with lots and lots of scattered reads all over the place -- this might be bad, as no one is going after the same blocks (eg: I read a row on a block and no one else ever needs that block -- I cached lots of stuff no one is going to use, and I may have bumped out some other data that we could have used). In a query-intensive system (DSS, Data Warehouse), it may be very good, as I was going to read the blocks right around my data anyway. And -- people are going to be needing that data again and again.

o you'll get more rows / block in indexes. This will result in indexes that require less maintenance as they tend to need to reorganize themselves less frequently.

o larger block sizes allow for larger index keys (although whether or not THIS is a good thing is in question as well; there is something to be said for reasonably sized index entries). The maximum index key length is a little less than about 40% of the block size. In an 8k database, for example, you'll get:

create index t_idx on t(x,y,z)
*
ERROR at line 1:
ORA-01450: maximum key length (3218) exceeded

On a 2k block, the limit would be lower still.
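
To find the rows that actually chain or migrate (see the row chaining point above), here is a minimal sketch, assuming the CHAINED_ROWS table has already been created with Oracle's utlchain.sql script; the table name T is only an illustration:

ANALYZE TABLE t LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, head_rowid
  FROM chained_rows
 WHERE table_name = 'T';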




Also -- try to find the native OS block size of your system and make sure to use a block size that is at least as large as this AND is a multiple of it if possible. Some of the old considerations (e.g. pre-7.3, when the block size limited the MAXEXTENTS of a table) no longer apply.



hope this helps too!

regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Hein van den Heuvel
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

re: 2KB blocks mean 6KB of an IO is wasted
... Depends on the file system. NOT on Raw devices.

re: 32KB is optimal for a VA storage subsystem
... So what? Is it optimal for the application?
While 2KB is probably sub-optimal as pointed out (too much overhead, row fragmentation), I believe 32KB is unlikely to be optimal for anything but Data Warehouse style usage.
- While each 32KB comes in over the wire in 'no time' at all these days, once you start hitting 1000 IO/second, the bandwidth requirements start to play a role.
- Contention risk as pointed out earlier.
- Cache granularity! If you have a limited SGA, say 4GB, then at 32KB blocks that maps to about 131K buffers (4 GB / 32 KB = 131,072), but at 8KB it gives you half a million distinct pages to cache (4 GB / 8 KB = 524,288). What is the access pattern of your application? Lots of little blobs all over (OLTP), or large chunks of rows probably placed close together (DW)?

With a 'middle of the road' 8KB or 16KB you are more likely to be in the right ballpark.

And don't forget, Oracle now allows you to mix and match multiple block sizes in a single DB!
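
For example, a minimal sketch assuming Oracle 9i or later (the cache size, tablespace name, and datafile path are illustrative): you first give the non-default block size its own buffer cache, then create a tablespace that uses it:

ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE dw_data
  DATAFILE '/u01/oradata/dw_data01.dbf' SIZE 512M
  BLOCKSIZE 16K;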

Good luck,
Hein.





Jean-Luc Oudart
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

Joanne,

I don't know what happened with the second document,
"SAME and the HP XP512".

I don't think that the VA configuration should drive the choice of your DB_BLOCK_SIZE; your application should.

You mentioned, and this is right, that you can replicate long batch runs, but not online users' transactions, unless you use specific tools to simulate that kind of workload.

We had to do this exercise recently to satisfy ourselves about the performance of the (future) database for both batch and online.
We went to a benchmarking center with our transactions and database and ran different simulations.
Of course, benchmarking does not come cheap.

I attached another document on VA performance, which you may already have.

Regards,
Jean-Luc
PS : let us know how you get on with your performance tests.
fiat lux
Jean-Luc Oudart
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

Sorry,
I don't know what is happening with the attachment. I use Acrobat 5.0 here with no problem.
Any problem during the upload?

The document is from HP:
"disk array performance guidelines with application to the hp StorageWorks
virtual array"

Rgds,
Jean-Luc
fiat lux
Alexander M. Ermes
Honored Contributor

Re: Selecting Oracle db_block_size with VA7110 array

Hi there.
Just my 2 cents on this.
We are running a database block size of 8k on 95 percent of our databases.
Less means too many read requests; more wastes space.
As storage we use :

Jamaica box
FC30 Galaxy
SC10
FC10
EMC 8530

On all systems we have satisfying performance results.

Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"