LVM and VxVM

Francesca Watts
Occasional Advisor

Using 2k block size instead of 8k?

We're changing our database block size to improve performance on RAC, moving from an 8k to a 2k block size. However, I have read that for some volume managers (I'm not sure about Veritas) the operating system block size is set to 8k by default. Is this the same with VxVM?

This is the blurb on block sizes:
Oracle recommends that your database block size match, or be a multiple of, your operating system block size. You can use smaller block sizes, but the performance cost is significant. Your choice should depend on the type of application you are running. If you have many small transactions, as with OLTP, use a smaller block size. With fewer but larger transactions, as with a DSS application, use a larger block size. If you are using a volume manager, consider your "operating system block size" to be 8K. This is because volume manager products use 8K blocks (and this is not configurable).

Does anyone have any docs or advice on whether we can reduce VxVM to 2k blocks? And if it can't be done, are there any known problems or performance issues with VxVM writing 8k blocks when RAC is sending 2k?
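For reference, the database side of this is fixed at database creation via the db_block_size init parameter. A minimal sketch of what we're working with (the SQL*Plus output values are illustrative, not from our system):

```shell
# From SQL*Plus, check the block size of an existing database:
#   SQL> SHOW PARAMETER db_block_size
#   NAME            TYPE     VALUE
#   db_block_size   integer  8192

# In the init.ora / spfile for a new 2k-block database:
db_block_size=2048

# db_block_size cannot be changed after database creation. From 9i
# onward, tablespaces with a non-default block size can be added by
# configuring the matching cache, e.g.:
db_2k_cache_size=64M
```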
Bill Hassell
Honored Contributor

Re: Using 2k block size instead of 8k?

The "block size" is virtually useless as a setting on today's systems. The kernel will coalesce I/O requests into much larger blocks to reduce overhead and improve performance. Twenty years ago, with very old filesystems, the block size made a difference. Today, you won't be able to measure any difference except on a very slow computer (less than 100 MHz). Simple logic says that a smaller block just creates more overhead: going from 8k to 2k means 4x more I/O requests for the same amount of data.

Bill Hassell, sysadmin
Emil Velez
Honored Contributor

Re: Using 2k block size instead of 8k?

You might want to review the filesystem parameters exposed by the vxtunefs command and the tunefstab file that you can configure. Whether this is 11.11, 11.23, or 11.31 will determine which parameters the filesystem uses for block size, read-ahead, and write-behind, and how the buffer cache handles buffering blocks of files.
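As a rough sketch of what that looks like on HP-UX (the disk group, volume, and mount point names here are made up, and the parameter values are examples, not recommendations):

```shell
# Show the current VxFS tunables for a mounted filesystem:
vxtunefs /dev/vx/dsk/oradg/oravol

# Set a tunable on a live filesystem (lost at unmount):
vxtunefs -o read_pref_io=65536 /oradata

# To make settings persistent, add a line to /etc/vx/tunefstab,
# which vxtunefs -s applies at mount time:
#   /dev/vx/dsk/oradg/oravol  read_pref_io=65536,read_nstream=4
```

The read-ahead and write-behind tunables (read_pref_io, read_nstream, write_pref_io, write_nstream) are usually where the I/O-size interaction with the database shows up, rather than in any fixed 8k block.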
doug hosking
Esteemed Contributor

Re: Using 2k block size instead of 8k?

I make no claims to be anything remotely resembling an Oracle or VxVM expert, but my experience with similar issues makes me believe that it would be difficult to give you a terribly helpful answer without knowing a lot more about your specific environment and needs.

This seems like a simple question with a more complicated answer. I can think of many overlapping variables that could affect the performance. Some depend on hardware, some on software, some on workload, some on physical motion. Another consideration is how much you might be willing to trade off increased CPU/memory usage for better overall throughput.

Factors such as how common database writes are relative to database reads and how your database is organized on physical media might also influence some of these decisions.

Caching at various levels of hardware and software can further complicate the evaluation. Modern disk drives and controllers do a lot more of this than they used to, sometimes to the detriment of performance. Controllers and drives typically have no concept of the meaning of the bits they shovel, so what's a big optimization for one type of use can seriously hurt performance for another.

Considered separately, the factors above often allow a reasonable guess about what might be best. But the interactions between them can be surprising, even to experts. Sometimes a little actual benchmarking under real-world conditions is a lot more illuminating than a lot of theoretical analysis.