Databases

vol_maxio Effect on large "dd" Operations and Database Performance

Alzhy
Honored Contributor

vol_maxio Effect on large "dd" Operations and Database Performance

HP-UX 11.11 default is 256. We're users of VxVM for DB storage, where we have either EVA or XP 12000 storage. On the latter we use 8-way stripes of 32 to 64 KB width.

We recently discovered that increasing vol_maxio from 256 to 16384 allowed us a 5- to 8-fold increase in dd throughput using 1MB blocks. Our dd writes appear to have jumped from 256K write blocks to 2MB write blocks.
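For context, the kind of dd run being described can be sketched like this; the path and sizes below are hypothetical (on the real system if=/of= would point at a VxVM volume, not a /tmp file):

```shell
# Hypothetical throughput probe: write 64 MB in 1 MB blocks.
# On the real system the target would be a VxVM volume, not a /tmp file.
dd if=/dev/zero of=/tmp/ddtest.img bs=1048576 count=64

# Show what was written.
ls -l /tmp/ddtest.img
```

Wrapping the dd in time(1) before and after the vol_maxio change gives the kind of before/after comparison described above.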

Will this cause certain Oracle/DB transactions to improve? Will it affect certain OLTP-like transactions?


Hakuna Matata.
12 REPLIES
Eric Antunes
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

The answer is no. OLTP transactions happen in the buffer cache; they are not I/O bound.

Data warehouse workloads may benefit.
Each and every day is a good day to learn.
Alzhy
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Eric, what about OLTP transactions that behave like batch? Our "OLTP" environment during the day always feels the burden of excess baggage processing that behaves like batch. There are bursts where each read or write, as I have observed them from the volume manager layer (after the kernel parameter change), approaches 2MB...

We're still gathering comparative stats from before and after the change. But the dd performance alone is amazing! We use dd a lot to perform hot backups.

Hakuna Matata.
Steven E. Protter
Exalted Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Shalom,

I don't think this will help at all.

Oracle does as much work as it can in the SGA. Changing the SGA settings will have the greatest impact on Oracle performance.

Once outside the SGA, Oracle on cooked filesystems has traditionally not responded much to changes in the buffer cache. That said, Bill Hassell mentioned to me that on HP-UX 11.23 a larger buffer cache has improved Oracle performance for databases on cooked filesystems.

In the lab, I've found that changing this setting has no impact on OLTP Oracle databases.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Eric Antunes
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Hi Nelson,

From my experience there are "only" 3 possible reasons for OLTP transactions behaving like batch. I'll list them in the order I think they should be addressed:

1 - With a Cost Based Optimizer, like the one in RDBMS 9i onwards, the absence of statistics is ugly for overall performance. This query lists tables without column statistics:

select dt.*
from dba_tables dt
where dt.owner not in ('SYS','SYSTEM')
and not exists (select 1
                from dba_tab_col_statistics dtcs
                where dtcs.owner = dt.owner
                and dtcs.table_name = dt.table_name)
order by dt.owner;
2 - Bad SQL. Running statspack will identify the heavy ones, if they exist;

3 - Tablespace fragmentation. I'm still struggling with this one... :-(
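If point 1 turns up objects without statistics, gathering them is the usual fix. A minimal sketch using DBMS_STATS (the schema name APPS is a placeholder, and the sampling percentage is only illustrative):

```sql
-- Gather statistics for one schema; indexes included via cascade.
begin
  dbms_stats.gather_schema_stats(
    ownname          => 'APPS',   -- placeholder schema
    estimate_percent => 10,       -- sample 10% of rows
    cascade          => true);    -- also gather index statistics
end;
/
```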



Each and every day is a good day to learn.
Volker Borowski
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Well,

at least I do not think it will hurt.
And your backups might run like hell.

Volker
TwoProc
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Nelson,

I'd bet that if you increased the multiblock read count in Oracle it would benefit - HOWEVER, this would pretty much only be fun for testing, since increasing the multiblock read count increases the server's fondness for running full table scans for almost anything/everything. As someone once said: for data warehouse applications maybe, but maybe not for OLTP.
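One way to experiment without committing the whole instance to more full table scans is to change the parameter per session; a sketch:

```sql
-- Try a larger multiblock read count for the current session only.
alter session set db_file_multiblock_read_count = 16;

-- Revert for this session when done testing.
alter session set db_file_multiblock_read_count = 8;
```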
We are the people our parents warned us about --Jimmy Buffett
Eric Antunes
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Hi Nelson,

The previous sql about statistics isn't the best one...

Instead, run these two to check for tables and indexes without estimated statistics:

/*** Tables ***/

select table_name, owner, num_rows, blocks, avg_row_len,
to_char(last_analyzed, 'mm/dd/yyyy hh24:mi:ss')
from dba_tables
where owner not in ('SYS','SYSTEM')
and last_analyzed is null;


/*** Indexes ***/


select index_name, owner, num_rows, distinct_keys, leaf_blocks,
       clustering_factor, blevel, avg_leaf_blocks_per_key
from dba_indexes
where owner not in ('SYS','SYSTEM')
order by index_name;

Here pay attention to high clustering_factor/num_rows ratios because the higher this ratio, the less likely the optimizer is to select this index.
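That ratio can also be computed directly; a sketch (tables without statistics have num_rows null or zero, so they are skipped):

```sql
-- clustering_factor close to num_rows means poorly clustered data,
-- which makes the optimizer less likely to pick the index.
select index_name, owner,
       round(clustering_factor / num_rows, 2) as cf_ratio
  from dba_indexes
 where owner not in ('SYS','SYSTEM')
   and num_rows > 0
 order by cf_ratio desc;
```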

To give you an idea, I have a Materialized view with 330K rows and a clustering factor of 67% and it takes 3 days to refresh!! I'm struggling with it...

Finally, I agree with TwoProc about db_file_multiblock_read_count: 8 or 16 at maximum is the best for most OLTP environments.

Best Regards,

Eric Antunes
Each and every day is a good day to learn.
Alzhy
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Greetings...

A week into this change on one server. The change is dramatic during onlines (where we have lots of batch-like processes going on) but even more so during batch.

Our DFMR (db_file_multiblock_read_count) is set at 8. Do you think there's still a benefit in increasing this value to 16?

This DB is about 1.5 TB, on 16 CPUs x 128GB RAM. Cooked filesystems but with DirectIO enabled (db_writers = 1, and surprisingly disk_asynch_io=true?! - our DBA says it will just be ignored)... 22+GB of SGA...

I'm attaching our entire Oracle param file in case any of you see something of interest.

Thanks!

Hakuna Matata.
Alzhy
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Forgot the attachment..

Here it is.. our Oracle Param File
Hakuna Matata.
Volker Borowski
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Hi Nelson,

well, a few things are remarkable.
They may well be properly adjusted for your system, but I'll throw in my thoughts anyway.

log_buffer looks rather small to me.
We had a change on a 16TB DB on Friday to
bump this one up from 8MB to 28 MB.
I think 512K is way too low. Change it to 2M as a starter and watch v$session for "log buffer" wait events to change. The value must be a multiple of 512!
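As an init.ora sketch (2M is only the suggested starting point above, not a universal recommendation):

```
# log_buffer is in bytes and must be a multiple of 512
log_buffer = 2097152    # 2 MB
```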

You have some duplicates:
disk_asynch_io (which is "false" as the second value!)
optimizer_features_enable (which is 9.x as the second value)

I think with automatic pga_ management enabled, sort_area_size should not be set. I'm not sure what happens if both are set. I would recommend using automatic pga_.

4.5GB for shared pool is quite a lot. If you have tons of stored procedures and lots of non-bind-variable statements, you're probably OK.

db_cache_size...
since you have a lot of RAM, I'd recommend bumping this one up beyond a "real" 16GB (16*1024^3 bytes, not 16000000000), and cutting down the shared pool in case you cannot spare the RAM. Both are dynamic within the bounds of sga_max_size, so you can downsize the shared pool and increase db_cache_size "on the fly" without bouncing the DB.
I cannot find the reference right now; maybe I can deliver it a bit later, at least I'll try.
There are a couple of internal parameters that depend on db_cache_size, like the internal number of allocation latches and others. These values increase when you raise db_cache_size above certain thresholds, and I seem to remember 16GB was one of them.
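The on-the-fly resize described above looks roughly like this; the sizes are illustrative only and the total must stay within sga_max_size:

```sql
-- Shrink the shared pool first to free SGA space...
alter system set shared_pool_size = 3072M;

-- ...then grow the buffer cache using the freed space.
alter system set db_cache_size = 17408M;
```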

optimizer_index_cost_adj at 1 is debatable. In SAP environments (OLTP) this one is recommended to be between 10 and 20.
Tune this one with care: quite a lot of execution plans may change all at once when you twist this screw.

As you intend to provide better big-I/O performance, db_file_multiblock_read_count might be set to a higher value (16?) to tell Oracle about your improved capability to do large I/Os. Careful: increasing this one tends to make Oracle favor full table scans, especially for small tables. That may not be a problem if the I/O really can be done that fast.

If you have such a big db_cache, check whether you really need Oracle's db_cache_advice.
It is a good tool to monitor things when something changes, but after things are set up and stable, it is costly.
I tend to set this one to "READY" so it is possible to activate it if needed.
If it is set to OFF and the DB is started with that setting, it may not be possible to activate it without a restart. "READY" allocates the memory this function needs but disables the statistics collection for the advisor.
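As a sketch, those advisor settings map to:

```sql
-- Keep the advisor's memory allocated but stop collecting statistics.
alter system set db_cache_advice = ready;

-- Turn collection back on temporarily when fresh advice is wanted.
alter system set db_cache_advice = on;
```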

Good luck
Volker

Eric Antunes
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Hi Nelson,

Yes, Oracle will ignore disk_asynch_io=TRUE if you don't have raw devices (TRUE is the default).

As Volker already said, there are some duplicates (I don't know how Oracle handles this) that your DBA should eliminate in the init.ora.

Now, adding some considerations:

parallel_max_servers should be at most CPUs x 2: since you have 16 CPUs, it should be decreased to 32.

About optimizer_index_cost_adj and optimizer_index_caching, Oracle recommends removing them from the init.ora for 9i and 10g, at least for their Apps 11i (I don't have them in my init.ora).

fast_start_mttr_target is too low and is thus generating more checkpoints: Oracle recommends 1800 for EBS 11i, which is the value I also have.

pga_aggregate_target is way too low: it is set to 640M in my little 100GB database... Start with 1GB and monitor the SQL workarea histogram in statspack or EM: multipass executions must be avoided.
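The multipass check can be done against v$sql_workarea_histogram; a sketch:

```sql
-- Optimal/onepass/multipass execution counts per work-area size bucket.
-- Any non-zero multipasses count suggests pga_aggregate_target is too small.
select low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
  from v$sql_workarea_histogram
 where total_executions > 0
 order by low_optimal_size;
```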

Finally, you can see what are the most waited non-idle events in your database with the following sql:

select *
  from v$system_event
 where event not like 'rdbms%'
   and event not like 'SQL%'
   and event not like '%time%'
   and event not like '%pipe get%'
   and event not like '%dle%'
   and event not like '%queue%'
   and event not like 'jobq%'
 order by time_waited desc;

Best Regards,

Eric Antunes
Each and every day is a good day to learn.
Volker Borowski
Honored Contributor

Re: vol_maxio Effect on large "dd" Operations and Database Performance

Hi once again,

I found my reference for the number of latches. It was SAP Note 767414.

#Latches = next largest power of 2 of (size of the buffer pool / 1048576)

which results in the latches being doubled at 16.515625 GB :-( not at 16 GB.

Buffer Pool >= 8875147264 Byte -> 16384 Latches
Buffer Pool >= 17733517312 Byte -> 32768 Latches

So in your case, since you are close to 16GB, I would tend to go for the doubled number of latches, even if you do not strictly need the extra space in general.
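The note's formula can be checked with a quick query (numbers taken from the thresholds above; power, ceil and log are standard Oracle functions):

```sql
-- Next power of 2 of (buffer pool bytes / 1 MB):
-- 17733517312 / 1048576 = 16912, and the next power of 2 above 16912 is 32768.
select power(2, ceil(log(2, 17733517312 / 1048576))) as latches
  from dual;
```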

Volker