9i optimizer

SOLVED
mangor
Occasional Advisor

9i optimizer

Hello All,
We just migrated to 9.2.0.5 from 8.1.7.4. I created a new index. Wrote a simple query:

select claim_number
from claim
where data_source = 4000;

Oracle used the index perfectly. I used dbms_stats to analyze the tables and indexes. Now Oracle no longer uses the index; it does a full table scan. The claim table is a 9 gig table. With the index the query runs in seconds; without it, in minutes. Does anyone have any experience with the 9.2.0.5 optimizer? I don't want to use the RULE hint. We have other queries that are having the same problem; this is the simplest one. I've heard that 9.2.0.6 has some fixes for the optimizer, but I'd like to work this out before having to apply a patch if possible.

Things I've tried: using gather_stats instead, changing the optimizer to 8.1.7 compatibility, and analyzing just the tables, not the indexes. None of these worked. Oracle only uses the index right after creation; once the table is analyzed, it no longer uses the index.
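One more detail in case it matters: with a heavily skewed column, plain table stats without a histogram can make the optimizer assume an even spread of values and cost the index out. This is roughly the DBMS_STATS call I've been experimenting with (the SCOTT owner and the 10% sample are placeholders, adjust for your schema):

```sql
-- Gather table stats plus a histogram on the skewed column,
-- and cascade so the indexes are analyzed too.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',   -- placeholder owner
    tabname          => 'CLAIM',
    estimate_percent => 10,
    method_opt       => 'FOR COLUMNS DATA_SOURCE SIZE 254',
    cascade          => TRUE);
END;
/
```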

Thanks in advance.
11 REPLIES
Hein van den Heuvel
Honored Contributor

Re: 9i optimizer


Wow, that almost sounds like too big a bug to be true! Surely the world would be screaming?
I wonder what extra trigger there might be in your application to make it do that. Or maybe many more customers are indeed suffering but did not drill down as nicely as you did.
Did you clean out your initXXX.ora in case it picked up a setting with this surprise effect?

At any rate, this is very unlikely to be an HP-UX-specific problem. Sure, it could be port specific, but more likely it is generic. Therefore I would recommend asking the question in an Oracle newsgroup, forum, AskTom, ...

Hope this helps a tiny bit,
Hein.

mangor
Occasional Advisor

Re: 9i optimizer

I'm not expecting it to be just on HP-UX. I've asked the question here because my questions always get answered here; I've tried Oracle's forum and haven't had nearly as much luck. As for the init.ora file, I'm reading up on all the different parameters; there are many new ones. I've opened an iTAR with Oracle and, as usual, they want me to apply the patch before going any further. They tell me there have been a lot of optimizer fixes in the latest patch. The patch wouldn't be an immediate fix, but I am downloading it and testing it on another server.
Brian Crabtree
Honored Contributor

Re: 9i optimizer

How many rows is the 'claim' table?

How many of those rows are associated with the 'data_source' column equal to 4000?

Brian
TwoProc
Honored Contributor
Solution

Re: 9i optimizer

On Metalink there is a patch that came out recently that addresses this issue. I've noticed this same behavior at times.

On another note - the optimizer really pays attention to how much data it needs to read. Have you looked at your DB_FILE_MULTIBLOCK_READ_COUNT? Is it at the standard setting of 8?

I've found that the higher you set this - the more full table scans you get.
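If you want to test that without bouncing the instance, the parameter is session modifiable; something like this (just a sketch, the value is an example):

```sql
-- Lower the multiblock read count for this session only, which
-- raises the relative cost of full table scans in the optimizer.
ALTER SESSION SET db_file_multiblock_read_count = 8;

-- Then re-run the problem query and check the plan again.
SELECT claim_number FROM claim WHERE data_source = 4000;
```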

Also run
Select count(*), claim_number
from claim
group by claim_number;

What's your spread look like?

Do the same and tell me how many rows have a data_source of 4000.

Did you gather stats / analyze the index after you created it? If not, the cost-based optimizer is not going to use it. While you're at it, do an analyze of 25 or 30% or so of the table too.
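For example, something along these lines (the owner and index names are placeholders):

```sql
-- Gather stats on the new index itself...
BEGIN
  DBMS_STATS.GATHER_INDEX_STATS(
    ownname => 'SCOTT',           -- placeholder owner
    indname => 'CLAIM_DS_IDX');   -- placeholder index name
END;
/

-- ...and re-sample the table at roughly 25-30%.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'CLAIM',
    estimate_percent => 25);
END;
/
```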

OK - one more (just an FYI - it's not going to make a difference whether or not you're full table scanning).

Be careful using the feature that will manage your users' PGA. While this worked fine in 9.2.0.4, it will reduce 9.2.0.5 to about half speed if you're under a substantial load. This is a real monster to watch for. Please check and make sure that it's turned off. If you need, I can post back later with the exact settings on that. Just let me know.
We are the people our parents warned us about --Jimmy Buffett
mangor
Occasional Advisor

Re: 9i optimizer

Here's the layout of the data_source column:

COUNT(*) DATA_SOURCE
---------- -----------
18534528 4000
16376369 4001
2727251 4002
1273410 4003
6024692 4004

I tried setting the multiblock read count down from 32 to 8. That didn't do anything. Since the cardinality of this data is so low, maybe I should use a bitmap index instead.
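For anyone who wants to verify whether a histogram was actually gathered on the column, something like this against the standard data dictionary should show it (a sketch):

```sql
-- Count histogram endpoints for the column. Only 2 rows here means
-- just min/max were stored (no real histogram was gathered).
SELECT column_name, COUNT(*) AS endpoints
FROM   user_tab_histograms
WHERE  table_name  = 'CLAIM'
AND    column_name = 'DATA_SOURCE'
GROUP  BY column_name;
```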
mangor
Occasional Advisor

Re: 9i optimizer

I do know that patch 9.2.0.6 came out and is supposed to fix a lot of optimizer issues. But this company won't want to install that for another 6 months or so.
TwoProc
Honored Contributor

Re: 9i optimizer

Be careful with a bitmap index. These should only be created on tables that see very little in the way of updates during the day. The reason is that DML locks whole ranges of rows covered by each bitmap entry until the new bitmap is computed, so concurrent updates serialize badly.

I like bitmap indexes, but they can usually only be used on lookup tables, charts of accounts, etc. - things that don't change often.
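If the table really is quiet during the day, the DDL itself is simple (the index name is a placeholder; note that bitmap indexes require Enterprise Edition):

```sql
-- Low-cardinality column: only 5 distinct values across ~45M rows,
-- which is the classic bitmap index case.
CREATE BITMAP INDEX claim_ds_bx ON claim (data_source);
```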

We are the people our parents warned us about --Jimmy Buffett
Brian Crabtree
Honored Contributor

Re: 9i optimizer

Well, short of using bitmap indexes, delete the statistics for the index. That should force the query to use the index as it will default to the RULE optimizer settings.
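Something like this (the owner and index names are placeholders):

```sql
-- Remove optimizer statistics from the index only; the CBO then
-- falls back to default costing for that index.
BEGIN
  DBMS_STATS.DELETE_INDEX_STATS(
    ownname => 'SCOTT',           -- placeholder owner
    indname => 'CLAIM_DS_IDX');   -- placeholder index name
END;
/
```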

Brian
Steve Lewis
Honored Contributor

Re: 9i optimizer

I think you could also try partitioning that table on the data_source column. Try it on a test system first, since the optimizer can go completely mad with partitioned tables. But if it works, you may be able to remove the index altogether. Also, I know it's tempting to be critical of Oracle support, but they may be right.
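With only five data_source values, list partitioning (new in 9i) would be the natural fit. A sketch of the DDL, with the table abbreviated to two columns and placeholder partition names:

```sql
-- One partition per data_source value; a query on data_source = 4000
-- then scans only that partition instead of the full 9 GB table.
CREATE TABLE claim_part (
  claim_number NUMBER,   -- remaining columns omitted in this sketch
  data_source  NUMBER
)
PARTITION BY LIST (data_source) (
  PARTITION p4000 VALUES (4000),
  PARTITION p4001 VALUES (4001),
  PARTITION p4002 VALUES (4002),
  PARTITION p4003 VALUES (4003),
  PARTITION p4004 VALUES (4004)
);
```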
Patti Johnson
Respected Contributor

Re: 9i optimizer

Take a look at your settings for optimizer_index_caching and optimizer_index_cost_adj. You can use these to influence the 'cost' assigned to index reads. Metalink note 243269.1 has more details, but basically setting optimizer_index_cost_adj to a value lower than the default of 100 will make index reads appear less expensive. These parameters are session modifiable, so you can play with them in sqlplus without bouncing the db.
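For example (the values here are just starting points to experiment with, not recommendations):

```sql
-- Make index access look cheaper (default 100 = no adjustment)...
ALTER SESSION SET optimizer_index_cost_adj = 25;
-- ...and assume more index blocks are already cached (default 0).
ALTER SESSION SET optimizer_index_caching = 90;

-- Then re-run the problem query in the same session and compare plans.
SELECT claim_number FROM claim WHERE data_source = 4000;
```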

Patti
mangor
Occasional Advisor

Re: 9i optimizer

Thanks, Patti. I was thinking about that feature also. I'm just getting different results on all my instances: I run the same gather stats on all of them, but some instances are using NESTED LOOPS for this particular query while others are using HASH JOIN. I don't understand what could be causing this.