Operating System - HP-UX
08-04-2006 03:40 AM
Re: optimize performance during oracle11i upgrade
Thanks for all the comments - I don't mind the extra comments, Eric et al - after all, this IS a forum for discussion, and I like to see the different sides of issues. BTW - in the 11i upgrade, even though all tablespaces other than SYSTEM are converted to local extent management, existing structures are NOT rebuilt (at least under the existing OFA model), so fragmentation can still be an issue. IMHO the real issue in space management for us (an OLTP db) is archiving old data, instead of dragging around boatloads of data that will never see the light of day...
On my current upgrade test, I did increase the block buffers (now db_cache_size) from 1GB to 5GB. I also increased redo log size to 250M and increased log_buffer from 4M to 30M. Everything went great until the phase that processes orders (ontup231) - and it was crawling MUCH slower than in prev runs. Found out that my associates decided to try this upgrade without closing any orders - which we have done in all prev tests. I looked at what was taking so long - about 12GB of stuff had been inserted into 2 workflow tables and their indexes (and this phase was only half done). We killed the upgrade, restored backup and we are doing it over correctly. Exciting eh? At least this time I expect to see the tuning pay off...:)
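For reference, the sizing changes described above could be sketched roughly as below (Oracle 9i-era syntax). The file paths and log group numbers are hypothetical; only the 5GB cache, 30M log_buffer, and 250M redo sizes come from the post:

```sql
-- Sketch of the cache/redo changes described above; paths and group numbers are illustrative.
ALTER SYSTEM SET db_cache_size = 5G SCOPE = SPFILE;
ALTER SYSTEM SET log_buffer = 31457280 SCOPE = SPFILE;  -- ~30M; static parameter, needs a restart
-- Online redo logs cannot be resized in place: add new 250M groups, then drop the old ones.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/PROD/redo04.log') SIZE 250M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/PROD/redo05.log') SIZE 250M;
-- After log switches move past the old groups: ALTER DATABASE DROP LOGFILE GROUP 1; etc.
```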
08-04-2006 05:03 AM
Re: optimize performance during oracle11i upgrade
Dave, thanks for the response on how the upgrade is going. I agree with your posting about archiving as I'm working on that very topic this week. Good luck with the upgrades, and in trying to keep the testing team consistent (on closes).
Also, Yogeeraj, I agree with using LMT with uniform extent sizing - I converted to it many years back and have been happy with it. Regarding Dave's comment about it not being done for existing data - we ran across the same thing. I fixed all of that when we first went to database version 9i and did a full import of everything into uniform-extent LMTs for all tablespaces. And yes, we don't worry about fragmentation nearly as much anymore.
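For anyone following along, creating a uniform-extent LMT to import into looks roughly like this; the tablespace name, datafile path, and 1M extent size here are hypothetical choices, not values from the thread:

```sql
-- Hypothetical locally managed tablespace with uniform extents;
-- name, datafile path, sizes are illustrative only.
CREATE TABLESPACE apps_data
  DATAFILE '/u03/oradata/PROD/apps_data01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```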
Good luck,
John
We are the people our parents warned us about --Jimmy Buffett
08-08-2006 05:37 AM
Re: optimize performance during oracle11i upgrade
Dave,
if you already know which "big-brother" tables cause trouble during conversion, did you consider increasing the parallel degree on them prior to the conversion?
Since a conversion usually needs to read all data of a table, a full table scan should be the plan of choice (well, mostly), and in that case it can be much faster when done with parallel query. It would help if you could track down the statements that run in these phases, although I am not sure whether it is permitted to run Statspack or SQL trace while the upgrade is under way.
(And check the documentation as well, whether PQ [parallel query] is permitted during this conversion - not sure about this.)
alter table .... parallel 8;
will use 8 PQ processes for the FTS. This scales almost linearly up to several GB of table size.
I use it often to create secondary indexes on big tables:
CREATE INDEX ... parallel 24;
ALTER INDEX ... noparallel;
This needs some init parameters:
parallel_max_servers...
parallel_threads_per_cpu...
and maybe some more, which I do not have handy right now.
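In SQL*Plus, the current settings of the parameters just mentioned can be inspected with SHOW PARAMETER, e.g.:

```sql
-- Inspect the PQ-related init parameters (SQL*Plus)
SHOW PARAMETER parallel_max_servers
SHOW PARAMETER parallel_threads_per_cpu
```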
Check it out:
set timing on
select count(*)
from bigtable;
-- run it twice to be fair for caching
select /*+ parallel (bigtable, 8) */ count(*)
from bigtable;
-- run it twice as well (caching) and check
-- "ps -ef | grep ora_p" while running
-- to see the PQ processes showing up
If this gives you a significant speedup, set the parallel degree of the tables in question. Be careful: setting 3 tables to "parallel 8" might need 24 PQ processes when they are operated on at the same time, which also increases TEMP and UNDO demand - so be sure you are the only session running during the conversion, or otherwise use a lower value.
Check the old degree value beforehand (column DEGREE in dba_tables / dba_indexes) and reset it afterwards, or set "noparallel".
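The save-and-restore advice above might look like this in practice; "bigtable" is the same placeholder name used in the earlier examples:

```sql
-- Record the existing degree so it can be restored afterwards
SELECT table_name, degree FROM dba_tables WHERE table_name = 'BIGTABLE';
-- Raise the degree for the conversion phase
ALTER TABLE bigtable PARALLEL 8;
-- ... conversion runs here ...
-- Reset when done (or ALTER TABLE bigtable PARALLEL n; with the recorded value)
ALTER TABLE bigtable NOPARALLEL;
```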
Volker
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP