Operating System - HP-UX
07-09-2001 07:47 AM
Re: Low performance and time responses
- You're right, we'll turn off archive log mode. We accept losing the ability to recover.
- I've got some statistics from utlbstat and utlestat. They cover a few days when the application was performing slowly (see attachment).
- If I understand you well, you think that the tkprof stats are old and we should run tkprof again?
******************************
Volker:
- The answer to your questions is "both". The problem is related to both things: when database load increases, response times grow, and the same occurs with concurrent users. As the number of users increases, so does response time; it seems that processes keep waiting for disk responses...
- You ask me to create new indexes. We have 5,000 transactions (inserts/updates) each day; could an excess of indexes slow those transactions down?
- Related to tablescans, the problem is located in three tables that support the whole application. When I said "Decreasing the amount of data improves response times", I meant "Decreasing the amount of data in those three tables".
- We'll run tkprof on the code again to refresh our stats.
Thank you very much
07-10-2001 12:52 AM
Re: Low performance and time responses
Yes. If you turn off archive log mode, you will only be able to restore your database, should a failure occur, to the time of your last backup. Any activity between the last backup and the failure will be lost. So if you backed up your system on Saturday and lost a disk on Tuesday, you would only be able to recover the database to Saturday's state; all activity on Sunday and Monday would be lost.
Looking at the report, I can see that you have the default optimiser setting of CHOOSE. This means that if you have statistics (visible in all/dba/user_tables, _indexes, _histograms etc.), Oracle will use the cost-based optimiser and decide whether to use an index based on those stats. If those stats were collected when the system contained little data, Oracle will tend to go for a scan rather than an index retrieval. (There are also a load of other rules, but suffice to say, the more accurate your stats, the more informed Oracle will be about which method to use to retrieve your data.)
Looking at the stats and doing a quick analysis, I would suggest the following. (NB: this is a quick once-through; it is worth checking the stats against the Oracle performance docs, and I may have misinterpreted things.)
1. Library cache stats. The pin hit ratio should be as near 1 as possible. The BODY and INDEX stats are quite low. The BODY stats suggest that you should perhaps increase the size of your shared pool. Alternatively, you could pin common code into the shared pool, or use the shared pool reserved parms to allocate space to specific, commonly called code. The INDEX stats indicate that perhaps your buffer cache is too small; however, if you are scanning multiple tables, then index blocks will be aged out anyway.
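Pinning common code is done with the DBMS_SHARED_POOL package (installed by dbmspool.sql if it is not there already); the object name below is just a made-up example:

```sql
-- Install once as SYS if needed: @?/rdbms/admin/dbmspool.sql
-- Keep a frequently-called package in the shared pool so it is never aged out.
-- 'APP.BILLING_PKG' is an example name; 'P' means a package/procedure/function.
EXECUTE DBMS_SHARED_POOL.KEEP('APP.BILLING_PKG', 'P');

-- Verify what is currently kept:
SELECT owner, name, type, kept
  FROM v$db_object_cache
 WHERE kept = 'YES';
```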
2. The second load of stats contains a heap of info. I do not have time to look at it all, but here are a few things which seem 'interesting':
a. There seems to be a lot of checkpointing - this can be confirmed in the alert.log file. Looking at the init.ora parms given, I notice that you have the default db_block_size - this could be crucial. I would expect this to be around 8K, but I do not know the default value. If it is low, then your database blocks are too small. This is a big problem, as this parameter can only be set at the 'create database' stage.
b. You are performing a large number of consistent gets - 2,700 per second! These are reads in consistent mode, where data sometimes has to be reconstructed from the rollback segment rather than read straight from the table, which takes extra time. This again could be because of all the tablescans.
c. Physical reads/writes. These show that you do not actually write much, but you read a hell of a lot.
d. Table fetch continued row - this indicates rows that are spread over more than one data block, requiring an additional read each. Maybe there are issues with the storage parms on your tables?
Looking at the buffer busy waits, it seems that the database spends most of its time waiting for data blocks and rollback. Again, this could tie in with a small buffer cache, or one that is being flushed quite quickly. If your buffer cache is quite small and you cannot afford the memory, you could try splitting it in two (using the buffer pool keep/recycle parms - not sure if these are in your version). That way any tables which you have to scan can be placed in the recycle area and will not affect the performance of the rest of the buffer cache.
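If the keep/recycle pools are available in your release, the split would look roughly like this (sizes and the table name are illustrative only, not recommendations):

```sql
-- init.ora: carve a recycle pool out of db_block_buffers, e.g.
--   buffer_pool_recycle = (buffers:1000, lru_latches:1)

-- Then direct the big, scanned tables at the recycle pool so their
-- blocks do not flush the rest of the cache:
ALTER TABLE big_table_1 STORAGE (BUFFER_POOL RECYCLE);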
There are probably a number of things that I have missed (as I only have limited time), so again I would recommend looking at the Oracle reference docs for guidance on the stats.
Hope this helps; I am not sure how awake I am this morning. It might help if you sent your init.ora settings. (In svrmgrl: spool myfile; show parameter; spool off - this should produce a neat file.)
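Typed at the Server Manager prompt, that spool sequence would look something like this (the file name is just an example):

```sql
SVRMGR> connect internal
SVRMGR> spool myfile.lst
SVRMGR> show parameter
SVRMGR> spool off
```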
Good luck
07-10-2001 06:04 AM
Re: Low performance and time responses
I'll attach our database parameters this evening (evening in Spain).
07-10-2001 08:21 AM
Re: Low performance and time responses
Concerning your question about additional indexes costing write performance: this is true, but look at your read/write ratio. I guess you do not have a write problem!
You mentioned three specific tables. Check out Bill's script for bad SQL statements and see if it shows anything for them. Bad statements have a high read count compared to a low number of records. Those statements can mostly be optimized by an index (let us have them).
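I do not have Bill's script to hand, but a minimal query in the same spirit against v$sqlarea would be something like this (the thresholds are arbitrary examples):

```sql
-- Statements with many disk reads per execution are tablescan suspects.
SELECT sql_text,
       disk_reads,
       executions,
       ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
  FROM v$sqlarea
 WHERE disk_reads > 10000
 ORDER BY reads_per_exec DESC;
```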
A high read count together with a high record count is worse, because it might be (although unlikely) that the application really needs all that data.
db_file_multiblock_read_count is at 32. I would try a lower value, to make the optimizer favor index scans.
shared_pool_size = 9M (increase it if the box is not currently swapping).
Increase the size of the online redo logs to soften the 'checkpoint not complete' intervals (though I expect no big effect from this), or add one or two groups.
Now, you said you have a high number of transactions. What do they do to your three big tables? Is it
- insert/delete activity
- update activity
- a mixture?
Check PCTFREE and PCTUSED if you have heavy update activity. If you have heavy update activity and the table contains a lot of fields that are initialized to NULL or short VARCHARs and get filled later, you should preserve free space in the blocks for those updates. If you do not, the updates will lead to chained or migrated rows, which cause additional reads upon select. If there is a problem, you need to reorganize those tables.
Check the Oracle guides on chained rows.
If you have an insert/delete profile, your indexes might fragment fast (do index rebuilds).
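To check the chained-row situation and rebuild a fragmented index, something like the following (the table and index names are examples; utlchain.sql creates the CHAINED_ROWS table):

```sql
-- Run once: @?/rdbms/admin/utlchain.sql  (creates CHAINED_ROWS)
ANALYZE TABLE big_table_1 LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*)
  FROM chained_rows
 WHERE table_name = 'BIG_TABLE_1';

-- Rebuild a fragmented index without dropping it:
ALTER INDEX big_table_1_pk REBUILD;
```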
In general I would like to know from the SQL statistics how many of your consistent gets actually make sense (meaning a comparison against the number of records read).
Last big thing:
Your application does quite a lot of sorts.
Check if really needed, and get rid of this if possible. If not, check if it will be of profit to serve the sort with an index and not the WHERE-clause (rare case, but if a hit, big effect). Since most of the sorts are done in memory, I suspect only small tables being sorted. If this are only small lookup tables, check if they can be converted to index-organized-tables.
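For a small lookup table, the index-organized form would look like this (names and columns are examples only):

```sql
-- The whole table is stored in its primary key index, so a lookup is a
-- single index probe with no separate table access.
CREATE TABLE country_lookup (
    country_code  VARCHAR2(3) PRIMARY KEY,
    country_name  VARCHAR2(40)
) ORGANIZATION INDEX;
```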
Volker
07-10-2001 11:35 PM
Re: Low performance and time responses
- Yesterday we reviewed the OS block size and the Oracle block size.
- We used "ll -s" to estimate the OS block size and it seems to be 512 bytes. Do you know a direct way (a command) to get it?
- The Oracle block size is 2048 bytes.
- Could a small OS block size cause slow I/O data transfer from disk to memory?
- Could the difference between the Oracle and OS block sizes be a problem?
- I attach our Oracle parameters.
- Thanks a lot, guys.
07-12-2001 03:51 AM
Re: Low performance and time responses
I have had a look at your init.ora parms, and there are a few things of note.
1. Your block size. The Oracle db_block_size is basically the unit of currency for Oracle: it completes all of its work in units of this size. A larger Oracle block size will improve I/O efficiency in some cases. (Consider how large your row data is - you do not want a row to span multiple Oracle blocks, as each additional block is an additional read/overhead - see the stat 'table fetch continued row'.)
We have D250 and D270 machines with HP-UX 10.20 and a block size of 8K, so I imagine you should be able to do the same. (It is listed in the Oracle documentation for HP-UX - I do not have it handy, so I cannot confirm.) However, changing your block size is NOT easy.
To change the block size you need to re-create the database. This means you have to export all of your user data, code, objects, etc., completely trash your database, then re-create it, specifying the new block size (4K or 8K) in the create database command. Once created, you then have to re-import all of your user data. Also, be aware that some of your init.ora parms (such as db_block_buffers and db_file_multiblock_read_count) are specified in units of Oracle blocks, so they will need adjusting too. Like I said, not for the faint-hearted. I would recommend trying it on a development-type instance first, so that you can verify the process, the changes required and the implications, validate that it can be done, and check for any performance enhancements/degradations.
2. I also notice that of your total shared pool size (9000000) you are reserving 450000. I do not know the size of your database code, packages, queries etc., but you need to be aware that the shared pool is segregated around a 4K threshold (shared_pool_reserved_min_alloc): only allocations above it go to the reserved area. Depending on the weighting of your code sizes, you may wish to change this 4K value. You can check your object sizes using v$db_object_cache once they have been loaded into the SGA.
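A quick way to see which objects are the big consumers once they are loaded (a sketch; sort order and columns as I remember them):

```sql
-- Largest objects currently in the shared pool; anything much bigger than
-- shared_pool_reserved_min_alloc competes for the reserved area.
SELECT name, type, sharable_mem
  FROM v$db_object_cache
 ORDER BY sharable_mem DESC;
```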
3. Your compatible parm is set to 8.0.0, which means you are not utilizing any features specific to 8.0.4.
4. It is a good idea to set the parm log_checkpoints_to_alert to true, as you can then use your alert log to confirm whether your database is performing a large number of checkpoints. Whether that is a good thing depends on how you weigh performance against recoverability.
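Pulling points 3 and 4 together, the relevant init.ora lines would look something like this (values are examples based on what was discussed above, not recommendations):

```
# init.ora fragment - example values only
compatible                = 8.0.4        # allow 8.0.4-specific features
log_checkpoints_to_alert  = true         # record checkpoints in alert.log
shared_pool_size          = 9000000
shared_pool_reserved_size = 450000
```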
07-17-2001 10:53 AM
Re: Low performance and time responses
- We have been reviewing sar statistics over the last few days. We used sarcheck to help us.
- What sarcheck shows is that we have a very severe I/O bottleneck on our external disks (those on the disk array).
- The bottleneck may be attacked by:
1) Reviewing SQL statements to reduce I/O requests. We are working on it, but it's hard to do because the application code is already finished and we could create problems if we change too much.
2) I've read in some forums about array configuration, striping and so on. Could a new disk, a logical volume, etc. help us to improve performance? Charles, you hinted at something in your last messages ...
- I can show our configuration if needed.
Thanks a lot.