
Giada Bonfà
Frequent Advisor

PERFSTATreport

Hi,

I generated a PERFSTAT report with spreport and have attached it.
I would like to know whether my database is working fine. I would also like to learn how to read this kind of report, so it would be helpful if someone could explain how to do that and which parameters I should look at.
I'm quite new to Oracle tuning, so any advice is useful to me.
Thank you


6 REPLIES
Yogeeraj_1
Honored Contributor

Re: PERFSTATreport

hi,

Your window size is too large for this type of STATSPACK analysis.

A Statspack report generated with a 15-minute window is just as valid as one with a one-hour window (or an even larger timeframe). However, as the window gets larger, it becomes harder to draw definitive conclusions from the raw numbers, because peaks are averaged away.

Take a level 10 snapshot every 15 minutes for a day, then pick a representative 15-minute window when performance was "felt to be at its worst", and we can try to analyse that one.
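For reference, a level 10 snapshot is taken from the PERFSTAT schema like this (how you schedule it every 15 minutes is up to you; the supplied spauto.sql job script defaults to hourly and can be edited):

```sql
-- connect as the PERFSTAT user, then take a single level 10 snapshot
execute statspack.snap(i_snap_level => 10);
```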


Anyway, if i analyze the first part of the report:
=======================================================================
Load Profile
~~~~~~~~~~~~                    Per Second    Per Transaction
                               ------------   ---------------
          Redo size:               1,138.73         30,173.91
      Logical reads:               1,052.11         27,878.63
      Block changes:                   2.21             58.52
     Physical reads:                  63.58          1,684.71
    Physical writes:                  63.71          1,688.20
         User calls:                   4.33            114.80
             Parses:                   1.04             27.46
        Hard parses:                   0.04              1.12
              Sorts:                   0.45             11.81
             Logons:                   0.06              1.70
           Executes:                   1.67             44.38
       Transactions:                   0.04

=======================================================================

In this section, we get a great deal of information in a very small amount of space. We can see how much REDO is generated, on average, every second and for every transaction.

Here, we can see that there is about 1 KB of redo per second. The average transaction generates 30 KB of redo!
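The two Load Profile columns are linked by the transaction rate: the per-transaction figure is the per-second figure divided by transactions per second. A quick sanity check on the redo numbers from the report above:

```python
# Load Profile figures taken from the report above
redo_per_second = 1138.73   # bytes of redo generated per second
redo_per_txn = 30173.91     # bytes of redo per transaction

# implied transaction rate = per-second value / per-transaction value
txn_per_second = redo_per_second / redo_per_txn
print(round(txn_per_second, 4))   # ~0.0377, shown rounded to 0.04 in the report
```

This also shows why the "Transactions: 0.04" line loses precision: at well under one transaction per second, the rounded rate hides most of the detail.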

Over such a large window, it is very difficult to interpret these values.

Hence, i recommend that you take other snapshots and post again.

Best Regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Andy Beal
Frequent Advisor

Re: PERFSTATreport

Your DB doesn't look very active, so tuning probably won't yield much of a result. Basically, the way to start is to look at the waits, then research each specific wait and find out what you can do to alleviate it. For instance, your top wait is "control file parallel write", but a statspack report will almost always show this. Are you really experiencing performance problems? The database looks fairly non-intensive and has a huge buffer cache/SGA.

There's a very good book to help you get the most out of statspack and tuning: "High Performance Tuning with STATSPACK" by Donald Burleson. I recommend it to anyone wanting to know more (everything) about statspack tuning.

Glance is also very helpful for seeing whether you have CPU/disk/network limitations, and then finding out which process is bumping into the resource problem.
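To look at the waits outside of statspack, you can also query the cumulative wait statistics directly (times are in centiseconds, accumulated since instance startup; idle events such as the SQL*Net client waits should be ignored):

```sql
-- top non-idle system-wide wait events since instance startup
select event, total_waits, time_waited
  from v$system_event
 where event not like 'SQL*Net message from client%'
 order by time_waited desc;
```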

-A
Brian Crabtree
Honored Contributor

Re: PERFSTATreport

Doing a quick run-through, your block buffers and shared pool size are really large. Unless your database is in the 500 GB - 1 TB range, they are completely out of bounds. You might want to make sure that your system isn't swapping memory if you are having a performance problem.

Looking at the database, the rollback segments are very large for what is being put through them. The average active size is around 3 MB, while the segment size is 52 MB and your optimal is 52 MB. I would suggest keeping the optimal at 50 MB, and setting the segment initial/next extent to 5 MB with minextents of 2-4.
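Applied to a single segment, that suggestion would look something like this (the segment and tablespace names here are placeholders):

```sql
-- 5m initial/next extents, at least 4 extents, optimal kept at 50m
create rollback segment rbs01
  tablespace rbs
  storage (initial 5m next 5m minextents 4 optimal 50m);

alter rollback segment rbs01 online;
```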

For future reference, a good performance analysis tool is http://www.oraperf.com that allows you to upload statspack reports.

Thanks,

Brian
Yogeeraj_1
Honored Contributor

Re: PERFSTATreport

hi,

As Brian mentioned above, your cache size parameters seem to have been "oversized".
Compare your cache sizes with mine - watch the huge differences!
Mine is from a 24x7 OLTP system with 400 concurrent users at peak periods!

Cache Sizes
~~~~~~~~~~~
db_block_buffers:     50000    log_buffer:          262144
db_block_size:         8192    shared_pool_size: 209715200

Also, concerning the rollback segments:
I would recommend that you move to locally managed tablespaces (LMT). I like to create my rollback segments with around 20-25 minextents and set maxextents to 255 or so. You should place individual rollback segments into their own tablespaces with INITIAL = NEXT = 1M and allow each tablespace one autoextendable datafile.

In this fashion, the rollback segments will grow as needed, if space is available on the system - up to 255 extents. Periodically (typically before backups) I manually offline and then drop each rollback segment, shrink the datafile associated with its tablespace back to the right size for 20 extents, and recreate the rollback segment. This allows the infrequent "large" transaction to dynamically grow the rollback segment without having to interject a "set transaction use rollback segment BIG_RBS" into it.

E.g.
create tablespace RBS_01
datafile '/xxx/xxx/xxx/rbs_01.dbf'
size 25m reuse
autoextend on
maxsize 100m
extent management local uniform size 1m;

Then:

Create rollback segment rbs_01 tablespace rbs_01;
alter rollback segment rbs_01 online;
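The periodic shrink-and-recreate routine described above is then just (the resize target is an example, matching the 25m initial size):

```sql
-- before backups: offline and drop the segment...
alter rollback segment rbs_01 offline;
drop rollback segment rbs_01;

-- ...shrink its datafile back to roughly the size of 20 extents...
alter database datafile '/xxx/xxx/xxx/rbs_01.dbf' resize 25m;

-- ...and recreate the segment
create rollback segment rbs_01 tablespace rbs_01;
alter rollback segment rbs_01 online;
```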


Why all this?
o rollback segments must have equi-sized extents.
o rollback segments should be allocated large to avoid ORA-01555 "snapshot too old" errors (which I've never hit).
o rollback segments should be plentiful for the amount of transactions you do.

The above rules of thumb are a good starting point for databases in general. Some need more - few need less. It is easy to understand and trivial to manage. I never use OPTIMAL, preferring to resize the segments manually before a backup or something.


hope this helps too!


Regards
Yogeeraj
PS. Don't forget to upload your statspack report - 15 minutes window/at High load.
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Brian Crabtree
Honored Contributor

Re: PERFSTATreport

On that note, you can also create undo tablespaces in 9i now. Once this is done, you can set "undo_management = auto", set "undo_tablespace", and have the rollback segments be automatically maintained. I will note, however, that we have not done this in a major production environment yet, so I cannot vouch for it there. Most of the systems we have set it on, though, have had no ill effects.
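For the automatic undo route, the init.ora settings would be along these lines (the tablespace name is an example):

```
# init.ora - static parameters, require an instance restart
undo_management = AUTO
undo_tablespace = UNDOTBS1
```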

Brian
Yogeeraj_1
Honored Contributor

Re: PERFSTATreport

hi,

to add to Brian's note above, some information about 9i and undo tablespaces:

An undo tablespace is a different way of managing rollback in 9i. Instead of you figuring out that you need N rollback segments, each M bytes/extents in size, you just create one undo tablespace, and in it Oracle manages all undo. You specify how long you want undo to be retained (e.g. if your longest running query takes about 15 minutes, you might say "please retain undo for 20 minutes before overwriting it"). Oracle will create, manage, and grow this undo space for you.

So, an undo tablespace is an alternative to a rollback segment.
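A minimal sketch of that setup (names, file path and sizes here are purely illustrative):

```sql
-- one undo tablespace; Oracle manages all undo segments inside it
create undo tablespace undotbs1
  datafile '/u01/oradata/undotbs1_01.dbf' size 500m
  autoextend on maxsize 2000m;

-- retain undo for 20 minutes (1200 seconds) before it may be overwritten
alter system set undo_retention = 1200;
```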

As for the non-standard block sizes - that refers to 9i's ability to have more than one block size in a database. When you create the database, SYSTEM will have a block size; that is the "standard" block size for that database. All other block sizes found in that database are "non-standard".

Rollback segments should have equi-sized extents. Automatically sized extents are, by definition, not equi-sized.


hth
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)