Operating System - HP-UX

Re: database storage capacity planning

Yogeeraj_1
Honored Contributor

Re: database storage capacity planning

hi,

can you try Tom Kyte's show_space procedure to calculate the current table sizes and extrapolate?

===============
create or replace procedure show_space
( p_segname   in varchar2,
  p_owner     in varchar2 default user,
  p_type      in varchar2 default 'TABLE',
  p_partition in varchar2 default NULL )
authid current_user
as
    l_free_blks          number;
    l_total_blocks       number;
    l_total_bytes        number;
    l_unused_blocks      number;
    l_unused_bytes       number;
    l_LastUsedExtFileId  number;
    l_LastUsedExtBlockId number;
    l_last_used_block    number;

    procedure p( p_label in varchar2, p_num in number )
    is
    begin
        dbms_output.put_line( rpad(p_label,40,'.') || p_num );
    end;
begin
    -- free_blocks is only valid for segments in tablespaces that are NOT
    -- using automatic segment space management, so check dba_tablespaces
    for x in ( select tablespace_name
                 from dba_tablespaces
                where segment_space_management <> 'AUTO'
                  and tablespace_name = ( select tablespace_name
                                            from dba_segments
                                           where segment_type = p_type
                                             and segment_name = p_segname
                                             and owner        = p_owner ) )
    loop
        dbms_space.free_blocks
        ( segment_owner     => p_owner,
          segment_name      => p_segname,
          segment_type      => p_type,
          partition_name    => p_partition,
          freelist_group_id => 0,
          free_blks         => l_free_blks );
    end loop;

    dbms_space.unused_space
    ( segment_owner             => p_owner,
      segment_name              => p_segname,
      segment_type              => p_type,
      partition_name            => p_partition,
      total_blocks              => l_total_blocks,
      total_bytes               => l_total_bytes,
      unused_blocks             => l_unused_blocks,
      unused_bytes              => l_unused_bytes,
      last_used_extent_file_id  => l_LastUsedExtFileId,
      last_used_extent_block_id => l_LastUsedExtBlockId,
      last_used_block           => l_last_used_block );

    p( 'Free Blocks', l_free_blks );
    p( 'Total Blocks', l_total_blocks );
    p( 'Total Bytes', l_total_bytes );
    p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );
    p( 'Unused Blocks', l_unused_blocks );
    p( 'Unused Bytes', l_unused_bytes );
    p( 'Last Used Ext FileId', l_LastUsedExtFileId );
    p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );
    p( 'Last Used Block', l_last_used_block );
end;
/
==================

Example output:
exec show_space('TEMP');
Free Blocks.............................
Total Blocks............................523
Total Bytes.............................4284416
Total MBytes............................4
Unused Blocks...........................8
Unused Bytes............................65536
Last Used Ext FileId....................1
Last Used Ext BlockId...................27559
Last Used Block.........................5

You also need to have other "internal" information on the predictable growth...

hope this helps!

kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Edgar_8
Regular Advisor

Re: database storage capacity planning

Hi,

We have worked on a possible formula as follows:

Table Growth per Day = (Projected Rows per Day x Physical Block Size) / Rows per Block
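As a rough sketch of that arithmetic in Python (the input numbers below are made up purely for illustration; rows per block would come from your own sample data):

```python
def table_growth_per_day(projected_rows_per_day, block_size_bytes, rows_per_block):
    """Estimated bytes of table growth per day:
    rows/day divided by rows/block gives blocks/day, times the block size."""
    blocks_per_day = projected_rows_per_day / rows_per_block
    return blocks_per_day * block_size_bytes

# Illustrative numbers only: 500k rows/day, 8 KB blocks, 40 rows per block
daily = table_growth_per_day(500_000, 8192, 40)
print(f"{daily / 1024 / 1024:.1f} MB/day")  # ~97.7 MB/day
```

Note the result is in bytes per day, so it is really a growth rate rather than a total table size.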

Comments Please.

Thanks in advance!
Eric Antunes
Honored Contributor

Re: database storage capacity planning

Hi Edgar,

If you know the past, I'll know the future! ;)

That was the basic idea of my last post: you can only estimate future rows if you have a considerable sample of the past. And you should do this calculation on a yearly basis, because that way you avoid seasonal effects...
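One simple way to turn a yearly history into a projection is an average year-over-year growth ratio. A minimal sketch (the row counts below are invented, just to show the calculation):

```python
# Yearly row counts are illustrative; use totals from your own history.
history = {2001: 10_000_000, 2002: 12_000_000, 2003: 14_500_000}

years = sorted(history)
# Average year-over-year growth ratio across the sampled years
ratios = [history[b] / history[a] for a, b in zip(years, years[1:])]
growth = sum(ratios) / len(ratios)

next_year_rows = history[years[-1]] * growth
print(f"projected rows next year: {next_year_rows:,.0f}")
```

Working on whole years, as suggested above, keeps seasonal peaks out of the ratio.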

Best Regards,

Eric Antunes
Each and every day is a good day to learn.
Jean-Luc Oudart
Honored Contributor

Re: database storage capacity planning

Edgar,

I was never able to make these formulas work, and I found that the best way (for us) was to load a good-sized sample of data and then retrieve information from the database about the space used for both DATA and INDEX.

Then, if you know how many rows you will get, you can work out the storage requirement.
Obviously, based on this "row" information, you may want to give yourself some room for mis-estimating the number of rows.
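That measure-and-scale approach can be sketched in a few lines of Python (the sample figures and the 10% headroom are my own illustrative assumptions, not measured values):

```python
def extrapolate_storage(sample_rows, sample_bytes, projected_rows, headroom=0.10):
    """Scale space measured from a loaded sample (DATA or INDEX) up to the
    projected row count, then pad for row-count mis-estimation."""
    bytes_per_row = sample_bytes / sample_rows
    return projected_rows * bytes_per_row * (1 + headroom)

# Illustrative: a 100k-row sample occupied 25 MB; we expect 50M rows
need = extrapolate_storage(100_000, 25 * 1024**2, 50_000_000)
print(f"{need / 1024**3:.1f} GB")
```

Run it once per segment (each table and each index), since DATA and INDEX scale differently.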

Regards
Jean-Luc
fiat lux
Jean-Luc Oudart
Honored Contributor

Re: database storage capacity planning

Edgar,

As this is a data warehouse DB, you may want to investigate the possibility of using data compression.

cf. attachment

Regards
Jean-Luc
fiat lux
Jean-Luc Oudart
Honored Contributor

Re: database storage capacity planning

sorry forgot the attachment

Regards
Jean-Luc
fiat lux

Ariel Cary
Frequent Advisor

Re: database storage capacity planning

Edgar,

Yes, table compression, available in Oracle since 9iR2, can help you a lot and may cut down your estimates a great deal if you haven't considered compression yet.
In a pure DW environment I would set PCTFREE=0 and enable compression at the table level. I've seen interesting ratios in my experience, as high as 80%. Table compression plus partitioning is a nice combination for DW.
But if you ever need to update such a table, be advised that UPDATE is not the way to go: it would most likely make your table grow in size and, even worse, take a very long time. I would pre-create a table with the desired column changes and then exchange partitions instead.

Then you can re-estimate your numbers. As for your formula, it sounds reasonable as a baseline, but as stated above, I would be more aggressive: add a certain percentage, say 20%, to the estimate to account for unplanned variations. For example, you might later decide to create a new index on a "fat" table that you did not consider in your estimates.
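Combining the two adjustments above in one small Python sketch (the 60% compression saving and the 20% safety margin are illustrative assumptions; your real ratio comes from testing on your own data):

```python
def adjusted_estimate(raw_bytes, compression_saving=0.0, safety_margin=0.20):
    """Apply an assumed table-compression saving, then add a safety
    percentage for unplanned growth (e.g. a new index on a fat table)."""
    compressed = raw_bytes * (1 - compression_saving)
    return compressed * (1 + safety_margin)

# Illustrative: 300 GB raw estimate, 60% compression saving, 20% margin
print(f"{adjusted_estimate(300 * 1024**3, 0.60) / 1024**3:.0f} GB")
```

Order matters only for readability here: compress first, then pad, since the margin should also cover the compressed indexes and segments.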

Regards,

Ariel
Volker Borowski
Honored Contributor

Re: database storage capacity planning

Uh, wait, I'll get my crystal ball ...

Very difficult, but here goes:

Adding to your calculation

You need to add additional space for
- Indexes (surely used)
- PCTFREE free space, in case your table later gets updates that fill NULL fields or lengthen strings
- SNAPSHOT tables (materialized views), if used: those based on the calculated table will grow by about the same amount of space or more
- Additional SORT (TEMP) Space may be needed

The key is the dictionary structure of the table in question and knowing how many rows you expect per day.
Create a dummy table, fill 100K rows into it and take measurements, especially of the additional stuff like indexes and snapshots.

Two tips:
1) If you can influence the DB_BLOCK_SIZE, bigger blocks will give you less block overhead, so your net storage will be a higher percentage. Since Oracle 9i you can use different block sizes per tablespace.
2) Do a hard calculation on your expected PCTFREE. 10% PCTFREE (the default in many cases) on a 300GB table will give you 30GB of wasted space if you do not really need it.
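Tip 2 is just a percentage of the table size, but it is worth making explicit, since it quietly scales with the table. A one-liner in Python, using the 300GB example from above:

```python
def pctfree_overhead(table_bytes, pctfree):
    """Space reserved in every block by PCTFREE (wasted if never updated)."""
    return table_bytes * pctfree / 100

# The example above: 10% PCTFREE on a 300 GB table
wasted = pctfree_overhead(300 * 1024**3, 10)
print(f"{wasted / 1024**3:.0f} GB reserved for updates")  # 30 GB
```

For insert-only warehouse tables, that reservation buys nothing, which is why PCTFREE=0 was suggested earlier in the thread.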

Good luck
Volker