09-06-2005 07:01 PM
database storage capacity planning
Is there an automated process to calculate table storage requirements/sizes? We have reviewed Oracle Note 10640.1, but the results do not give the correct values.
Thanks in advance!
09-06-2005 09:20 PM
Re: database storage capacity planning
Here are some simple rules I remember offhand:
- Use a different tablespace for tables and indexes. Also consider a pair of tablespaces for each different module.
- The datafile size you should use depends on several factors and is very difficult to estimate, but since you are talking about a data warehouse, you should size each datafile at 2 GB to start...
- As for tables, the important thing is to keep a few large extents. I have a little script that helps me control this for all tables:
select s.owner, s.segment_name, s.extents, s.max_extents
from dba_segments s
where s.extents >= 10 and
s.tablespace_name != 'SYSTEM' and
s.owner != 'SYS' and
s.segment_type = 'TABLE' and
not exists (select 1 from dba_tblexts e
where nvl( e.extents,0) + nvl( e.extents_tratados,0) + 2 >= nvl( s.extents,0) and
s.segment_name = e.segment_name and
s.owner = e.owner);
I created this dba_tblexts table to track how many extents I have verified and considered normal for each table: if a table allocates a new extent each month it may be normal, but if this happens every week, it may be because the "next extent" size in the table's storage definition is too small and you should increase it...
Anyway, if you have a table with many extents you cannot simply recreate it, but you can always export it (with COMPRESS=Y) and re-import it, which consolidates it into a single extent at import time.
For indexes it is much easier because you can recreate them: I normally rebuild indexes with more than 10 extents.
That's what I remember for now. Hope this'll help you!
Best Regards,
Eric Antunes
09-06-2005 09:28 PM
Re: database storage capacity planning
We need to establish storage capacity for tables and therefore require a formula to determine a table's size in MB/GB.
Thanks in advance!
09-06-2005 11:24 PM
Re: database storage capacity planning
select segment_name, sum(bytes) ebytes
from dba_extents
where owner not in ('SYS', 'SYSTEM') and
segment_type = 'TABLE'
group by segment_name
order by sum(bytes) desc
Best Regards,
Eric Antunes
09-06-2005 11:27 PM
Re: database storage capacity planning
select segment_name "Table Name", round( sum(bytes) / 1024 / 1024) "Table Size (Mb)"
from dba_extents
where owner not in ('SYS', 'SYSTEM') and
segment_type = 'TABLE'
group by segment_name
order by round( sum(bytes) / 1024 / 1024) desc
Eric Antunes
09-06-2005 11:38 PM
Re: database storage capacity planning
Thanks for the feedback. We already know how to determine the current size of a table. What we are attempting to do is project what a table's size would be in, say, 2006, given that the expected number of records in that table is "x". Is there no way of projecting a table's size from the number of expected rows, the average row length/size, and the db block size?
Thanks in advance!
09-07-2005 12:12 AM
Re: database storage capacity planning
Hmmm, I would simply assume a linear relation between row count and used space.
x rows now, taking y GB? Then for 2x rows anticipate 2y GB.
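That rule of thumb fits in a few lines; here is a minimal sketch (the figures are illustrative, not from this thread):

```python
def projected_size_gb(current_rows, current_gb, projected_rows):
    """Linearly extrapolate table size: space scales with row count."""
    return current_gb * projected_rows / current_rows

# 1,000,000 rows occupy 5 GB today; if the table doubles, expect 10 GB.
print(projected_size_gb(1_000_000, 5.0, 2_000_000))  # -> 10.0
```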
For data warehouse applications I would go much more aggressive on the file sizes than Eric suggests. How about starting at 20 GB?
Also, be sure to let Oracle manage as much as possible: reduce the number of tablespaces, reduce the number of devices, Stripe And Mirror Everything (SAME).
KISS: if a whole set of disks / storage quantity is going to be dedicated to the Oracle DB, then just hand it over. Don't try to help too much by dividing it up into morsels.
I assume you are not yet running 10g, but you may want to look ahead at the 10g new features to understand which direction Oracle is going (which I'm tempted to summarize as 'give Oracle the keys to the castle and it will take over from there' :-).
Just an opinion,
hth,
Hein.
09-07-2005 12:19 AM
Re: database storage capacity planning
For example, I have a table A with 1,097,651 rows occupying 264 MB. This gives 4158 rows per MB.
I know the average rows per year with the following query:
select max(trunc( creation_date)) - min(trunc( creation_date)) "Table Days",
count(*) "N.º Rows",
round( count(*) / (max( trunc( creation_date)) - min(trunc( creation_date))) * 365) "Avg. Rows / Year"
from A
For table A, I get the following result:
Table Days    N.º Rows    Avg. Rows / Year
1753          1097651     228547
So the table will grow by: 228547 (rows/year) / 4158 (rows/MB) = 55 MB per year!!
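A short script reproducing that arithmetic with the figures from this post:

```python
# Figures quoted in the post above.
rows_now = 1_097_651   # current row count of table A
size_mb_now = 264      # current size in MB
table_days = 1753      # age of the table in days

rows_per_mb = rows_now / size_mb_now          # how many rows fit in one MB
rows_per_year = rows_now / table_days * 365   # average insert rate per year
growth_mb_per_year = rows_per_year / rows_per_mb

print(round(rows_per_mb))           # -> 4158
print(round(rows_per_year))         # -> 228547
print(round(growth_mb_per_year))    # -> 55
```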
Eric Antunes
09-07-2005 12:35 AM
Re: database storage capacity planning
Just use estimates based on your experience and forecast the appropriate size for each table! Don't be shocked!
Then create your Locally Managed Tablespaces.
When it comes to tables, I would suggest you LEAVE THE STORAGE PARAMETERS OFF. If you are going with a simple "index tablespace" and "data tablespace", you might just as well go with a single tablespace and put everything in it (there are no performance gains to be had by separating the two structures -- none). Then just use auto-allocation and stripe the tablespace across as many devices as possible.
You can also use tools like TOAD to generate estimates based on your own inputs.
Again, sizing your own tables is not an exact science.
good luck!
kind regards
yogeeraj
09-07-2005 12:40 AM
Re: database storage capacity planning
Thanks for the feedback. Unfortunately, when requesting additional storage space one needs to provide solid proof of how one arrived at the requested amount. Therefore we need a solid formula to forecast storage capacity growth.
Thanks in advance!
09-07-2005 01:02 AM
Re: database storage capacity planning
Can you try Tom Kyte's show_space procedure to calculate the current table sizes and extrapolate?
===============
create or replace
procedure show_space
( p_segname in varchar2,
p_owner in varchar2 default user,
p_type in varchar2 default 'TABLE',
p_partition in varchar2 default NULL )
authid current_user
as
l_free_blks number;
l_total_blocks number;
l_total_bytes number;
l_unused_blocks number;
l_unused_bytes number;
l_LastUsedExtFileId number;
l_LastUsedExtBlockId number;
l_LAST_USED_BLOCK number;
procedure p( p_label in varchar2, p_num in number )
is
begin
dbms_output.put_line( rpad(p_label,40,'.') ||
p_num );
end;
begin
for x in ( select tablespace_name
from dba_tablespaces
where tablespace_name = ( select tablespace_name
from dba_segments
where segment_type = p_type
and segment_name = p_segname
and owner = p_owner )
and SEGMENT_SPACE_MANAGEMENT <> 'AUTO'
)
loop
dbms_space.free_blocks
( segment_owner => p_owner,
segment_name => p_segname,
segment_type => p_type,
partition_name => p_partition,
freelist_group_id => 0,
free_blks => l_free_blks );
end loop;
dbms_space.unused_space
( segment_owner => p_owner,
segment_name => p_segname,
segment_type => p_type,
partition_name => p_partition,
total_blocks => l_total_blocks,
total_bytes => l_total_bytes,
unused_blocks => l_unused_blocks,
unused_bytes => l_unused_bytes,
LAST_USED_EXTENT_FILE_ID => l_LastUsedExtFileId,
LAST_USED_EXTENT_BLOCK_ID => l_LastUsedExtBlockId,
LAST_USED_BLOCK => l_LAST_USED_BLOCK );
p( 'Free Blocks', l_free_blks );
p( 'Total Blocks', l_total_blocks );
p( 'Total Bytes', l_total_bytes );
p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );
p( 'Unused Blocks', l_unused_blocks );
p( 'Unused Bytes', l_unused_bytes );
p( 'Last Used Ext FileId', l_LastUsedExtFileId );
p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );
p( 'Last Used Block', l_LAST_USED_BLOCK );
end;
/
==================
Example output:
exec show_space('TEMP');
Free Blocks.............................
Total Blocks............................523
Total Bytes.............................4284416
Total MBytes............................4
Unused Blocks...........................8
Unused Bytes............................65536
Last Used Ext FileId....................1
Last Used Ext BlockId...................27559
Last Used Block.........................5
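Reading that output: used space is total blocks minus unused blocks, times DB_BLOCK_SIZE (8 KB here, which is consistent with the byte figures above):

```python
# Figures from the show_space('TEMP') output above, assuming an 8 KB block size.
block_size = 8192
total_blocks = 523
unused_blocks = 8

total_bytes = total_blocks * block_size               # matches "Total Bytes"
used_bytes = (total_blocks - unused_blocks) * block_size  # space below the high-water mark

print(total_bytes)  # -> 4284416
print(used_bytes)   # -> 4218880
```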
You still need other "internal" information on the predictable growth...
hope this helps!
kind regards
yogeeraj
09-07-2005 01:59 AM
Re: database storage capacity planning
We have worked out a possible formula as follows:
Table Size = (Projected Rows per Day x Physical Block Size) / Rows per Block
Comments, please.
Thanks in advance!
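Assuming the formula is meant per day and summed over a planning horizon, it can be sketched as below. The function name and input numbers are hypothetical; rows_per_block would come from your own block size and average row length:

```python
def projected_growth_bytes(rows_per_day, block_size, rows_per_block, days=365):
    """Daily block consumption extrapolated over a planning period.

    rows_per_day   -- projected insert rate
    block_size     -- physical DB block size in bytes
    rows_per_block -- average-length rows that fit in one block
    """
    blocks_per_day = rows_per_day / rows_per_block
    return blocks_per_day * block_size * days

# Illustrative numbers: 10,000 rows/day, 8 KB blocks, 40 rows per block.
gb = projected_growth_bytes(10_000, 8192, 40) / 1024**3
print(round(gb, 2))  # -> 0.7 (roughly 0.7 GB over a year)
```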
09-07-2005 02:10 AM
Re: database storage capacity planning
If you know the past, I'll know the future! ;)
That was the basic idea of my last post: you can only estimate future rows if you have a considerable sample of the past. And you should do this calculation on a yearly basis to avoid seasonal effects...
Best Regards,
Eric Antunes
09-07-2005 02:25 AM
Re: database storage capacity planning
I was never able to make these formulas work, and I found that the best way (for us) was to load a good-sized sample of data and retrieve information from the database on space used for both DATA and INDEX.
Then, if you know how many rows you will get, you can work out the storage requirement.
Obviously, based on this "row" information, you may want to give yourself some room for row-count mis-estimation.
Regards
Jean-Luc
09-07-2005 02:38 AM
Re: database storage capacity planning
As this is a data warehouse DB, you may investigate the possibility of using data compression.
cf. attachment
Regards
Jean-Luc
09-07-2005 02:39 AM
Re: database storage capacity planning
Regards
Jean-Luc
09-07-2005 02:48 AM
Re: database storage capacity planning
http://www.oracle.com/technology/products/bi/pdf/o9ir2_compression_performance_twp.pdf
Regards
Jean-Luc
09-08-2005 04:03 AM
Re: database storage capacity planning
Yes, table compression, available in Oracle since 9iR2, can help you a lot and may cut your estimates down a great deal if you haven't considered compression yet.
In a pure DW environment I would set PCTFREE=0 and enable compression at the table level. I've seen interesting ratios in my experience, as high as 80%. Table compression plus partitioning is a nice combination for DW.
But if you ever need to update such a table, be advised that UPDATE is not the way to go. That would most likely make your table grow in size, and even worse, it would take a very long time. I would pre-create a table with the desired column changes and then exchange partitions instead.
Then you can re-estimate your numbers. As for your formula, it sounds reasonable as a baseline, but as stated above, I would be more aggressive. You can add a certain percentage, say 20%, to that estimate to account for things you did not consider -- for example, deciding to create a new index on a "fat" table that was not in your estimates.
Regards,
Ariel
09-08-2005 07:07 AM
Re: database storage capacity planning
Very difficult, but here goes:
Adding to your calculation, you need additional space for:
- Indexes (surely used)
- PCTFREE free space, in case your table gets later updates that fill NULL fields or lengthen strings
- SNAPSHOT tables (materialized views), if used: those based on the calculated table will grow by about the same amount of space or more
- Additional SORT (TEMP) space that may be needed
The key is the dictionary structure of the table in question and knowing how many rows you expect per day.
Create a dummy table, fill 100K rows into it, and take measurements, especially of the additional structures like indexes and snapshots.
Two tips:
1) If you have influence over DB_BLOCK_SIZE, bigger blocks give you less block overhead, so your net storage will be a higher percentage. Since Oracle 9 you can use different block sizes per tablespace.
2) Do a hard calculation on your expected PCTFREE. 10% PCTFREE (the default in many cases) on a 300 GB table gives you 30 GB of wasted space if you do not really need it.
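The PCTFREE arithmetic in tip 2 is worth making explicit; a tiny helper (the function name is made up for illustration):

```python
def pctfree_overhead_gb(table_gb, pctfree):
    """Space PCTFREE reserves in each block (unavailable for inserts)."""
    return table_gb * pctfree / 100

# The 10% default on a 300 GB table reserves 30 GB.
print(pctfree_overhead_gb(300, 10))  # -> 30.0
```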
Good luck
Volker