<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: database storage capacity planning in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620352#M816280</link>
    <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;As this is a data warehouse DB, you may investigate the possibility of using data compression.&lt;BR /&gt;&lt;BR /&gt;cf. attachment&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jean-Luc</description>
    <pubDate>Wed, 07 Sep 2005 09:38:20 GMT</pubDate>
    <dc:creator>Jean-Luc Oudart</dc:creator>
    <dc:date>2005-09-07T09:38:20Z</dc:date>
    <item>
      <title>database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620338#M816266</link>
      <description>We are in the process of doing a capacity-planning forecast for our data warehouse. Does anyone have a formula or&lt;BR /&gt;automated process to calculate table storage requirements/sizes? We have reviewed Oracle Note:10640.1,&lt;BR /&gt;but the results do not give the correct values.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!</description>
      <pubDate>Wed, 07 Sep 2005 02:01:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620338#M816266</guid>
      <dc:creator>Edgar_8</dc:creator>
      <dc:date>2005-09-07T02:01:36Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620339#M816267</link>
      <description>Hi Edgar,&lt;BR /&gt;&lt;BR /&gt;Here are some simple rules I remember now:&lt;BR /&gt;&lt;BR /&gt;- Use a different tablespace for tables and indexes. Also consider a pair of tablespaces for each different module.&lt;BR /&gt;&lt;BR /&gt;- The datafile size you should use depends on several factors and is very difficult to estimate, but since you are talking about a data warehouse, start by sizing each datafile at 2 GB...&lt;BR /&gt;&lt;BR /&gt;- For tables, the important thing is to keep a few large extents. I have a little script that helps me control this for all tables:&lt;BR /&gt;&lt;BR /&gt;select s.owner, s.segment_name, s.extents, s.max_extents &lt;BR /&gt;from dba_segments s &lt;BR /&gt;where s.extents &amp;gt;= 10 and &lt;BR /&gt;s.tablespace_name != 'SYSTEM' and &lt;BR /&gt;s.owner != 'SYS' and &lt;BR /&gt;s.segment_type = 'TABLE' and &lt;BR /&gt;not exists (select 1 from dba_tblexts e &lt;BR /&gt;where nvl( e.extents,0) + nvl( e.extents_tratados,0) + 2 &amp;gt;= nvl( s.extents,0) and &lt;BR /&gt;s.segment_name = e.segment_name and &lt;BR /&gt;s.owner = e.owner); &lt;BR /&gt;&lt;BR /&gt;I created this dba_tblexts table to track how many extents I have verified and considered normal for each table: if an extent is created each month it may be normal, but if this happens each week, it may be because the "next extent" size in the table's storage definition is too small and you should increase it...&lt;BR /&gt;&lt;BR /&gt;Anyway, if you have a table with many extents, you cannot recreate it, but you can always export and import it with the "merge" option to create a single extent at import time.&lt;BR /&gt;&lt;BR /&gt;Indexes are much easier because you can recreate them: I normally recreate indexes with more than 10 extents.&lt;BR /&gt;&lt;BR /&gt;That's what I remember for now. Hope this helps you!&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;&lt;BR /&gt;Eric Antunes&lt;BR /&gt;</description>
      <pubDate>Wed, 07 Sep 2005 04:20:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620339#M816267</guid>
      <dc:creator>Eric Antunes</dc:creator>
      <dc:date>2005-09-07T04:20:42Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620340#M816268</link>
      <description>Hi Eric,&lt;BR /&gt;&lt;BR /&gt;We need to establish storage capacity for tables and therefore require a formula to determine a table's size in MB/GB.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!</description>
      <pubDate>Wed, 07 Sep 2005 04:28:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620340#M816268</guid>
      <dc:creator>Edgar_8</dc:creator>
      <dc:date>2005-09-07T04:28:52Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620341#M816269</link>
      <description>Ok, it is a simpler question than I thought:&lt;BR /&gt;&lt;BR /&gt;select segment_name, sum(bytes) ebytes &lt;BR /&gt;from dba_extents &lt;BR /&gt;where owner not in ('SYS', 'SYSTEM') and&lt;BR /&gt;segment_type = 'TABLE'&lt;BR /&gt;group by segment_name&lt;BR /&gt;order by sum(bytes) desc&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;&lt;BR /&gt;Eric Antunes</description>
      <pubDate>Wed, 07 Sep 2005 06:24:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620341#M816269</guid>
      <dc:creator>Eric Antunes</dc:creator>
      <dc:date>2005-09-07T06:24:55Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620342#M816270</link>
      <description>Here is the same script in MB:&lt;BR /&gt;&lt;BR /&gt;select segment_name "Table Name", round( sum(bytes) / 1024 / 1024) "Table Size (MB)" &lt;BR /&gt;from dba_extents &lt;BR /&gt;where owner not in ('SYS', 'SYSTEM') and&lt;BR /&gt;segment_type = 'TABLE'&lt;BR /&gt;group by segment_name&lt;BR /&gt;order by round( sum(bytes) / 1024 / 1024) desc&lt;BR /&gt;&lt;BR /&gt;Eric Antunes</description>
      <pubDate>Wed, 07 Sep 2005 06:27:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620342#M816270</guid>
      <dc:creator>Eric Antunes</dc:creator>
      <dc:date>2005-09-07T06:27:33Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620343#M816271</link>
      <description>Hi Eric,&lt;BR /&gt;&lt;BR /&gt;Thanks for the feedback. We already know how to determine the current size a table occupies. What we are attempting is to project what a table's size will be in, say, 2006, given that the number of records in that table is "x". Is there a known way of projecting a table's size using the expected number of rows, the average row length/size, and the db block size?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!</description>
      <pubDate>Wed, 07 Sep 2005 06:38:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620343#M816271</guid>
      <dc:creator>Edgar_8</dc:creator>
      <dc:date>2005-09-07T06:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620344#M816272</link>
      <description>&lt;BR /&gt;Hmmm, I would simply assume a linear relation between row count and used space.&lt;BR /&gt;x rows now, taking y GB? Then for 2x rows anticipate 2y GB.&lt;BR /&gt;&lt;BR /&gt;For data warehouse applications I would go much more aggressive on the file sizes than Eric suggests. How about starting at 20 GB?&lt;BR /&gt;&lt;BR /&gt;Also, be sure to have Oracle manage as much as possible: reduce the number of tablespaces, reduce the number of devices, Stripe And Mirror Everything (SAME).&lt;BR /&gt;&lt;BR /&gt;KISS: if a whole set of disks / storage quantity is going to be dedicated to the Oracle DB, then just hand it over. Don't try to help too much by dividing it up into morsels.&lt;BR /&gt;&lt;BR /&gt;I assume you are not yet running 10g, but you may want to look ahead at the 10g new features to understand which direction Oracle is going (which I'm tempted to summarize as 'give Oracle the keys to the castle and it will take over from there' :-).&lt;BR /&gt;&lt;BR /&gt;Just an opinion,&lt;BR /&gt;hth,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Wed, 07 Sep 2005 07:12:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620344#M816272</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-09-07T07:12:49Z</dc:date>
    </item>
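A minimal sketch of the linear-extrapolation rule suggested above (x rows taking y GB implies 2x rows take 2y GB); the numbers are hypothetical, not from the thread:

```python
def projected_size_gb(current_rows, current_size_gb, projected_rows):
    # Linear model: used space is assumed to scale with row count
    return current_size_gb * projected_rows / current_rows

# Hypothetical figures: 1M rows occupy 40 GB today, 2M rows expected next year
print(projected_size_gb(1_000_000, 40.0, 2_000_000))  # 80.0
```

This deliberately ignores per-block overhead and PCTFREE, which is why later posts in the thread add headroom on top of the linear estimate.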
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620345#M816273</link>
      <description>If you know the expected number of rows per year, for example, you can estimate for each table what its size will be in the future.&lt;BR /&gt;&lt;BR /&gt;For example, I have a table A with 1097651 rows occupying 264 MB. This gives 4158 rows per MB.&lt;BR /&gt;&lt;BR /&gt;I get the average rows per year with the following query:&lt;BR /&gt;&lt;BR /&gt;select max(trunc( creation_date)) - min(trunc( creation_date)) "Table Days", &lt;BR /&gt;count(*) "N.º Rows",&lt;BR /&gt;round( count(*) / (max( trunc( creation_date)) - min(trunc( creation_date))) * 365) "Avg. Rows / Year"&lt;BR /&gt;from A&lt;BR /&gt;&lt;BR /&gt;For table A, I get the following result:&lt;BR /&gt;&lt;BR /&gt;Table Days N.º Rows Avg. Rows / Year&lt;BR /&gt;1753 1097651 228547&lt;BR /&gt;&lt;BR /&gt;So the table will grow by: 228547 (rows/year) / 4158 (rows/MB) = 55 MB per year!!&lt;BR /&gt;&lt;BR /&gt;Eric Antunes</description>
      <pubDate>Wed, 07 Sep 2005 07:19:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620345#M816273</guid>
      <dc:creator>Eric Antunes</dc:creator>
      <dc:date>2005-09-07T07:19:33Z</dc:date>
    </item>
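The rows-per-MB arithmetic in the post above can be reproduced as a small script, using the same table A figures (1097651 rows, 264 MB, 1753 days):

```python
def rows_per_mb(row_count, size_mb):
    # Packing density observed on the existing table
    return round(row_count / size_mb)

def avg_rows_per_year(table_days, row_count):
    # Average insert rate, scaled to a full year
    return round(row_count / table_days * 365)

density = rows_per_mb(1_097_651, 264)        # 4158 rows per MB
yearly = avg_rows_per_year(1753, 1_097_651)  # 228547 rows per year
print(round(yearly / density))               # 55 MB of growth per year
```

The per-year scaling matters: sampling a full year of history smooths out the seasonal effects mentioned later in the thread.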
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620346#M816274</link>
      <description>hi edgar!&lt;BR /&gt;&lt;BR /&gt;Just use estimates based on your experience and forecast the appropriate size of each table! Don't be shocked!&lt;BR /&gt;&lt;BR /&gt;Then create your Locally Managed Tablespaces.&lt;BR /&gt;&lt;BR /&gt;When it comes to tables, I would suggest you LEAVE THE STORAGE PARAMETERS OFF. If you are going with a simple "index tablespace" and "data tablespace" -- you might just as well go with a single tablespace and put everything in it (no performance gains to be &lt;BR /&gt;had by separating the two structures -- none). Then just use auto-allocation and stripe the tablespace across as many devices as possible.&lt;BR /&gt;&lt;BR /&gt;You can also use tools like TOAD to generate estimates based on your own inputs.&lt;BR /&gt;&lt;BR /&gt;Again, sizing your own tables is not an exact science.&lt;BR /&gt;&lt;BR /&gt;good luck!&lt;BR /&gt;&lt;BR /&gt;kind regards&lt;BR /&gt;yogeeraj</description>
      <pubDate>Wed, 07 Sep 2005 07:35:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620346#M816274</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2005-09-07T07:35:56Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620347#M816275</link>
      <description>Hi Eric/Yogi,&lt;BR /&gt;&lt;BR /&gt;Thanks for the feedback. Unfortunately, when requesting additional storage space one needs to provide solid proof of how one arrived at the requested storage amount. Therefore we need a solid formula to forecast storage capacity growth.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!</description>
      <pubDate>Wed, 07 Sep 2005 07:40:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620347#M816275</guid>
      <dc:creator>Edgar_8</dc:creator>
      <dc:date>2005-09-07T07:40:14Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620348#M816276</link>
      <description>hi,&lt;BR /&gt;&lt;BR /&gt;can you try Tom Kyte's show_space procedure to calculate the current table sizes and extrapolate?&lt;BR /&gt;&lt;BR /&gt;===============&lt;BR /&gt;create or replace&lt;BR /&gt;procedure show_space&lt;BR /&gt;( p_segname in varchar2,&lt;BR /&gt;  p_owner   in varchar2 default user,&lt;BR /&gt;  p_type    in varchar2 default 'TABLE',&lt;BR /&gt;  p_partition in varchar2 default NULL )&lt;BR /&gt;authid current_user&lt;BR /&gt;as&lt;BR /&gt;    l_free_blks                 number;&lt;BR /&gt;&lt;BR /&gt;    l_total_blocks              number;&lt;BR /&gt;    l_total_bytes               number;&lt;BR /&gt;    l_unused_blocks             number;&lt;BR /&gt;    l_unused_bytes              number;&lt;BR /&gt;    l_LastUsedExtFileId         number;&lt;BR /&gt;    l_LastUsedExtBlockId        number;&lt;BR /&gt;    l_LAST_USED_BLOCK           number;&lt;BR /&gt;    procedure p( p_label in varchar2, p_num in number )&lt;BR /&gt;    is&lt;BR /&gt;    begin&lt;BR /&gt;        dbms_output.put_line( rpad(p_label,40,'.') ||&lt;BR /&gt;                              p_num );&lt;BR /&gt;    end;&lt;BR /&gt;begin&lt;BR /&gt;    for x in ( select tablespace_name&lt;BR /&gt;                 from dba_tablespaces&lt;BR /&gt;                where tablespace_name = ( select tablespace_name&lt;BR /&gt;                                            from dba_segments&lt;BR /&gt;                                           where segment_type = p_type&lt;BR /&gt;                                             and segment_name = p_segname&lt;BR /&gt;                                  and SEGMENT_SPACE_MANAGEMENT &amp;lt;&amp;gt; 'AUTO' )&lt;BR /&gt;             )&lt;BR /&gt;    loop&lt;BR /&gt;    dbms_space.free_blocks&lt;BR /&gt;    ( segment_owner     =&amp;gt; p_owner,&lt;BR /&gt;      segment_name      =&amp;gt; p_segname,&lt;BR /&gt;      segment_type      =&amp;gt; p_type,&lt;BR /&gt;      partition_name    =&amp;gt; p_partition,&lt;BR /&gt;      
freelist_group_id =&amp;gt; 0,&lt;BR /&gt;      free_blks         =&amp;gt; l_free_blks );&lt;BR /&gt;    end loop;&lt;BR /&gt;&lt;BR /&gt;    dbms_space.unused_space&lt;BR /&gt;    ( segment_owner     =&amp;gt; p_owner,&lt;BR /&gt;      segment_name      =&amp;gt; p_segname,&lt;BR /&gt;      segment_type      =&amp;gt; p_type,&lt;BR /&gt;          partition_name    =&amp;gt; p_partition,&lt;BR /&gt;      total_blocks      =&amp;gt; l_total_blocks,&lt;BR /&gt;      total_bytes       =&amp;gt; l_total_bytes,&lt;BR /&gt;      unused_blocks     =&amp;gt; l_unused_blocks,&lt;BR /&gt;      unused_bytes      =&amp;gt; l_unused_bytes,&lt;BR /&gt;      LAST_USED_EXTENT_FILE_ID =&amp;gt; l_LastUsedExtFileId,&lt;BR /&gt;      LAST_USED_EXTENT_BLOCK_ID =&amp;gt; l_LastUsedExtBlockId,&lt;BR /&gt;      LAST_USED_BLOCK =&amp;gt; l_LAST_USED_BLOCK );&lt;BR /&gt;&lt;BR /&gt;    p( 'Free Blocks', l_free_blks );&lt;BR /&gt;    p( 'Total Blocks', l_total_blocks );&lt;BR /&gt;    p( 'Total Bytes', l_total_bytes );&lt;BR /&gt;    p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );&lt;BR /&gt;    p( 'Unused Blocks', l_unused_blocks );&lt;BR /&gt;    p( 'Unused Bytes', l_unused_bytes );&lt;BR /&gt;    p( 'Last Used Ext FileId', l_LastUsedExtFileId );&lt;BR /&gt;    p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );&lt;BR /&gt;    p( 'Last Used Block', l_LAST_USED_BLOCK );&lt;BR /&gt;end;&lt;BR /&gt;/&lt;BR /&gt;==================&lt;BR /&gt;&lt;BR /&gt;Example output:&lt;BR /&gt;exec show_space('TEMP');&lt;BR /&gt;Free Blocks.............................&lt;BR /&gt;Total Blocks............................523&lt;BR /&gt;Total Bytes.............................4284416&lt;BR /&gt;Total MBytes............................4&lt;BR /&gt;Unused Blocks...........................8&lt;BR /&gt;Unused Bytes............................65536&lt;BR /&gt;Last Used Ext FileId....................1&lt;BR /&gt;Last Used Ext BlockId...................27559&lt;BR /&gt;Last Used 
Block.........................5&lt;BR /&gt;&lt;BR /&gt;You need to have other "internal" information on the predictable growth...&lt;BR /&gt;&lt;BR /&gt;hope this helps!&lt;BR /&gt;&lt;BR /&gt;kind regards&lt;BR /&gt;yogeeraj</description>
      <pubDate>Wed, 07 Sep 2005 08:02:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620348#M816276</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2005-09-07T08:02:07Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620349#M816277</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We have worked out a possible formula as follows:&lt;BR /&gt;&lt;BR /&gt;Table Size = (Projected Rows per Day x Physical Block Size) / Rows per Block&lt;BR /&gt;&lt;BR /&gt;Comments, please.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!</description>
      <pubDate>Wed, 07 Sep 2005 08:59:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620349#M816277</guid>
      <dc:creator>Edgar_8</dc:creator>
      <dc:date>2005-09-07T08:59:43Z</dc:date>
    </item>
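The proposed formula can be sketched as follows. Note that whole blocks are allocated, so the block count should be rounded up, and if the row figure is per day the result is growth per day. All numbers here are hypothetical, not taken from the thread:

```python
import math

def projected_table_bytes(projected_rows, block_size_bytes, rows_per_block):
    # Blocks are allocated whole, so round the block count up
    blocks = math.ceil(projected_rows / rows_per_block)
    return blocks * block_size_bytes

# Hypothetical: 10M rows, 8 KB blocks, 40 rows fit per block
print(projected_table_bytes(10_000_000, 8192, 40))  # 2048000000 bytes (~1.9 GB)
```

Rows-per-block can be measured from an existing sample load rather than computed from the average row length, which sidesteps block-header and PCTFREE overhead.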
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620350#M816278</link>
      <description>Hi Edgar,&lt;BR /&gt;&lt;BR /&gt;If you know the past, you'll know the future! ;)&lt;BR /&gt;&lt;BR /&gt;That was the basic idea of my last post: you can only estimate the future rows if you have a considerable sample of the past. And you should do this calculation on a yearly basis because you will avoid seasonal effects...&lt;BR /&gt;&lt;BR /&gt;Best Regards,&lt;BR /&gt;&lt;BR /&gt;Eric Antunes</description>
      <pubDate>Wed, 07 Sep 2005 09:10:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620350#M816278</guid>
      <dc:creator>Eric Antunes</dc:creator>
      <dc:date>2005-09-07T09:10:18Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620351#M816279</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;I was never able to make these formulas work, and I found that the best way (for us) was to upload a good-sized sample of data and retrieve information from the database regarding the space used for both DATA and INDEX.&lt;BR /&gt;&lt;BR /&gt;Then, if you know how many rows you will get, you can work out the storage requirement.&lt;BR /&gt;Obviously, based on this "row" information, you may give yourself some room for row-count mis-estimation.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jean-Luc</description>
      <pubDate>Wed, 07 Sep 2005 09:25:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620351#M816279</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2005-09-07T09:25:34Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620352#M816280</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;As this is a data warehouse DB, you may investigate the possibility of using data compression.&lt;BR /&gt;&lt;BR /&gt;cf. attachment&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jean-Luc</description>
      <pubDate>Wed, 07 Sep 2005 09:38:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620352#M816280</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2005-09-07T09:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620353#M816281</link>
      <description>Sorry, I forgot the attachment.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jean-Luc</description>
      <pubDate>Wed, 07 Sep 2005 09:39:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620353#M816281</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2005-09-07T09:39:26Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620354#M816282</link>
      <description>Also this link&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.oracle.com/technology/products/bi/pdf/o9ir2_compression_performance_twp.pdf" target="_blank"&gt;http://www.oracle.com/technology/products/bi/pdf/o9ir2_compression_performance_twp.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Jean-Luc</description>
      <pubDate>Wed, 07 Sep 2005 09:48:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620354#M816282</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2005-09-07T09:48:43Z</dc:date>
    </item>
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620355#M816283</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;Yes, table compression, available in Oracle since 9iR2, can help you a lot and may cut down your estimates a great deal if you haven't considered compression yet.&lt;BR /&gt;In a pure DW environment I would set PCTFREE=0 and enable compression at the table level. I've seen interesting ratios in my experience, as high as 80%. Table compression plus partitioning is a nice combination for DW.&lt;BR /&gt;But if you ever need to update such a table, be advised that UPDATE is not the way to go. That would most likely make your table grow in size, and even worse, it will take a very long time. I would pre-create a table with the desired column changes and then exchange partitions instead.&lt;BR /&gt;&lt;BR /&gt;Then you can re-estimate your numbers. As for your formula, it sounds reasonable as a baseline, but as stated above, I would be more aggressive. You can add a certain percentage, say 20%, to that estimation to account for unplanned variations. For example, you might decide to create a new index on a "fat" table, which you did not consider in your estimations.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Ariel</description>
      <pubDate>Thu, 08 Sep 2005 11:03:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620355#M816283</guid>
      <dc:creator>Ariel Cary</dc:creator>
      <dc:date>2005-09-08T11:03:49Z</dc:date>
    </item>
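The combined effect of a compression ratio and a safety margin, as suggested above, is a simple multiplication; this sketch uses hypothetical figures, with a ratio of 0.2 corresponding to the 80% saving cited:

```python
def estimate_with_headroom(raw_gb, compression_ratio=1.0, headroom=0.20):
    # compression_ratio = compressed size / raw size;
    # headroom covers unplanned growth such as a new index on a fat table
    return raw_gb * compression_ratio * (1 + headroom)

print(estimate_with_headroom(300.0, compression_ratio=0.2))  # 72.0 GB
```

Real ratios vary widely with data redundancy, so a measured ratio from a sample load is a safer input than an assumed one.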
    <item>
      <title>Re: database storage capacity planning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620356#M816284</link>
      <description>Uh, wait, I'll get my crystal ball ...&lt;BR /&gt;&lt;BR /&gt;Very difficult, but here goes:&lt;BR /&gt;&lt;BR /&gt;Adding to your calculation, you need additional space for&lt;BR /&gt;- Indexes (surely used)&lt;BR /&gt;- PCTFREE free space, in case your table gets later updates that fill NULL fields or store longer strings&lt;BR /&gt;- SNAPSHOT tables (materialized views), if used: those based on the calculated table will grow by about the same amount of space or more&lt;BR /&gt;- Additional SORT (TEMP) space may be needed&lt;BR /&gt;&lt;BR /&gt;The key is the dictionary structure of the table in question and the knowledge of how many rows you expect per day.&lt;BR /&gt;Create a dummy table, fill 100K rows into it and take measurements, especially of the additional items like indexes and snapshots.&lt;BR /&gt;&lt;BR /&gt;Two tips:&lt;BR /&gt;1) If you can influence the DB_BLOCK_SIZE, bigger blocks will give you less block overhead, so your net storage will be a higher percentage. Since Oracle 9 you can use different block sizes per tablespace.&lt;BR /&gt;2) Do a hard calculation on your expected PCTFREE. 10% PCTFREE (the default in many cases) on a 300GB table will give you 30GB of wasted space if you do not really need it.&lt;BR /&gt;&lt;BR /&gt;Good luck&lt;BR /&gt;Volker</description>
      <pubDate>Thu, 08 Sep 2005 14:07:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/database-storage-capacity-planning/m-p/3620356#M816284</guid>
      <dc:creator>Volker Borowski</dc:creator>
      <dc:date>2005-09-08T14:07:45Z</dc:date>
    </item>
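The PCTFREE tip above is worth quantifying, since the reserved fraction comes straight off the usable capacity; a one-line sketch of the arithmetic:

```python
def pctfree_reserved_gb(table_gb, pctfree):
    # PCTFREE holds back this fraction of every block for future row updates
    return table_gb * pctfree / 100

print(pctfree_reserved_gb(300, 10))  # 30.0 GB, matching the 300 GB / 10% example
```

For insert-only warehouse tables, setting PCTFREE to 0 (as suggested earlier in the thread) recovers that entire reservation.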
  </channel>
</rss>

