- dump table then load data back in causes errors
Operating System - HP-UX
10-16-2004 05:39 PM
We have a backup database in which we monitor the inbound and outbound traffic on our network interface.
Here is the problem: this database gets rebuilt every week from our production db, so these tables are overwritten with data from our prod db, and we do not want that.
We would like to dump these tables to a flat file, then, when the database comes back up, truncate the table and load the data back in.
The problem is that when we try this, we get a "Commit point reached - logical record count" message and not all the data gets loaded. What does "commit point reached" mean?
Second, when it does load the data, the next time our monitor script runs and loads the data again, it does not come back in order.
What would be the proper procedure for doing this?
SQLPLUS=/opt/app/oracle/product/8.1.7/bin/sqlplus
SQLFILE=/home/log/lanmon_dump.sql
LANMONSPOOL=/home/log/restore_lanmon.csv
TABLESPACE_MON=/home/log/restore_tblmon.csv
# Create SQL File
echo "set heading off" >>$SQLFILE
echo "set feedback off" >>$SQLFILE
echo "set pagesize 40" >>$SQLFILE
echo "set linesize 400" >>$SQLFILE
echo "set term off" >>$SQLFILE
echo "" >>$SQLFILE
echo "spool $LANMONSPOOL" >>$SQLFILE
echo "" >>$SQLFILE
echo "select DDMMYY ||','|| TIME ||','|| IN_IF0 ||','|| OUT_IF0 ||','|| IN_IF1 ||','|| OUT_IF1" >>$SQLFILE
echo " from lanmon" >>$SQLFILE
echo "" >>$SQLFILE
echo " /" >>$SQLFILE
echo " spool off" >>$SQLFILE
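For the load step back into the table, a SQL*Loader control file generated the same way might look like this. This is a sketch only: the control-file path under /tmp and the TRUNCATE choice are assumptions, and the column list is taken from the select above.

```shell
# Sketch: generate a SQL*Loader control file to load the spooled CSV back.
# TRUNCATE empties the table before loading, matching the intended procedure.
CTLFILE=/tmp/lanmon_load.ctl
cat > $CTLFILE <<EOF
LOAD DATA
INFILE '/tmp/restore_lanmon.csv'
TRUNCATE
INTO TABLE lanmon
FIELDS TERMINATED BY ','
(DDMMYY, TIME, IN_IF0, OUT_IF0, IN_IF1, OUT_IF1)
EOF
# Then, roughly: sqlldr userid=... control=$CTLFILE
```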
***********************************
Our actual data... (shortened)
select * from lanmon
DDMMYY TIME IN_IF0 OUT_IF0 IN_IF1
---------- ---------- -------------------- -------------------- ----------------
16/10/04 18:00:00 362308 0 38588
16/10/04 18:30:01 186700 0 3816
16/10/04 23:30:00 8622 0 7860
17/10/04 00:00:00 100093 46 5094
14/10/04 15:00:01 0 0 0
14/10/04 17:30:01 1279906 0 1794
14/10/04 18:00:01 1278768 0 6456
14/10/04 23:00:01 1280086 0 3666
14/10/04 23:30:01 1279906 0 6234
15/10/04 00:00:01 1279008 0 5322
15/10/04 00:30:01 3967259 46 2166
As you can see the dates do not follow each other.
This is our comma-separated file for input... (shortened)
15/10/04,01:30:01,3180,0,1488,4372
15/10/04,02:00:00,3300,0,1662,4549
15/10/04,02:30:00,3240,0,1200,4008
15/10/04,03:00:00,3360,0,1722,4666
**********************************
Solved! Go to Solution.
3 REPLIES
10-16-2004 08:08 PM
Solution
Commit point reached means you can't put any more records on hold without issuing a commit. It's set in init.ora.
You can change this with a special init.ora if you wish.
To copy files over, you might find it easier to just copy the files over and do what's called a database clone.
Our DBA copies the .dbf and index files over and then runs a script changing the database name to the test database instance name.
Seems a little more straightforward.
There are other ideas on database replication on the many Oracle websites: otn.oracle.com, technet.oracle.com, etc.
If you have Oracle support, they may be able to give you other ways to duplicate data.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
10-17-2004 05:21 AM
Re: dump table then load data back in causes errors
>> then input the data back in
HOW? It sounds like you are using SQL*Loader, which would indeed be the right tool. But it is critical to share the command line/option file with us.
You may also want to consider export/import as data tools, and with Oracle 9i, it can be interesting to 'unhook' a tablespace from an old db, and hook it back up to a new db. No data move, just metadata move.
>> "Commit point reached - logical record count" and not all the data gets inputted. What does commit point reached mean?
That is supposed to be a SQL*Loader informational message, which should have no effect on the actual data loaded, other than showing how far it got. Switch on SILENT=FEEDBACK to suppress it. Read up on ROWS and ARRAYSIZE to control it.
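A hedged sketch of what that command line could look like: the sqlldr path is assumed to sit next to the sqlplus binary from the script above, and the user/password and control-file path are placeholders, not from the thread.

```shell
# Hypothetical sqlldr invocation (credentials and paths are placeholders).
SQLLDR=/opt/app/oracle/product/8.1.7/bin/sqlldr
# rows=5000: commit every 5000 rows, so far fewer "Commit point reached" lines
# silent=feedback: suppress the per-commit feedback message entirely
CMD="$SQLLDR userid=monitor/secret control=/tmp/lanmon_load.ctl rows=5000 silent=feedback"
echo "$CMD"
```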
>> Second when it does input the data, the next time our monitor script runs to imput the data again it does not put it in order.
Relational databases do NOT have an order (of arrival) requirement. Any order you perceive without asking for it explicitly is accidental and cannot be relied upon... no matter how predictable it seems. Actually, it IS pretty predictable. The DB does NOT go out of its way to create disorder, generally filling rows in order of arrival = time order. But it is allowed to stick a fresh row in any empty space it likes.
If you want guaranteed order, you will have to request it. Just add "ORDER BY TIME" to your select.
Free advice:
Those 'echo .. >> file' lines look scary.
What if the file already exists?
Consider:
cat > $SQLFILE <<EOF
set heading...
set feedbac...
EOF
set pagesize 40? Consider: set pagesize 0
Cheers,
Hein.
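Putting those suggestions together, the dump script might be rewritten along these lines. A sketch only: paths are shortened to /tmp, and the extra DDMMYY sort column is an addition; if DDMMYY is stored as a DD/MM/YY string it sorts lexically, so a TO_DATE conversion may be needed for true chronological order.

```shell
# Sketch: one heredoc (truncates the file instead of appending),
# pagesize 0, and an explicit ORDER BY so the spool order is guaranteed.
SQLFILE=/tmp/lanmon_dump.sql
LANMONSPOOL=/tmp/restore_lanmon.csv
cat > $SQLFILE <<EOF
set heading off
set feedback off
set pagesize 0
set linesize 400
set term off
spool $LANMONSPOOL
select DDMMYY ||','|| TIME ||','|| IN_IF0 ||','|| OUT_IF0 ||','|| IN_IF1 ||','|| OUT_IF1
  from lanmon
 order by DDMMYY, TIME
/
spool off
EOF
```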
02-23-2005 01:05 AM
Re: dump table then load data back in causes errors
Thanks