11-02-2000 08:59 AM
Archive of Quickly growing log files
I would like to keep the information stream uninterrupted while either copying or moving the older info off to tape or another file. I have tried using cp -p, but information written between the time the copy starts and the time it completes is lost.
Thank you for your help.
Rich
11-02-2000 09:06 AM
Re: Archive of Quickly growing log files
Suppose your two log files are log1 and log2.
Create a symbolic link to log1: ln -s log1 log
Have your application write through "log".
When you need to back up:
rm log
ln -s log2 log
then back up log1 to tape.
Regards,
Patrice.
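To make the steps above concrete, here is a minimal sketch of the symlink-swap rotation. The file names log1, log2, and log come from the post; the sample entries and the scratch directory are illustrative.

```shell
# Sketch of the symlink-swap rotation from the post, run in a scratch directory.
cd "$(mktemp -d)"
echo "entry 1" > log1
ln -s log1 log              # writers append through the link "log"
echo "entry 2" >> log       # lands in log1
# rotation: repoint the link, then archive the old file
rm log
ln -s log2 log
echo "entry 3" >> log       # appending through the dangling link creates log2
cat log1                    # log1 is now quiescent; back it up to tape
```

Note this only works if the writer reopens "log" for each write; a process holding the old file descriptor open keeps writing to log1 regardless of where the link points.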
11-02-2000 09:21 AM
Re: Archive of Quickly growing log files
Patrice's suggestion will work provided your processes do not hold the logfile open for writing all the time.
I would suggest, though, that instead of doing
rm log
ln -s log2 log
you do 'ln -sf log2 log',
as this will reduce the time during which 'log' does not exist.
Regards,
John
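A small sketch of the refinement, using the same file names as the posts above (readlink here is just for inspection and is assumed to be available):

```shell
# Replace the link in one command instead of rm + ln,
# shrinking the window in which "log" is missing.
cd "$(mktemp -d)"
touch log1 log2
ln -s log1 log
ln -sf log2 log             # remove-and-recreate in a single command
readlink log                # now points at log2
```

Note that ln -sf still unlinks and recreates, so the window shrinks but does not vanish; the fully atomic idiom is ln -s log2 log.new && mv log.new log, since rename(2) replaces the name atomically.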
11-02-2000 09:53 AM
Re: Archive of Quickly growing log files
Here's a mechanism you can try. Assume that your logging script is called "mylogger". Execute it as follows:
# mylogger | sh -c 'while read LINE ; do echo "$LINE" >> /tmp/log ; done'
Let it run for a while, and then do:
# mv /tmp/log /tmp/log.old
...and mylogger continues to run: because the loop reopens /tmp/log for every line, new output creates a fresh /tmp/log, and you have TWO log files.
Try this with a simple script standing in for mylogger:
#!/usr/bin/sh
while true
do
  date
  sleep 1
done
...JRF...
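The rotate-while-running behaviour can be seen with a short self-contained run. This sketch replaces the endless logger with a two-line producer, and it assumes a sleep that accepts decimal seconds (GNU coreutils); the file names are illustrative.

```shell
# Because ">> log" reopens the file for every line, an mv mid-stream
# splits the output across two files without stopping the producer.
cd "$(mktemp -d)"
{ echo "first"; sleep 1; echo "second"; } |
  sh -c 'while read LINE ; do echo "$LINE" >> log ; done' &
sleep 0.5                   # let "first" be written
mv log log.old              # rotate while the producer is still running
wait                        # "second" arrives afterwards, creating a fresh log
```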
11-03-2000 01:59 AM
Re: Archive of Quickly growing log files
What about having your program write its log into a named pipe (see mkfifo) and having another process read from the same named pipe and write to one or many different files? The named pipe would allow some write buffering while the reading process swaps output files.
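A minimal sketch of the relay: the reader drains the FIFO into a regular file that can be rotated independently of the writer. The names logpipe and current.log are illustrative.

```shell
# Named-pipe relay: writer and reader meet at the FIFO;
# only the reader touches the real log file.
cd "$(mktemp -d)"
mkfifo logpipe
cat logpipe > current.log &         # reader; swap its output file to rotate
printf 'msg 1\nmsg 2\n' > logpipe   # the program writes to the FIFO
wait                                # reader exits on the writer's EOF
cat current.log
```

In practice the reader would be a loop that periodically closes current.log, renames it, and reopens a fresh one, while writers block briefly on the FIFO instead of losing data.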
11-03-2000 05:49 AM
Re: Archive of Quickly growing log files
Handle, for example, SIGUSR1 (signal 16) in your program: on receipt, close the log file, rename it, and reopen it again.
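A shell stand-in for the signal-driven rotation, where the reopen is implicit because each echo reopens the file (a long-running C program would close(), rename, and open() in its handler). The script name logger.sh, the file name app.log, and the decimal sleep (GNU coreutils) are assumptions of this sketch.

```shell
# Logger that rotates its file when it receives SIGUSR1.
cd "$(mktemp -d)"
cat > logger.sh <<'EOF'
trap 'mv app.log app.log.old' USR1   # on SIGUSR1: move the file aside
i=0
while [ "$i" -lt 15 ]; do
  echo "tick $i" >> app.log          # reopened per line, so rotation is safe
  i=$((i+1))
  sleep 0.1
done
EOF
sh logger.sh &
sleep 0.5
kill -USR1 $!                        # ask the running logger to rotate
wait
```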