cp command and large files
03-31-2004 04:45 AM
1. Shut down the database (shutdown immediate).
2. Initiate the cp commands--16 run at the same time.
3. Bring the database up while the cp's are still running--otherwise, our database would be down for about 7-8 hours while the backup runs.
Is this going to give us a valid backup? My question is: if you initiate a cp command on HP-UX and someone makes changes to that file while it is copying, will the copy be corrupted? Or do we get a snapshot of the file when we initiate the cp command, so the changes won't show up in the copy?
Would RMAN be faster?
Hope someone can help.
Regards,
Lori
03-31-2004 05:01 AM
Solution
cp in no way "freezes" a file. You need to completely rethink your process: either use hot backups, which RMAN can do for you, or enhance your existing scheme with the magic word you mentioned, "snapshots". You can shut down your database and, using the OnlineJFS mount option snapof=, create a snapshot mountpoint of the database filesystems. This takes only a few seconds. Then restart the database and cp or back up the snapshotted versions of the files; now you have a safe, cold backup with almost all the uptime of a hot backup.
You need to do something, because right now you have nothing of any value whatsoever.
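A rough sketch of that snapshot approach, assuming OnlineJFS is installed; all volume names, mount points, and paths below are hypothetical examples, not taken from the thread:

```shell
# Hypothetical sketch of a snapshot-based cold backup on HP-UX with OnlineJFS.
# Assumes /dev/vg01/lvol1 holds the datafiles, mounted on /oradata, and that
# /dev/vg01/snaplv is a spare logical volume to use as the snapshot store.

# 1. Stop the database so the filesystem is quiescent.
su - oracle -c 'echo "shutdown immediate;" | sqlplus -s "/ as sysdba"'

# 2. Create a snapshot of the datafile filesystem (takes only seconds).
mount -F vxfs -o snapof=/dev/vg01/lvol1 /dev/vg01/snaplv /orasnap

# 3. Restart the database immediately; downtime is just the steps above.
su - oracle -c 'echo "startup;" | sqlplus -s "/ as sysdba"'

# 4. Copy the frozen images at leisure, then drop the snapshot.
cp /orasnap/*.dbf /backup/
umount /orasnap
```

The snapshot only has to absorb blocks changed while it is mounted, so the snapshot volume can be much smaller than the filesystem itself.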
03-31-2004 05:02 AM
Re: cp command and large files
Have you considered export? You could do an export.
Another backup strategy would be to put all the tablespaces into hot backup mode:
alter tablespace TABLESPACE_NAME begin backup;
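A sketch of the full hot-backup sequence that statement belongs to (tablespace and file names are made-up examples; note the database must be running in ARCHIVELOG mode, and every begin backup needs a matching end backup):

```shell
# Hypothetical hot-backup sequence for one tablespace (names are examples).
sqlplus -s "/ as sysdba" <<'EOF'
alter tablespace USERS begin backup;
EOF

# Copy the tablespace's datafiles while the database stays open.
cp /oradata/users01.dbf /backup/

sqlplus -s "/ as sysdba" <<'EOF'
alter tablespace USERS end backup;
-- Force a log switch so the redo generated during the copy gets archived.
alter system archive log current;
EOF
```

The archived redo logs from the backup window must be kept with the datafile copies, or the hot backup cannot be recovered.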
03-31-2004 05:08 AM
Re: cp command and large files
You should maybe watch out that so many concurrent cp's don't become a bottleneck.
NFS is certainly slowing things down. Use FTP if you can.
RMAN will be faster for incrementals. If you do a full backup with RMAN, it will take approximately the same time, but incrementals will only back up modified Oracle blocks... Maybe a full backup at the beginning of the month and incrementals on the other weekends would do.
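That monthly schedule could be sketched in RMAN roughly like this (a minimal sketch; connection details and retention policy are omitted, and the schedule split between the two commands is an assumption about how one would script it):

```shell
# Hypothetical RMAN sketch: a level-0 (full) backup at the start of the
# month, level-1 incrementals (changed blocks only) on the other weekends.
rman target / <<'EOF'
# First weekend of the month:
backup incremental level 0 database;
EOF

rman target / <<'EOF'
# Remaining weekends -- only blocks changed since the last backup:
backup incremental level 1 database;
EOF
```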
"Reality is just a point of view." (P. K. D.)
03-31-2004 05:46 AM
Re: cp command and large files
Regards,
Lori
03-31-2004 05:52 AM
Re: cp command and large files
Regards, Clay
03-31-2004 11:24 AM
Re: cp command and large files
Let's focus on the opening lines for a moment:
" oracle database (about 10G total)"
" 2. Initiate cp commands--16 run at the same time"
" down for about 7-8 hours while the backup runs. "
Is that really only 10 gigabytes?
If so, your speed is only 416 KB/sec! (10 * 1024 * 1024 / (7 * 60 * 60))
Surely that is totally unacceptable!
Why not invest in some extra local storage, clone the DB to local disks first, and then push it over the wire later?
Any IO system worth talking about does 20MB/sec (in and out at the same time), at which speed 10GB takes about 10 minutes. On my tester here, with a single EVA 5000, we run upwards of 300MB/sec. Half in, half out would put a copy at around a minute or so for 10GB.
So... what is the real DB size to be copied?
What does the IO landscape roughly look like?
How many HBA (speeds?), How many spindles? Anything to spare for a staging area?
Be sure to use careful placement making sure the write activity does not disturb the seeks for the read.
Be sure not to have too many concurrent streams for too few disks/filesystems to reduce fragmentation and disk head thrashing.
A few separate, un-striped, un-bound disks might be ideal for the staging area.
Have you considered piping through a compression tool to reduce the output IO volume? Oracle databases frequently are heavily compressible, like 2x or better.
Good luck!
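The compression suggestion is easy to demonstrate; a small sketch using gzip as a stand-in for the classic compress, with a throwaway file of repetitive blocks playing the part of a datafile (all names here are made up for the demo):

```shell
# Sketch: pipe the backup copy through a compressor to cut output IO volume.
# gzip stands in for HP-UX compress; /tmp paths are purely for the demo.
dd if=/dev/zero of=/tmp/demo.dbf bs=1024 count=1024 2>/dev/null  # 1 MB "datafile"
gzip -c /tmp/demo.dbf > /tmp/demo.dbf.gz                         # compressed copy
wc -c /tmp/demo.dbf /tmp/demo.dbf.gz                             # compare sizes
```

Real datafiles won't shrink as dramatically as zero-filled blocks, but empty and repetitive space is common in database files, which is where the 2x-or-better ratio comes from.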
04-01-2004 01:43 AM
Re: cp command and large files
Here's a relatively easy way to do this:
Assuming that you have your Oracle database on LVM disks....
If you don't already have it, get Mirror/UX.
Then add another cheap disk to your system and mirror the logical volumes onto the new disk. Create a shell script that will:
a) shut down Oracle
b) split the mirror
c) restart Oracle
d) mount the mirror copies on the new disk with a different mount point
Then you can copy the files off the mirror copy for as long as it takes, while the database is running. When the copy is done (put the copy in the same script), unmount the mirror copies, then rejoin and resync the mirrors - you'll be ready for the next backup!
-greg
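The split-mirror script above might look roughly like this, using the HP-UX LVM commands lvsplit and lvmerge (volume and mount-point names are hypothetical examples; lvsplit names the split copy with a "b" suffix by default):

```shell
# Hypothetical Mirror/UX split-mirror backup sketch (names are examples).
su - oracle -c 'echo "shutdown immediate;" | sqlplus -s "/ as sysdba"'

# Split the mirrored logical volume; creates /dev/vg01/lvol1b.
lvsplit /dev/vg01/lvol1

# Restart the database -- downtime is only the shutdown + split.
su - oracle -c 'echo "startup;" | sqlplus -s "/ as sysdba"'

# Check and mount the split copy, then back it up for as long as it takes.
fsck -F vxfs /dev/vg01/lvol1b
mount -r /dev/vg01/lvol1b /orabackup
cp /orabackup/*.dbf /backup/

# Rejoin and resynchronize the mirrors, ready for the next backup.
umount /orabackup
lvmerge /dev/vg01/lvol1b /dev/vg01/lvol1
```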
04-01-2004 02:19 AM
Re: cp command and large files
Actually, since you're only talking about 10GB, you may not even need to worry about Mirror/UX if you don't have it and don't want the expense. Just add another disk (preferably on a separate SCSI controller, if available), and when you shut the database down, copy the .dbf files over to the other disk while the database is down. Then restart Oracle and back up the copies of the .dbf files. It takes 5 minutes on my server to copy a 2GB file, but that's with the Oracle filesystem mounted with mincache=direct,convosync=direct (minimal vxfs buffering). It would be a good bit faster if I remounted the filesystem to use vxfs buffers. So, maybe 10-15 minutes to copy 10GB? It would be worth a try for a good cold backup. RMAN is good, but if you've got the time and the opportunity, it's always good to have a cold backup to work from.