Operating System - HP-UX

cp command and large files

 
SOLVED
Lori Downing
New Member

cp command and large files

Hello, we do a cold backup of our oracle database (about 10G total) every Saturday. We use the cp command over an NFS mount on an external network. What we do is this:
1. Shutdown immediate of the db
2. Initiate cp commands--16 run at the same time
3. Bring db up while cp's are still going--otherwise, our db would be down for about 7-8 hours while the backup runs.

Is this going to give us a valid backup? My question is: if you initiate a cp command in HP-UX and someone makes changes to that file while it is copying, will the copy be corrupted? Or do we get a snapshot of the file when we initiate the cp command, so the changes won't show up in the copy?

Would RMAN be faster?

Hope someone can help.

Regards,
Lori
8 REPLIES
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: cp command and large files

You have a backup that is perfectly valid in the sense that the data was copied byte-for-byte and at the same time it is perfectly useless -- you can't restore a database with it.

Cp in no way "freezes" a file. You need to completely rethink your process -- either go with hot backups, which RMAN can do for you, or enhance your existing scheme with the magic word you mentioned: "snapshots". You can shut down your database and, using the OnlineJFS mount option snapof=, create a snapshot mountpoint of the database filesystems. This process takes only a few seconds. Next, restart the database and cp or back up the snapshot versions of the files; now you have a safe, cold backup with almost all the uptime of a hot backup.
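
A rough sketch of that sequence (logical volume names, mount points, and the Oracle login are assumptions -- adjust for your own volume group layout, with a spare snapshot LV sized to hold the changes made during the copy):

# Cold backup via OnlineJFS snapshot -- downtime is only steps 1-3
# 1. Stop the database cleanly
su - oracle -c "echo 'shutdown immediate;' | sqlplus -s '/ as sysdba'"

# 2. Snapshot the database filesystem (takes seconds);
#    /dev/vg01/lvsnap is a spare LV reserved to hold changed blocks
mount -F vxfs -o snapof=/dev/vg01/lvora /dev/vg01/lvsnap /snap

# 3. Restart the database right away
su - oracle -c "echo 'startup' | sqlplus -s '/ as sysdba'"

# 4. Copy the frozen image at leisure, then release the snapshot
cp /snap/oradata/*.dbf /backup_nfs/
umount /snap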

You need to do something because right now you have nothing of any value whatsoever.
If it ain't broke, I can fix that.
Nicolas Dumeige
Esteemed Contributor

Re: cp command and large files

A copy of a dbf which is being accessed is unusable. A straight copy can only be done when the instance is down.

Have you considered an export?

Another backup strategy would be to put each tablespace into backup mode before copying its datafiles:

alter tablespace TABLESPACE_NAME begin backup;
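
Note that every begin backup must be paired with an end backup after the OS-level copy, and the database must be in ARCHIVELOG mode for the result to be recoverable. A minimal sketch per tablespace (tablespace names and paths are placeholders):

sqlplus -s "/ as sysdba" <<EOF
alter tablespace USERS begin backup;
EOF

# copy that tablespace's datafiles while it is in backup mode
cp /oradata/users01.dbf /backup_nfs/

sqlplus -s "/ as sysdba" <<EOF
alter tablespace USERS end backup;
alter system switch logfile;
EOF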



All different, all Unix
Fred Ruffet
Honored Contributor

Re: cp command and large files

Your cp copies will effectively not be valid (in the Oracle Support sense). Whether they will be usable or not depends on how long cp keeps running while users update the DB (i.e., whether the dbf files plus redo will be enough to recover).

You should also watch out that so many concurrent cp processes don't become a bottleneck themselves.

NFS is certainly slowing things down. Use FTP if you can.

RMAN will be faster on incrementals. If you do a full backup with RMAN, it will take approximately the same time, but incrementals will only back up modified Oracle blocks... Maybe a full (level 0) backup at the beginning of the month and incrementals on the other weekends would do.
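
Roughly, that scheme would look like this (run as the oracle user; the backup destination is an assumption -- RMAN writes to its default disk location here):

# Beginning of the month: level 0 -- a full copy that incrementals build on
rman target / <<EOF
backup incremental level 0 database;
EOF

# Other weekends: level 1 -- only Oracle blocks changed since level 0
rman target / <<EOF
backup incremental level 1 database;
EOF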
--

"Reality is just a point of view." (P. K. D.)
Lori Downing
New Member

Re: cp command and large files

Thank you everyone for your replies. I had a feeling that this wasn't going to work, even though I had 2 gurus tell me otherwise. We will get another solution in place ASAP.

Regards,
Lori
A. Clay Stephenson
Acclaimed Contributor

Re: cp command and large files

I would think that the gurus who advised you are, by definition, not.

Regards, Clay
If it ain't broke, I can fix that.
Hein van den Heuvel
Honored Contributor

Re: cp command and large files



Let's focus on the opening lines for a moment:
" oracle database (about 10G total)"
" 2. Initiate cp commands--16 run at the same time"
" down for about 7-8 hours while the backup runs. "

Is that really only 10 gigabytes?
If so, your speed is only about 416 KB/sec: 10 * 1024 * 1024 KB / (7 * 60 * 60 sec).
Surely that is totally unacceptable!

Why not invest in some extra local storage, clone the DB to local disks first, and push them over the wire later!

Any IO system worth talking about does 20MB/sec (in and out at the same time), at which speed 10GB takes about 10 minutes. On my test system here, with a single EVA 5000, we run upwards of 300MB/sec. Half in, half out would put a copy at around a minute or so for 10GB.

So... what is the real DB size to be copied?

What does the IO landscape roughly look like?

How many HBAs (and at what speeds)? How many spindles? Anything to spare for a staging area?

Be sure to use careful placement, making sure the write activity does not disturb the seeks for the reads.

Be sure not to have too many concurrent streams for too few disks/filesystems, to reduce fragmentation and disk head thrashing.
A few separate, un-striped, un-bound disks might be ideal for the staging area.

Have you considered piping through a compression tool to reduce the output IO volume? Oracle databases are frequently heavily compressible, like 2x or better.
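
For example (paths are hypothetical; gzip is shown, but the native compress works the same way):

# Compress on the fly so roughly half the bytes cross the NFS mount
gzip -c /oradata/users01.dbf > /backup_nfs/users01.dbf.gz

# Or stream a whole directory as one compressed archive
cd /oradata && tar cf - . | gzip -c > /backup_nfs/oradata.tar.gz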

Good luck!
Greg OBarr
Regular Advisor

Re: cp command and large files


Here's a relatively easy way to do this:

Assuming that you have your Oracle database on LVM disks....

If you don't already have it, get Mirror/UX.

Then add another cheap disk to your system and mirror the logical volumes onto the new disk. Create a shell script that will:

a) shutdown Oracle
b) split the mirror
c) restart Oracle
d) mount the mirror copies on the new disk with a different mount point


Then you can copy the files off the mirror copy for as long as it takes, while the database is running. When the copy is done (put the copy step in the same script), unmount the mirror copies, then rejoin and resync the mirrors - you'll be ready for the next backup!
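
A minimal sketch of such a script, assuming a single mirrored logical volume (names are hypothetical; repeat the split and mount steps for each volume):

#!/usr/bin/sh
# a) shutdown Oracle
su - oracle -c "echo 'shutdown immediate;' | sqlplus -s '/ as sysdba'"

# b) split the mirror -- lvsplit creates /dev/vg01/lvorab by default
lvsplit /dev/vg01/lvora

# c) restart Oracle
su - oracle -c "echo 'startup' | sqlplus -s '/ as sysdba'"

# d) replay the log on the split copy and mount it elsewhere
fsck -F vxfs -y /dev/vg01/lvorab
mount /dev/vg01/lvorab /backup_copy

# copy off the frozen files while the database runs, then resync
cp /backup_copy/*.dbf /backup_nfs/
umount /backup_copy
lvmerge /dev/vg01/lvorab /dev/vg01/lvora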

-greg
Greg OBarr
Regular Advisor

Re: cp command and large files


Actually, since you're only talking about 10Gb, you may not even need to worry about mirror/ux if you don't have it and don't want the expense. Just add another disk (preferably on a separate SCSI controller, if available), and when you shut the database down, copy the .dbf files over to the other disk while the database is down. Then restart Oracle and backup the copies of the .dbf files. It takes 5 min on my server to copy a 2Gb file, but that's with the Oracle filesystem mounted with mincache=direct, convosync=direct (minimal vxfs buffering). It's would be a good bit faster if I remounted the filesystem to use vxfs buffers. So, maybe 10-15 minutes to copy 10Gb? It would be worth a try for a good cold backup. RMAN is good, but if you've got the time and the opportunity, it's always good to have a cold backup to work from.