
Copying problem!!

 
SOLVED
Adam Noble
Super Advisor

Copying problem!!

Hi,

I have a problem: I can't add any more disks to a specific volume group, which has a maximum of 16 physical volumes. I need to copy one of the lvols into an area on a new volume group, and I am in the process of testing this. I created a new mount point and simply did a cp -rp to this area, intending to then umount the original and mount the new area in its place. I have noticed, however, that after the copy the amount of used space differs from the original. A file listing shows exactly the same number of files in the new area, yet it reports almost 1 GB less used data. Can anyone explain this?
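
The check I'm doing is roughly this (the mount points here are just examples, not the real names):

# bdf /old_mount /new_mount
# du -sk /old_mount /new_mount
# find /old_mount -type f | wc -l
# find /new_mount -type f | wc -l

The file counts match, but the used-space figures are almost 1 GB apart.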
6 REPLIES
Pete Randall
Outstanding Contributor
Solution

Re: Copying problem!!

In a word: fragmentation.

As files get added/deleted/changed over time, fragmentation increases, causing the same number of files to occupy more space.
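
If you have OnlineJFS, fsadm can report how fragmented the filesystem actually is before you decide what to do about it (the mount point is just an example):

# fsadm -F vxfs -D -E /old_mount

-D reports directory fragmentation and -E reports extent fragmentation.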


Pete

Steven E. Protter
Exalted Contributor

Re: Copying problem!!

Shalom,

OnlineJFS, a pay-for add-on product, can defragment VxFS filesystems in place.

The other method would be to back up the filesystem and restore it, but OnlineJFS is the best way to go in this situation.
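
A sketch of the in-place reorganisation, assuming OnlineJFS is licensed and the lvol is mounted (substitute your real mount point):

# fsadm -F vxfs -e -d /old_mount

-e reorganises (defragments) the extents and -d reorganises the directories.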

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Adam Noble
Super Advisor

Re: Copying problem!!

Of course, I'm not thinking. That's why you can defrag a filesystem. Thanks for waking me up, though; it is a Friday.
A. Clay Stephenson
Acclaimed Contributor

Re: Copying problem!!

There are several possible answers, and more than one of them may be in play: 1) sparse files, 2) symbolic links, 3) the original file tree had very large directories with many empty slots, while the new directories are just big enough to fit the files that now exist.

Oops, forget sparse files; those would explain the original files occupying fewer disk blocks than the copies, which is the opposite of what you are seeing, so that's not your problem.

If you want to be absolutely sure, then write a script that compares the cksums of the original files to those of the copies, but I suspect that you are fine.

By the way, cp -rp will work, but if you want to retain all of the metadata (notably directory ownerships) between the original and the copy, then a tar, cpio, or fbackup|frecover pipeline does a much better job.
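
For example, a tar pipeline along these lines (directory names are placeholders) carries ownerships and permissions across, including those on the directories:

# cd /old_mount
# tar cf - . | (cd /new_mount && tar xpf -)

A find | cpio -pdmu /new_mount pipeline run from within /old_mount accomplishes much the same thing.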
If it ain't broke, I can fix that.
Patrick Wallek
Honored Contributor

Re: Copying problem!!

A better test than looking at just the number of files and space used is to check the cksums of the files.

What you can do is something like:

# cd /curr_dir
# find . -type f -exec cksum {} \; > /var/tmp/cksum_orig_dir

# cd /new_dir
# find . -type f -exec cksum {} \; > /var/tmp/cksum_new_dir

Once the cksums are done, you can sort and diff the files. If there are no differences, then your copy is OK.
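
Something like this finishes the comparison (using the file names from above):

# sort /var/tmp/cksum_orig_dir > /var/tmp/cksum_orig_dir.sorted
# sort /var/tmp/cksum_new_dir > /var/tmp/cksum_new_dir.sorted
# diff /var/tmp/cksum_orig_dir.sorted /var/tmp/cksum_new_dir.sorted

No output from diff means every file has the same checksum and size in both trees.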

Some of the difference can come from the size of the directories themselves. Directories grow as files are added, but they do not shrink as files are removed.
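
You can see that with ls -ld on any directory that has had a lot of turnover (the paths are just examples):

# ls -ld /curr_dir/big_dir /new_dir/big_dir

The original directory will usually show a much larger size than the freshly created copy, even though both hold the same entries.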
Ralph Grothe
Honored Contributor

Re: Copying problem!!

Another reason could be a different block size.
Is there any difference in output from fstyp?
Besides, I would recommend passing big enough values for the number of PVs, the PE size, and the maximum number of PEs at VG creation time.
It's better to use a larger PE size from the start than to run into PV or PE limitations later, when space gets short and the VG would have to be recreated.
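
For example (device and VG names are placeholders), compare the block sizes first, and if you do recreate the VG, set the limits up front:

# fstyp -v /dev/vg01/lvol1
# fstyp -v /dev/vgnew/lvol1

# vgcreate -s 16 -p 32 -e 4096 /dev/vgnew /dev/dsk/c2t0d0

Here -s sets the PE size in MB, -p the maximum number of PVs, and -e the maximum number of PEs per PV.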


Madness, thy name is system administration