Operating System - HP-UX

multiple cp commands and file fragmentation

 
SOLVED
Todd McDaniel_1
Honored Contributor

multiple cp commands and file fragmentation

My question may be a bit obtuse, but here goes.

My DBAs are asking a few questions about how the "free list" is used when multiple copies are made from one filesystem to a new, clean filesystem.

If they execute 15 cp commands at the same time, how is the allocation of blocks handled? Is there a whole group of blocks/extents reserved up front, based on the total size of each copy?

Or is data written round-robin across the 15 cp commands, since UNIX time-slices to handle the plethora of commands on the system?

I know that on raw disks they can specify the range to be used, but on a cooked FS, can space be reserved by the cp command based on the file size?
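To make the scenario concrete, here is a small, portable sketch of what the DBAs are describing: several cp commands launched concurrently against the same target filesystem. All paths are made up for illustration. Because the shell backgrounds each copy, the kernel time-slices among them, so their block-allocation requests reach the filesystem's free list interleaved rather than one file at a time.

```shell
#!/bin/sh
# Hypothetical scratch directories standing in for the two filesystems.
mkdir -p /tmp/frag_src /tmp/frag_dst

# Create a few source files to stand in for the datafiles.
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/frag_src/file$i bs=1024 count=64 2>/dev/null
done

# Launch the copies concurrently, the way 15 simultaneous cp commands would run.
# Each "&" backgrounds a cp; their writes interleave at the allocator.
for i in 1 2 3; do
    cp /tmp/frag_src/file$i /tmp/frag_dst/file$i &
done
wait    # block until every background copy has finished

ls /tmp/frag_dst
```

cp itself makes no advance reservation; it simply issues writes, and the filesystem hands out free blocks/extents as each request arrives.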

Question #2...

They also had a question about how either RMAN or a restore from NetBackup tape will lay the data back out on the disks. Will it merely mirror the previous layout, or reorganize it in some fashion?

Their question was this: will the restore "make an effort to minimize the number of threads writing to a single filesystem to reduce the potential fragmentation?"

_______________________________________
I know that is a lot to read, but I would like to understand how free blocks are assigned when multiple copies execute, and, in the case of a restore, whether HP-UX does any defragmentation. My guess is that it doesn't on the last one, but merely lays the data back in its original place.
Unix, the other white meat.
7 REPLIES
Steven E. Protter
Exalted Contributor
Solution

Re: multiple cp commands and file fragmentation

As far as I know, inodes and blocks are assigned in the order they are requested. If you have 15 file copies going on at the same time, you are likely to get fragmentation.

Time-slicing is the way Unix does work, including I/O. You can defragment filesystems after the fact with the fsadm command in OnlineJFS.
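For reference, the OnlineJFS invocations look roughly like this. The mount point is hypothetical, and the options are from memory; check fsadm_vxfs(1M) on your release before running the defrag.

```shell
# Report extent fragmentation on a VxFS filesystem (read-only, safe any time):
fsadm -F vxfs -E /u01

# Report directory fragmentation:
fsadm -F vxfs -D /u01

# Defragment extents and directories online (requires the OnlineJFS license):
fsadm -F vxfs -e -d /u01
```

The report options (-E, -D) work without OnlineJFS; it is the online reorganization (-e, -d) that needs the license.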

This applies to cooked filesystems.

Question 2.

I don't think the primary job of the I/O processes is to minimize fragmentation. The primary job is to get the data written to disk cleanly as fast as possible.

Your best course of action in my opinion is to defragment after the fact.

Saying that, I've been running for years without OnlineJFS; we just purchased it. We've occasionally defragmented our filesystems because we rearranged disks, forcing us to back up and restore the data.

Since we did those restores (Oracle) one at a time with fbackup/frecover, there was a defragmentation effect, because we used newfs to create the new filesystem.

Not that anyone EVER noticed improved Oracle performance after such an operation.

HP-UX 64 bit 11.11 September 2003 Patch.
Oracle 8.1.7.4.0 and Oracle 9.0.2.x

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Alzhy
Honored Contributor

Re: multiple cp commands and file fragmentation

It's been eons since I've done defrags on my VxFS filesystems - especially on my file servers (NAS function). On cooked filesystems serving as RDBMS storage, I think this will not be a big issue, as these filesystems contain a very small number of files (inodes).

On your question of how RMAN/NetBackup will restore and re-lay-out data on the disk -- again, I think this will not be an issue. But for large filesystems with zillions of files/dirs, I usually re-create the filesystem with the mkfs_vxfs command instead of just deleting the files/dirs, and then restore. I have not actually compared how fragmented a filesystem is after restoring a vxdump, Legato, or NetBackup data stream...

Hakuna Matata.
Todd McDaniel_1
Honored Contributor

Re: multiple cp commands and file fragmentation

Steve,

Since my filesystems are striped, and the data is laid out over 8 disks, would that exacerbate the fragmentation? Or, because they are striped, would that lessen the fragmentation?

My guess is it would lessen it, because the data is spread out over 8 disks and not simply written to one disk at a time as in a typical mirrored filesystem. The nature of striping is purposely a "sort" or a "type" of fragmentation in itself.

It seems to me that fragmentation is less of an issue on striped filesystems than it would be on a mirrored filesystem, where multiple copies would be very fragmented in the case of 15 cp commands. Possibly because of the multiple reads used on a striped FS?
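For context, this is roughly how such an 8-way striped logical volume is built on HP-UX LVM. The volume group name, size, and stripe size below are made up; see lvcreate(1M) for the exact options on your release.

```shell
# 8-way stripe (-i 8), 64 KB stripe size (-I 64), 4 GB LV (names hypothetical):
lvcreate -i 8 -I 64 -L 4096 -n lvol_data /dev/vg01

# Lay a VxFS filesystem on the raw device:
newfs -F vxfs /dev/vg01/rlvol_data
```

Note that striping only spreads each extent across the disks; it does not change how the free list hands out extents, so logical fragmentation can still occur.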

I hope my questions make sense. heh
Unix, the other white meat.
Bruno Ganino
Honored Contributor

Re: multiple cp commands and file fragmentation

Unix recovers the space dynamically, so a restore does not perform any defragmentation without a specific option.
To really defragment a filesystem you must:
1) make a backup of the fs
2) use the newfs command, which cleans up the fs completely
3) restore the fs
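Sketched with HP-UX's own tools, the three steps look something like this. The tape device, volume group, and mount point are hypothetical; check fbackup(1M) and frecover(1M) for your environment.

```shell
# 1) Back up the filesystem to tape:
fbackup -f /dev/rmt/0m -i /u01

# 2) Re-create it from scratch -- this is what removes the fragmentation:
umount /u01
newfs -F vxfs /dev/vg01/rlvol_u01
mount /dev/vg01/rlvol_u01 /u01

# 3) Restore the data into the fresh filesystem:
cd /u01 && frecover -f /dev/rmt/0m -r
```

Because the restore writes the files sequentially into a freshly made filesystem, each file lands in largely contiguous extents.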
Bye
Bruno
Torino (Turin) +2H
Todd McDaniel_1
Honored Contributor

Re: multiple cp commands and file fragmentation

Nelson,

I hope you can forgive my boldness regarding assigning you points for your response to my question.

It seems that you have a poor record of assigning points in your own posts. This is not a retaliation, but merely a suggestion to assign points in your many posts: 172 of 483 do have points awarded, but 311 don't.

I will assign points, but please address yours as well.

Please don't be offended; I know we all get busy and forget the small things. Just a gentle reminder.
Unix, the other white meat.
Alzhy
Honored Contributor

Re: multiple cp commands and file fragmentation

Todd, not at all... although that number - 483 posts - is suspect. I've only been on ITRC since 1/2002 and have been active only since about June 2003... I think my posts number in the teens, not the hundreds... I will try to assign points from now on...

On the post at hand -- you can do a quick test if you want: create a filesystem (VxFS), purposely create a number of files, and randomly delete and add some. Then take a fragmentation report (I forget the syntax) before you save to tape. For comparison, use vxdump and NetBackup streams, then restore back, either onto the filesystem "cleaned" using mkfs or after just wiping away the files/dirs (do not delete lost+found, though)... Run fragmentation reports again; you'll be surprised at what you find.
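That churn-then-measure experiment can be sketched like this. The test mount point is hypothetical, and the fsadm/vxdump invocations are from memory -- verify against fsadm_vxfs(1M) and vxdump(1M) before running.

```shell
# Fill a scratch VxFS with files, then delete at random to fragment the free list:
i=0
while [ $i -lt 200 ]; do
    dd if=/dev/zero of=/testfs/f$i bs=8192 count=4 2>/dev/null
    i=$((i + 1))
done
rm /testfs/f1? /testfs/f3? /testfs/f7?     # punch holes in the allocation

# Fragmentation report before the dump (OnlineJFS):
fsadm -F vxfs -E /testfs

# Dump to a file, wipe the files (leave lost+found alone), restore:
vxdump -0 -f /tmp/testfs.dump /testfs
rm -f /testfs/f*
cd /testfs && vxrestore -r -f /tmp/testfs.dump

# Compare the extent-fragmentation report after the restore:
fsadm -F vxfs -E /testfs
```

Comparing the two -E reports shows how much reordering the dump/restore cycle actually buys you on that filesystem.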

Hakuna Matata.