Operating System - HP-UX

David Poe_2
Advisor

Filesystem Defragmentation Question

I have a 100 GB filesystem which suffers terribly from our bad implementation of software, which we hope to fix soon. Anyway, there are several million files of 3-30 KB apiece, averaging 200,000 files per directory across several hundred directories. I understand that performance suffers because of the number of files in a directory; however, could fragmentation of inodes and directories be an issue as well? About 35 GB of files a month are put onto the system, and another 35 GB are archived off to a tar/zip on a cheaper storage system. The fsadm output is below. Can someone explain what these numbers mean, and at what point you would want to defrag a filesystem?

TIA!

"fsadm -F vxfs -D -E /interfaces"

Directory Fragmentation Report
          Dirs      Total     Immed     Immeds    Dirs to    Blocks to
          Searched  Blocks    Dirs      to Add    Reduce     Reduce
total     3280      218842    518       196       1255       68161

Extent Fragmentation Report
  Total       Average      Average      Total
  Files       File Blks    # Extents    Free Blks
  11450869    1            1            6725426
blocks used for indirects: 3626
% Free blocks in extents smaller than 64 blks: 23.99
% Free blocks in extents smaller than 8 blks: 3.32
% blks allocated to extents 64 blks or larger: 1.83
Free Extents By Size
1: 17932 2: 12995 4: 2944 8: 7114
16: 7979 32: 5079 64: 4187 128: 2692
256: 1217 512: 390 1024: 105 2048: 8
4096: 3 8192: 0 16384: 1 32768: 2
TwoProc
Honored Contributor

Re: Filesystem Defragmentation Question

I'd definitely look at using fsadm to reorg that stuff. Another thing to think about: just after the weekend full backup, and just before bringing your apps back up (big assumptions in place there), newfs that thing and restore from tape. I'd do this once in a while (every quarter, six months, or year or so), just to give your files a filesystem that provides (a nod to the late Bob Ross) a "happy place" for them to live.
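A rough sketch of that cycle, assuming the weekend full was taken with fbackup and the filesystem lives on a VxFS logical volume; the volume and tape device names below are placeholders for your own:

    # stop the apps, then take the filesystem offline
    umount /interfaces
    # recreate the filesystem (this wipes everything on the volume!)
    newfs -F vxfs /dev/vg01/rlvol5
    # remount and pull everything back from the full backup tape
    mount -F vxfs /dev/vg01/lvol5 /interfaces
    frecover -f /dev/rmt/0m -r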

We are the people our parents warned us about --Jimmy Buffett
A. Clay Stephenson
Acclaimed Contributor

Re: Filesystem Defragmentation Question

My experience with vxfs filesystems is that even when they're heavily fragmented, the gains from frequent defrag operations are hard to perceive and generally hard to measure. I've never seen them exceed a 5% improvement, even on filesystems that have gone for months without a defrag. It's really your large directories that are killing you.
If it ain't broke, I can fix that.
Jeff Schussele
Honored Contributor

Re: Filesystem Defragmentation Question

Hi David,

As rules go:

1) Any time you get into six digits or more of directory entries, performance will *always* suffer.
You just can't cache or hash that many!
This is a *very* poor design & you need to change it.

2) Fragmentation is not really a problem until you've approached/exceeded 90% usage at some point in time.

3) It can't hurt (except for CPU/disk usage) to run the actual defrag command, and *always* use -d -D & -e -E when doing so, so you can see the before/after results. And I generally run it *at least* twice, as it can be a step-by-step process to get to optimum. BUT with those dir entry numbers...hell, it could take you 4-5 passes or more (see the sketch below).
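A minimal sketch of that repeat-until-clean approach, assuming POSIX sh; the pass count and log path are arbitrary examples:

    # each pass prints before/after reports because -D/-E are combined
    # with the -d/-e reorganization flags
    for pass in 1 2 3 4
    do
        echo "=== pass $pass ===" >> /var/tmp/fsadm.log
        fsadm -F vxfs -d -D -e -E /interfaces >> /var/tmp/fsadm.log 2>&1
    done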

You REALLY need to trim those directories or frankly you'll be battling this problem - forever.
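For illustration only, something like this would spread one overloaded directory into two-character buckets (big_dir and split are made-up names, and the app itself would have to start writing the same layout):

    # bucket each file by the first two characters of its name
    cd /interfaces
    for f in big_dir/*
    do
        name=${f##*/}
        bucket=`echo $name | cut -c1-2`
        mkdir -p split/$bucket
        mv "$f" split/$bucket/
    done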
For details on the output run
man fsadm_vxfs
It'll explain it all.

My 2 cents,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Sheriff Andy
Trusted Contributor

Re: Filesystem Defragmentation Question

David,
I like to run the defrag as follows:

fsadm -F vxfs -deDE /interfaces
-e reorganizes & consolidates extents
-d reorganizes & optimizes directories
-E reports extent fragmentation
-D reports directory fragmentation

Under Immeds to Add, Dirs to Reduce & Blocks to Reduce, you want them as close to zero as possible.

Under % Free blocks in extents smaller than 64 blks: & % Free blocks in extents smaller than 8 blks:, you want these numbers to be as low as possible.

Under % blks allocated to extents 64 blks or larger:, you want this as large as possible.
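If you just want to eyeball those numbers between runs, something along these lines should work (the pattern is just an example, and the mount point is yours):

    # 'total' pulls the directory summary row, '%' the extent percentages
    fsadm -F vxfs -D -E /interfaces | egrep '^ *total|%'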

Hope that this helps.
David Poe_2
Advisor

Re: Filesystem Defragmentation Question

Thanks for the info, it definitely gives me something to look through. I believe I will try the defrag, and make it run at a lower priority so it doesn't kill disk/CPU. All of our HP servers are running full throttle 80% of the time. We are currently in the middle of putting in several new (and larger) HP servers to alleviate our load problem. We also have a new project to change the way we store files, but that project won't be up and running for another year and a half, unfortunately. The idea is to store several of these smaller files in one larger file. In defense of the original architects, the intention was that we would not be anywhere near the load we are currently seeing.
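For the curious, I'm planning to kick it off along these lines (the log path is just an example):

    # run the reorg at the lowest priority, in the background, with output logged
    nohup nice -n 19 fsadm -F vxfs -deDE /interfaces > /var/tmp/fsadm.out 2>&1 &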

Thanks again for the information!