
Peter Quodling
Trusted Contributor

Defrag Fun and Games

I have a large and very dynamic striped disk set (3 x 9 GB).

It has had DEFRAG running on it nightly, starting at 6 PM with a six-hour run-time threshold. The problem was that it was running in parallel with defrags on several other disks, and for this and various other reasons it was only just getting into the second phase of the defrag before the scheduler (as instructed) shut it down. (The MAILTO was going to some weird address that my predecessor had set up, which meant that we weren't aware of the problem.)

This hit a crisis point when we found that extension of .DIR files was dying, presumably because (as DFU reported) the largest free extent was less than a megabyte. (There is about 5 GB of free space on the disk, but some 500-700 MB files were reporting 30,000 to 40,000 extents.) This is 24x7, and we are generating 500 MB to 1000 MB of new traffic onto this disk every day, most of which gets processed within 1-3 days and then deleted. I have had DEFRAG running at elevated priority all weekend, but it's really not getting ahead of the game (largest free extent hovering around 20 MB, with around 5 GB of unused space).
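
For reference, the figures above come from a DFU report along these lines (device name is hypothetical; substitute your stripe set's device):

$ DFU REPORT $1$DKA100:   ! free space, largest free extent, fragmentation stats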

This system is a heavily used 24x7 web server (OSU).

My current thoughts are to take an outage, add some extra disks (this is a 4100 with two full shelves on a KZPAC), probably 18 GB, move as much of the heavy I/O load as I can off this stripe set, and let the system try to catch up.

Anyone have any other thoughts or approaches?

Mister Q
Leave the Money on the Fridge.
Karl Rohwedder
Honored Contributor

Re: Defrag Fun and Games

Peter,

if you can take an outage, it would make sense to use BACKUP/restore to produce an initially contiguous disk; this is probably quicker than using DFG on such a beast.

Please check the initial INIT parameters and allow for a suitably sized INDEXF.SYS.
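
For example, something along these lines (device name, label and values are purely illustrative; size /HEADERS for the number of files you expect to carry):

$ INITIALIZE /HEADERS=200000 /CLUSTER_SIZE=96 $1$DKA100: WEBVOL

Preallocating the headers keeps INDEXF.SYS itself from having to extend (and fragment) as the file count grows.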

regards Kalle
John Gillings
Honored Contributor
Solution

Re: Defrag Fun and Games

Mister Q,

You don't say what version you're running.

Although perhaps not applicable to your current configuration, Dynamic Volume Expansion (DVE) and Dissimilar Device Shadowing, available in V7.3-2, give you a way to resolve this type of issue online and on the fly. Make sure you initialize any new drives with /LIMIT to enable DVE. Also make sure all your volumes are in shadow sets. You can then add a new, larger volume to the shadow set, wait for the copy to complete, then remove the smaller member and expand the volume to the new physical size. Disk upgrade with ZERO downtime! Of course, all the "new" space will be contiguous.
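
Roughly this sequence (device names and label are hypothetical; a sketch only, check the V7.3-2 documentation before running it):

$ INITIALIZE /LIMIT $1$DKA200: WEBVOL            ! new, larger drive; /LIMIT enables DVE
$ MOUNT /SYSTEM DSA1: /SHADOW=$1$DKA200: WEBVOL  ! add it to the existing shadow set
$ ! ...wait for the shadow copy to complete (watch SHOW DEVICE DSA1:)...
$ DISMOUNT $1$DKA100:                            ! drop the old, smaller member
$ SET VOLUME DSA1: /SIZE                         ! expand to the new physical size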

If this isn't possible for now, have a look at the data on the disk. Are there any particularly large files you can move somewhere else to give DEFRAG some room to move? If you can recover some space, try a defrag freespace consolidation first, then go on to the other phases.
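
Something like this will list the candidates (the minimum size, in blocks, is only an example):

$ DIRECTORY /SIZE=ALL /SELECT=SIZE=MINIMUM=400000 DSA1:[000000...]*.*;*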

(BTW, you're showing your age. A little stripe set of 3x 9GB drives isn't considered at all "large" these days ;-)
A crucible of informative mistakes
John Gillings
Honored Contributor

Re: Defrag Fun and Games

re Kalle: "it would make sense to use BACKUP/restore to produce an initially contiguous disk"

Please try to avoid "backup and restore" back to the original volume(s). It's far too risky. Backup to a new disk (or stripe set) and switch drives. Try to avoid any sequence of operations that leaves you without an immediate fallback. As well as being safer, you only need ONE backup operation rather than two.
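
In other words, mount the new disk foreign and do a single disk-to-disk image copy (device names hypothetical):

$ MOUNT /FOREIGN $1$DKA300:
$ BACKUP /IMAGE /VERIFY DSA1: $1$DKA300:

The restored files come out (mostly) contiguous, and the original stripe set is untouched as a fallback until you switch over.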
A crucible of informative mistakes
Wim Van den Wyngaert
Honored Contributor

Re: Defrag Fun and Games

We try to exclude all short-lived files from being defragmented (such as Sybase dumps that stay for 1 day, zip files that are deleted once backed up, log files, ...). This speeds up the operation. Simply use /EXCLUDE in defrag.
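
For example (the command form and file patterns are a sketch from memory; check the DFO syntax for your version):

$ DEFRAG DEFRAGMENT DISK$WEB /EXCLUDE=(*.DMP, *.ZIP, *.LOG)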

Wim
Jan van den Ende
Honored Contributor

Re: Defrag Fun and Games

... or even BEGIN with preventing fragmentation by setting the /EXTEND for the disk to a value that is, say, about half of the expected average file size...
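
A sketch (device name hypothetical; if memory serves, 65535 blocks, about 32 MB, is the largest value the volume default extension accepts, so for files of the size described you would simply set the ceiling):

$ SET VOLUME DSA1: /EXTENSION=65535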

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
comarow
Trusted Contributor

Re: Defrag Fun and Games

There was an escalation to DFO Engineering where a very large contiguous multi-volume disk with a small cluster size was consuming significant CPU time during free-space consolidation. This has led to a review and improvement in the code stream.

If you log a call, you can get an early edition of the code. It is a significant performance improvement.

However, some fragmentation actually improves performance, since it can find small pieces of data quickly. It is very rare that a perfectly contiguous disk will perform best in actual usage.

Bob