Dealing with a badly fragmented disk
01-31-2008 09:23 AM
Re: Dealing with a badly fragmented disk
Maybe you could clear up one thing in relation to not having to backup/init/restore the disk. I may be off the beaten track altogether with this, but I know from a few tests I did that I could not COPY/CONTIG a 2 GB file back onto this disk, and had to revert to allowing it to be created wherever the system deemed fit. How can I make the free space contiguous if it is already fragmented (or does it matter in the slightest)? The 100-odd GB of files created on this disk are deleted after the network backup, and shortly thereafter a replacement set is created. Aren't all of these new files going to have the same contiguous space allocation problems if things remain unchecked?
01-31-2008 09:42 AM
Re: Dealing with a badly fragmented disk
Working without visibility is a challenge. However, moving these files will often free up space.
In the case of backup files, they are effectively never in use, so the fragmentation is not necessarily a performance issue.
What I am recommending is exhausting the possibilities BEFORE going through the step of an image backup/restore, as that will not inherently solve more than the immediate problem.
- Bob Gezelter, http://www.rlgsc.com
01-31-2008 12:59 PM
Re: Dealing with a badly fragmented disk
If you do resort to rebuilding the disk...
(and instead of BACKUP/INIT/RESTORE via tape to the same physical drive, I'd STRONGLY recommend that you get a new drive, do a
$ BACKUP/IMAGE
to it, and then swap the physical drives. That way you always have an immediate recovery path, you're never at risk of losing your data to a failed tape or tape drive, and it's MUCH faster.)
Anyway, back to my original point. If this disk contains very large files, consider setting the cluster size to a very large value when you reinitialize (and remember to BACKUP/IMAGE/NOINIT to preserve the new cluster size). The larger the cluster size, the less fragmentation is possible, and the less it matters. Yes, you can "waste" up to a whole cluster at the end of every file, but if you have a small number of large files, that's a negligible overhead.
Remember that a file header can hold between 55 and 75 extents, so choose a cluster size that is at least 1/50th of the size of your average file (so, for example, if the files are around 1GB, perhaps choose a cluster size of 65536 blocks). That way it's impossible for those files to overflow their headers.
If the disk is shared with small files, consider segregating your data according to size. With appropriate use of search lists, this can often be made transparent to the application.
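The cluster-size arithmetic in the post above can be sketched in a few lines (a rough illustration, assuming 512-byte OpenVMS disk blocks; the function name is mine, not a VMS API):

```python
import math

BLOCK_SIZE = 512  # an OpenVMS disk block is 512 bytes

def cluster_math(file_bytes, cluster_blocks):
    """Worst-case extent count and end-of-file waste for one file.

    Every extent is at least one cluster, so a file can never have
    more extents than it occupies clusters; the last cluster may be
    partially unused ("wasted").
    """
    file_blocks = math.ceil(file_bytes / BLOCK_SIZE)
    clusters = math.ceil(file_blocks / cluster_blocks)
    waste_blocks = clusters * cluster_blocks - file_blocks
    return clusters, waste_blocks

# A 1 GB file with the 65536-block cluster size suggested above:
# 2097152 blocks / 65536 = 32 clusters, so at most 32 extents,
# comfortably under the ~50-extent guideline, with no wasted blocks.
print(cluster_math(1 << 30, 65536))  # (32, 0)
```

With a small number of large files the worst-case waste (one cluster per file, minus one block) is negligible relative to total capacity, which is the trade-off the post describes.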
01-31-2008 01:56 PM
Re: Dealing with a badly fragmented disk
If the help is correct, that isn't possible.
V7.3-2
INITIALIZE
/CLUSTER_SIZE
/CLUSTER_SIZE=number-of-blocks
Defines, for disk volumes, the minimum allocation unit in blocks.
The maximum size you can specify for a volume is 16382 blocks, or
1/50th the volume size, whichever is smaller.
V8.3
INITIALIZE
/CLUSTER_SIZE
/CLUSTER_SIZE=number-of-blocks
Defines, for disk volumes, the minimum allocation unit in blocks.
The maximum size you can specify for a volume is 16380 blocks, or
1/50th the volume size, whichever is smaller.
----
Which value is correct, 16380 or 16382, or whether either of them is correct at all, I am not sure.
HM2$W_CLUSTER is a 16-bit field, and since 0 isn't a valid cluster size, a value of 0 could in principle have been interpreted as 65536. However, according to "VMS File System Internals" (Kirby McCoy, 1990), HM2$W_CLUSTER must have a non-zero value for the home block to be considered valid. So that limits the upper bound to (2^16)-1 (65535). I am not sure why the value is limited to (2^14)-4 (16380, according to V8.3 help).
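The two bounds being compared are just these (a trivial sketch of the arithmetic above; why HELP stops well short of the field's capacity is the open question):

```python
# Upper bound if HM2$W_CLUSTER, a 16-bit field with 0 disallowed,
# were the only constraint:
field_max = (1 << 16) - 1   # 65535
# The limit V8.3 HELP actually documents:
help_max = (1 << 14) - 4    # 16380
# The documented limit easily fits in the field; the gap between
# the two is what the post is asking about.
print(field_max, help_max)  # 65535 16380
```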
02-21-2008 05:13 PM
Re: Dealing with a badly fragmented disk
If your file has more than 70-ish fragments, it creates a new file header to contain them. IIRC it was 72 for the first header and 76 after that, but it's been a while since I've dealt with this. (I did Diskeeper support for a long time.)
Even if you consolidate the fragments for a specific header, you'll need to do a header consolidation to fix up all the fragments in the file. Once you get a specific header down to a couple of fragments, you can then start consolidating headers. If you have, say, ten headers with one fragment apiece, it will still look like a file with ten frags.
I haven't looked at DFU for a while. If it has a header consolidation function you may want to use that _after_ you do the defrag of the specific headers.
02-28-2008 11:11 PM
Re: Dealing with a badly fragmented disk
I tested the largest cluster size I could use to initialize a 50 GB vdisk. On both 7.3-2 and 8.3 it was 16380, so the HELP text from 8.3 is correct.
Jon
06-11-2008 03:43 PM
Re: Dealing with a badly fragmented disk
The home block has a 16-bit field (HM2$W_IBMAPVBN) that contains the VBN of the first block of the index file bitmap in INDEXF.SYS. This VBN is always clusterfactor*4 + 1, so the maximum clusterfactor must be .le. 16383, since (16383*4)+1 = 65533 still fits in 16 bits.
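That derivation checks out numerically (a quick sketch; the field width and VBN formula are as described above):

```python
# HM2$W_IBMAPVBN is a 16-bit field holding the VBN of the first
# index-file-bitmap block, and that VBN is clusterfactor*4 + 1.
MAX_VBN = (1 << 16) - 1                  # 65535, largest 16-bit value
max_cluster_factor = (MAX_VBN - 1) // 4  # largest factor keeping the VBN in range
print(max_cluster_factor, max_cluster_factor * 4 + 1)  # 16383 65533
```

Note this yields 16383, still slightly above the 16380 limit observed with INITIALIZE, so the VBN field bounds the cluster factor but may not be the only constraint.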