Operating System - OpenVMS

robert70
Valued Contributor

ACP Index File errors

I am running VMS 7.3-2 on a DS20E.
We had a problem through the night when we were unable to create new files:

%RMS-E-CRE, ACP file create failed

After we deleted and purged some files we were able to create files again.

We were also seeing some ACP index file errors.

Is there some limit on the number of files that a disk can hold, and is this set up in the INDEXF.SYS file? How can you check what it is set to, and can you modify it easily?

Thanks
13 REPLIES
Heinz W Genhart
Honored Contributor

Re: ACP Index File errors

Hi robertedinburgh

With

$ SHOW DEVICE/FULL 'disk'

you will see 'Maximum files allowed'
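
If you want those figures inside a command procedure rather than on the screen, the F$GETDVI lexical can return them directly. A minimal sketch (the device name is only an example, substitute your own data disk):

$ disk = "DKA100:"   ! hypothetical device name
$ write sys$output "Max files allowed: ", f$getdvi(disk,"MAXFILES")
$ write sys$output "Free blocks:       ", f$getdvi(disk,"FREEBLOCKS")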

Could it be that the disk is very fragmented?

regards

Geni
Uwe Zessin
Honored Contributor

Re: ACP Index File errors

There is a file header bitmap (to track free/used header blocks) within INDEXF.SYS. The index file cannot grow beyond what that bitmap describes, even with dynamic volume expansion. Because the bitmap sits at the beginning of INDEXF.SYS, the only way to enlarge it is a backup, an initialize with a larger bitmap, and a restore of the data.
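
A rough sketch of that procedure, assuming a data disk DKA100: and room for a saveset on another disk (all device names, labels and sizes below are placeholders; see HELP INITIALIZE and HELP BACKUP for the exact qualifiers):

$ ! 1. Save the whole volume to a saveset
$ BACKUP/IMAGE DKA100: DKA200:[SAVE]DATA.BCK/SAVE_SET
$ ! 2. Re-initialize with a larger header bitmap
$ INITIALIZE/HEADERS=200000/MAXIMUM_FILES=600000 DKA100: DATA
$ ! 3. Restore without re-initializing, so the new bitmap is kept
$ BACKUP/IMAGE/NOINITIALIZE DKA200:[SAVE]DATA.BCK/SAVE_SET DKA100: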
.
Uwe Zessin
Honored Contributor

Re: ACP Index File errors

Good point; a very fragmented disk can eat up headers as well, because the headers include the retrieval pointers. If the first header is full, additional pointers go into extension headers. A simple backup/restore can 'fix' this.
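
To see how many retrieval pointers (and extension headers) a suspect file is using, the standard DUMP utility can display the header map area. A minimal sketch, with a purely hypothetical file name:

$ ! /HEADER shows the file header(s); /BLOCK=COUNT:0 suppresses the data blocks
$ DUMP/HEADER/BLOCK=COUNT:0 DKA100:[LOGS]BIG_APPLICATION.LOG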
.
robert70
Valued Contributor

Re: ACP Index File errors

Thanks

is there any way to find out the total number of files on a device (without doing a dir/size=total, which takes ages)?
robert70
Valued Contributor

Re: ACP Index File errors

i mean dir/total
Uwe Zessin
Honored Contributor

Re: ACP Index File errors

You would have to go through INDEXF.SYS directly - I think there is freeware that can do this. Watch out for aliases or hard links as DIRECTORY can be fooled by them and count files/directories multiple times.

And don't get confused by 'Maximum Files': it shows the size of the header bitmap, but a single file can use multiple headers.
.
Jan van den Ende
Honored Contributor

Re: ACP Index File errors

Robert,

was there no second line in the error message, with some more explanation?

This can happen:
- when trying to create CONTIGUOUS files and there is no contiguous free space big enough ("DEVICE FULL")
- when many files are very fragmented, such that the index file can no longer hold the headers (it needs to be contiguous); I cannot reproduce the secondary message exactly right now, but it is rather descriptive
- when you are exceeding the maximum number of files on a drive (VERY unlikely, unless you have BIG drives with SMALL cluster sizes and MANY ( > 16 M !! ) files on one drive)

The temporary cure is to delete some files (as you found).
If the cause is fragmentation, defragging (or deleting a VERY fragmented file; BIG .LOG files are a likely target) is a somewhat longer lasting cure.

A more permanent solution is to INIT the drive with bigger-than-default /HEADERS (and maybe /MAXIMUM_FILES and/or /DIRECTORIES);
see HELP INITIALIZE for details.

It may be interesting, if you (temporarily) have an extra drive available, to INIT that one, and do a BACKUP/IMAGE/NOINIT from the current drive to the newly-INITed one.
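
Roughly like this, with purely hypothetical device names, label and sizes (check HELP INITIALIZE and HELP BACKUP for the exact qualifiers on your version):

$ ! Prepare the spare drive with more generous limits than the defaults
$ INITIALIZE/HEADERS=200000/MAXIMUM_FILES=600000 DKA300: DATA2
$ MOUNT/FOREIGN DKA300:
$ ! Image copy; /NOINITIALIZE keeps the parameters just set on the target
$ BACKUP/IMAGE/NOINITIALIZE DKA100: DKA300:
$ DISMOUNT DKA300: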

Success.

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
robert70
Valued Contributor

Re: ACP Index File errors

Our max here is 517,000 files, and I purged a lot this morning; we now have 413,000 files.
Willem Grooters
Honored Contributor

Re: ACP Index File errors

It may be worthwhile to check the fragmentation of a file on the disk.

DFU will probably show it. A Q&D template: see the attachment...
Willem Grooters
OpenVMS Developer & System Manager
Hein van den Heuvel
Honored Contributor

Re: ACP Index File errors


>> %RMS-E-CRE, ACP file create failed
What was the SECOND error line?

Could you create SOME files, but not other very similar files?

>> we were also seeing some ACP index file error.

Detailed message?

>> after we deleted and purged we were able to.

Besides running out of headers, as discussed by others, one classic and often-seen cause for this is running out of CONTIGUOUS space to extend a DIRECTORY in order to create an entry for a new file.
Some new files in the very same directory might still work... if the alphabetic name ordering leads to a block with space. Very confusing.

What is the largest contiguous free space?
What is the largest directory?
DFU can answer both questions quickly!

http://www.digiater.nl/dfu

>> is there any way to find out the total number of files on a device (without doing a dir/size=total, which takes ages)?

Yes... DFU REPORT
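
For reference, a minimal sketch of running it (this assumes the DFU kit is installed; the foreign-command definition and the device name are only examples):

$ DFU :== $SYS$SYSTEM:DFU.EXE   ! adjust to wherever the DFU image lives
$ ! The report includes the number of file headers in use, the largest
$ ! contiguous free space and the free-space fragmentation
$ DFU REPORT DKA100: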


Cheers,
Hein.

Mike Kier
Valued Contributor

Re: ACP Index File errors

>our max here is 517,000 files and I purged a lot this morning - we now have 413,000 files

Another question you might like to think about is whether you really need 400K+ files sitting in directories and occupying INDEXF.SYS entries on a single volume, or whether you would be equally well served if they were in a much smaller number of BACKUP savesets or ZIP archives, or spread across a number of LD virtual volumes.
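
If you go the LD route, the basic idea looks roughly like this (container file name, size and unit number are only examples, and it assumes the LD kit is installed and its startup has been run):

$ ! Create a container file of about 500,000 blocks and present it as a virtual disk
$ LD CREATE DKA100:[CONTAINERS]ARCHIVE1.DSK /SIZE=500000
$ LD CONNECT DKA100:[CONTAINERS]ARCHIVE1.DSK LDA1:
$ ! From here on LDA1: behaves like an ordinary disk
$ INITIALIZE LDA1: ARCHIVE1
$ MOUNT/SYSTEM LDA1: ARCHIVE1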
Practice Random Acts of VMS Marketing
Robert Gezelter
Honored Contributor

Re: ACP Index File errors

Robert,

DFU is helpful, but one should not overlook the standard ANALYZE utility.

Among other things, file headers may be tied up in lost files. ANALYZE will also check a variety of file system parameters.

The OP does not mention the full error message text, which would be most useful.

Of course, the most consistent results will be reported on a quiescent volume.
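
A minimal sketch (the device name is only an example); the first pass only reports, the second actually repairs and places any lost files in [SYSLOST]:

$ ! Report-only pass
$ ANALYZE/DISK_STRUCTURE DKA100:
$ ! Repair pass; /CONFIRM prompts before each change
$ ANALYZE/DISK_STRUCTURE/REPAIR/CONFIRM DKA100: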

- Bob Gezelter, http://www.rlgsc.com
Willem Grooters
Honored Contributor

Re: ACP Index File errors

[quote]
Besides running out of headers, as discussed by others, one classic and often-seen cause for this is running out of CONTIGUOUS space to extend a DIRECTORY in order to create an entry for a new file.
[/quote]

Thought about that :)
The file has been created (!) but the directory cannot be extended (a directory file needs to be contiguous, and if there is not enough contiguous space available, creation of the new file will fail).
Try another name; if the entry can be stuffed somewhere within the existing directory file, it will succeed.

If this happens, it normally means (severe) fragmentation of the disk. Your application performance will suffer as well.
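
To spot large directory files that may soon need another contiguous extension, plain DCL is enough. A rough sketch (the device name is an example, and the wildcard walks the whole disk, so it can take a while on a big volume):

$ ! List every directory file with its size; the largest ones are the ones to watch
$ DIRECTORY/SIZE=ALL DKA100:[000000...]*.DIR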

@Mike Kier:
Nothing extreme with today's multi-GB disks. But I agree that you could ask yourself whether it is required to have these files on-line, or whether they could be compressed in some way or another.

However, that is only an option if direct access is not a requirement. If it is, think about placing the files on a logical volume, e.g. using the LD facility (same author as DFU), but that may require software changes.
Willem Grooters
OpenVMS Developer & System Manager