Operating System - OpenVMS

Jobs failing -SYSTEM-W-DIRALLOC

odwillia
Frequent Advisor

Jobs failing -SYSTEM-W-DIRALLOC

We are having jobs fail. This is the error people are getting:

As the execution of DB_TRANSACTION_APPLIER.EXE has failed the outbound files have not been created
Here is the issue programming is seeing on one of the hard disks:

SERVER1::Theresa's:::>@UTIL:STAMP_FILE DBTRANS:Sts_Audit_DBO_TRANS_.Dat
%RENAME-E-OPENOUT, error opening DB$ROOT:[TRANS]STS_AUDIT_DBO_TRANS__17OCT200800450798.DAT; as output
-RMS-E-ENT, ACP enter function failed
-SYSTEM-W-DIRALLOC, allocation failure on directory file
5 REPLIES
odwillia
Frequent Advisor

Re: Jobs failing -SYSTEM-W-DIRALLOC

Some additional info:

Mopr_HouAVA>edit/read dbtmp:db_nite.log
1 $ SET NOON
*c
(16-OCT-2008 21:49:21.55)$ Exit
(16-OCT-2008 21:49:21.56)$!-------------------------------------------------------------------------------
(16-OCT-2008 21:49:21.56)$! Copy the file into the location needed by the SCN sub process
(16-OCT-2008 21:49:21.56)$! 17-Dec-2007 TAS Removing SCNTRANS: from coping ETC_Trans_File
(16-OCT-2008 21:49:21.56)$! to now copy to STTRANS:
(16-OCT-2008 21:49:21.56)$! Process the ETC_Trans_File so it is alligned correctly for the
(16-OCT-2008 21:49:21.56)$! ST system. This will mirror what occurs to the DB_DP_POST
(16-OCT-2008 21:49:21.56)$! outbound ST file for ST file generation.
(16-OCT-2008 21:49:21.56)$!-------------------------------------------------------------------------------
(16-OCT-2008 21:49:21.56)$! Copy 'ETC_Trans_File'SCNTRANS:DB_'Tag'_SCN_TRANS.DAT - was
(16-OCT-2008 21:49:21.56)$ Copy DBTRANS:ETC_DB_CT_OUTB.Building_ETC_TRANS DBTMP:DB_2008101621492078_ST_TRANS.DAT
%COPY-E-OPENIN, error opening DB$ROOT:[TRANS]ETC_DB_CT_OUTB.BUILDING_ETC_TRANS; as input
-RMS-E-FNF, file not found
%SYSTEM-F-ABORT, abort
%SYSTEM-F-ABORT, abort
PROD_DB job terminated at 16-OCT-2008 21:49:21.57
Accounting information:
Buffered I/O count: 54577 Peak working set size: 62144
Direct I/O count: 145657 Peak virtual size: 257952
Page faults: 113410 Mounted volumes: 0
Charged CPU time: 0 00:01:14.32 Elapsed time: 0 00:12:35.03
[EOB]
labadie_1
Honored Contributor

Re: Jobs failing -SYSTEM-W-DIRALLOC

Hello

You lack contiguous disk space.

You have already posted the same problem
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1265697


Here is what VMS says about it:

$ help/mess diralloc


DIRALLOC, allocation failure on directory file

Facility: SYSTEM, System Services

Explanation: The file system failed to allocate space to increase the
size of a directory file. Because directory files must be
contiguous, this error might be caused by the disk being
full. More likely, there is not enough contiguous space on
the disk for the directory, so the free disk space is being
fragmented.

User Action: Reorganize the free disk space by copying it with the Backup
utility, or restructure your application to use a larger
number of smaller directories.
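The Backup-based reorganization described above would look roughly like the following. An image backup followed by an image restore lays files out contiguously and consolidates free space. The device and save-set names here are placeholders; adjust for your configuration, and do this with the volume dismounted from normal use:

$ BACKUP/IMAGE/VERIFY DKA100: MKA500:DKA100.BCK/SAVE_SET
$ ! Initialize (or use a scratch) target disk, then restore:
$ BACKUP/IMAGE/VERIFY MKA500:DKA100.BCK/SAVE_SET DKA100: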

So it seems your disk is badly fragmented.

You should download DFU from the OpenVMS Freeware collection and check the health of your disks:
http://www.digiater.nl/dfu.html
Then post a
$ mc dfu report 'disk'


Do a

$ sh dev db$root
to see what disk it is, and then post

$ dump/header/block=count=0 'disk':[000000]indexf.sys

The value for "map area words in use" will be interesting. Ideally it should be low, and if it is near 155 it is very bad.
odwillia
Frequent Advisor

Re: Jobs failing -SYSTEM-W-DIRALLOC

I did. You are right. Thanks for your response.
Robert Gezelter
Honored Contributor

Re: Jobs failing -SYSTEM-W-DIRALLOC

Odwillia,

As was noted, directories must be contiguous. However, if the directory has a lot of activity in terms of additions and deletions, the directory may not be making the best use of its space (this happens often with high activity mail accounts on volumes with low amounts of space).

DFU is a freeware package that can reorganize directories on OpenVMS to reduce internal fragmentation (it is available for Alpha and Itanium).
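For example, something along these lines would compress the directory in question (the disk and path are placeholders; check the DFU documentation for the exact syntax of your version):

$ MC DFU DIRECTORY/COMPRESS DB$ROOT:[TRANS]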

Renaming the files to one or more different directories and then renaming them back in collating sequence will also reduce internal fragmentation.
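The rename round-trip could be sketched as follows, assuming a scratch directory [TRANS_HOLD] on the same volume (a rename within a volume moves only the directory entry, not the data):

$ CREATE/DIRECTORY DB$ROOT:[TRANS_HOLD]
$ RENAME DB$ROOT:[TRANS]*.DAT DB$ROOT:[TRANS_HOLD]
$ RENAME DB$ROOT:[TRANS_HOLD]*.DAT DB$ROOT:[TRANS]

Because directory entries are stored in sorted order, the wildcard rename back re-enters the files in collating sequence, rebuilding the directory compactly.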

- Bob Gezelter, http://www.rlgsc.com


Hoff
Honored Contributor

Re: Jobs failing -SYSTEM-W-DIRALLOC

I'd encourage a review for disk fragmentation, large directories, excessively fragmented files, RMS indexed file fragmentation, and so on: a systemic review of the running configuration.

This could then additionally involve more proactive steps, including monitoring and tuning, upgrading systems or disks or memory or other resources, and the application of ECO kits.

OpenVMS can often operate quietly for a very long time, but it does need occasional maintenance; DIRALLOC is evidence that this configuration needs a look.
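As a starting point for such a review, a few stock commands (device name is a placeholder):

$ SHOW DEVICE/FULL DKA100:        ! free blocks, volume status
$ ANALYZE/DISK_STRUCTURE DKA100:  ! check on-disk structure consistency
$ MONITOR DISK                    ! watch I/O rates and queue lengths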