09-27-2005 05:33 AM
%COB-F-FILE_LOCKED, file is locked by another access stream
The process does not fail every run, only once in a while, and once the production run is stopped and the data files (including the offending locked file) are restored from backup, the batch process proceeds normally. The log of the process where this occurs looks like this:
$ ASSIGN FR$DATA:FSFILE.DAT FSFILE
$ ASSIGN FR$DATA:FTFILE.DAT FTFILE
$DIR/FULL/SEC $2$DUA41:[SYNFBD.SYNFBDDAT]FSFILE.DAT
Directory $2$DUA41:[SYNFBD.SYNFBDDAT]
FSFILE.DAT;298 File ID: (77,26,0)
Size: 374934/374934 Owner: [0703PC,D7PCPCM]
Created: 25-JUN-2000 02:14:02.39
Revised: 23-SEP-2005 17:50:26.49 (37585)
Expires:
Backup: 23-SEP-2005 22:21:16.06
Effective:
Recording:
File organization: Indexed, Prolog: 3, Using 1 key
Shelved state: Online
Caching attribute: Writethrough
File attributes: Allocation: 374934, Extend: 0, Maximum bucket size: 2, Global buffer count: 0, Version limit: 3
Record format: Fixed length 312 byte records
Record attributes: Carriage return carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWE, Owner:RWED, Group:RWED, World:
Access Cntrl List: (IDENTIFIER=[0703TN,D7TNSYN],ACCESS=READ+EXECUTE)
Client attributes: None
Total of 1 file, 374934/374934 blocks.
$ANALYZE/DISK_STRUCTURE/REPAIR $2$DUA41:
Analyze/Disk_Structure/Repair for _$2$DUA41: started on 23-SEP-2005 23:09:17.47
%ANALDISK-I-OPENQUOTA, error opening QUOTA.SYS
-SYSTEM-W-NOSUCHFILE, no such file
$SHOW USE/FULL/NODE
OpenVMS User Processes at 23-SEP-2005 23:09:20.42
Total number of users = 2, number of processes = 3
Username Node Process Name PID Terminal
D7PCPCM TOA01 JOB_SCHD_OPER00 25037245 NTY2675: ([10.21.6.4])
D7PCPCM TOA01 NFBD010_266A036 25036A8C (Batch)
JPAUL TOA01 JIM 25037443 NTY2673: ([10.21.6.4])
$ RUN FR$EXE:FBD010
I199999YYB199999YY
%COB-F-FILE_LOCKED, file $2$DUA41:[SYNFBD.SYNFBDDAT]FSFILE.DAT; is locked by another access stream
-RMS-E-FLK, file currently locked by another user
%TRACE-F-TRACEBACK, symbolic stack dump follows
image module routine line rel PC abs PC
DEC$COBRTL 0 0000000000015C34 000000007C1E3C34
DEC$COBRTL 0 00000000000152C0 000000007C1E32C0
DEC$COBRTL 0 000000000000D9D8 000000007C1DB9D8
FR$DBMS_IO FSFSIO FSFSIO 381 00000000000008B0 00000000000C6A70
FBD010 FBD010 FBD010 13819 000000000000C074 000000000006C074
FBD010 0 000000000007F6F0 000000000008F6F0
0 FFFFFFFF8B1BB3F4 FFFFFFFF8B1BB3F4
%DCL-W-SKPDAT, image data (records not beginning with "$") ignored
$JOB_ERRORED:
This is the first time in the production run that the FSFILE is accessed.
I've looked through the COBOL source for the executable, and there are no COBOL statements that explicitly lock or release the file, so it's not that.
We're also using CONNX v8.9SP1 during non-production hours to gather data from the file, but I haven't heard of any problems of this sort arising from that program.
I've seen a number of bizarre things happen on the DS20, but I can't imagine that there's any persistent memory leak or anything like that.
As I said, this doesn't happen every night during production runs; it might happen once in a while, and it goes away when the files are restored from backup.
Thanks if you can help!
Guy Noce
Business Services Engineer
Towson University
gnoce@towson.edu
09-27-2005 06:27 AM
Re: %COB-F-FILE_LOCKED, file is locked by another access stream
I'm 99.99% sure that COBOL is reporting the honest truth. That is, when it was time to access the file (for write), RMS found there was a sharing conflict. Another process, maybe even the same process, must have had the file open without allowing shared access. Maybe that lasted only a second, but it was the wrong second.
Do you ever have a defragger running while this job runs? A backup stream?
Sometimes you can use AUDITING to record the file accesses and draw conclusions.
Here is a thought... how about pre-opening the file in the batch job, allowing writers?
If that fails, then don't bother starting the COBOL program.
Something like:
$ASSIGN...
:
$OPEN/READ/SHARE=WRITE/ERROR=fsfile_problem my_fsfile FSFILE
$RUN my_cobol_program
$CLOSE my_fsfile
:
:
$fsfile_problem:
$SHOW DEV/FILE FR$DATA ! See if we can catch the culprit!
$SHOW SYSTEM
:
Btw... I have never seen a 170MB indexed file where a 2 block bucket size was optimal.
This program may be suffering from excessive, avoidable I/Os. Of course the 2-block bucket size could be optimal, but it could also be a sign of 'neglect' resulting in performance problems, for example due to the index being 4 or more levels deep. Please consider a CONVERT with an optimized FDL, as sketched below.
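A minimal sketch of that tune-and-convert sequence, assuming the FR$DATA:FSFILE.DAT file spec from the log and a hypothetical FSFILE.FDL work file; this is the generic ANALYZE/EDIT/CONVERT pattern rather than anything specific to this application, and it should be run while the file is closed:
$ ANALYZE/RMS_FILE/FDL/OUTPUT=FSFILE.FDL FR$DATA:FSFILE.DAT                 ! capture current structure and statistics
$ EDIT/FDL/ANALYSIS=FSFILE.FDL/NOINTERACTIVE/SCRIPT=OPTIMIZE FSFILE.FDL     ! let the FDL editor choose bucket sizes, areas, etc.
$ CONVERT/FDL=FSFILE.FDL/STATISTICS FR$DATA:FSFILE.DAT FR$DATA:FSFILE.DAT   ! rebuild the indexed file with the new attributes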
Regards,
Hein.
09-27-2005 09:04 AM
Re: %COB-F-FILE_LOCKED, file is locked by another access stream
Put an audit ACE on the file:
(AUDIT=SECURITY,ACCESS=READ+WRITE+SUCCESS)
This will generate an audit message each time the file is opened, telling you which process opened it, at what time, and from which image. A rough sketch of the commands involved follows.
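A minimal DCL sketch, assuming the file spec from the original post; SET AUDIT requires privilege, and ACL-triggered auditing must be enabled on the system for the ACE to produce messages (verify the keywords against HELP SET AUDIT):
$ SET AUDIT/AUDIT/ENABLE=ACL                               ! enable ACL-triggered audit events
$ SET SECURITY/ACL=(AUDIT=SECURITY,ACCESS=READ+WRITE+SUCCESS) -
  $2$DUA41:[SYNFBD.SYNFBDDAT]FSFILE.DAT
$ SHOW SECURITY $2$DUA41:[SYNFBD.SYNFBDDAT]FSFILE.DAT      ! confirm the audit ACE is in place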
You could also code your COBOL program to handle an error on the OPEN statement, perhaps with a retry loop if the open fails. If you take that approach, make sure the retry loop has a sanity check so it doesn't run forever, and also log a message (with a timestamp) somewhere so you know when it has happened.
09-27-2005 06:25 PM
Re: %COB-F-FILE_LOCKED, file is locked by another access stream
What version of COBOL are you using?
How do you open the file?
VAX COBOL, without an explicit declaration, tries to open the file in exclusive mode, so two streams can't open the same file.
Antonio Vigliotti
09-28-2005 10:38 AM
Re: %COB-F-FILE_LOCKED, file is locked by another access stream
When the file is locked, you should be able to do a SHOW DEVICE/FILES on the disk to determine the PID of the process that has it open.
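A quick sketch of that, using the disk name from the log; the PIPE-through-SEARCH filter assumes OpenVMS V7.1 or later, and the PID is just the batch job's PID from the earlier log, used as a placeholder:
$ PIPE SHOW DEVICE/FILES $2$DUA41: | SEARCH SYS$PIPE FSFILE   ! show only the entries for FSFILE.DAT
$ SHOW PROCESS/ID=25036A8C                                    ! then examine the process it reports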
09-28-2005 08:40 PM
Re: %COB-F-FILE_LOCKED, file is locked by another access stream
Purely Personal Opinion