Operating System - OpenVMS

 
Jakes Bruwer
Advisor

10428 byte record limit

We are receiving a data file from a Windows environment via FTP. I checked the record length and it is as follows:
$ BlockSize = 0
$ BBH_L_BLOCKSIZE = %x28*8
$ BlockSize = F$CVUI(BBH_L_BLOCKSIZE, 32, Record)
$ sh sym blocksize
  BLOCKSIZE = 1735552814   Hex = 67726F2E   Octal = 14734467456

When I try to read this file it gives error:

open/read infil filename.dat;
read infil rec
%RMS-W-RTB, 10428 byte record too large for user's buffer

Is there any way to get this file into a format that is readable to VMS?
Can't attach the file, but here is the dir/full output:
Linkcount: 1
File organization: Sequential
Shelved state: Online
Caching attribute: Writethrough
File attributes: Allocation: 36, Extend: 0, Global buffer count: 0, No version limit
Record format: Variable length, maximum 0 bytes, longest 10428 bytes
Record attributes: Carriage return carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List: None
Client attributes: None

Thanks
Bojan Nemec
Honored Contributor

Re: 10428 byte record limit

Jakes,

From the HP OpenVMS DCL Dictionary:

"The maximum size of any record that can be read in a single READ command is 2048 bytes."

http://h71000.www7.hp.com/doc/83final/9996/9996pro_160.html

So you cannot read this file with DCL. It can, however, be read from just about any other language, because the RMS record-length limit for sequential files is 32767 bytes.

Bojan
Shriniketan Bhagwat
Trusted Contributor

Re: 10428 byte record limit

Hi,

A similar topic was discussed in the thread below; it may be helpful.

http://h30499.www3.hp.com/t5/Languages-and-Scripting/RMS-W-RTB-when-execute-DCL-script/m-p/5174778#M11800

Regards,
Ketan

H.Becker
Honored Contributor

Re: 10428 byte record limit

Which version of VMS? Wasn't there an enhancement to DCL for longer symbol values? On a current version I can read records of up to 8192 bytes. However, there are some limitations on what I can do with symbols holding that maximum number of bytes; for example, f$length doesn't work. But files with records up to 8000 bytes seem to work without a problem.

I suspect that perl can handle such files. Other than that, depending on what you want to do with the records, a small program in your favorite programming language seems to be a good approach.
Hoff
Honored Contributor

Re: 10428 byte record limit

Reading rather much into what is going on here, I will assume this is a configuration where a VMS file is transferred to a Windows box, and then back to a VMS box. If this assumed sequence is the case here, then this can be normal behavior.

The use of the BBH_L_BLOCKSIZE variable name implies this might be a VMS BACKUP saveset. If this is a BACKUP saveset, then this is expected behavior, though I'd expect the file-attributes tool you're looking at to reset the RMS metadata file attributes.

If the data file is +not+ a VMS BACKUP saveset, then whatever you're doing with that DCL calculation is inapplicable here. That sequence is very specific to BACKUP.

If this file is +not+ originating on Windows, then it would appear that the file itself is in an unexpected format, or the file metadata is not established correctly, or both.

DCL itself is not designed to read large data records, and also tends to have issues with binary-format records.

In general, the best solution can be to zip-protect your files, as the RMS metadata will otherwise get stripped off during their trip through Microsoft Windows.

Specifically:

zip "-V" ziparchive file

Here is a more detailed write-up:

http://labs.hoffmanlabs.com/node/473

The zip and unzip tools are available from various sources, including from the Process.com OpenVMS FTP download site. (The HP versions of zip and unzip are traditionally very stale. Avoid those.)
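
Roughly, the round trip looks like this; the archive name is a placeholder, and this assumes zip and unzip are already defined as foreign commands on the VMS side:

$! Assumes foreign commands along the lines of:  zip :== $device:[dir]zip.exe
$ zip "-V" saveme.zip filename.dat      ! "-V" stores the VMS/RMS file attributes in the archive
$!  ... move saveme.zip through Windows (in binary mode) and back ...
$ unzip saveme.zip                      ! restores filename.dat with its attributes intact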

If this is not a VMS file, then we're going to need to have a look at some file details, the command sequence, and the origin of this file.

If I've guessed incorrectly here (which is entirely possible), please provide some background on the file, possibly with a dump of a block or three of its contents; that 10K record size is pretty big, so you might have to attach a zip archive of it, or post it somewhere we can download from. Please also describe the sequence of commands, the Windows and OpenVMS versions involved, and (in addition to the specific solution you're after) the general requirements you're looking to address here.
John Gillings
Honored Contributor

Re: 10428 byte record limit

Jakes,
To your specific question...

>Is there any way to get this file into a
>format that is readable to VMS?

Sure, there are plenty of ways. For example, you could change the record format from variable to fixed:

$ max = 512                  ! illustrative chunk size; any size DCL can READ will do
$ SET FILE/ATTR=(RFM:FIX,MRS:'max',LRL:'max') filename.dat

You can now read the file as fixed sized chunks of 'max' bytes, BUT you have to deblock the records yourself, interpreting the record length fields and keeping track of the start of the next record.
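
As a rough sketch (chunk size and file name are illustrative), picking the length word of the first record out of the first chunk would look something like this:

$ OPEN/READ infil filename.dat
$ READ infil chunk                    ! one fixed 'max'-byte chunk, binary data and all
$ reclen = F$CVUI(0, 16, chunk)       ! leading 2-byte count field of the first record
$ WRITE SYS$OUTPUT "First record is ''reclen' bytes"
$ CLOSE infil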

Since DCL has some rather severe limits on symbol and command line length, even on the latest versions, you won't be able to fit a 10K record into a single symbol. The result will probably be horribly complex code, dealing with fragmented records. In theory you could use the reverse of the above trick to write a fixed length file and flip it back to variable, but I think that's just way too ugly to contemplate.

DCL is a workable language for some tasks, but for other tasks things get WAY too complex and fragile. Once record lengths get this large, you really need to look for another language.
A crucible of informative mistakes
Hein van den Heuvel
Honored Contributor

Re: 10428 byte record limit

Considering Jakes is from South Africa he may appreciate the Dutch saying seemingly applicable to this situation: "Hij heeft de klok horen luiden maar weet niet waar de klepel hangt" ("He has heard the bell toll, but doesn't know where the clapper hangs").

I'm with Hoff... this looks like part of an exercise to determine the block size of an OpenVMS BACKUP saveset, which is stored as a 32-bit value starting at bit %x28*8 (byte offset %x28), hence the symbolic name used.

Years ago I wrote a little DCL script, included below my signature, that does this with much the same code. Try that instead?

Mind you, on OpenVMS 8.3 you can just use BACKUP/REPAIR to fix this.

Also, as presented, this fix is likely to fail, because the file is reported as having variable-length records, which typically means it was corrupted during the FTP transfer.

Please re-FTP in BINARY mode, which will result in fixed-length 512-byte records that DCL will happily read.
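
Something like this; the node name is made up, and the exact client commands depend on which TCP/IP stack and FTP client you are using:

$ FTP WINBOX                 ! made-up node name
FTP> binary                  ! image mode: no CR/LF record conversion
FTP> get filename.dat
FTP> exit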

Hope this helps,
Hein van den Heuvel

$ typ FIXSAVESET.COM
$IF p1.EQS."" THEN INQUIRE p1 "Save set file name ?"
$IF f$search(p1).EQS."" THEN EXIT
$WRITE SYS$OUTPUT " RFM was ", F$FILE(p1,"RFM"), ", MRS = ", -
F$FILE(p1,"MRS"), ", LRL = ", F$FILE(p1,"LRL"), "."
$SET FILE /ATTR=(RFM=FIX, MRS=44, LRL=44) 'p1 ! Easier for DCL
$OPEN/READ file 'p1
$READ file record                               ! first 44-byte chunk: start of the saveset block header
$CLOSE file
$mrs = F$CVSI(40*8,32,record)                   ! BBH_L_BLOCKSIZE: 32 bits at byte offset 40 (%x28)
$WRITE SYS$OUTPUT "Setting blocksize to: ",MRS
$SET FILE /ATTR=(RFM=FIX, MRS='mrs', LRL='mrs') 'p1
Hein van den Heuvel
Honored Contributor

Re: 10428 byte record limit

Jakes,

If the received file is NOT a backup save set, then kindly try to explain why you checked 'the record length' the way you did.
Why that offset?

How did the symbol 'Record' get loaded with data?

What is its relation to filename.dat?

How is the BlockSize symbol subsequently used?

And please attach a .TXT file with the output of DUMP/BLOCK=COUNT=1 for the suspect file, filename.dat?

You may want to compare DUMP output for this file and a re-transfer in binary mode.
Ditto for DUMP/BLOCK=(START=20, COUNT=1)

Good luck,
Hein

Jakes Bruwer
Advisor

Re: 10428 byte record limit

Hi Guys,
Thanks for your replies to my question.
The file that I'm talking about is not a backup saveset. I took an extract from a command procedure which fixes corrupt savesets to determine the block size. I now realize that it is not relevant here and is probably calculated incorrectly. The file is actually an XML output file generated by a Windows-based application.
What I did now was to use the following command to get the file into fixed-length records, therefore making it more manageable: Set File/Attributes=(RFM:FIX,MRS:2048,LRL=2048,ORG=SEQ,RAT=NONE)
The file can now be typed or read from an application. Our developer seems to be OK with this.
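
For what it's worth, with RFM:FIX and MRS/LRL of 2048, a plain DCL read loop (names are illustrative) stays within DCL's single-READ limit:

$ OPEN/READ infil filename.dat
$ loop:
$    READ/END_OF_FILE=done infil rec   ! each READ returns one 2048-byte chunk
$!   ... process the chunk in symbol "rec" ...
$    GOTO loop
$ done:
$ CLOSE infil
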
I have been working with OpenVMS for many years now, and truly think the world of it. There must be a good reason for it having this type of limitation on the readable record length.

Thanks again for your input.
Hein van den Heuvel
Honored Contributor

Re: 10428 byte record limit

Ah! Thanks for that clarification.
Too bad you did not include a DUMP output in the reply.

You indicate that with the altered attributes you can now type the file. Does it look like one ongoing stream of bytes, or are there newlines in place? If you can see reasonable line breaks, not just wraps, then there may be embedded CR-LFs in the data (check for 0A 0D in the hex part of the dump). If so, you can fix the file better with SET FILE/ATTR=RFM=STM.
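
Something along these lines (file name illustrative):

$ DUMP/BLOCKS=(COUNT=1) filename.dat     ! look for 0A0D pairs in the hex columns
$ SET FILE/ATTRIBUTES=(RFM=STM) filename.dat
$ TYPE filename.dat                      ! records now end at the embedded CR-LFs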

Now, the record size limitation is a DCL artifact, not a general VMS restriction.
DCL 'lives' in P1 space and is supposed to restrict its memory usage. Apparently the record buffer it allocates is less than 10428 and more than 2048 bytes, and it does not dynamically reallocate as needed (no standard VMS tool does :-( ).

You'll find that non-DCL OpenVMS programs (C, Cobol, Perl, whatever) are likely to read the file just fine... if it truly has variable-length records of up to 10428 bytes.

The real RMS record-length limit is 32767 bytes, and larger records need to be chopped up as you did.

The RMS record-length limit is due to the record sizes being stored as a 16-bit signed integer in variable-length record files.

Too bad, really... only -1 ended up as a special value, and stream files don't have a count field, so the limit could easily be raised to 65K - 2. Oh well :-(

Regards,
Hein

[There was a thought of using negative record sizes as 'deleted' records, but it was never done that way. Deleted records (only created by RU journalling) are implemented using -1 to skip to the next 512-byte block, so that the following record starts on a 512-byte block boundary. Clumsy if you ask me.]