Operating System - OpenVMS

Buffer error when using FTP

 
The Brit
Honored Contributor

Buffer error when using FTP

When trying to FTP an XML file from Windows up to a VMS node (running OpenVMS 8.3 Alpha, TCP/IP Services V5.6), I get the following error:

ftp> put IQI_INV_20081021_092000000.XML

200 PORT command successful.

150 Opening data connection for DSA17:[MISCXML_FTP.IQMETRIX.IN]IQI_INV_20081021_092000000.XML; (xx.xx.xx.xx,4113)

550-RMS WRITE RTB record too large.

550 !UL byte record too large for user's buffer

ftp: 117200 bytes sent in 0.03Seconds 3780.65Kbytes/sec.

If I send the file to another VMS machine (Alpha 7.3-2, TCPware 5.7), I don't have a problem.

The receiving user accounts are identical on both VMS nodes. Is there any system parameter, user quota, or TCP/IP logical/quota I can set to allow a normal transfer to the node which is currently giving the error?

I tried transferring the file in binary, however it wrecked the formatting.

thanks

Dave.
11 REPLIES
Hoff
Honored Contributor

Re: Buffer error when using FTP

In the typical case, this is just how the TCP/IP Services FTP server works with files with long records. (IIRC, there was a discussion of the longest record length allowable under FTP one of the times this matter cropped up.)

As an alternative here, zip the input file, then toss it over via ftp binary, then (obviously) unzip it.

The one thing here that gives me pause around the "standard" explanation is the number of bytes that did get transferred over. Is it possible that the file isn't correctly terminated, or has some sort of a corruption or there is an unusually long record buried in there somewhere?
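Hoff's question about an unusually long record buried in the file can be checked on the source (Windows) side before the transfer. A minimal sketch in Python (the helper name is mine, not from the thread; the file name in the comment comes from the original post):

```python
def longest_record(path):
    """Return (length, line_number) of the longest line in a text file.

    Reads in binary so CR/LF terminators are visible and not translated.
    Anything longer than 32767 bytes cannot be stored as a single RMS
    variable-length record, which is what the 550 error complains about.
    """
    longest, at = 0, 0
    with open(path, "rb") as f:
        for n, line in enumerate(f, start=1):
            length = len(line.rstrip(b"\r\n"))
            if length > longest:
                longest, at = length, n
    return longest, at

# Example usage (file name from the original post):
#   longest_record("IQI_INV_20081021_092000000.XML")
```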
Hein van den Heuvel
Honored Contributor

Re: Buffer error when using FTP

Well, for starters, if it was all one record, then 117200 bytes is too big. Anything over 32K is.

8.3 + TCPIP 5.6 is radically different from 7.3 + TCPware 5.7. Different defaults may well be in place.

As you hint yourself, you may want to check out some of the TCPIP$FTP... logicals to influence those defaults as needed. See: http://h71000.www7.hp.com/doc/83final/6526/6526pro_041.html

I tend to DUMP/BLOCK=COUNT=1 the resulting file to 'see' whether it makes sense and what the line terminator might be (LF, CR-LF). If you need further help, and the data allows it, then maybe you can attach a text file with a dump. Don't bother with a DUMP/RECORD; that assumes a structure which might not be there.
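The same inspection Hein does with DUMP/BLOCK=COUNT=1 can be approximated off-VMS by peeking at the raw bytes. A small sketch (function name is mine, purely illustrative):

```python
def detect_terminator(path, sample=65536):
    """Guess the line terminator from the first `sample` bytes of a file.

    Returns 'CRLF', 'LF', 'CR', or 'none'. CRLF is checked first,
    because a CRLF-terminated file also contains bare LF bytes.
    """
    with open(path, "rb") as f:
        data = f.read(sample)
    if b"\r\n" in data:
        return "CRLF"
    if b"\n" in data:
        return "LF"
    if b"\r" in data:
        return "CR"
    return "none"
```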

>> I tried transferring the file in binary, however it wrecked the formatting.

Typically an XML file should be transferred in ASCII mode, as it sounds like you tried. Make sure you explicitly specify that as a trial.

Failing that, use BINARY mode, and dump as per above. Then proceed to use $ SET FILE/ATTRIBUTE=(RFM=xxx) to make it match what you see; xxx would be STMLF or just STM based on what you see. You may also want to set MRS and/or LRL, or just set those to 0.

Good luck,
Hein.
John Gillings
Honored Contributor

Re: Buffer error when using FTP

Dave,

Check the attributes of the file(s). My guess is your original file (the one getting errors) has an invalid "Longest Record Length" - LRL. It could be 0, or at least less than the actual longest record. The FTP client is therefore working with false information, which breaks its assumptions.

When you send the file to another node, RMS is creating the remote file. Since it sees all the records, it knows the real LRL and sets it correctly in the new copy. Now that FTP has correct information, the transfer is successful.

If you know the correct LRL, you can set it with:

$ SET FILE/ATTRIBUTE=(LRL:value) file

or create a new file with:

$ CONVERT old-file new-file

This should ensure the LRL is correct.

One other possibility (unlikely since I'm guessing the file is effectively human readable text), it's a stream_lf file, and there's a section that has more than 32767 bytes (the RMS maximum record length) between LF characters. Since this is an architectural issue, there's no simple fix. You'd need to find the extra long "record" and somehow break it into smaller pieces.
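Breaking the extra-long "record" into smaller pieces, as John describes, could be done on the source side before the transfer. A hypothetical sketch (the max_len default matches the RMS maximum record length John cites; note that inserting an LF between XML elements is normally harmless whitespace, but inside character data it may change the content's meaning):

```python
def split_long_records(src, dst, max_len=32767):
    """Copy src to dst, breaking any line longer than max_len bytes
    into max_len-sized pieces separated by LF.

    Line terminators are normalized to LF in the output.
    """
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        for line in fin:
            body = line.rstrip(b"\r\n")
            while len(body) > max_len:
                fout.write(body[:max_len] + b"\n")
                body = body[max_len:]
            fout.write(body + b"\n")
```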

There may be alternatives involving setting the file attributes to (say) fixed length, then sending the file in binary. This may then need the other end to fiddle with the attributues to "fix" the file (but then it's not clear what the other end is, or what it's expecting).

As ever when transferring data between different systems, you sometimes need a deeper understanding of the exact format of your data in order to reconcile differences in expectations and assumptions.

[FWIW - opinion follows:] LRL is arguably an obsolete concept - its original purpose was to give code a hint as to what size buffer is needed to process a file. This meant that the application could minimize resource consumption by not having to allocate buffers "big enough for all possibilities".

The idea has a flaw in that a file open for shared write may have longer records written to it after it was first opened by a reader, thus the LRL which was adequate when the file was first opened may be too small to hold newer records. Furthermore, on modern systems where memory is orders of magnitude cheaper than it was when RMS was architected, allocating a 32K buffer is nowhere near as extravagant as it once would have been considered.

From this, it could be considered that we should set the LRL of all files to 32768 and be done with it (as did the DECC RTL circa V6)? Well, on the other hand this is not necessarily a good idea. The classic example where this can hurt is SORT, which uses the LRL to allocate a sort table - one LRL sized entry for each record in the file. When LRL is set higher than the real LRL, SORT performance can be significantly degraded (though it may not be as significant these days with faster CPUs and more available memory).
A crucible of informative mistakes
Wim Van den Wyngaert
Honored Contributor

Re: Buffer error when using FTP

TCPIP$FTP_STREAMLF - if defined, the FTP server and client create files as RMS STREAM_LF files. The default is variable-length files.

Maybe try this logical. What is the longest record in your Windows file? What is used as a line separator?

Wim
The Brit
Honored Contributor

Re: Buffer error when using FTP

Thanks for your replies guys, lots of really useful information and suggestions.

In the case of this file, I resolved the issue by remembering that I had an old, unused share out there that was created when testing Advanced Server about 6 months ago.

Also, if the problem crops up again before I have pinned down a resolution, I have Hoff's "zip it up" suggestion, which I hadn't thought of.

I will be investigating all of the suggestions you made, and I will return with any solution I come across.

Thanks again.

Dave.
Hoff
Honored Contributor

Re: Buffer error when using FTP

Confirm this isn't a case of GIGO, too.

XML usually doesn't have gigantic text records. (I don't know off-hand if there's an architected or recommended longest record, though.)

I have encountered some environments that seemingly go out of their way to generate ill-formed XML.

Fire up one of the various xml verification tools that are around the 'net, or an xml pretty printer, and see if that helps identify the problem. This would be on the source platform; prior to the transfer.

With Mac OS X, Linux or various Unix distros, there are tools baked into the distributions; for this case, tools akin to xmlwf and xmllint are part of most any distribution, and would be obvious choices.
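Where xmlwf or xmllint aren't handy on the source platform, Python's standard library offers a comparable well-formedness check. A minimal sketch (function name is mine):

```python
import xml.etree.ElementTree as ET

def is_well_formed(path):
    """Return True if the file parses as well-formed XML, else False.

    ParseError carries the line/column of the first problem, which
    helps locate a truncated or corrupted section.
    """
    try:
        ET.parse(path)
        return True
    except ET.ParseError as err:
        print(f"not well-formed: {err}")
        return False
```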



Steven Schweda
Honored Contributor

Re: Buffer error when using FTP

> [...] Hoff's "zip it up" suggestion [...]

Zip and UnZip also have options which can be used to get inappropriate CR and/or LF line endings translated, too.

> When trying to FTP an XML file [...]

ASCII or binary? _You_ may know that it's text, but binary may avoid the record-length trouble. You'd probably need to do some SET FILE /ATTRIBUTES stuff to make it look like text again on the VMS side, however.
The Brit
Honored Contributor

Re: Buffer error when using FTP

Thanks for the help guys,

I guess my final question is;

Why would the transfer work when transferring from

Windows --> VMS(7.3-2) host running TCPWARE V5.7

but fail when transferring from

Windows --> VMS (8.3) host running TCPIP Services V5.6

Dave.
Steven Schweda
Honored Contributor

Re: Buffer error when using FTP

> Why would the transfer [...]

Well, duh. Maybe because the software's different?

> ASCII or binary?

Still wondering...