Navipa
Frequent Advisor

Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

Hi,
I have Rdb/VMS V4.1-0 on CharonVAX with OpenVMS 6.1. Our DB has 4 uniform and 3 mixed areas, spread across 4 vdisks of 4 GB each. We replaced one of the 4GB vdisks with an 8GB one, and the application and DB have run fine for more than a year without any issue. Since Stromasys said that this version might support vdisks larger than 8GB, I created vdisks of several sizes above 8GB: 8.5, 9, 10, 12 & 16GB.

Backup and EXPORT work fine with vdisks of any size, including sizes larger than 8GB. But the IMPORT fails with the following error message...
"IMPORTing table tsty
IMPORTing table tloo
IMPORTing table tccd
%SQL-F-IOERROR, an unexpected I/O error occurred
%RMS-W-RTB, 8224 byte record too large for user's buffer
%SQL-F-RESABORT, terminating the IMPORT operation."

I have attached a txt file with the details of the IMPORT command I used, the import job failure log, and the "$ DIR/FULL fname" output, plus another txt file with the UAF and SYSGEN parameter details.

I can't EXPORT our DB onto an 8GB vdisk, as our DB is larger than 8GB, but I have tried different UAF values and different vdisk sizes, all larger than 8GB.

Is there any limitation on using a vdisk larger than 8GB for IMPORT? I wonder how Backup and EXPORT can work with vdisks of any size larger than 8GB while IMPORT cannot.

Any ideas or suggestions?
Thanks, Navipa

Hoff
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

The metadata associated with the sequential export-import files here is probably corrupt, or the files themselves are corrupt.   This is typical of RMS files being moved across a network.

While it might be possible to reset attributes, it's also possible the files have gotten hosed during the transfer. 
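
One quick way to check what RMS thinks of a transferred file (a sketch; TELDB.RBR is a hypothetical name standing in for the export file):

$ DIRECTORY/FULL TELDB.RBR
$ ! Check the "Record format:" line. An export file that arrives as
$ ! Stream_LF, or with the wrong record size, was damaged in transit.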

To protect the files during transfer, get a current version of zip — zip version 3.0, no earlier — and unzip — unzip version 6.0, no earlier — and then zip the source files with the "-V" option.   Quote the "-V" to prevent the case from changing.   Re-transfer the files.   Try again.  
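
For example (a sketch; TELDB.RBR is a hypothetical export file name, and zip and unzip are assumed to be defined as foreign commands):

$ zip "-V" TELDB.ZIP TELDB.RBR    ! "-V" saves the VMS/RMS file attributes
$ ! ... transfer TELDB.ZIP in binary mode ...
$ unzip TELDB.ZIP                 ! restores the file with its original attributes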

Older versions of zip and unzip do not support archive sizes past about two gibibytes (2 GiB).   You're way past that.

Some zip background

Here's the zip and unzip source code.

I don't know of pre-built versions of these tools off-hand — the various places these tools have been posted around the 'net are uniformly down-revision, sometimes badly so.   Some of those are good enough to get the current versions of zip and unzip unpacked, and then built, though.

(The versions of these and many of the other tools posted at the HPE site are archaic, and the HPE folks are unresponsive.   The version of zip that HPE uses for kitting patches is positively ancient, with more than a few known problems.)

You can also use BACKUP to protect the files, though that will get corrupted.   It's feasible to repair the most common BACKUP saveset corruptions.

It's also possible that there's some other problem here with this fossil version of Oracle Rdb, but what you're seemingly concerned about — vdisks, quotas, etc. — is not likely relevant here.

Steven Schweda
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

> To protect the files during transfer, get a current version of zip
> [...]

   Do we have any evidence that ZIP was used to package/move any files
here?

> I don't know of pre-built versions of these tools off-hand [...]

   Mr. Goatley has generally built some for major releases.  What's
available should be at:

      ftp://ftp.info-zip.org/pub/infozip/vms/

   For the more adventuresome, newer (beta) source kits should be
available at:

      ftp://ftp.info-zip.org/pub/infozip/beta/

And the Info-ZIP team has been known to build special kits on request
for folks who lack a compiler.

> You can also use BACKUP to protect the files, though that will get
> corrupted. [...]

   Well, _can_ get corrupted.

   In any case, it might help to get a more complete description of
exactly which data were moved how.  I'd expect any all-VMS method
(BACKUP, COPY) to cause no trouble, but any non-VMS method involving a
network opens up more potential for trouble.

abrsvc
Respected Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

Maybe it's me, but I don't see any Zip or UnZip involved here.  I see BACKUP and EXPORT being used to get the files to the larger disk.  This should work as expected, as there is nothing to "change" the data outside of Rdb.  I would expect these utilities to work regardless of the disk size, unless there is a limitation in the OS.

Can you better describe what was actually attempted?  Exact commands would be best.   Maybe the sequence used was in error.

Dan

Hoff
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

The suggested addition of the zip command is to protect the file and its metadata during whatever sort of network transfer was likely used here.   It's not the cause.  

Rule of thumb with BACKUP: savesets get corrupted by ftp until and unless they don't, and I'd prefer to describe how to fix the corruptions rather than to let inexperienced folks run into it fresh and have to ask for more help. 

The pre-built VAX unzip posted at Process is not current.  Based on strings and grep, it's 5.52.   Good enough to unpack the current unzip source archive version (and then build that), but that 5.52 version is not compatible with larger zip archives.    For grins, I checked the Alpha unzip image posted there, and it too is 5.52; old.   (I haven't reported that to Hunter, having just had somebody bump into that yesterday.)
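
If you have one of these images on hand, a quick way to check its version (a sketch; assumes UNZIP.EXE is in your current directory) is:

$ unzip :== $SYS$DISK:[]UNZIP.EXE   ! define a foreign command for the image
$ unzip "-v"                        ! the first line of output reports the UnZip version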

Steven Schweda
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

> The pre-built VAX unzip posted at Process is not current.

   True, but...

> Based on strings and grep, it's 5.52. Good enough to unpack the
> current unzip source archive version (and then build that), but that
> 5.52 version is not compatible with larger zip archives.

   What it's good enough for is to unpack the unz60xv.zip and
zip30xv.zip kits in the same directory, which should contain more modern
pre-built executables (with large-file support for the Alpha and IA64
executables).  The "readme" there is out-of-date, and doesn't make this
as clear as it could/should.

   But in all this Zip+UnZip detail, I forgot that this is a VAX
(-equivalent) system, and the Info-ZIP programs (all versions) lack
large-file support on VAX (because the C RTL lacks it).  So Zip and UnZip
may be of little use on VAX for files of this size.  As usual,
everything's complicated.

Navipa
Frequent Advisor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

Thanks Dan,

I have modified my posting, and it now has the information you asked for. I have attached two txt documents with the details of the DB IMPORT command and the error logs.

I was using an 8GB vdisk to take the Rdb backup and to do the DB reclaim (DB Export and Import), and it was working fine. But my DB has now grown beyond 8GB, so I created a new vdisk of 9GB (also tested with 10 and 12 GB) to do the same BACKUP and DB reclaim. With the new, larger vdisk, Backup works but DB Restore fails, and DB Export works but DB Import fails after importing a few of the tables. The errors are given below...

DB IMPORT Error
IMPORTing relation tsty
IMPORTing relation tloo
IMPORTing relation tccd
%RDO-E-IOERROR, an unexpected I/O error occurred
%RMS-W-RTB, 8224 byte record too large for user's buffer
%RDO-F-RESABORT, terminating the IMPORT operation

DB Restore Error
%RMU-I-LOGRESSST, restored storage area DBA_01:[PROD]UNIFORM_AREA_01.RDA;1
%RMU-I-LOGRESSST, restored storage area DBA_02:[PROD]UNIFORM_AREA_02.RDA;1
%RMU-I-LOGRESSST, restored storage area DBA_03:[PROD]MIXED_AREA_03.RDA;1
%RMU-I-LOGRESSST, restored storage area DBA_04:[PROD]MIXED_AREA_04.RDA;1
%RMU-E-READERR, error reading $DISK1:[BACKUP]TELDB_RDB.RBF;1
-RMU-E-BLOCKCRC, software block CRC error
%RMU-W-BADPTLARE, invalid larea for uniform format data page 380468
%RMU-W-BADPTLAR2, SPAM larea_dbid: 73, page larea_dbid: 19529
%RMU-E-INVRECTYP, invalid record type in backup file
%RMU-F-FATALERR, fatal error on RESTORE

Hoff
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

Get formal help.  I know you won't though, and I know that the managers involved here have forced these constraints onto this configuration — it's a fossil version of OpenVMS VAX, it's VAX, it's a fossil version of Oracle Rdb, little or no staff support and no escalation support, etc — and I'd suspect that your managers have made their project constraints into your problem.   I also know this migration has been going on for vastly longer than it should have — getting a user-mode application from a real VAX to an emulator takes a few days for the basic data transfer, barring emulator bugs or device-specific dependencies and some logical names, or some really slow data links.   So...  

Figure out how to protect the files during transfer.   If zip won't do large files, then use BACKUP.    Expect to have to reset the file attributes when the BACKUP arrives on the destination system.   There are other potential transfer paths — DECnet being one — but those can require more knowledge and potentially more troubleshooting, and I'd tend to avoid that here.

Much like your focus on quotas — in the absence of quota-specific errors — it ain't the Rdb import that's the problem here.   Yes, that import is blowing up.   But the problem is almost certainly upstream from Rdb.  

Assuming the commands used to export the data are valid, how are the exports being transferred from the source host to the destination host?  Whatever you're doing here for the file transfer is what is screwing up the import.   Protect The RMS Files containing the database export.   Use a BACKUP saveset to do that.   If all the export files are in the directory dev:[dir], then use the command:

BACKUP dev:[dir]*.* FOO.BCK/SAVE

Transfer FOO.BCK to the emulator, using whatever corrupting path is in use now.

Reset the attributes on the BACKUP saveset file per the steps and tools in the previously-linked article.   Then restore the saveset:

BACKUP FOO.BCK/SAVE otherdev:[otherdir]

Then populate the database.
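
For the attribute reset, newer OpenVMS versions can do it directly with SET FILE (a sketch; this assumes the saveset was written with BACKUP's default block size of 32256, and on a system as old as V6.1 you may need a small program or one of the usual fix-saveset tools instead):

$ SET FILE/ATTRIBUTE=(RFM:FIX, LRL:32256) FOO.BCK
$ ! RFM:FIX restores the fixed-length record format BACKUP wrote;
$ ! the LRL value must match the saveset's /BLOCK_SIZE.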

Hein van den Heuvel
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

>> Backup and EXPORT work fine with any size of vdisks, including the size larger than 8GB. But the IMPORT

How can you say that? I think there are better than 50% odds that the corruption was established during the export: creating the backup/export failed, and even failed to report that it failed, because it did not realize it.

The import tool worked fine. It ran into a corrupted RMS record and reported that, without silently corrupting the database. So the import program worked absolutely fine, but the import operation had to fail (because the file was bad). Can we agree on that?

That number 8224 (decimal; 0x2020 hex) is NOT a random number. Come on, you guys, you've seen it before. 8224 is the decimal value of a string of two spaces. So RMS ran into pure data where it expected a record length.
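
You can verify that from DCL directly: two ASCII spaces (0x20 0x20), read as one little-endian 16-bit word, give 0x2020.

$ WRITE SYS$OUTPUT %X2020   ! prints the decimal value of hex 2020
8224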

Both the import and the export tool are probably fine. Why? Unless Rdb at that time used its own mimic of RMS file writes, the Rdb tools would not give one hoot about the file size. They just ask RMS to write (PUT) and read (GET) records, and RMS had better do the right thing. That may have failed. In fact it must have failed, because the corruption is in bytes which the Rdb tools cannot touch. As long as we are fingerpointing, RMS is unlikely to be in the wrong as well. It is just the messenger, and likely fell victim to the file system.

The next step in the investigation would be to find out which record was bad, or rather, which was the last good one. You can try $ ANALYZE/RMS_FILE/CHECK on the file, or write a quick program to look for a record longer than MRS (or LRL) = 1024.
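
For example (a sketch; TELDB.RBR stands in for whatever your export file is actually called):

$ ANALYZE/RMS_FILE/CHECK TELDB.RBR
$ ! A healthy sequential file checks clean; a damaged one should
$ ! report an error at or near the first bad record.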

Is there any network or ZIP involved here? Or are the replies mentioning that just noise?

Which OpenVMS version? 6.1? 5.5-2? You may well have run into a VMS bug @ 8GB.

[edit: I now see VAX VMS 6.1 in first line of main topic... I somehow missed that. Hein]

This is all wild speculation, but if this problem started to occur repeatably once disks were larger than 8GB, your only workaround MIGHT be to use 7.99GB disks bound together in a volume set to give 16GB of space.
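
Something like this, roughly (a sketch only; device names and labels are made up):

$ INITIALIZE DKA100: DBVOL1
$ INITIALIZE DKA200: DBVOL2
$ MOUNT/SYSTEM/BIND=DBSET DKA100:,DKA200: DBVOL1,DBVOL2
$ ! The two members then appear as one ~16GB volume set, DBSET.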

Just thinking out loud,

Hein.

Hein van den Heuvel
Honored Contributor

Re: Rdb SQL IMPORT fails with " %RMS-W-RTB, 8224 byte record too large for user's buffer"

For yucks, I adapted a little program I had to (RMS-)read a file and trap records which are too long.

It reports the number of records successfully read, and the RFA (VBN, byte offset) of the last record read.

You may want to try it against your file. Grab the attachment, save it to OpenVMS as RTB.MAR, then compile and link.

Once it reports the RTB error, use DUMP/RECORD=(COUNT=2,START=xxx), where xxx is the reported record count, to see what's going on according to RMS. Or use DUMP/BLOCK=(COUNT=2,START=yyy), where yyy is the VBN part of the RFA, to see the gory details.

In the example below, I took the program source, which has a line of 106 bytes, and changed the file attribute to a maximum record size of 100, which the program then uses as its user buffer size, causing a failure reading the 106-byte record. In the example, the dump just shows that all is normal... as it would have been, had I not mucked with the MRS (or LRL).

$ macro rtb
$ link rtb
$ mcr sys$login:rtb tmp.tmp
112 Records, Last=(7,20), 2839 Bytes. Avg=25, LRL=106 @ 19, SRL=0 @ 1
%SYSTEM-S-NORMAL, normal successful completion
$ set file/att=mrs=100 tmp.tmp  ! Force an incorrect attribute on the file.
$ mcr sys$login:rtb tmp.tmp
18 Records, Last=(2,96), 567 Bytes. Avg=31, LRL=68 @ 16, SRL=0 @ 1
%RMS-W-RTB, 106 byte record too large for user's buffer
$ dump/rec=(start=18,count=2) tmp.tmp
Dump of file TMP.TMP on  8-MAR-2016 00:19:11.33
File ID (60411,8,0)   End of file block 7 / Allocated 9

Record number 18 (00000012), 0 (0000) bytes, RFA(0002,0000,0060)

Record number 19 (00000013), 106 (006A) bytes, RFA(0002,0000,0062)

 20202020 3A4C4F52 544E4F43 5F4F4146 FAO_CONTROL:     000000
 52204C55 21222020 44494353 412E2020   .ASCID  "!UL R 000010
 5521283D 7473614C 202C7364 726F6365 ecords, Last=(!U 000020
 74794220 51554021 202C2957 55212C4C L,!UW), !@UQ Byt 000030
 4C524C20 2C4C5521 3D677641 202E7365 es. Avg=!UL, LRL 000040
 3D4C5253 202C4C55 21204020 4C55213D =!UL @ !UL, SRL= 000050
              224C 55212040 204C5521 !UL @ !UL"...... 000060