
Re: delete file, very very slow

 
Jan van den Ende
Honored Contributor

Re: delete file, very very slow

Ian,

My vote added!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
labadie_1
Honored Contributor

Re: delete file, very very slow

Good idea, Ian! I have voted too.
Dave Gudewicz
Valued Contributor

Re: delete file, very very slow

Another vote posted.

Dave...
Scc_3
Advisor

Re: delete file, very very slow

Hello All,
As a matter of fact, deleting the files is not a very big issue here to start with.
But about a year ago they started changing things around and keeping so many files that I would usually have deleted ahead of time, and I have not had the time to do it, since we have had a few bad problems over the last several months: a bad disk, power problems, NCP... etc.
I think I am finally getting back on the right track now.

I just have to write a program to convert a few files. They are so big, almost 63000 blocks, and I don't know why it takes up so much room. I wrote a conversion program that just copies every single record to another file from start to finish, and the new file is only 6500 blocks.
Example:

File "a" contains the following fields:
    character*20 name
    integer*4    quote_number
    real*4       amount

Open/create a new file called "b" with the same format, then:

    read a
    b.name         = a.name
    b.quote_number = a.quote_number
    b.amount       = a.amount
    write file "b"

Repeat these steps until every record of file "a" has been read (end of file).

Scc
Arch_Muthiah
Honored Contributor

Re: delete file, very very slow

I just added my vote.


Archunan
Regards
Archie
Steven Schweda
Honored Contributor

Re: delete file, very very slow

Re: 63000 blocks v. 6500 blocks:

> I don't know why it takes up so much room.

I'd look into that. DIRE /FULL. What kind of
files are these? You would probably need to
DUMP a few records (before and after your
conversion) and post the output before anyone
else could even guess what's happening.
Hein van den Heuvel
Honored Contributor

Re: delete file, very very slow

As Steven said, a DIR/FULL, preferably combined with the output of DUMP/RECO=COUNT=3, will give a good clue.
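
For reference, at the DCL prompt those two requests would look roughly like this (using the file name that comes up later in this thread; /RECO is just an abbreviation of /RECORDS):

$ DIRECTORY/FULL QHEADR.TRS
$ DUMP/RECO=COUNT=3 QHEADR.TRS   ! dump the first 3 records rather than raw disk blocks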

It looks like the file has binary data. This suggests it was NOT created with, say, DCL, but with for example a Fortran, COBOL or C program. Maybe the original program writes a larger record with only the first few fields being used?
That is, beyond your a.amount there are other fields; they just have no value at this point.

Or maybe the file was created with a fixed record length (C file creates will inherit attributes from an existing file with the same name).

Or maybe it is a relative file with variable-length records, with a maximum record size 10x the actual need.

Or, even more unlikely, they used RMS RUJ (Recovery Unit Journalling), in which case each record starts on a block boundary.

hth,
Hein.


Scc_3
Advisor

Re: delete file, very very slow

Hello,
The data file is a fixed-length binary data file.
Each record is 660 bytes (a little over one 512-byte block), fixed format.

There must be a lot of empty space between the records. I don't know.

After running a conversion test, just reading the file record by record and writing to another file, the new file is much, much smaller: almost 80% less.

SCC
Scc_3
Advisor

Re: delete file, very very slow

Hello,
I just tried to dump this file:
QHEADR.TRS, 45048 blocks (the current size).

DUMP QHEADR.TRS
It shows end-of-file block 45048 / allocated 45048,
then Virtual block number 1 (00000001), 512 (0200) bytes.
Some of the fields are blank, like:


00000000 00000000 00000000 00000000.......000010
.
.
.
.
00413534 33312d39 3632d33 36322d33 3034001 ..403-277-1345a. 0000a0

As I keep watching the output there are maybe 10 - 12 lines like:
0000000 000000 00000000 00000000.......0001a0
0000000 000000 00000000 00000000.......0001b0

What I am going to do is run the conversion, then check the dump of the new file and see the difference.

Thanks !
Scc

I was wondering: is it possible that Fortran or VMS 5.5 is limited to 512 bytes, and because this file's record length is 660 it ends up creating a lot of empty fields?
Ian Miller.
Honored Contributor

Re: delete file, very very slow

Can you post the result of DIR/FULL?
____________________
Purely Personal Opinion
Scc_3
Advisor

Re: delete file, very very slow

QHEADR.TRS;1                         File ID:  (10042,12,0)
Size:        45135/45135
Created:     04-MAY-2005    Revised:  23-NOV-2005 (20739)
Expired:
Backup:
File organization:   Indexed, Prolog: 3, Using 5 keys
File attributes:     Allocation: 45135, Extend: 0, Maximum bucket size: 2
Record format:       Fixed length 660 byte records
Record attributes:   None
RMS attributes:      None
Journaling enabled:  None
File protection:     System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List:   None

Total of 1 file, 45135/45135 blocks.


P.S.
After I converted the file by reading it and writing it to another new file, the size dropped from 45135 to 7818 blocks.
I also did a DIR/FULL on this new file; the output is the same except that it shows 7818 blocks.

Scc
Hein van den Heuvel
Honored Contributor

Re: delete file, very very slow


Scc,
There are a lot of replies here, but please keep focusing on what was already written and asked. Why have us speculate when you can provide data as requested?!
Like that DIR/FULL requested days ago.
Like DUMP/RECO as requested, versus a plain DUMP?!

So here is what is happening.
You have an INDEXED file which, by the looks of it, has been neglected for many years and is in dire need of a tune-up.

The record size is 660 bytes, probably not compressed, so each record takes 660 bytes plus about 11 bytes of overhead. That has to fit wholly within the 2-block bucket, which gives 1024 bytes minus about 15 bytes of bucket overhead. Two records would need roughly 2 x 671 = 1342 bytes, more than the 1009 usable bytes, so only one record fits per bucket.

And you are interested in only a few bytes per record, yet each record is forced to occupy a full 1024-byte bucket. Still surprised?

Please take a moment to try to tune the file.
A simple ANAL/RMS/FDL.. EDIT/FDL/NOINT... CONVERT/STAT...
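
Spelled out at the DCL prompt, and using the file name from this thread as an example, that sequence would look something like this (the .FDL file name and the output file name are only illustrations):

$ ANALYZE/RMS_FILE/FDL QHEADR.TRS            ! writes QHEADR.FDL, including analysis data
$ EDIT/FDL/NOINTERACTIVE QHEADR.FDL          ! lets the optimize script rework that description
$ CONVERT/FDL=QHEADR.FDL/STATISTICS QHEADR.TRS QHEADR_NEW.TRS
$ ! inspect QHEADR_NEW.TRS, then rename it over the original once you are happy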

Cheers,
Hein.

Scc_3
Advisor

Re: delete file, very very slow

Hi Mr. Hein van den Heuvel,

Wow, it is interesting to learn this command. I had used ANA/RMS/FDL before,
but never the way you posted.
I tried it and it looks really great.
This is what I did:

Ana/rms/fdl oheadr.trs
edit/fdl/noint oheadr.trs
convert oheadr.trs oheadr.trstest

The original size was 63162 blocks and the new one is only 1557 blocks.

I don't think I have lost any data: I printed out a copy of all the order information and compared it against the newly converted file, and the output is the same.

I believe those 3 command lines read the format of the original file and dump it in FDL format, edit it (in this case without changes), and then convert the data back into that format.
Is that true? Please correct me if I am wrong. This is the best tool I have ever used.

Thanks !
Scc
Hein van den Heuvel
Honored Contributor

Re: delete file, very very slow



Close.

What ANAL/RMS/FDL does is create a DESCRIPTION of the file called, for example, file.FDL.

EDIT/FDL/NOINTE is a special, but standard, RMS tool to tweak/optimize that DESCRIPTION of the file to allow an optimal re-load, as far as an automatic task can optimize it (it does not know how you use the file; it only knows how many records, keys, and such there are).

The CONVERT phase actually reloads the data, using that new, better description.
This will get you larger buckets, packing more records per bucket, compressing the data, and setting up a reasonable extent size so that the new file does not fragment (as much) if it has to grow.

Reading the new file might now take only 100 IOs where the old one would have needed 20,000 IOs, one for each 2-block data bucket, i.e. one IO per record. Mind you, due to caching those IOs can be hard to count.
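
One rough way to see that difference for yourself is to compare the process direct-I/O counter before and after reading the file; a small DCL sketch (the middle step is whatever command or program reads the file):

$ before = F$GETJPI("","DIRIO")
$ ! ... run the command or program that reads the file here ...
$ after = F$GETJPI("","DIRIO")
$ WRITE SYS$OUTPUT "Direct I/Os used: ", after - before

As noted above, a warm cache will make that number look smaller than the worst case.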

Cheers,
Hein.


Scc_3
Advisor

Re: delete file, very very slow

Thank You !

If I had known this earlier, I would not have had to write a program to read the records and dump them into another file.

I take it this method is very safe to use, with no loss of information inside the data file.

Scc
Hein van den Heuvel
Honored Contributor

Re: delete file, very very slow


Right:

CONVERT is the official way to move data around, re-organize, and clean up after periods of inserts, updates and deletes. Fully supported. Regular use highly recommended.
Not known to have ever dropped a record.
(Not to say there have not been bugs with it, but in general no primary data loss.)

FYI 1: that FDL file is just text. Type it. Look around. Compare the pre-convert one with a post-convert one. Let the numbers sink in for a while.
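
For anyone who has never looked inside one, an FDL file is a plain-text description along these lines; the excerpt below is only an illustration (attribute values such as the allocation and bucket size will of course differ per file):

$ TYPE QHEADR.FDL

FILE
        ORGANIZATION            indexed

RECORD
        FORMAT                  fixed
        SIZE                    660

AREA 0
        ALLOCATION              8000
        BUCKET_SIZE             12
        ...

KEY 0
        DUPLICATES              no
        DATA_KEY_COMPRESSION    yes
        DATA_RECORD_COMPRESSION yes
        SEG0_LENGTH             20
        SEG0_POSITION           0
        TYPE                    string
        ...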

FYI 2: I created several tools for RMS indexed files on the OpenVMS Freeware, in the RMS_TOOLS directory, and some more unpublished / improved ones.
(Send me an email for a copy; check my profile for an address hint.)

I think that at this point we have drifted sufficiently far away from the 'delete slow' topic to close this topic.

Feel free to open a fresh topic on file tuning & maintenance if/when needed.

Regards,
Hein.

Thomas Ritter
Respected Contributor

Re: delete file, very very slow

Perhaps it's a bit late. We regularly have to delete directories containing hundreds of thousands of files. The delete can sometimes take up to 8 hours. All of our deletes are executed via batch jobs and run overnight. Sure, I could use DFU and look at ways of improving the delete performance, but our time is better spent on other areas. This thread makes for good reading, though!
Tom