11-21-2005 05:14 AM
Re: delete file, very very slow
My vote added!
Proost.
Have one on me.
jpe
11-21-2005 05:24 AM
Re: delete file, very very slow
11-21-2005 06:34 AM
Re: delete file, very very slow
Dave...
11-21-2005 07:00 AM
Re: delete file, very very slow
As a matter of fact, deleting the files is not a very big issue here to start with.
But about a year ago they started changing things around and keeping so many files that I would normally delete ahead of time. I haven't had the time to do it, since we have had a few bad problems over the last several months: a bad disk, power problems, NCP, etc.
Finally, I think I am getting back on the right track now.
I just have to write a program to convert a few files. They are so big, almost 63000 blocks; I don't know why they take up so much room. I wrote a conversion program that just copies every single record to another file from start to finish, and the new file is only 6500 blocks.
example:
File "a" contains the following fields:
    character*20 name
    integer*4    quote_number
    real*4       amount
Open/create a new file "b" with the same format, then:
    read a
    b.name         = a.name
    b.quote_number = a.quote_number
    b.amount       = a.amount
    write b
Repeat these steps until every record of file "a" has been read (end of file).
Scc
11-21-2005 07:26 AM
Re: delete file, very very slow
Archunan
Archie
11-21-2005 11:07 AM
Re: delete file, very very slow
> I don't why it takes up some many room.
I'd look into that. DIRE /FULL. What kind of files are these? You would probably need to DUMP a few records (before and after your conversion) and post the output before anyone else could even guess what's happening.
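In DCL, those requests would look something like this (a sketch only; the file name is taken from later in the thread, and the DUMP qualifiers are just one possible choice):

$ DIRECTORY/FULL QHEADR.TRS
$ DUMP/RECORDS=(START:1,COUNT:3) QHEADR.TRS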
11-21-2005 03:21 PM
Re: delete file, very very slow
It looks like the file has binary data. This suggests it was NOT created with, say, DCL, but by a Fortran, COBOL, or C program, for example. Maybe the original program writes a larger record with only the first few fields being used?
That is, beyond your a.amount there are other fields; they just have no value at this point.
Or maybe the file was created with a fixed record length (files created from C will inherit attributes from an existing file with the same name).
Or maybe it was a relative file with variable-length records and a maximum record size 10x the actual need.
Or, even more unlikely, they used RMS RUJ (Recovery Unit Journalling), in which case each record starts on a block boundary.
hth,
Hein.
11-22-2005 07:20 AM
Re: delete file, very very slow
The data file is a fixed-length binary data file. Each record is 660 bytes (165 longwords), fixed format.
There must be a lot of empty space between the records. I don't know.
After I ran a conversion test, just reading the file record by record and writing to another file, the new file is much, much smaller: almost 80% less.
SCC
11-22-2005 07:57 AM
Re: delete file, very very slow
I just tried to dump this file.
qheadr.trs, 45048 blocks (the current size):

Dump qheadr.trs

It shows end of file block 45048 / allocated 45048,
Virtual block number 1 (00000001), 512 (0200) bytes.
Then there are some fields that are blank, like:

00000000 00000000 00000000 00000000 ....... 000010
.
.
00413534 33312d39 3632d33 36322d33 3034001 ..403-277-1345a. 0000a0

When I keep watching the output, there are maybe 10-12 lines like:

0000000 000000 00000000 00000000 ....... 0001a0
0000000 000000 00000000 00000000 ....... 0001b0

What I am going to do is run the conversion, then check the dump of that new file and see the difference.
Thanks!
Scc

I was wondering: is it possible that Fortran or VMS 5.5 is limited to 512 bytes, and since this file's record length is 660, that causes it to create a lot of empty fields?
11-22-2005 08:35 PM
Re: delete file, very very slow
Purely Personal Opinion
11-23-2005 02:26 AM
Re: delete file, very very slow
Size:               45135/45135
Created:            04-MAY-2005, Revised: 23-NOV-2005 (20739)
Expired:
Backup:
File organization:  Indexed, Prolog: 3, Using 5 keys
File attributes:    Allocation: 45135, Extend: 0, Maximum bucket size: 2
Record format:      Fixed length 660 byte records
Record attributes:  None
RMS attributes:     None
Journaling enabled: None
File protection:    System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List:  None

Total of 1 file, 45135/45135 blocks.
P.S. After converting the file by reading it and writing to another new file, the size dropped from 45135 to 7818 blocks.
I also did a DIR/FULL on this new file; the output is the same except it shows 7818 blocks.
Scc
11-23-2005 04:52 AM
Re: delete file, very very slow
Scc,
There are a lot of replies here, but please keep focusing on what was already written and asked. Why have us speculate when you can provide data as requested?!
Like that DIR/FULL requested days ago,
like DUMP/RECO as requested versus a plain dump?!
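For reference, a sketch of the two kinds of dump being contrasted (qualifiers illustrative):

$ DUMP/RECORDS=(COUNT:3) QHEADR.TRS         ! record-oriented: honors the RMS record structure
$ DUMP/BLOCKS=(START:1,COUNT:3) QHEADR.TRS  ! block-oriented: raw 512-byte virtual blocks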
So here is what is happening.
You have an INDEXED file which, by the looks of it, has been neglected for many years and is in dire need of a tune-up.
The record size is 660 bytes, probably not compressed, so each record takes 660 bytes plus about 11 bytes of overhead. This has to fit wholly in the 2-block bucket, which offers 1024 bytes minus 15 bytes of overhead: two records would need 2 x 671 = 1342 bytes, more than the 1009 available, so only one record fits per bucket.
And you are interested in only a few bytes per record, yet each record is required to take 1024 bytes. Still surprised?
Please take a moment to try to tune the file.
A simple ANAL/RMS/FDL.. EDIT/FDL/NOINT... CONVERT/STAT...
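Spelled out as DCL, that sequence might look like this (a sketch; file names are illustrative, based on the file mentioned earlier in the thread):

$ ANALYZE/RMS_FILE/FDL_FILE=QHEADR.FDL QHEADR.TRS
$ EDIT/FDL/ANALYSIS=QHEADR.FDL/NOINTERACTIVE QHEADR.FDL
$ CONVERT/FDL=QHEADR.FDL/STATISTICS QHEADR.TRS QHEADR_NEW.TRS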
Cheers,
Hein.
11-23-2005 06:15 AM
Re: delete file, very very slow
Wow, that is interesting to learn this command. I had used ANA/RMS/FDL before, but never the way you posted. I tried it and it looks really great.
This is what I did:

Ana/rms/fdl oheadr.trs
edit/fdl/noint oheadr.trs
convert oheadr.trs oheadr.trstest

The original size was 63162 blocks and the new one is only 1557 blocks.
I don't think I have lost any data: I printed out a copy of all the order information, compared it against the newly converted file, and the output is the same.
I believe those 3 command lines read the format of the original file and dump it in FDL format, edit it (in this case, no changes), and then convert back to the original format.
Is that true? Please correct me if I am wrong. That is the best tool I have ever used.
Thanks !
Scc
11-23-2005 06:45 AM
Re: delete file, very very slow
Close.
What ANAL/RMS/FDL does is create a DESCRIPTION of the file called, for example, FILE.FDL.
EDIT/FDL/NOINTERACTIVE is a special, but standard, RMS tool to tweak/optimize that DESCRIPTION of the file to allow an optimal re-load, as far as an automatic task can optimize it (it does not know how you use the file; it only knows how many records, keys, and such there are).
The CONVERT phase actually reloads the data using that new, better description.
This will get you larger buckets, packing more records per bucket, compressing the data, and setting up a reasonable extent size so that the new file does not fragment (as much) if it has to grow.
Reading the new file might now take only 100 IOs where the old one would have taken 20,000 IOs, one for each 2-block data bucket = one IO per record. Mind you, due to caching, those IOs can be hard to count.
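For illustration, a hypothetical fragment of what such an FDL description might contain (all values invented; a real one comes out of ANALYZE and EDIT/FDL):

$ TYPE QHEADR.FDL
FILE
        ORGANIZATION            indexed
RECORD
        FORMAT                  fixed
        SIZE                    660
AREA 0
        ALLOCATION              8192
        BUCKET_SIZE             16
        EXTENSION               2048
KEY 0
        DATA_KEY_COMPRESSION    yes
        DATA_RECORD_COMPRESSION yes
        SEG0_LENGTH             20
        SEG0_POSITION           0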
Cheers,
Hein.
11-23-2005 07:06 AM
Re: delete file, very very slow
If I had known this earlier, I wouldn't have had to write a program to read the records and dump them into another file.
I bet this way is very safe to use, without losing any of the information inside the data file.
Scc
11-23-2005 11:37 AM
Re: delete file, very very slow
Right:
CONVERT is the official way to move data around, re-organize, and clean up after periods of inserts, updates, and deletes. Fully supported. Regular use highly recommended.
Not known to have ever dropped a record.
(Not to say there have not been bugs with it, but in general no primary data loss.)
fyi 1: that FDL file is just text. Type it. Look around. Compare the pre-convert one with a post-convert one. Let the numbers sink in for a while.
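One way to make that comparison concrete (a sketch; file names illustrative):

$ ANALYZE/RMS_FILE/FDL_FILE=OLD.FDL OHEADR.TRS        ! describe the original file
$ ANALYZE/RMS_FILE/FDL_FILE=NEW.FDL OHEADR.TRSTEST    ! describe the converted file
$ DIFFERENCES OLD.FDL NEW.FDL                         ! see what the tune-up changed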
fyi 2: I created several tools for RMS indexed files, on the OpenVMS Freeware in the RMS_TOOLS directory, and some more unpublished / improved ones.
(Send me an email for a copy; check my profile for an address hint.)
I think that at this point we have drifted sufficiently far away from the 'delete slow' topic to close this topic.
Feel free to open a fresh topic on file tuning & maintenance if/when needed.
Regards,
Hein.
12-11-2005 04:37 PM
Re: delete file, very very slow
Tom