<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: What process is writing to what big or many little files? in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960544#M74583</link>
    <description>w.,&lt;BR /&gt;&lt;BR /&gt;from your Forum Profile:&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;I have assigned points to 23 of 34 responses to my questions.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Looks like most of the unassigned ones are just one or two entries in a stream.&lt;BR /&gt;&lt;BR /&gt;Mind, I do NOT say you necessarily need to give lots of points. It is fully up to _YOU_ to decide how many. If you consider that an answer does not deserve any points, you can also assign 0 ( = zero ) points, and then that answer will no longer be counted as unassigned.&lt;BR /&gt;&lt;BR /&gt;To easily find your streams with unassigned points, click your own name somewhere.&lt;BR /&gt;This will bring up your profile.&lt;BR /&gt;Near the bottom of that page, under the caption "My Question(s)", you will find "questions or topics with unassigned points". Clicking that will give all, and only, your questions that still have unassigned postings.&lt;BR /&gt;&lt;BR /&gt;Thanks on behalf of your Forum colleagues.&lt;BR /&gt;&lt;BR /&gt;PS. - nothing personal in this. I try to post it to everyone with this kind of assignment ratio in this forum. If you have received a posting like this before - please do not take offence - none is intended!&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
    <pubDate>Sat, 18 Feb 2006 06:17:01 GMT</pubDate>
    <dc:creator>Jan van den Ende</dc:creator>
    <dc:date>2006-02-18T06:17:01Z</dc:date>
    <item>
      <title>What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960534#M74573</link>
      <description>We've all had this problem: some process is filling up a disk and we have to find it and kill it before the disk is full.  It might be writing one big file or it might be writing many little ones.  I've been lucky and found the culprit before the disk was full, but on a busy system this can be difficult.  So I'm fishing for better ideas/programs for identifying the process that's filling the disk, on a system where MON PROC/TOPDIO shows a lot of processes, most or all of which are not guilty, and where SHOW DEVICE/FILE yields a list of a hundred files.  Does anybody have a good trick for doing this?</description>
      <pubDate>Thu, 16 Feb 2006 14:32:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960534#M74573</guid>
      <dc:creator>Clark Powell</dc:creator>
      <dc:date>2006-02-16T14:32:19Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960535#M74574</link>
      <description>I've had good luck with&lt;BR /&gt;&lt;BR /&gt;DIR/SIZE=ALL/DATE/SELECT=SIZE=MINIMUM=100000 [*...]*.*;*/OWNER&lt;BR /&gt;&lt;BR /&gt;This will give a list of all the files larger than 100,000 blocks, as well as when each was created and who the owner is.  The number can be adjusted to whatever value you need. Aside from that, I would suggest writing a command procedure that uses SHOW DEVICE/FILES/OUT=tmp-file-name and then opens that temp file and reads each file name, looking for files larger than a value passed in P1 to the procedure.&lt;BR /&gt;&lt;BR /&gt;As for creating a large number of small files: if they all have the same name, then you could use SET FILE/VERSION_LIMIT=n to automatically keep more than that number of versions from being present in the directory at a time.  If the file names are not the same...well, I will have to think on that one a little more.&lt;BR /&gt;&lt;BR /&gt;Phil</description>
      <pubDate>Thu, 16 Feb 2006 14:44:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960535#M74574</guid>
      <dc:creator>Phillip Thayer</dc:creator>
      <dc:date>2006-02-16T14:44:37Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960536#M74575</link>
      <description>W.&lt;BR /&gt;&lt;BR /&gt;if many small files MIGHT be the issue,&lt;BR /&gt;$ DIR/SIZE/SIN="-0:10"&lt;BR /&gt;will show any files less than 10 minutes old.&lt;BR /&gt;Adjust the time value to your needs, but realise that evaluating the dates of MANY files, especially in BIG directories, requires some time itself.&lt;BR /&gt;&lt;BR /&gt;OTOH, this WILL in one pass eliminate the "many small file creation" option.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 16 Feb 2006 15:26:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960536#M74575</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-02-16T15:26:45Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960537#M74576</link>
      <description>Using the DFU freeware utility is much quicker for searching a disk than using DIRECTORY [*...].&lt;BR /&gt;&lt;BR /&gt;$ DFU SEARCH device: -&lt;BR /&gt;/SIZE=MINIMUM=10000 /ALLOCATED -&lt;BR /&gt;/CREATED=SINCE=-2 !large files, last 2 hours&lt;BR /&gt;&lt;BR /&gt;$ DFU SEARCH device: /CHARACTERISTICS=DIR -&lt;BR /&gt; /SIZE=MINIMUM=100  !large directories&lt;BR /&gt;&lt;BR /&gt;$ DFU DIRECTORY device: /VERSION=1000  !many versions</description>
      <pubDate>Thu, 16 Feb 2006 18:35:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960537#M74576</guid>
      <dc:creator>Jess Goodman</dc:creator>
      <dc:date>2006-02-16T18:35:54Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960538#M74577</link>
      <description>Will DFU locate and report on files not entered into a directory (f.ex. a file that results from opening a device spooled to an intermediate disk that is associated with a print queue)?</description>
      <pubDate>Thu, 16 Feb 2006 18:48:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960538#M74577</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2006-02-16T18:48:15Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960539#M74578</link>
      <description>Large files are probably easy enough to spot, but lots of little ones would be harder to catch. Things like RMS stats are useless for that. As are DIRIO and BUFIO counts. The SDA LCK tool might catch volume lock activity?&lt;BR /&gt;&lt;BR /&gt;A directory listing of recent files is a reasonable approach. DIR/SINCE [*...] is probably too slow for that though. DFU is recommended instead.&lt;BR /&gt;&lt;BR /&gt;DFU is fast because it walks INDEXF.SYS and deals with directories as an afterthought (LIB$FID_TO_NAME).&lt;BR /&gt;Thus it will also catch temp files which are not entered in a directory, or which perhaps move between directories.&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Thu, 16 Feb 2006 19:30:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960539#M74578</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-16T19:30:12Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960540#M74579</link>
      <description>Clark,&lt;BR /&gt;&lt;BR /&gt;The solution is quotas, in this particular case, disk quotas.&lt;BR /&gt;&lt;BR /&gt;Disk quotas stop the offending process dead. The trade-off is that it is possible to terminate a job which needed "just one more block"; this is the cost of preventing a runaway process from filling a disk.&lt;BR /&gt;&lt;BR /&gt;I often recommend that clients consider this approach. In these relatively disk-space-rich days, I recommend that the quotas be somewhat high (say 2+ times the anticipated amount), but any reasonable quota short of infinity will have the desired effect.&lt;BR /&gt;&lt;BR /&gt;My first encounter with this type of problem was back on about VAX/VMS Version 2.0. My officemate left something running overnight, and it produced a bit more printed output to the printer (spooled to the system device) than intended. In the morning, the Operations manager was, to put it politely, rather upset.&lt;BR /&gt;&lt;BR /&gt;This problem occurs with both spooled files and disk resident files, and the solution is, admittedly, far older than VMS.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 17 Feb 2006 04:58:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960540#M74579</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-02-17T04:58:02Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960541#M74580</link>
      <description>Yeah, disk block quotas are the right way to solve this.&lt;BR /&gt;I just have not been on a system that had those going for ages.&lt;BR /&gt;&lt;BR /&gt;Even if you set them to near infinite, you can still use the report, and specifically the DIFFERENCE between a current report and 'yesterday's' report, to find the heavy users (yesterday could also be last hour or last week, of course).&lt;BR /&gt;You'd want a Perl or DCL script to report the changes by username.&lt;BR /&gt;&lt;BR /&gt;Of course this is only useful if there actually are distinct usernames being used like they should be.&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Fri, 17 Feb 2006 06:57:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960541#M74580</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-17T06:57:27Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960542#M74581</link>
      <description>Great Ideas!  Thanks.</description>
      <pubDate>Fri, 17 Feb 2006 10:35:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960542#M74581</guid>
      <dc:creator>Clark Powell</dc:creator>
      <dc:date>2006-02-17T10:35:51Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960543#M74582</link>
      <description>You can also use the XFC SDA extension to find a runaway process doing IO.  This requires recent versions, i.e. the XFC remedials for V7.3-2 and following released last November.&lt;BR /&gt;&lt;BR /&gt;In SDA -&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; xfc set trace/select=level=2&lt;BR /&gt;&lt;BR /&gt;Since the display here is 132 columns, I've included an example in an attachment.&lt;BR /&gt;&lt;BR /&gt;A little explanation:&lt;BR /&gt;&lt;BR /&gt;The level=2 trace level starts XFC recording reads, writes, and file system calls into XFC.  Higher levels of tracing turn on more detailed debugging trace points.  For reads and writes, the starting and ending point of each IO is captured.  In addition, the latency of the IO is reported.  The trace displays as much of the file name as possible along with the PID of the process doing the IO.  We get the VBN and IO size as well.  The first number on the line is a sequence number of the trace entry (not very interesting).  The second number is a sequence number which is incremented for every IO (all IOs, not just XFC IOs).  This allows you to match up IO starts and completions on very busy systems.&lt;BR /&gt;&lt;BR /&gt;The latency is measured using the system cycle counter and is not valid if the IO completes on a different processor.  The latency only includes time within XFC itself.  It does not include overhead added by the QIO call or RMS.  Cache hits are noted with an asterisk next to the operation description.&lt;BR /&gt;&lt;BR /&gt;The overhead for the level 2 tracing is small, but measurable.  The trace entries are kept in a ring buffer containing only 2000 entries.  At this time, there isn't any way to increase the size of the buffer at runtime (maybe next version).&lt;BR /&gt;&lt;BR /&gt;In SDA, the XFC SHOW VOLUME/BRIEF and XFC SHOW FILE/BRIEF commands may also be useful in this kind of situation.&lt;BR /&gt;&lt;BR /&gt;To set the tracing back to the default,&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; XFC SET TRACE/SELECT=LEVEL=1&lt;BR /&gt;&lt;BR /&gt;Mark</description>
      <pubDate>Fri, 17 Feb 2006 15:43:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960543#M74582</guid>
      <dc:creator>Mark Hopkins_5</dc:creator>
      <dc:date>2006-02-17T15:43:12Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960544#M74583</link>
      <description>w.,&lt;BR /&gt;&lt;BR /&gt;from your Forum Profile:&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;I have assigned points to 23 of 34 responses to my questions.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Looks like most of the unassigned ones are just one or two entries in a stream.&lt;BR /&gt;&lt;BR /&gt;Mind, I do NOT say you necessarily need to give lots of points. It is fully up to _YOU_ to decide how many. If you consider that an answer does not deserve any points, you can also assign 0 ( = zero ) points, and then that answer will no longer be counted as unassigned.&lt;BR /&gt;&lt;BR /&gt;To easily find your streams with unassigned points, click your own name somewhere.&lt;BR /&gt;This will bring up your profile.&lt;BR /&gt;Near the bottom of that page, under the caption "My Question(s)", you will find "questions or topics with unassigned points". Clicking that will give all, and only, your questions that still have unassigned postings.&lt;BR /&gt;&lt;BR /&gt;Thanks on behalf of your Forum colleagues.&lt;BR /&gt;&lt;BR /&gt;PS. - nothing personal in this. I try to post it to everyone with this kind of assignment ratio in this forum. If you have received a posting like this before - please do not take offence - none is intended!&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Sat, 18 Feb 2006 06:17:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960544#M74583</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-02-18T06:17:01Z</dc:date>
    </item>
    <item>
      <title>Re: What process is writing to what big or many little files?</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960545#M74584</link>
      <description>&lt;BR /&gt;Hey folks... no one going to say 'attaboy, Mark!'?!&lt;BR /&gt;&lt;BR /&gt;This tracing is a grand addition to the XFC, and it's all Mark's doing.&lt;BR /&gt;Thank you Mark!&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Feb 2006 18:34:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/what-process-is-writing-to-what-big-or-many-little-files/m-p/4960545#M74584</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-02-21T18:34:18Z</dc:date>
    </item>
  </channel>
</rss>