<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disks in cluster erroneously reported as full by VMS in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084027#M13478</link>
    <description>&amp;gt;&amp;gt; Our 2 node VMS cluster (8.3 running on an Itanium rx4640) reports that several of the shadowed disks are full. &lt;BR /&gt;&lt;BR /&gt;Sounds like it is just a 'visual' problem.&lt;BR /&gt;Annoying, but just a number in a report.&lt;BR /&gt;Is there anything actually going wrong?&lt;BR /&gt;Failures to create or extend files?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;  A reboot resolves the issue. &lt;BR /&gt;&lt;BR /&gt;Now there's a whopping big hammer!&lt;BR /&gt;PLEASE consider a more fine-grained approach should such problems come back.&lt;BR /&gt;PLEASE try a simple dismount and (re)mount sequence to see if that fixes it.&lt;BR /&gt;&lt;BR /&gt;Andy suggests that you might have created an unsupported setup. That may be the case.&lt;BR /&gt;&lt;BR /&gt;John has a fine reply, as always. Read it carefully!&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; Any thoughts?&lt;BR /&gt;&lt;BR /&gt;1) Sounds like a bug, but could be an unsupported configuration. Check with the fine folks at OpenVMS support?&lt;BR /&gt;&lt;BR /&gt;2) VERIFY whether there is a real problem or just very annoying, misleading, information.&lt;BR /&gt;&lt;BR /&gt;3) NEVER reboot an OpenVMS server for such a 'problem'.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Wed, 10 Oct 2007 21:08:21 GMT</pubDate>
    <dc:creator>Hein van den Heuvel</dc:creator>
    <dc:date>2007-10-10T21:08:21Z</dc:date>
    <item>
      <title>Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084023#M13474</link>
      <description>Hi,&lt;BR /&gt;We've experienced this problem on 2 different systems. Our 2 node VMS cluster (8.3 running on an Itanium rx4640) reports that several of the shadowed disks are full. i.e. a 'show dev d' says that dsa103, dsa105, dsa107 have no free blocks. Each node has an MSA1000 attached, and the disks are shadowed between them. However, we know they are not full. When we count up the file sizes of files on the individual disks, only small amounts of the disks are used.  A reboot resolves the issue. Luckily, this has only happened on our test and post-test systems, not live. (as yet).  Any thoughts?&lt;BR /&gt;Thanks.</description>
      <pubDate>Wed, 10 Oct 2007 11:12:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084023#M13474</guid>
      <dc:creator>Philip Howes</dc:creator>
      <dc:date>2007-10-10T11:12:03Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084024#M13475</link>
      <description>Had some nodes crashed or were some disks improperly dismounted? VMS marks a specific amount of disk space as occupied and puts these blocks in its extend cache. If not dismounted properly, VMS has no chance to mark these blocks as free. A 'set volume/rebuild' should fix this.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Wed, 10 Oct 2007 11:55:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084024#M13475</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2007-10-10T11:55:08Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084025#M13476</link>
      <description>Just to clarify &amp;gt;&amp;gt;&amp;gt;Each node has an MSA1000 attached, and the disks are shadowed between them. &lt;BR /&gt;&lt;BR /&gt;Does this mean each system is only connected to 1 MSA1000, or do both systems connect to each MSA?  &lt;BR /&gt;&lt;BR /&gt;In other words, are you mixing shadowing and MSCP disk serving?  That isn't a supported configuration.&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Wed, 10 Oct 2007 12:13:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084025#M13476</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2007-10-10T12:13:12Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084026#M13477</link>
      <description>Philip,&lt;BR /&gt;&lt;BR /&gt;  The number displayed by SHOW DEVICE as the free blocks on the disk is just a number, and it can drift. It plays no part in allocation, so if you attempt to allocate space, and there is space available, the allocation will work regardless of what SHOW DEVICE says.&lt;BR /&gt;&lt;BR /&gt;$  SET VOLUME/REBUILD&lt;BR /&gt;&lt;BR /&gt;should bring it back into line; if that fails, try&lt;BR /&gt;$ SET VOLUME/REBUILD=FORCE&lt;BR /&gt;&lt;BR /&gt;  If that still doesn't work, you may have "lost" files, which can be recovered with ANALYZE/DISK/REPAIR.&lt;BR /&gt;&lt;BR /&gt;  Note that DIRECTORY doesn't necessarily give an accurate result for disk consumption (try it on a system disk!). It can overestimate or underestimate, depending on how you phrase the command. If you are counting up files, make sure you use the allocated size: DIRECTORY/SIZE=ALL or the F$FILE item ALQ.&lt;BR /&gt;&lt;BR /&gt;  Also note that the concept of "number of free blocks on a disk" is not as simple as it might first appear. There are legitimate reasons why multiple nodes in a cluster might correctly report different values for free space, and, in a multi user environment, the values can vary wildly from instant to instant. You should therefore avoid code with logic like:&lt;BR /&gt;&lt;BR /&gt;IF sufficient-disk-space THEN&lt;BR /&gt;  do something&lt;BR /&gt;ELSE&lt;BR /&gt;  handle error&lt;BR /&gt;ENDIF&lt;BR /&gt;&lt;BR /&gt;because it will suffer from both false positives and false negatives due to the finite time between sampling and acting. Instead it should be coded as:&lt;BR /&gt;&lt;BR /&gt;do something&lt;BR /&gt;IF error&lt;BR /&gt;THEN&lt;BR /&gt;  handle error&lt;BR /&gt;ENDIF</description>
      <pubDate>Wed, 10 Oct 2007 19:35:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084026#M13477</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-10-10T19:35:44Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084027#M13478</link>
      <description>&amp;gt;&amp;gt; Our 2 node VMS cluster (8.3 running on an Itanium rx4640) reports that several of the shadowed disks are full. &lt;BR /&gt;&lt;BR /&gt;Sounds like it is just a 'visual' problem.&lt;BR /&gt;Annoying, but just a number in a report.&lt;BR /&gt;Is there anything actually going wrong?&lt;BR /&gt;Failures to create or extend files?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;  A reboot resolves the issue. &lt;BR /&gt;&lt;BR /&gt;Now there's a whopping big hammer!&lt;BR /&gt;PLEASE consider a more fine-grained approach should such problems come back.&lt;BR /&gt;PLEASE try a simple dismount and (re)mount sequence to see if that fixes it.&lt;BR /&gt;&lt;BR /&gt;Andy suggests that you might have created an unsupported setup. That may be the case.&lt;BR /&gt;&lt;BR /&gt;John has a fine reply, as always. Read it carefully!&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; Any thoughts?&lt;BR /&gt;&lt;BR /&gt;1) Sounds like a bug, but could be an unsupported configuration. Check with the fine folks at OpenVMS support?&lt;BR /&gt;&lt;BR /&gt;2) VERIFY whether there is a real problem or just very annoying, misleading, information.&lt;BR /&gt;&lt;BR /&gt;3) NEVER reboot an OpenVMS server for such a 'problem'.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Oct 2007 21:08:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084027#M13478</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-10-10T21:08:21Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084028#M13479</link>
      <description>Thanks for all your replies - there is some interesting stuff to investigate, and some good advice.  We noticed the problem when our application failed, as it couldn't write to a disk.  So the problem 'appeared' to be real rather than a display issue. But I'll look into that.  &lt;BR /&gt;I took a load of logs and screen dumps before we rebooted, but I had to see whether the problem would continue after the reboot, which is why I did that. &lt;BR /&gt;I've raised a call with HP, so I've got the usual weeks of log exchanges to look forward to. Will let you know how we get on.</description>
      <pubDate>Thu, 11 Oct 2007 03:45:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084028#M13479</guid>
      <dc:creator>Philip Howes</dc:creator>
      <dc:date>2007-10-11T03:45:45Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084029#M13480</link>
      <description>I've checked our MSCP_LOAD value and this is set to '1'.  We also use Volume Shadowing between the MSA boxes (each node has a connection to each MSA).  Is this wrong, as suggested in an earlier post?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Phil</description>
      <pubDate>Thu, 11 Oct 2007 04:44:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084029#M13480</guid>
      <dc:creator>Philip Howes</dc:creator>
      <dc:date>2007-10-11T04:44:27Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084030#M13481</link>
      <description>Just curious. Before the reboot, you did do a /Siz=all when you totaled up the file sizes, right?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Oct 2007 08:49:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084030#M13481</guid>
      <dc:creator>Zeni B. Schleter</dc:creator>
      <dc:date>2007-10-11T08:49:34Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084031#M13482</link>
      <description>From memory we used; &lt;BR /&gt;e.g. 'dir dsa103:[000000...]/size/grand'</description>
      <pubDate>Thu, 11 Oct 2007 09:32:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084031#M13482</guid>
      <dc:creator>Philip Howes</dc:creator>
      <dc:date>2007-10-11T09:32:24Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084032#M13483</link>
      <description>We have had very large scratch files created that used LOTS of space, but until the scratch files are closed you don't see what they are using with /SIZ.  You need /SIZ=ALL to see what is allocated.&lt;BR /&gt;&lt;BR /&gt;We had two batch jobs that occasionally walked on each other.  I created a batch job to look at new files with /SIZ=ALL so that I finally got an idea of what kind of space was being used in a very dynamic fashion.  If the jobs in your case are detached and don't exit until shutdown, they may clean up behind themselves and you will not see what was there after the reboot.&lt;BR /&gt;&lt;BR /&gt;Also, when I was checking, I think I used the modified date so that a file that already existed but was growing would be reported, too.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Oct 2007 09:42:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084032#M13483</guid>
      <dc:creator>Zeni B. Schleter</dc:creator>
      <dc:date>2007-10-11T09:42:07Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084033#M13484</link>
      <description>@Zeni,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;When I was checking I think I used the modified date so that a file that existed but was growing would be reported, too.&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;Unlikely, as the Modified Date only gets updated when the file is closed.&lt;BR /&gt;&lt;BR /&gt;Permanently open files will never update it.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 11 Oct 2007 13:04:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084033#M13484</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2007-10-11T13:04:49Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084034#M13485</link>
      <description>@Jan&lt;BR /&gt;&lt;BR /&gt;Going from memory in my suggestion.  After the fact, I knew it was open scratch files with huge allocations from sorting.  Before finding the problem, I was looking for any possible growth.  I have seen uses where empty files are copied to, appended to, and deleted.  The original date is the template creation, and the modification date reflects its update.</description>
      <pubDate>Thu, 11 Oct 2007 14:12:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084034#M13485</guid>
      <dc:creator>Zeni B. Schleter</dc:creator>
      <dc:date>2007-10-11T14:12:47Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084035#M13486</link>
      <description>I've checked our MSCP_LOAD value and this is set to '1'. We also use Volume Shadowing between the MSA boxes (each node has a connection to each MSA). Is this wrong, as suggested in an earlier post?&lt;BR /&gt;&lt;BR /&gt;--&lt;BR /&gt;&lt;BR /&gt;That looks fine to me; I have no idea what the previous poster who implied there might be a problem was thinking.&lt;BR /&gt;&lt;BR /&gt;Mixing Shadowing and MSCP-serving is certainly supported!  One can obviously shadow a served member unit.&lt;BR /&gt;&lt;BR /&gt;What does not work is the MSCP-serving of the virtual unit (the "DSA" device).  The notion that it's not supported is technically true, but it flat out will not work, so it's absolutely impossible to get oneself into that type of unsupported configuration.&lt;BR /&gt;&lt;BR /&gt;Perhaps the poster with the concern was thinking of a different problem?&lt;BR /&gt;&lt;BR /&gt;-- Rob (who has spent a fair amount of time inside both SHDRIVER and DUDRIVER).</description>
      <pubDate>Thu, 11 Oct 2007 18:43:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084035#M13486</guid>
      <dc:creator>Robert Brooks_1</dc:creator>
      <dc:date>2007-10-11T18:43:31Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084036#M13487</link>
      <description>&amp;gt;&amp;gt;&amp;gt;I've checked our MSCP_LOAD value and this is set to '1'. We also use Volume Shadowing between the MSA boxes (each node has a connection to each MSA). Is this wrong, as suggested in an earlier post?&amp;lt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;That is fine.  I am not sure what Andy Bustamante was referring to when he stated "In other words are you mixing shadowing and MSCP disk serving? That isn't a supported configuration."&lt;BR /&gt;&lt;BR /&gt;You cannot MSCP serve the DSA virtual units, but creating DSA virtual units from MSCP served members is supported.  In your case, it will create a backup path in case all direct fibre paths to the node fail, as in the following:&lt;BR /&gt;&lt;BR /&gt;$ sho dev/ful dga6902&lt;BR /&gt;&lt;BR /&gt;[stuff removed]&lt;BR /&gt;&lt;BR /&gt;  I/O paths to device              3&lt;BR /&gt;  Path PGA0.5000-1FE1-500B-89BD  (SIGMA), primary path, current path.&lt;BR /&gt;    Error count                    0    Operations completed                624&lt;BR /&gt;  Path PGA0.5000-1FE1-500B-89B9  (SIGMA).&lt;BR /&gt;    Error count                    0    Operations completed                352&lt;BR /&gt;  Path MSCP  (OMEGA).&lt;BR /&gt;    Error count                    0    Operations completed                  0&lt;BR /&gt;&lt;BR /&gt;$&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Thu, 11 Oct 2007 19:00:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084036#M13487</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-10-11T19:00:12Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084037#M13488</link>
      <description>&lt;BR /&gt;Robert Brooks is correct and I was in error. I've been making an assumption concerning shadowing and MSCP serving.  One of the reasons I'm here is to continue learning.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy Bustamante&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Oct 2007 19:15:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084037#M13488</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2007-10-11T19:15:57Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084038#M13489</link>
      <description>Something like this done by your application?&lt;BR /&gt;&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;sh dev sd&lt;BR /&gt;&lt;BR /&gt;Device                  Device           Error    Volume         Free  Trans Mnt&lt;BR /&gt; Name                   Status           Count     Label        Blocks Count Cnt&lt;BR /&gt;&lt;BR /&gt;WSYS01$DKA0:            Mounted              0  WSYS01_SYST     600472   502   1&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;copy nl: sd:[000000]wim.lis/alloc=10000&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;sh dev sd&lt;BR /&gt;&lt;BR /&gt;Device                  Device           Error    Volume         Free  Trans Mnt&lt;BR /&gt; Name                   Status           Count     Label        Blocks Count Cnt&lt;BR /&gt;&lt;BR /&gt;WSYS01$DKA0:            Mounted              0  WSYS01_SYST     590552   497   1&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;open/read/shared x sd:[000000]wim.lis&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;del sd:[000000]wim.lis;&lt;BR /&gt;DELETE SD:[000000]WIM.LIS;1 ? [N]: y&lt;BR /&gt;%DELETE-I-FILDEL, SD:[000000]WIM.LIS;1 deleted (10000 blocks)&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;sh dev sd&lt;BR /&gt;&lt;BR /&gt;Device                  Device           Error    Volume         Free  Trans Mnt&lt;BR /&gt; Name                   Status           Count     Label        Blocks Count Cnt&lt;BR /&gt;&lt;BR /&gt;WSYS01$DKA0:            Mounted              0  WSYS01_SYST     590552   500   1&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;close x&lt;BR /&gt;WSYS01/MGRWVW&amp;gt;sh dev sd&lt;BR /&gt;&lt;BR /&gt;Device                  Device           Error    Volume         Free  Trans Mnt&lt;BR /&gt; Name                   Status           Count     Label        Blocks Count Cnt&lt;BR /&gt;&lt;BR /&gt;WSYS01$DKA0:            Mounted              0  WSYS01_SYST     600552   499   1&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 12 Oct 2007 03:48:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084038#M13489</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-10-12T03:48:36Z</dc:date>
    </item>
    <item>
      <title>Re: Disks in cluster erroneously reported as full by VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084039#M13490</link>
      <description>What I wanted to say is that your application may have files in use that are already deleted and thus do not show up in dir.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 12 Oct 2007 03:50:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disks-in-cluster-erroneously-reported-as-full-by-vms/m-p/4084039#M13490</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-10-12T03:50:23Z</dc:date>
    </item>
  </channel>
</rss>