<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: High water marking in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288931#M63111</link>
    <description>Wim,&lt;BR /&gt;If I may . . .&lt;BR /&gt;&lt;BR /&gt;&amp;gt; I disagree. Performance impact yes, &lt;BR /&gt;&amp;gt; blocking activities no. Simply a bad &lt;BR /&gt;&amp;gt; implementation. Even if there are &lt;BR /&gt;&amp;gt; workarounds.&lt;BR /&gt;&lt;BR /&gt;It seems like you are not accepting the reasons for the blocking. We can debate the documentation's lack of clarity another time (actually I agree with you).&lt;BR /&gt;&lt;BR /&gt;Bottom line: if there is another way to do the erase without blocking other IO on the volume, how would you do it?&lt;BR /&gt;&lt;BR /&gt;The goal of the erase is to prevent disk scavenging (there are other goals, but that's the one I am concerned with). What does a given application need? Does the application have to prevent disk scavenging? If one is worried about disk scavenging, then you have to erase the disk blocks before using them. No other choice.&lt;BR /&gt;&lt;BR /&gt;If you have another choice, how would you do it? This is not, as you claim, a "bad implementation", nor is it a design flaw or poor practice. Again, to guarantee security against disk scavenging, what would you do otherwise?&lt;BR /&gt;&lt;BR /&gt;If you allow another IO to occur during the erase, you can bypass the attempted erase function, read the blocks that have not been erased (theoretically *all* of the blocks) and scavenge the data. The only kind of IO I could consider allowing is a write IO. But the read IOs have to be blocked, so to be safe, block all IO.&lt;BR /&gt;&lt;BR /&gt;If this is a bad implementation, how would you guarantee security if you allow even one IO during the erase?&lt;BR /&gt;</description>
    <pubDate>Fri, 04 Jun 2004 13:25:38 GMT</pubDate>
    <dc:creator>John Eerenberg</dc:creator>
    <dc:date>2004-06-04T13:25:38Z</dc:date>
    <item>
      <title>High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288912#M63092</link>
      <description>I activate high water marking on a disk.&lt;BR /&gt;Then I create a file of 400,000 blocks on this disk. The command takes about 30 seconds. During this time, f$search hangs, and so does f$file.&lt;BR /&gt;&lt;BR /&gt;It seems that the disk is locked (but not for DIR).&lt;BR /&gt;&lt;BR /&gt;Why?&lt;BR /&gt;&lt;BR /&gt;Is it possible that a database server (Sybase) crashes because of it (e.g. too much outstanding IO for the given quotas)?</description>
      <pubDate>Thu, 27 May 2004 10:23:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288912#M63092</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-05-27T10:23:37Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288913#M63093</link>
      <description>I guess a write lock has been taken out on the disk, so DIR is OK as it just reads.&lt;BR /&gt;&lt;BR /&gt;Re Sybase - it depends on how it handles quota failures - it should cope, but...</description>
      <pubDate>Thu, 27 May 2004 10:27:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288913#M63093</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-05-27T10:27:46Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288914#M63094</link>
      <description>No write lock on the disk. I checked and could write a new file to the disk. But I couldn't delete it (hang again).</description>
      <pubDate>Thu, 27 May 2004 10:42:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288914#M63094</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-05-27T10:42:38Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288915#M63095</link>
      <description>No, not a general write lock as in MOUNT/NOWRITE - just a synchronization lock, e.g. on one of the bitmaps.</description>
      <pubDate>Thu, 27 May 2004 10:55:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288915#M63095</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-05-27T10:55:30Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288916#M63096</link>
      <description>You'll need to get hold of Kirby McCoy's "VMS File System Internals" book (ISBN DP 1-55558-056-4) to read up on all the gory details. Notably, check out chapter 5.4.5, "Dynamic Highwater Marking".&lt;BR /&gt;&lt;BR /&gt;The basic premise is easy: never allow a process to read a block from a file that was not written to that file earlier. That protects against 'scavenging'. That is, it protects against some job just asking for a bunch of blocks and then reading them to see if there is something 'interesting' left behind from a prior use of those blocks (while they were part of a different, now deleted or truncated file).&lt;BR /&gt;Sounds easy, but it is hard to do right in a full sharing environment. The system has optimizations to avoid writes as much as possible, notably when it detects 'sequential only access'. However, if you just allocate a bunch of blocks for a non-sequential file, or write a block far out into the file, the XQP will 'erase' all intermediate blocks... while holding a volume lock! (This could probably be fixed using a lower-level lock, but that is not how it is done today.) So writes to a file that is already open can go on, but any volume operation will stall.&lt;BR /&gt;You have to understand the application/file usage to decide whether high-water marking is really needed, and what the expected costs are. You are not the first one to be bitten by this (myself, I get a painful reminder once every 3 years or so: Hmmm... is the system down? No? Then why is my 'DIR' hanging? Oh duh... HWM!).&lt;BR /&gt;&lt;BR /&gt;Groetjes,&lt;BR /&gt;&lt;BR /&gt;Hein</description>
      <pubDate>Thu, 27 May 2004 10:58:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288916#M63096</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-05-27T10:58:53Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288917#M63097</link>
      <description>Hein,&lt;BR /&gt;he said he could create a new file, but not delete it - isn't the former an operation on the volume, too?&lt;BR /&gt;&lt;BR /&gt;Wim,&lt;BR /&gt;did you just create an empty file or did you write data, too?</description>
      <pubDate>Thu, 27 May 2004 11:04:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288917#M63097</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-05-27T11:04:14Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288918#M63098</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;as I have come to understand things, it is working fine.&lt;BR /&gt;Maybe a monitor IO/item=queue for the disk will illustrate.&lt;BR /&gt;Creating a file is OK. (Should that be so initially as well, while the SET VOL/HIGH is still active? Or is this assuming the data present before issuing the command was, and still is, not that important? Or IS this what CREATE is doing: writing VERY much meaningless data?) But then the file gets deleted, and the full area of the disk that WAS the file now first has to be (multiply?) written, so as to release only 'truly erased' disk blocks.&lt;BR /&gt;So, for a big file to delete, your 'hang' is simply waiting for an IO to complete that is somewhere at the back of a BIG queue of disk IO entries...&lt;BR /&gt;&lt;BR /&gt;Correct me if I'm wrong, but this is what I remember of the info from way back when, when high-water marking was introduced (somewhere V5-ish?)&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Thu, 27 May 2004 12:49:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288918#M63098</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-27T12:49:00Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288919#M63099</link>
      <description>Hello Wim,&lt;BR /&gt;&lt;BR /&gt;well, the first question obviously is: do you have an application requirement to pre-zero &lt;BR /&gt;the disk blocks with high-water marking? If not,&lt;BR /&gt;having it enabled will generally incur a penalty on random-access files.&lt;BR /&gt;&lt;BR /&gt;Greetings, Martin</description>
      <pubDate>Thu, 27 May 2004 13:15:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288919#M63099</guid>
      <dc:creator>Martin P.J. Zinser</dc:creator>
      <dc:date>2004-05-27T13:15:26Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288920#M63100</link>
      <description>&amp;gt; "Then I create a file of 400.000 blocks on this disk. The command takes about 30 seconds. During this time, f$search hangs and f$file too.&lt;BR /&gt;It seems that the disk is locked (but not for dir).&lt;BR /&gt;Why ?"&lt;BR /&gt;&lt;BR /&gt;The short answer is the highwater erase, f$search, f$file, etc. are, by design and many good reasons, single threaded through the FCP (File Control Primitive). For the FCP to erase 400,000 blocks takes roughly the time you mentioned. The detailed answer has already been given.</description>
      <pubDate>Thu, 27 May 2004 14:10:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288920#M63100</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-05-27T14:10:04Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288921#M63101</link>
      <description>It is still not clear. There is some kind of volume lock, but I can still write (Uwe: with data blocks).&lt;BR /&gt;&lt;BR /&gt;What exactly are the read/write locks that are taken? A database system is much simpler to understand than RMS.&lt;BR /&gt;&lt;BR /&gt;Second, I find it a bad implementation if applications are stalled because of it. Suppose you have a real-time application that needs to close a valve but can't do the job because some DBA is creating a big database file (which would block the disk for several minutes or even hours).</description>
      <pubDate>Fri, 28 May 2004 01:01:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288921#M63101</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-05-28T01:01:17Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288922#M63102</link>
      <description>Well Wim,&lt;BR /&gt;&lt;BR /&gt;High-water marking IS a very-high-security feature, to be used IF YOU NEED IT, explicitly AT THE COST of performance.&lt;BR /&gt;I guess this implies that high-water marking and real-time functionality are mutually exclusive.&lt;BR /&gt;The way out of this dilemma is quite simple:&lt;BR /&gt;have the System Manager make sure that your highwatermark-requiring data are on another physical drive than your real-time files.&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Fri, 28 May 2004 01:59:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288922#M63102</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-05-28T01:59:34Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288923#M63103</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;I disagree. Performance impact yes, blocking activities no. Simply a bad implementation. Even if there are workarounds.&lt;BR /&gt;&lt;BR /&gt;I don't use it (I turn it off where I see it). But I used it for testing f$search.&lt;BR /&gt;</description>
      <pubDate>Fri, 28 May 2004 02:12:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288923#M63103</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-05-28T02:12:47Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288924#M63104</link>
      <description>It is not a bug, it's a feature!&lt;BR /&gt;&lt;BR /&gt;These types of delays are typical of High Water Marking (HWM). If your applications are too sensitive to these delays, then you have two options: one, turn off HWM, or two, control your allocations, either by delaying large allocations to the off hours or by breaking the allocations up into small chunks.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Marty</description>
      <pubDate>Tue, 01 Jun 2004 14:19:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288924#M63104</guid>
      <dc:creator>Martin Johnson</dc:creator>
      <dc:date>2004-06-01T14:19:58Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288925#M63105</link>
      <description>Features should be documented. This is an implementation whose side effects are not documented.</description>
      <pubDate>Wed, 02 Jun 2004 01:31:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288925#M63105</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-06-02T01:31:48Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288926#M63106</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;  This may not be OpenVMS at all. For some combinations of controller and disk drive, the HWM ERASE function is done in the drive itself: just an I/O ERASE operation, rather than a series of writes to individual blocks, then wait for it to return (or, to be more precise, one I/O per allocated extent). So, all the stuff you CAN do may be the result of caching, or of operations that the drive and/or controller can handle in parallel. Anything else just has to wait.&lt;BR /&gt;&lt;BR /&gt;  Bottom line, this IS the "performance impact" that the documentation talks about. You can argue about other ways of doing things, but they're all tradeoffs. That's what engineering is all about - picking the least worst choice.&lt;BR /&gt;&lt;BR /&gt;  With security, there are always nasty things to think about. For example, suppose we didn't lock the disk and zeroed the allocated blocks in small chunks; obviously that would take MUCH longer to complete. Apart from the cumulative cost of multiple I/Os, we'd have to write actual blocks, so data would need to be buffered and sent over the wires to the disk.&lt;BR /&gt;&lt;BR /&gt;  Worse, maybe it would be possible to request the CREATE asynchronously and access the blocks from a parallel thread, ahead of the ERASE fence, thus defeating HWM?&lt;BR /&gt;&lt;BR /&gt;  The engineers who designed this weren't complete morons, so if they've taken the fairly drastic step of holding a lock for a long time, it definitely wasn't without good reason. If you really want to know the answer, log a case with your local customer support centre and have it escalated into engineering. They will either say "of course! how stupid" and fix it, or they may explain why it has to be that way to guarantee security, or they may say "it has to be that way, but we're not going to tell you why because it involves security".&lt;BR /&gt;&lt;BR /&gt;  This is the burden of being OpenVMS. Other operating systems can take short cuts, possibly leaving security or reliability holes around the place, but we have to keep everything secure and failsafe, sometimes at the cost of performance.&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Jun 2004 00:43:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288926#M63106</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2004-06-04T00:43:50Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288927#M63107</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;Whatever the reason, no disk should be locked for a long interval.&lt;BR /&gt;&lt;BR /&gt;High water marking may not be used in financial market systems, real-time applications and high-availability systems, unless EVERYBODY is aware of the problem.&lt;BR /&gt;&lt;BR /&gt;The performance impact that I found in the doc only talks about the extra IO for erasing those extents, not about the disk lock.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 04 Jun 2004 01:29:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288927#M63107</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-06-04T01:29:27Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288928#M63108</link>
      <description>&amp;gt; High water marking may not be used in financial market systems, real time applications and high availability systems, unless EVERYBODY is aware of the problem&lt;BR /&gt;&lt;BR /&gt;Hmmm... I beg to differ.&lt;BR /&gt;High-water marking may be MOST important for those financial market systems.&lt;BR /&gt;The impact is generally NOT found in day-to-day usage. For a 'normal' sequential file write, no extra IO is needed. Hey, that's why the HWM is there! It's when you create the possibility to access beyond the HWM that the erase is needed.&lt;BR /&gt;Any other real-time/OLTP task on the disk with its files already open can just keep on reading/writing from those files; there is no data IO lock (that concept does not even exist!).&lt;BR /&gt;&lt;BR /&gt;Wim, is this question about the system you asked about for the caching? There you indicated sequential files were used a lot and indexed files were used little? Do you have a self-designed / self-supported data retrieval system to deal with? If so, then indeed you will need to learn to deal with the 'fine print', like using the 'SQO' bit.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Jun 2004 03:05:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288928#M63108</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-06-04T03:05:04Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288929#M63109</link>
      <description>&amp;gt; The impact is generally NOT found in day to day usage.&lt;BR /&gt;&lt;BR /&gt;Any CREATE/FDL (or extend) of a substantial file, any database file creation (Sybase). This is very common on our site.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Any other real-time/oltp task on the disk with the files already open can just keep on reading/writing from those files&lt;BR /&gt;&lt;BR /&gt;All jobs creating files or re-opening files are blocked. Also very common. I tested with a DCL READ/KEY (the file was open before the CREATE/FDL was started) and found that the procedure was still blocked for 11 seconds!!!&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Wim, is this question for the system you asked about for the caching? There you indicated sequential files were used a lot and indexed files were used little? Do you have a self-designed / supported data retrieval system to deal with? If so then indeed you will need to learn to deal with the 'fine print' like using the 'SQO' bit.&lt;BR /&gt;&lt;BR /&gt;Nope. Just trying to understand things, without a specific cause.&lt;BR /&gt;&lt;BR /&gt;If you try to buy some shares on the stock exchange, close valves of refineries or do other such stuff, you cannot afford those suspends.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 04 Jun 2004 03:55:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288929#M63109</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-06-04T03:55:59Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288930#M63110</link>
      <description>OK. Understood.&lt;BR /&gt;&lt;BR /&gt;I must speculate that the slow READ/KEY was caused by the 2 to 5 IOs required to do so, each of which had to wait in a large queue for the disk itself. (2 to 5 = index root, index lower level if needed, data bucket, RRV bucket if needed)&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Jun 2004 04:11:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288930#M63110</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-06-04T04:11:59Z</dc:date>
    </item>
    <item>
      <title>Re: High water marking</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288931#M63111</link>
      <description>Wim,&lt;BR /&gt;If I may . . .&lt;BR /&gt;&lt;BR /&gt;&amp;gt; I disagree. Performance impact yes, &lt;BR /&gt;&amp;gt; blocking activities no. Simply a bad &lt;BR /&gt;&amp;gt; implementation. Even if there are &lt;BR /&gt;&amp;gt; workarounds.&lt;BR /&gt;&lt;BR /&gt;It seems like you are not accepting the reasons for the blocking. We can debate the documentation's lack of clarity another time (actually I agree with you).&lt;BR /&gt;&lt;BR /&gt;Bottom line: if there is another way to do the erase without blocking other IO on the volume, how would you do it?&lt;BR /&gt;&lt;BR /&gt;The goal of the erase is to prevent disk scavenging (there are other goals, but that's the one I am concerned with). What does a given application need? Does the application have to prevent disk scavenging? If one is worried about disk scavenging, then you have to erase the disk blocks before using them. No other choice.&lt;BR /&gt;&lt;BR /&gt;If you have another choice, how would you do it? This is not, as you claim, a "bad implementation", nor is it a design flaw or poor practice. Again, to guarantee security against disk scavenging, what would you do otherwise?&lt;BR /&gt;&lt;BR /&gt;If you allow another IO to occur during the erase, you can bypass the attempted erase function, read the blocks that have not been erased (theoretically *all* of the blocks) and scavenge the data. The only kind of IO I could consider allowing is a write IO. But the read IOs have to be blocked, so to be safe, block all IO.&lt;BR /&gt;&lt;BR /&gt;If this is a bad implementation, how would you guarantee security if you allow even one IO during the erase?&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Jun 2004 13:25:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/high-water-marking/m-p/3288931#M63111</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-06-04T13:25:38Z</dc:date>
    </item>
  </channel>
</rss>

