Operating System - OpenVMS

Logical I/O to XQP controlled disks

SOLVED
David Jones_21
Trusted Contributor

Logical I/O to XQP controlled disks

In the disk driver chapter of the I/O User's Reference, there is a big caveat about doing logical I/O functions to disks with XQP or an ACP present. What, if anything, can get bolloxed up doing direct IO$_READLBLK calls to a SCSI drive?

The situation is we have a 3rd party RAID storage device which is wicked fast but whose firmware lacks a 'disk scrubbing' function. I want to periodically run a program that gradually (over 12 hours) scans the logical drives to provoke revectoring of emerging bad spots in a timely fashion.
I'm looking for marbles all day long.
3 REPLIES
Robert Gezelter
Honored Contributor

Re: Logical I/O to XQP controlled disks

David,

I will freely admit that I have not researched this question in detail, but do have some comments based upon general experience over the years.

Random reads to disk blocks (allocated or unallocated) for the purpose of triggering the detection of correctable CRC failures should have limited effects, basically because you are throwing the data away (you don't care about the contents of the block, you just care that it is readable).

The problems have typically occurred when people try to bypass the ACP/XQP, and thus may not read valid data.

I hope that the above is helpful.

- Bob Gezelter, http://www.rlgsc.com
John Gillings
Honored Contributor

Re: Logical I/O to XQP controlled disks

David,

I've got code that performs IO$_READLBLK's to SCSI disks (but they're mounted /FOREIGN).

I think the point the I/O User's Guide is making is that READL and READP are unpredictable because you're going underneath the file system. So, just because some particular piece of data WAS at a particular logical or physical block, doesn't mean it will stay there for all time. Nor can you predict when/if caches are flushed, so you can't rely on the data being up to date, or synchronized with other processes.

For your purposes, none of that matters as all you're trying to do is trigger a revector for a block that's gone bad. Note that you could probably achieve the same effect by performing an ANALYZE/DISK/READ at low priority. Granted, if it's a RAID or mirror set, /READ doesn't guarantee that every physical block on every member is read, but at least it will let you know there is at least one good copy of every allocated block.

If you've got shadowing, a demand merge will read every physical block on every member.
A crucible of informative mistakes
Hein van den Heuvel
Honored Contributor
Solution

Re: Logical I/O to XQP controlled disks


I think the answer is 'nothing much - but we don't want to think about it: you are on your own' (this is my opinion, not HP's as I am no longer an HP employee).

To use logical-block I/O you pretty much need to be in unsupported space already, and one could argue you'd get what you deserve if you fail to protect yourself against interaction.

I assume it is obvious to all how a WRITELBLK could readily create havoc when an ACP/XQP is supposed to be managing the disk structure. The VIOC/XFC protects itself a little by invalidating its entire cache for a given volume when it sees even a single LBN write to any of the blocks.

Like you say, it is hard to see how a READLBLK would cause a problem for the rest of the system. IF (big if) you had a VBN cache with deferred write, then READLBLK could deliver out-of-sync data, but that would be your own problem, no?

The only supported interfaces to logical blocks I can think of are

1) the start block for a contiguous file, through the ACP statistics block in the SBK$L_STLBN field,
or via RMS in the XAB$L_SBN field (RMS Reference Manual, $XABFHC, section 11.16).

2) placement control for creates/move in FIB$L_LOC_ADDR Placement logical block number (LBN), when the FIB$C_LBN and the FIB$V_EXACT flags are set.
Similar through RMS XABALL fields.

As a bonus I'll include a little program I wrote with an IO$_WRITELBLK. Its purpose was simply to make I/O tests repeatable by flushing the XFC at selected times.

Hope this helps some,
Hein.

#include <rms.h>       /* FAB/XAB structures and cc$rms_ prototypes */
#include <iodef.h>     /* IO$_WRITELBLK */
#include <string.h>    /* strlen */
#include <starlet.h>   /* sys$create, sys$open, sys$close, sys$qiow */

struct FAB fab;
struct XABFHC xab;

int main(void)
{
/* This program will invalidate all cached VBNs for all files on a selected
** volume. The VMS VBN cache (VIOC) has no mechanism to associate an LBN
** back to a VBN and thus plays it safe by invalidating all cached files
** for that disk. To get a valid LBN on a disk, the program creates or
** opens a contiguous file for which RMS will return the LBN for VBN 1 in
** a provided XABFHC.
**
** Needs LOG_IO priv and must point TEST_DEVICE to device to be nuked
** Have fun, Hein van den Heuvel 1993
*/
    int stat;
    short iosb[4];
    char buf[512] = "No more cache";
    char name[] = "TEST_DEVICE:NOCACHE.TMP";

    fab = cc$rms_fab;
    xab = cc$rms_xabfhc;
    fab.fab$l_fop = FAB$M_CTG|FAB$M_CIF;   /* need a contiguous file */
    fab.fab$b_fns = strlen(name);
    fab.fab$l_fna = name;
    fab.fab$l_alq = 1;
    fab.fab$l_xab = &xab;

    stat = sys$create(&fab);               /* create (or open) it; XABFHC gets the start LBN */
    stat = sys$close(&fab);
    fab.fab$l_fop = FAB$M_UFO;             /* reopen user-file-open to get a channel */
    stat = sys$open(&fab);
    if (!(stat & 1)) return stat;
    stat = sys$qiow(0, fab.fab$l_stv, IO$_WRITELBLK, iosb, 0, 0,
                    buf, 512, xab.xab$l_sbn, 0, 0, 0);
    if (!(stat & 1)) return stat;
    return iosb[0];
}