Logical I/O to XQP controlled disks
12-12-2005 11:54 AM
In the disk driver chapter of the I/O User's Reference, there is a big caveat about doing logical I/O functions to disks with XQP or an ACP present. What, if anything, can get bolloxed up doing direct IO$_READLBLK calls to a SCSI drive?
The situation is we have a 3rd party RAID storage device which is wicked fast but whose firmware lacks a 'disk scrubbing' function. I want to periodically run a program that gradually (over 12 hours) scans the logical drives to provoke revectoring of emerging bad spots in a timely fashion.
I'm looking for marbles all day long.
3 REPLIES
12-12-2005 02:20 PM
Re: Logical I/O to XQP controlled disks
David,
I will freely admit that I have not researched this question in detail, but do have some comments based upon general experience over the years.
Random reads to disk blocks (allocated or unallocated) for the purpose of triggering the detection of correctable CRC failures should have limited effects, basically because you are throwing the data away (you don't care about the contents of the block, you just care that it is readable).
The problems have typically occurred when people try to bypass the ACP/XQP, and thus may not read valid data.
I hope that the above is helpful.
- Bob Gezelter, http://www.rlgsc.com
12-12-2005 02:29 PM
Re: Logical I/O to XQP controlled disks
David,
I've got code that performs IO$_READLBLK's to SCSI disks (but they're mounted /FOREIGN).
I think the point the I/O User's Guide is making is that READL and READP are unpredictable because you're going underneath the file system. So, just because some particular piece of data WAS at a particular logical or physical block doesn't mean it will stay there for all time. Nor can you predict when/if caches are flushed, so you can't rely on the data being up to date, or synchronized with other processes.
For your purposes, none of that matters as all you're trying to do is trigger a revector for a block that's gone bad. Note that you could probably achieve the same effect by performing an ANALYZE/DISK/READ at low priority. Granted, if it's a RAID or mirror set, /READ doesn't guarantee that every physical block on every member is read, but at least it will let you know there is at least one good copy of every allocated block.
If you've got shadowing, a demand merge will read every physical block on every member.
A crucible of informative mistakes
12-12-2005 02:52 PM
Solution
I think the answer is 'nothing much, but we don't want to think about it: you are on your own' (this is my opinion, not HP's, as I am no longer an HP employee).
To use logical block I/O you pretty much need to be in unsupported space already, and one could argue you'd get what you deserve if you fail to protect yourself against interactions.
I assume it is obvious to all how a WRITELBLK could readily create havoc when an ACP/XQP is supposed to be managing the disk structure. The VIOC/XFC protects itself a little by invalidating its entire cache for a given volume when it sees even a single LBN write to any of its blocks.
Like you say, it is hard to see how a READLBLK would cause a problem for the rest of the system. IF (big if) you had a VBN cache with deferred write, then READLBLK could deliver out-of-sync data, but that would be your own problem, no?
The only supported interfaces to logical blocks I can think of are:
1) the start block for a contiguous file, through the ACP statistics block in the SBK$L_STLBN field, or via RMS in the XAB$L_SBN field (RMS Reference Manual, $XABFHC, 11.16);
2) placement control for creates/moves, via the FIB$L_LOC_ADDR placement logical block number (LBN), when the FIB$C_LBN and FIB$V_EXACT flags are set. Similar control is available through the RMS XABALL fields.
As a bonus I'll include a little program I wrote that issues an IO$_WRITELBLK. The purpose of the program is simply to make I/O tests repeatable by flushing the XFC at selected times.
Hope this helps some,
Hein.
#include <rms.h>
#include <iodef.h>
#include <starlet.h>
#include <string.h>
struct FAB fab;
struct XABFHC xab;
main()
{
/* This program will invalidate all cached VBNs for all files on a selected
** volume. The VMS VBN cache (VIOC) has no mechanism to associate an LBN
** back to a VBN and thus plays it safe by invalidating all cached files
** for that disk. To get a valid LBN on a disk, the program creates or
** opens a contiguous file for which RMS will return the LBN for VBN 1 in
** a provided XABFHC.
**
** Needs LOG_IO priv and must point TEST_DEVICE to device to be nuked
** Have fun, Hein van den Heuvel 1993
*/
int i, stat;
short iosb[4];
char buf[512] = "No more cache";
char name[] = "TEST_DEVICE:NOCACHE.TMP";
fab = cc$rms_fab;
xab = cc$rms_xabfhc;
fab.fab$l_fop = FAB$M_CTG|FAB$M_CIF; /* need contiguous file */
fab.fab$b_fns = strlen(name);
fab.fab$l_fna = name;
fab.fab$l_alq = 1;
fab.fab$l_xab = &xab;
stat = sys$create( &fab);
stat = sys$close (&fab);
fab.fab$l_fop = FAB$M_UFO;
stat = sys$open( &fab);
if (!(stat&1)) return stat;
stat = sys$qiow ( 0, fab.fab$l_stv, IO$_WRITELBLK, iosb, 0, 0,
buf, 512, xab.xab$l_sbn, 0, 0, 0);
if (!(stat&1)) return stat;
return iosb[0];
}