I/O on VMS
06-18-2004 03:21 AM
06-18-2004 03:27 AM
06-18-2004 05:46 AM
Re: I/O on VMS
http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.PDF says, in reference to Logical I/Os:
"Non-DSA disk devices can read or write up to 65,535 bytes in a single request. DSA devices connected to an HSC50 can transfer up to 4 billion bytes in a single request. In all cases, the maximum size of the transfer is limited by the number of pages that can be faulted into the process' working set, and then locked into physical memory."
(DSA devices would be those on MSCP-speaking controllers.)
06-18-2004 05:51 AM
Re: I/O on VMS
Let's start a competition: who can DEMONSTRATE (verifiably) the largest transfer?
<8-])
jpe
06-19-2004 10:00 AM
Re: I/O on VMS
> IIRC, record IO (using RMS) has a limit of 32K bytes per IO.
> You may be able to do MUCH more if you bypass RMS - but I wouldn't recommend that.
This is not exactly right.
The operative phrase is 'record IO'.
The maximum RECORD size is close to 32K, but the actual IO size can be up to 127 blocks (of 512 bytes) for sequential files and 63 blocks for indexed and relative files.
RMS is also willing and able to do unbuffered IO through the SYS$READ and SYS$WRITE calls, with a maximum of 127 blocks (the size field is 16 bits unsigned) for the 'normal' RAB. On Alpha, by using a RAB64 you can specify a 32-bit size, for buffers up to 2**31-1 bytes. This may be limited by the targeted device.
http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_032.html#read_service_routine
Wim recommends against bypassing RMS, but that really depends on the application. For 'records' in a file, RMS can help a lot (think buffering, sharing, transparent handling of records crossing buffer boundaries, read-ahead, file extension for writes, and so on).
Even for blocks in a file you may want to use RMS record mode (with UDF records, to get read-ahead / write-behind).
But for minimal CPU usage you may want to go down to block IO through SYS$READ / SYS$WRITE, and there are also good reasons to use the native VMS IO function, SYS$QIO(W).
If you need further help, please describe your application/intended use in more detail.
Also... if you think about going low level, be sure to check out the IO Reference manual: http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.HTMl
Hope this helps,
Hein.
06-20-2004 07:57 AM
Re: I/O on VMS
My mistake, you're right on record size vs. IO size; they don't have to be the same.
On bypassing RMS: yes, it depends. If your application doesn't have to interact with 'native' (RMS-using) applications, I think it's OK to bypass RMS. But otherwise? What about locking, journalling...?
(I'm not that familiar with the low-level issues of RMS; thanks, Hein, for the link.)
OpenVMS Developer & System Manager
06-21-2004 05:19 AM
Re: I/O on VMS
$ ANAL/SYS
SDA> READ IODEF
SDA> SHOW DEVICE $1$DGA12
[stuff appears on screen]
SDA> EXAM UCB+UCB$L_MAXBCNT
UCB+00190: 00000000.00020000 "........"
SDA> EVAL 20000 / ^D512 ! bytes per block
Hex = 00000000.00000100 Decimal = 256
I took a brief look at the listings for DKDRIVER and DUDRIVER for 7.3-2.
DUDRIVER seems to have had an upper limit of 2^24 bytes introduced at 6.2 to work around a problem related to path switching. And for a disk served by the VMS MSCP Server, the maximum seems to be 127 blocks.
The default for DKDRIVER seems to be 127 blocks, but it can go as high as 256 blocks if the port hardware supports it.