Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > What determines Avg read i/o size
09-08-2010 05:06 AM
What determines Avg read i/o size
09-08-2010 07:35 AM
Re: What determines Avg read i/o size
VMS BACKUP? Or one of the various SAN tools?
What's the BACKUP command?
BACKUP itself has sensitivities to quotas, as well as means to throttle its activities. (There are days I'd like to throttle BACKUP myself, but that's another discussion.) Check for differences here. (I have the current BACKUP quota recommendations posted at the HoffmanLabs web site.)
With VMS BACKUP, disk cluster size, application and system default extent sizes, and the effects of multiple incrementally-extended log files all mix together to produce disk fragmentation and I/O bottlenecks.
Have a look at the file size distribution on the disks you're comparing, and at the relative degrees of disk (and file) fragmentation. The Freeware DFU tool can get you some of that, as can the DFO tool's analyzer features, as can various other tools.
VMS boxes used for application development tend to contain frequently-revised and often small files, incrementally-extended log files, and parallel users all churning away on shared storage, with the ensuing fragmentation.
Writing as somebody who does development as well as operations and tuning work: developers aren't the folks you want to share a server with, and a development box is most definitely not a configuration you want to repurpose as a comparative benchmark.
09-08-2010 08:05 AM
Re: What determines Avg read i/o size
Does SHOW RMS show the same settings?
Is this backup as a copy tool, or backup to a save set? Is that average IO size from the input device, or to the output device?
When writing to a saveset, BACKUP uses RMS $WRITE, and I think it listens to SET RMS.
Is highwater marking set the same on the output devices? (It shouldn't matter for the sequential writes, but just in case...)
Fragmentation similar?
hth,
Hein
09-08-2010 08:05 AM
Re: What determines Avg read i/o size
$ back/image device: /ignore=inter/nocrc -
/block=65535/io_load=8 nla0:nulltest. -
/save/noassist
I will have to look at the other items you indicated, but the information I have from HP is that the read I/O size is derived from the /BLOCK setting. Using the same block size setting, I see a different read I/O size.
I am trying to understand why this is happening.
09-08-2010 08:24 AM
Re: What determines Avg read i/o size
Show RMS settings matched on both sides.
Devices initialized with /nohigh qualifier
09-08-2010 08:29 AM
Re: What determines Avg read i/o size
The /BLOCK size sets the saveset and the transfer buffer size. BACKUP still has to fill that buffer.
Check your disk and your file fragmentation. (If I were called in for this, I'd also have a look at fragmentation within the RMS file structures, as those can get clogged too, but that's not a contributor here.)
For grins, I might well create a BACKUP /IMAGE copy of the input disks (which inherently defragments them) and then compare the original and the copy in another round of NLA0: output tests.
You do recognize that the cited BACKUP command (/NOCRC, /IGNORE=INTERLOCK) permits the saveset to contain silent data corruption? That's a reasonable choice for a test targeting the NLA0: null device, but ill-suited to production, of course.
09-08-2010 11:00 AM
Re: What determines Avg read i/o size
What is the problem that you are really trying to solve? Do you back up to the null device in production, or do you believe that backing up to the null device is a faithful reproduction of behaviour observed in production?
Please be aware that the NL: device is relatively SLOW for many usages, because RMS will decide to use record mode and issue an I/O per record instead of using block mode.
This is very visible and measurable with a simple COPY of a large file.
NL: output is slower than most disks.
Anyway...
>>>This is a purely read speed test so no writing taking place.
And the T4 observations were based on that?
>>> Show RMS settings matched on both sides.
Thanks for checking, because they DO matter, notably on output.
I just tested with the LD TRACE option:
$ ld create/size=1000000 dka0:[temp]tmp.dsk
$ ld create lda1
$ ld connect dka0:[temp]tmp.dsk lda1
$ ld trace lda1
$ init lda1 /clus=1024/ind=beg/nohigh/head=1000/max=1000 temp
$ mount lda1 temp temp
%MOUNT-I-MOUNTED, TEMP mounted on _BUNDY$LDA1:
$ set rms /block=50 /sys
$ back sys$help: temp:[000000]help.bck/save
$ ld show/trace lda1
:
End Time Elaps Pid Lbn Bytes Iosb Function
---------------------------------------------------------------------
13:20:11.234475 00.000136 0000009A 53842 25600 NORMAL WRITEPBLK|EXFUNC
13:20:11.236546 00.000136 0000009A 53892 25600 NORMAL WRITEPBLK|EXFUNC
13:20:11.236766 00.000120 0000009A 53942 25600 NORMAL WRITEPBLK|EXFUNC
13:20:11.239003 00.000111 0000009A 53992 25600 NORMAL WRITEPBLK|EXFUNC
13:20:11.241107 00.000131 0000009A 54042 25600 NORMAL WRITEPBLK|EXFUNC
$ set rms/blo=127/sys
$ back sys$help: temp:[000000]help.bck/save
$ ld show/trace lda1
:
13:20:39.744468 00.001021 0000009A 158050 65024 NORMAL WRITEPBLK|EXFUNC
13:20:39.749063 00.000396 0000009A 158177 65024 NORMAL WRITEPBLK|EXFUNC
13:20:39.753607 00.000152 0000009A 158304 65024 NORMAL WRITEPBLK|EXFUNC
13:20:39.758493 00.000388 0000009A 158431 65024 NORMAL WRITEPBLK|EXFUNC
13:20:39.758555 00.000247 0000009A 158558 :
$ write sys$output 127*512
65024
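The arithmetic above generalizes: the /BLOCK byte count is rounded down to a whole number of 512-byte disk blocks, and that rounded size is what shows up as the transfer size in the trace. A minimal Python sketch of the rounding (the helper name is mine, not any VMS API):

```python
# Sketch: how a /BLOCK byte count maps to a transfer size.
# Buffers are whole multiples of 512-byte OpenVMS disk blocks,
# so the byte count is rounded down to the nearest multiple of 512.

DISK_BLOCK = 512  # bytes per OpenVMS disk block

def transfer_size(block_bytes: int) -> tuple[int, int]:
    """Return (bytes per transfer, disk blocks per transfer)."""
    blocks = block_bytes // DISK_BLOCK
    return blocks * DISK_BLOCK, blocks

# The two SET RMS /BLOCK values used in the trace above:
print(transfer_size(50 * 512))   # (25600, 50): matches the 25600-byte writes
print(transfer_size(127 * 512))  # (65024, 127): matches the 65024-byte writes
print(transfer_size(65535))      # /BLOCK=65535 rounds down to (65024, 127)
```

This also explains why /BLOCK=65535 in the earlier BACKUP command behaves the same as 127 blocks: 65535 is not itself a multiple of 512.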
If you really want to figure this out, then I recommend using LD TRACE on your real input disk, not a container as in my example.
>> Devices initialized with /nohigh qualifier
Thanks for checking. That's mostly important for output, but it can, perhaps surprisingly, affect reads also, when reading past the current HWM for a file. But that would happen pretty much just once.
I guess the next thing to check is fragmentation.
Good luck,
Hein.
09-08-2010 11:37 AM
Re: What determines Avg read i/o size
One other piece of information that I have just discovered: I was able to review the read I/O size for a device on our old storage that is currently on the same port as this test device. The read I/O size using the same BACKUP command is 125 for the device on the old storage vs. 62 for the device on the new. I am also trying to determine if something is set differently on the storage port side.
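For a back-of-envelope check, the two observed averages work out as follows (the near-halving is my observation from these numbers, not a confirmed cause):

```python
# Compare the observed average read I/O sizes (in 512-byte blocks).
old_storage_blocks = 125
new_storage_blocks = 62

print(old_storage_blocks * 512)  # 64000 bytes per read on the old storage
print(new_storage_blocks * 512)  # 31744 bytes per read on the new storage

# 62 is almost exactly half of 125, which would be consistent with
# something in the new path (port settings, fragmentation, or a smaller
# maximum transfer size) splitting each request roughly in two.
print(old_storage_blocks / new_storage_blocks)  # ~2.016
```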
09-09-2010 11:07 AM
Re: What determines Avg read i/o size
Jur (lddriver author)