
Mario Abruzzi
Contributor

Direct I/O Size

I am doing a performance analysis of several VMS systems and have noticed that IOSIZE tends to vary from about 5 to 128 (or more) pages per IO. My question is, in general, what things in VMS govern the size of a given direct IO.
David B Sneddon
Honored Contributor

Re: Direct I/O Size

Mario,

The size of any direct I/O will be governed
largely by the application/program that is doing
the I/O.
What applications are you running?
My number one rule for performance analysis is
"know your applications".
Without knowing what the application does, how do
you determine whether what you see is normal or not?
I have this very issue at the moment, people
looking at performance and making incorrect
judgements because they don't understand the
application.

Regards
Dave
Mario Abruzzi
Contributor

Re: Direct I/O Size

Dave,

The performance analysis is comparative: I am comparing selected IO metrics for the same systems before and after converting a SAN from RAID1 to RAID5. The idea is to see the effect of the RAID type change. I believe I do not necessarily need to know the application intimately because I am comparing it against itself. While doing the analysis I noticed the variation in IOSIZE and wondered about the general underlying reason.

The largest IOSIZEs are from the SWAPPER process, which makes sense. The other applications are transaction-based and typically have much smaller IOs.
David B Sneddon
Honored Contributor

Re: Direct I/O Size

Mario,

I wouldn't expect to see any difference in I/O
size due to a different underlying RAID configuration.
The only difference I would expect to see would be
in throughput. I suspect that using controller-based
RAID would hide a lot of stuff from VMS
(unless you are gathering stats from the controllers).


Regards
Dave
Galen Tackett
Valued Contributor

Re: Direct I/O Size

I think drivers or other parts of the I/O subsystem (F11X? RMS?) can also limit the size of direct I/O operations. I remember discussing this with HP support when I was comparing throughput of a straight UltraSCSI 320 controller with software RAID, to that of a SmartArray 5304A.

In that particular case, I/O sizes through the SmartArray topped out at 127 blocks. I/O sizes to the software RAID DPAnn virtual devices typically were much larger.

HP support told me that the SCSI driver imposed the limit of 127 blocks per physical I/O. They said that this limit dated back to older SCSI hardware which couldn't handle larger operations, though modern hardware typically can.

From what I saw and was told in that investigation, the 127 block limit probably isn't terribly significant with modern optimizing controllers that can often combine multiple QIOs into a single physical transfer.

(Just for fun I tried poking the UCB of the SCSI device to increase the maximum transfer size. This had no effect that I could observe -- not even crashing the system! :-) -- so maybe the number 127 is hard coded into the driver somewhere. This kind of tweaking is not recommended on a production system, of course. :-)
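(If anyone wants to look at the same limit without patching anything, SDA can at least show what the driver thinks it is. A rough sketch, with the device name only an example:

$ ANALYZE/SYSTEM
SDA> SHOW DEVICE DKA100
SDA> FORMAT ucb-address

SHOW DEVICE gives the UCB address for the device, and FORMAT on that address lays out the UCB fields. I believe the field of interest is UCB$L_MAXBCNT, but treat that as an assumption and verify it on your own VMS version.)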
Robert Gezelter
Honored Contributor

Re: Direct I/O Size

Mario,

An often forgotten element is the RMS buffering parameters. These can be defaulted at the file, process, and system levels. The SHOW RMS command will display the process and system defaults. Note that the process values ARE NOT propagated to subprocesses.

The settings can also be managed at the file level in the RMS data structures.
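For example, something along these lines displays and adjusts the process-level defaults from DCL (the counts shown are only illustrative values, and /SYSTEM needs the appropriate privilege):

$ SHOW RMS_DEFAULT
$ SET RMS_DEFAULT /BLOCK_COUNT=32 /BUFFER_COUNT=4 /SEQUENTIAL
$ SET RMS_DEFAULT /BLOCK_COUNT=32 /SYSTEM    ! system-wide default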

I hope that the above is helpful.

- Bob Gezelter, http://www.rlgsc.com
Jan van den Ende
Honored Contributor

Re: Direct I/O Size

Mario,

one more thing that _MIGHT_ cause difference is... fragmentation.
And that includes both disk file fragmentation, and internal file fragmentation.

If the OS needs to do ANY data reading, then it will TRY to get as big a chunk as it can, because the extra overhead of a bigger IO is relatively very small compared to the IO itself.
And the extra data might just be needed by a next request.

Any such IO however will never be bigger than one fragment.
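(To see how fragmented a particular file actually is, dumping just the header shows the retrieval pointers, one per fragment. The file name here is only an example:

$ DUMP/HEADER/BLOCKS=COUNT:0 DISK$DATA:[MYDIR]BIGFILE.DAT

Each retrieval pointer in the map area is one extent, so many pointers means many fragments.)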

For write IOs it really is (nearly) totally decided by the application; it would be surprising to see differences there.

hth,

Proost.

Have one on me.

Jan

Don't rust yours pelled jacker to fine doll missed aches.
Hein van den Heuvel
Honored Contributor

Re: Direct I/O Size



> My question is, in general, what things in VMS govern the size of a given direct IO.

What VMS version? Is XFC or VFC in use?

In general it is the application and NOT the system except for XFC read-ahead.
In many Unix implementations the IO size is basically 8 KB or so, based on the file buffer cache behaviour. Not so under VMS.
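(A quick way to see whether the XFC is active at all, and how much it is caching, is SHOW MEMORY, for example:

$ SHOW MEMORY/CACHE
$ SHOW MEMORY/CACHE/FULL

The /FULL variant gives the more detailed statistics on recent versions.)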

Typically the application buffer size is a DB page size or an RMS buffer size. As pointed out, the RMS buffer size can be defaulted at the file, process, or system level. SHOW RMS will give a first indication. Typically you'll see 16 blocks = 8 KB for older VMS versions, 32 pagelettes = 16 KB for more recent versions.

For indexed and relative files, the IO size is the 'BUCKET SIZE'. While a given (indexed) file can have multiple bucket sizes, they often do not, and the DIR/FULL output or F$FILE(file,"BKS") can be used.
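For example, with an illustrative indexed file name:

$ DIRECTORY/FULL MYFILE.IDX
$ WRITE SYS$OUTPUT F$FILE_ATTRIBUTES("MYFILE.IDX","BKS")

The DIR/FULL listing shows the bucket size under the file attributes, and the lexical function returns it directly.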

Please note that VMS systems often do 1-block IOs for FILE HEADERS (and RMS prologue blocks, and some DIRECTORY activity).
And RMS applications often write or read only as much data as there really is.
A 2000-byte file will be read with a 4-block = 2048-byte IO. No more, no less.


hth,
Hein.