Cluster size of large disks

Uwe Zessin
Honored Contributor

Re: Cluster size of large disks

Hm, I thought that the 'sector size' of a SCSI disk is usually 512 bytes when working with a server. Some storage arrays use 520 bytes, but that's beyond this thread's topic.

Today (and it's been that way for a long time), disks are addressed by logical block numbers, not physical addresses. Again: expect the internal geometry to be different from what is reported through the interface - see my previous response.

If the total number of blocks on the disk is not a multiple of the cluster size then, at least in the past, the last (incomplete) cluster was assigned to BADBLK.SYS. Nice trick to avoid special handling code.
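
To illustrate with made-up numbers (the block count below is hypothetical), the leftover is just the remainder of the total block count divided by the cluster size:

$ ! Hypothetical figures: a 71,132,010-block disk, cluster size 16
$ total_blocks = 71132010
$ cluster = 16
$ leftover = total_blocks - (total_blocks / cluster) * cluster  ! DCL "/" truncates
$ write sys$output "Blocks left over for the partial cluster: ''leftover'"

That prints 10 - ten blocks that would end up charged to BADBLK.SYS.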
Hein van den Heuvel
Honored Contributor

Re: Cluster size of large disks

Robert> "I'm confused -- comments about the irrelevance of cluster size compared to sector size -- if you choose a cluster size that is not an integer multiple of the sector size"

But it is... a sector (in VMS land) is 512 bytes, and a cluster is a whole number of sectors by law.
http://en.wikipedia.org/wiki/Sector

Bob G seems to be referring to cylinders. Back when I was young... OK, way back... disks had a fixed number of sectors per cylinder, often an odd number, and you could play with matching allocations to a cylinder and so on.

But by the time (10+ years ago) drives got down to the 3.5" form factor, disks became zoned (banded), with the outside zones having roughly twice as many sectors per track as the inside ones. So forget about adapting to or exploiting that.

However... next came smart controllers with stripesets and RAIDsets and CHUNK sizes. I am a firm believer in making the cluster size and the chunk size share large common factors. For example: cluster size 16, chunk size 128; but for other applications possibly cluster size 512, chunk size 64.
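
A quick sanity check in DCL, using the example numbers above (this is plain integer arithmetic, nothing VMS-specific - a sketch, not a tuning tool):

$ cluster = 16    ! disk cluster size in blocks
$ chunk = 128     ! controller chunk size in blocks
$ small = cluster
$ large = chunk
$ if chunk .lt. cluster then small = chunk
$ if chunk .lt. cluster then large = cluster
$ remainder = large - (large / small) * small
$ if remainder .eq. 0 then write sys$output "Aligned: one size evenly divides the other"
$ if remainder .ne. 0 then write sys$output "Misaligned: allocations can straddle chunk boundaries"

With 16 and 128 the remainder is 0; every chunk holds exactly eight clusters.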

hth,
Hein.
Robert Gezelter
Honored Contributor

Re: Cluster size of large disks

Hein,

I may have written unclearly (reviewing my post, I said "sectors" where I meant "blocks" at one point). The terminology I am using is:

- BLOCK == 512 bytes (one or more sectors)
- TRACK == the set of blocks/sectors on a single head's pass over the media
- CYLINDER == the set of TRACKs which can be accessed without repositioning the heads (note that this definition works both for canonical drives, with one head per surface, and for variations where a drive has more than one head per surface). A small worked example follows.
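
For concreteness, a tiny DCL illustration with assumed geometry (the head and sector counts are invented; as Uwe noted, real drives report synthetic values anyway):

$ heads = 16      ! tracks per cylinder (assumed)
$ sectors = 63    ! blocks per track (assumed)
$ per_cyl = heads * sectors
$ write sys$output "Blocks per track: ''sectors', blocks per cylinder: ''per_cyl'"

On this hypothetical drive a TRACK holds 63 blocks and a CYLINDER holds 1008.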

The point I was trying to get across is that clusters are only file allocation units; they do not (directly) affect whether a disk operation will span track or cylinder boundaries (and yes, I am aware of how RMS_DFMBC is affected by cluster size, but that does not affect the basic concept).

Optimal selection of cluster size truly depends on what the file population on the disk is. If you are storing small files (e.g., command files, source listings, emails), then small cluster sizes produce the least breakage. Even modest cluster sizes (e.g., 11) can produce 20-30% breakage.
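
As a hedged illustration of where such figures come from (the 15-block average file size is an assumption, not a number from this thread):

$ cluster = 11    ! cluster size in blocks
$ filesize = 15   ! assumed average small file, in blocks
$ allocated = ((filesize + cluster - 1) / cluster) * cluster
$ breakage = ((allocated - filesize) * 100) / allocated
$ write sys$output "Allocated ''allocated' blocks, breakage ''breakage'%"

That allocates 22 blocks to hold 15 blocks of data - about 31% breakage, in line with the range above.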

On the other hand, if one is storing a huge RMS file or DBMS database, then a far larger cluster size is appropriate. In this I agree with the point (that I think) you are making.

In summary, I would consider the controller-imposed preferences to be a factor similar in import to the physical disk geometry issues, and try for a balance between them. In any event, the nature of the files stored on the volume is tremendously important. Large amounts of breakage reduce the overall efficiency of the storage system in many ways, and are to be avoided if possible.

- Bob Gezelter, http://www.rlgsc.com
Antoniov.
Honored Contributor

Re: Cluster size of large disks

Uwe gave a full description of the basic disk concepts. He's a big expert in this area.
Cluster size is called 'allocation unit' in other lands (e.g. Windoze); it simply means how many blocks the record system reads together in one I/O.
Read above for the performance considerations.

Antonio Vigliotti
Robert Gezelter
Honored Contributor

Re: Cluster size of large disks

Antoniov,

With all due respect, I must disagree with your last post.

In OpenVMS, the cluster factor is almost entirely a factor in how the disk is organized, NOT the determinant of how many blocks are read at a time (there are numerous other parameters which directly affect the size of I/O operations).
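
For example, the RMS multiblock and multibuffer counts are among those parameters; they can be set per process (the values below are purely illustrative):

$ SET RMS_DEFAULT /BLOCK_COUNT=32 /BUFFER_COUNT=4
$ SHOW RMS_DEFAULT

Settings like these (and the corresponding RMS_DFMBC-family SYSGEN parameters), not the cluster size, determine how many blocks RMS moves per I/O.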

The only exception of which I am aware is the implicit one: physical I/O operations, at the driver level, must be to physically contiguous sections of the disk. But this is a consequence of fragmentation, not specifically tied to cluster size.

- Bob Gezelter, http://www.rlgsc.com
Paul Hansford
Occasional Advisor

Re: Cluster size of large disks

>The application is a third party
>application that uses RMS and I've
>been told that the cluster size must
>be 51 or lower.

Is it that the application requires a cluster size of 51 or lower, or does the application have a hardcoded limit on allocated file size?

I once had to defrag a system disk, using a disk-to-disk image backup and restore.

The target disk of my backup was very large (about 100 GB), and I restored the original system disk to it with /NOINIT.
Hence the cluster size (and the minimum allocation size) was now large.

When I attempted to boot, I received the following (or similar):

%DECnet-I-LOADED, network base image loaded, version = 05.12.00

%DECnet-W-NOOPEN, could not open SYS$SYSROOT:[SYSEXE]NET$CONFIG.DAT

I logged a call with Compaq, and was reliably informed that the loader for the DECnet base image tries to load NET$CONFIG.DAT, but has a hardcoded file size limit of, I think, 128 blocks. My NET$CONFIG.DAT was larger than 128 blocks. I believe this problem occurred on OpenVMS 7.3-1. I don't know if it is still an issue, but my point is that even Compaq/HP have designed OS code which could not handle files over a specific hardcoded size.

Jeroen Hartgers_3
Frequent Advisor

Re: Cluster size of large disks

The option to set the cluster size has been available since OpenVMS 7.2.

If you make the cluster size smaller, you have to extend INDEXF.SYS manually during the INIT, and you must probably change your maximum files.

There is only one exception. If you restore a system disk you cannot use BACKUP/NOINIT, because the disk will not be bootable after this action.

Or you could make partitions, depending on your controller.
Uwe Zessin
Honored Contributor

Re: Cluster size of large disks

Forgive me, but all that doesn't make sense to me:

> The option to set the cluster size has been available since OpenVMS 7.2.

That's the first release to support a BITMAP.SYS > 255 blocks.

I have been able to use INITIALIZE/CLUSTER_SIZE for _many_, _many_ releases, back to at least V4.x.

> If you make the cluster size smaller you have to extend INDEXF.SYS...

Why? The storage allocation bitmap is in BITMAP.SYS, not INDEXF.SYS.
Maximum files determines the size of the file header allocation bitmap in INDEXF.SYS.

> There is only one exception.

Again, I don't understand that. BACKUP is supposed to properly update the boot block if you use BACKUP/IMAGE.
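
For reference, the sequence being debated might look like this (device names, label and values are hypothetical; treat it as a sketch, not a recipe):

$ INITIALIZE /CLUSTER_SIZE=16 /HEADERS=100000 DKA200: SYSDSK
$ MOUNT /FOREIGN DKA200:
$ BACKUP /IMAGE /NOINITIALIZE DKA100: DKA200:

With /NOINITIALIZE the restore keeps the parameters chosen at INITIALIZE time, and /IMAGE takes care of the directory back-links and the boot block.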
Karl Rohwedder
Honored Contributor

Re: Cluster size of large disks

BACKUP/NOINIT:

I am quite sure that I have copied system disks using BACKUP/IMAGE/NOINIT in order to change some relevant disk parameters, and the new system disks booted perfectly. (The /IMAGE is responsible for the correct back-linking of VMS directories.)

regards Kalle