
Re: init problem

 
SOLVED
Cfabio
Frequent Advisor

init problem

Hi

I'm trying to initialize a disk. My disk has 1048576000 total blocks.

I know that the default maximum_files is:
volume size in blocks /((cluster factor + 1)*2)

So with a cluster_size of 4 -> 104857600
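As a quick sanity check, that arithmetic can be reproduced at the DCL prompt (a minimal sketch; DCL does plain 32-bit integer division):

$ ! default MAXIMUM_FILES for a 1048576000-block disk, cluster factor 4
$ WRITE SYS$OUTPUT 1048576000 / ((4 + 1) * 2)
104857600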

I want to increase this value, so I'm thinking of using 157286400.

I've given the commands:
init /cluster_size = 4 /header = 157286400 /maximum_files = 157286400 $1$DGA150: EVA-D5

mount $1$DGA150: EVA-D5

With show dev $1$DGA150: /full I now see:
Cluster size 4
Maximum files allowed 16711679


Why do I obtain a different number?
The help says: "If /LIMIT is specified and no value is set for /MAXIMUM_FILES, the default is 16711679 files", but this isn't my case...

Regards
9 REPLIES
Jan van den Ende
Honored Contributor
Solution

Re: init problem

Cfabio,

the formula you cite used to be correct, and sufficient.
Nowadays disks have become big enough that other limiting factors are reached, and you ran into one of them.
I do not have the exact formula at hand right now, but IIRC it is something like
(2**24 - 2**14 - 1) (I'm not sure of the 14 here).
By the way, EVA would much prefer a cluster size that is a multiple of 16. Furthermore, the maximum disk size is related to the cluster size in such a way that cluster=4 only allows for 0.5 TB, whereas 8 allows 1 TB. Starting with VMS 8.4, "disks" (also when presented as such by SANs) can be 2 TB, but that requires at least cluster=16.

Multiple reasons to upscale your cluster size!
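For illustration only (untested; the device name and label are re-used from your post, and /LIMIT is assumed to be wanted for later volume expansion), an INIT along those lines might look like:

$ INIT /CLUSTER_SIZE=16 /LIMIT $1$DGA150: EVA-D5
$ MOUNT $1$DGA150: EVA-D5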

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
P Muralidhar Kini
Honored Contributor

Re: init problem

Hi Cfabio,

Discussion on a similar topic -
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=975346

Hope this helps.

Regards,
Murali
Let There Be Rock - AC/DC
Shriniketan Bhagwat
Trusted Contributor

Re: init problem

Hi Cfabio,

The INIT command you used here looks OK to me. Can you try with the default cluster size?
The maximum size you can specify for any volume is:

(volume size in blocks)/(cluster factor + 1)

You may want to try with different values of the cluster factor.
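A hedged arithmetic check of that formula against the original 1048576000-block disk with cluster factor 4:

$ WRITE SYS$OUTPUT 1048576000 / (4 + 1)
209715200

That is well above both the requested 157286400 and the observed 16711679, so this formula alone does not explain the result.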

Regards,
Ketan
Jan van den Ende
Honored Contributor

Re: init problem

@Ketan,

>>>
Can you try with default cluster size?
<<<

Unless the default cluster size recently changed without me noticing that, it is still the archaic value of 3.

Ever since maximum disk sizes grew beyond about 500 MEGAbytes (as in, less than one tenth of a percent of current sizes), that value has turned out to be about the worst possible choice; and even before then, when memory allocation chunks became larger than .5 K, it did not exactly map nicely.

Just NEVER use that default!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Jon Pinkley
Honored Contributor

Re: init problem

Cfabio,

Well, in this case it looks like a documentation error in HELP INIT. It should state: "The maximum size you can specify for any volume is as follows: The smallest of (volume size in blocks)/(cluster factor + 1) or the absolute maximum value 16711679."

As Jan and the thread that Murali referenced stated, the absolute maximum number of files supported on either ODS-2 or ODS-5 volumes is 16711679, and has been for a long time. It can't be increased by much, because File IDs have only 24 bits for the file number, and 2^24 = 16777216. 16711679 is (2^24) - (2^16) - 1. The 2^16 is probably "slack" to prevent overflows, etc.
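Jon's arithmetic, checked at the DCL prompt:

$ WRITE SYS$OUTPUT 16777216 - 65536 - 1   ! (2^24) - (2^16) - 1
16711679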

If you really need more than 16 million files, you will need to have more devices. The FID does allow for Relative Volume Numbers, so you can create a bound volume set (BVS) and possibly have more than 16 million in the BVS, but I would not recommend using bound volume sets unless you really know what you are doing.

Jon
it depends
Hein van den Heuvel
Honored Contributor

Re: init problem

The documentation could be a little clearer and mention that the default for MAX_FILES with /LIMIT is also the absolute maximum for MAX_FILES.

http://h71000.www7.hp.com/doc/84final/9996/9996pro_130.html#blue_117

Ketan (Jan), the default cluster size is simply 1-millionth of MAXBLOCKS. Since each normal file has at least one cluster of storage, that also defines the maximum for MAX_FILES at 1M, which is much less than Fabio hoped to get.

The default for MAX_FILES (without /LIMIT) is based on an average of at least 2 clusters per file. Simple.
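Hein's two rules of thumb, expressed as hedged DCL arithmetic for the 1048576000-block disk in question:

$ WRITE SYS$OUTPUT 1048576000 / 1000000   ! default cluster factor: 1-millionth of MAXBLOCKS
1048
$ WRITE SYS$OUTPUT 1048576000 / 1048      ! roughly 1M files at one cluster each
1000549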

That +1 in the formulas is the HEADER block which each file, with or without allocation, occupies in INDEXF.SYS.

Murali has provided a good link where this 16M file limit is already discussed.

Fabio, if you really need that many files, then you may have to consider re-creating that 500 GB in smaller chunks.
For up to 160M files, you'll need 10 chunks (157,286,400 / 16,711,679).
You can then re-combine the chunks with a MOUNT/BIND into a single namespace, if so desired, but I would not go there too quickly.
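The chunk count, checked with round-up integer division, plus a hedged MOUNT/BIND sketch (device names and volume labels are hypothetical):

$ WRITE SYS$OUTPUT (157286400 + 16711678) / 16711679   ! rounds up to 10 chunks
10
$ MOUNT /BIND=BIGSET $1$DGA151:, $1$DGA152: CHUNK1, CHUNK2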

If the drive space has to remain as it is, then you can consider using the LD device driver to create smaller sub-disks in container files or in LBN ranges.
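A minimal sketch of that LD approach, assuming the LD utility has been started (e.g. via its startup procedure) and with purely illustrative file names, device names, and sizes:

$ LD CREATE DKA100:[CONTAINERS]SUB1.DSK /SIZE=100000000   ! 100M-block container file
$ LD CONNECT DKA100:[CONTAINERS]SUB1.DSK LDA1:
$ INIT /CLUSTER_SIZE=16 /LIMIT LDA1: SUB1
$ MOUNT LDA1: SUB1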

You have to ask yourself what the value, and the costs (backup), of a single large namespace are.
Maybe there is a natural division?
Maybe search lists to re-combine where/when needed?

You may also want to take a huge step back and reconsider the provenance of the need for all those files.
Maybe files were used where rows in tables, or records in files, would be better suited?
Maybe 'blobs' in a database are more desirable as a storage method?
What 'value' does a file offer for the application: space, name, attributes?

Maybe files can be bundled (and unbundled) in large groups on the fly in ZIP files, BACKUP containers, or OpenVMS (TEXT) Library files.

Bundling would certainly also be a lot more space effective.
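An illustrative bundle/unbundle round trip with BACKUP save sets (directory names are hypothetical):

$ BACKUP [.DATA...]*.*;* BUNDLE.BCK /SAVE_SET   ! bundle many small files into one
$ BACKUP BUNDLE.BCK /SAVE_SET [.RESTORED...]    ! unbundle again when needed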

Hope this helps some!
Hein van den Heuvel
HvdH Performance Consulting
P Muralidhar Kini
Honored Contributor

Re: init problem

Jon, Hein,

>> Well, in this case it looks like a documentation error in HELP INIT.
>> It should state "The maximum size you can specify for any volume is
>> as follows: The smallest of (volume size in blocks)/(cluster factor + 1)
>> or the absolute maximum value 16711679."
Yes, that's a good point.

The HELP message probably could have explicitly said, for /MAXIMUM_FILES:
* 16711679 is the maximum value
* if a value greater than that is given, 16711679 is used instead.

The text that you have given says it all.

I will log an internal problem report for the documentation change.

Regards,
Murali
Let There Be Rock - AC/DC
Jon Pinkley
Honored Contributor

Re: init problem

"smaller of" would probably be better than "smallest of", since there are only two quantities involved.

The point that needs to be clear is that it is very easy to hit the absolute limit with current disk sizes.
it depends