init problem
06-30-2010 01:44 AM
I'm trying to initialize a disk. My disk has Total blocks: 1048576000
I know that the default maximum_files is:
volume size in blocks / ((cluster factor + 1) * 2)
So with a cluster_size of 4 -> 104857600
I want to increase this value, so I'm thinking of using 157286400.
I've given the commands:
init /cluster_size = 4 /header = 157286400 /maximum_files = 157286400 $1$DGA150: EVA-D5
mount $1$DGA150: EVA-D5
With show dev $1$DGA150: /full I now have:
Cluster size 4
Maximum files allowed 16711679
Why do I obtain a different number?
The help says: "If /LIMIT is specified and no value is set for /MAXIMUM_FILES, the default is 16711679 files", but this isn't my case...
Regards
Solved! Go to Solution.
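The arithmetic in the question can be checked with a short sketch (Python here purely for illustration; the block count, cluster size, and reported limit are the values quoted in the post):

```python
# Default MAXIMUM_FILES formula cited in the post:
#   volume size in blocks / ((cluster factor + 1) * 2)
total_blocks = 1048576000    # Total blocks from SHOW DEVICE
cluster_size = 4

default_max_files = total_blocks // ((cluster_size + 1) * 2)
print(default_max_files)          # 104857600, matching the post

requested = 157286400             # value passed to /MAXIMUM_FILES
reported = 16711679               # value SHOW DEVICE /FULL actually shows
print(requested > reported)       # True: the request exceeds what was applied
```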
06-30-2010 02:25 AM
The formula you cite used to be correct, and sufficient.
Nowadays disks have become big enough that other limiting factors are reached, and you ran into one of them.
I do not have the exact formula at hand right now, but IIRC it is something like
(2**24 - 2**14 - 1) (I'm not sure of the 14 here)
By the way, EVA would much prefer a cluster size that is a multiple of 16, and furthermore, the maximum disk size is related to cluster size in such a way that cluster=4 only allows for 0.5 TB, whereas 8 allows 1 TB. Starting with VMS 8.4, "disks" (also when presented as such by SANs) can be 2 TB, but that requires at least cluster=16.
Multiple reasons to upscale your cluster size!
hth
Proost.
Have one on me.
jpe
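The cluster-size/volume-size relationship mentioned above is consistent with a storage bitmap of one bit per cluster, capped at 65535 blocks of 4096 bits each. That bitmap limit is an assumption here, not something stated in the thread, but the resulting numbers line up with the 0.5/1/2 TB figures:

```python
# Assumption (for illustration only): BITMAP.SYS is limited to 65535 blocks,
# each block holding 4096 bits, with one bit per cluster.
BITMAP_BITS = 65535 * 4096          # maximum number of clusters on a volume

def max_volume_blocks(cluster_size):
    """Largest volume (in 512-byte blocks) addressable at this cluster size."""
    return BITMAP_BITS * cluster_size

for cs in (4, 8, 16):
    tb = max_volume_blocks(cs) * 512 / 2**40   # blocks -> bytes -> TB
    print(cs, round(tb, 2))                    # 4 -> 0.5, 8 -> 1.0, 16 -> 2.0
```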
06-30-2010 02:32 AM
Re: init problem
Discussion on a similar topic -
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=975346
Hope this helps.
Regards,
Murali
06-30-2010 02:49 AM
Re: init problem
The init command you used here looks OK to me. Can you try with the default cluster size?
The maximum size you can specify for any volume is:
(volume size in blocks) / (cluster factor + 1)
You may want to try different values of the cluster factor.
Regards,
Ketan
06-30-2010 02:56 AM
Re: init problem
Below are a few ITRC threads that discuss similar topics.
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1250428
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=976903
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1431785
Maybe they will be helpful.
Regards,
Ketan
06-30-2010 03:30 AM
Re: init problem
>>>
Can you try with default cluster size?
<<<
Unless the default cluster size recently changed without me noticing, it is still the archaic value of 3.
Ever since maximum disk sizes grew beyond about 500 MEGAbytes (as in, less than one tenth of a percent of current sizes), that value has been about the worst possible choice, and even before then, when memory allocation chunks became .5 K, it did not exactly map nicely.
Just NEVER use that default!
Proost.
Have one on me.
jpe
06-30-2010 06:05 AM
Re: init problem
Well, in this case it looks like a documentation error in HELP INIT. It should state "The maximum size you can specify for any volume is as follows: The smallest of (volume size in blocks)/(cluster factor + 1) or the absolute maximum value 16711679." As Jan and the thread that Murali referenced stated, the absolute maximum number of files supported on either ODS-2 or ODS-5 volumes is 16711679 and has been for a long time. It can't be increased by much, because File IDs have only 24 bits for the file number, and 2^24 = 16777216. 16711679 is (2^24) - (2^16) - 1. The 2^16 is probably "slack" to prevent overflows, etc.
If you really need more than 16 million files, you will need to have more devices. The FID does allow for Relative Volume Numbers, so you can create a bound volume set (BVS) and possibly have more than 16 million in the BVS, but I would not recommend using bound volume sets unless you really know what you are doing.
Jon
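Jon's arithmetic above can be verified directly; the 24-bit file number and the 2^16 "slack" reproduce the documented cap exactly:

```python
# File IDs reserve 24 bits for the file number; per the discussion above,
# the documented cap sits 2**16 + 1 below the raw 24-bit maximum.
FID_FILE_NUMBER_BITS = 24
raw_limit = 2**FID_FILE_NUMBER_BITS        # 16777216
absolute_max_files = raw_limit - 2**16 - 1 # "slack" below the raw limit
print(absolute_max_files)                  # 16711679, the value HELP cites
```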
06-30-2010 06:18 AM
Re: init problem
http://h71000.www7.hp.com/doc/84final/9996/9996pro_130.html#blue_117
Ketan (and Jan), the default cluster size is simply 1-millionth of MAXBLOCKS. Since each normal file has at least one cluster of storage, that also caps MAX_FILES at 1M, which is much less than Fabio hoped to get.
The default for max_files (without /limit) is based on an average of at least 2 clusters per file. Simple.
That +1 in the formulas is the HEADER block which each file, with or without allocation, occupies in INDEXF.SYS.
Murali has provided a good link where this 16M-file limit is already discussed.
Fabio, if you really need that many files, then you may have to consider re-creating that 500 GB in smaller chunks.
For up to 160M files, you'll need 10 chunks ( 157,286,400 / 16,711,679 ).
You can then re-combine the chunks with a MOUNT/BIND if so desired into a single namespace, but I would not go there too quickly.
If the drive space has to remain as it is, then you can consider using the LD device driver to create smaller sub-disks in container files or in LBN ranges.
You have to ask yourself what the value, and costs (backup), of a single large name space are.
Maybe there is a natural division?
Maybe search lists to re-combine where/when needed?
You may also want to take a huge step back and reconsider the provenance of the need for all those files.
Maybe files were used where rows in tables, or records in files, would be better suited?
Maybe 'blobs' in a database are a more desirable storage method?
What 'value' does a file offer the application: space, name, attributes?
Maybe files can be bundled (and unbundled) in large groups on the fly in ZIP files, BACKUP containers, or OpenVMS (text) library files.
Bundling would certainly also be a lot more space effective.
Hope this helps some!
Hein van den Heuvel
HvdH Performance Consulting
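The chunk count Hein quotes is a straightforward ceiling division against the per-volume file limit; a quick sketch:

```python
import math

# How many volumes ("chunks") are needed for a target file count when each
# volume tops out at the absolute 16,711,679-file limit discussed above.
ABSOLUTE_MAX_FILES = 16711679

def chunks_needed(target_files):
    return math.ceil(target_files / ABSOLUTE_MAX_FILES)

print(chunks_needed(157286400))   # 10, as in the post
print(chunks_needed(16711679))    # 1: a single volume still suffices
```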
06-30-2010 06:23 AM
Re: init problem
>> Well, in this case it looks like a documentation error in HELP INIT.
>> It should state "The maximum size you can specify for any volume is
>> as follows: The smallest of (volume size in blocks)/(cluster factor + 1)
>> or the absolute maximum value 16711679
Yes, that's a good point.
The HELP message probably could have explicitly said, for /MAXIMUM_FILES:
* 16711679 is the maximum value
* if a value greater than that is given, 16711679 is used instead.
The text that you have given says it all.
I will log an internal problem report for the documentation change.
Regards,
Murali
06-30-2010 06:35 AM
Re: init problem
The point that needs to be clear is that it is very easy to hit the absolute limit with current disk sizes.