05-24-2010 03:31 PM
Re: Disk Initialization Parameters for good performance
This disk was reporting HEADERFULL a couple of months ago and was more than 90% fragmented (per DFG Defrag), so I made a disk-to-disk backup, creating a new disk ($1$ DGA75) with the parameters I mentioned. I then presented the new disk in place of the original and removed the fragmented one. We no longer see the HEADERFULL symptom, but I have had to defragment again using the same method (because of the disk's large size and the time a defragment takes). This disk, along with others I have also defragmented, is processed and updated in batch mode, and the batch execution times have risen excessively. My question is whether any of the initialization parameters I used may be affecting these processing times.
05-24-2010 08:07 PM
Re: Disk Initialization Parameters for good performance
You need to initialize the disk based on your requirements.
Here is some information about the relevant initialization parameters.
The file system allocates and deallocates disk space in multiples of the cluster size. If a disk has a cluster size of 18 (as in your case), no file occupies fewer than 18 blocks. A large cluster size helps reduce file fragmentation, but may waste disk space.

The /MAXIMUM_FILES qualifier defines the maximum number of files that the volume may hold. Internally it sizes a bitmap kept in INDEXF.SYS. Each bit in this bitmap maps to a file header in INDEXF.SYS: if the bit is 0 (zero) the file header is available and may be used to create a new file; if the bit is 1 (one) the header is in use. This bitmap decreases the time needed to allocate an unused file header; by checking the state of individual bits in the bitmap, a sequential search of INDEXF.SYS for a free header is avoided. The number of bits in this bitmap determines the maximum number of file headers the disk could ever have, not the number of headers the disk currently has. The actual number of headers is determined by the /HEADERS qualifier and by the expansion of INDEXF.SYS.
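Since space is allocated in whole clusters, the round-up cost per file can be sketched as follows (a minimal illustration, using the cluster size of 18 mentioned in this thread):

```python
def allocated_blocks(file_blocks: int, cluster_size: int) -> int:
    """Blocks actually consumed: the file size rounded up to a
    whole number of clusters, since allocation is in cluster multiples."""
    clusters = -(-file_blocks // cluster_size)  # ceiling division
    return clusters * cluster_size

# With a cluster size of 18, even a 1-block file consumes 18 blocks,
# and a 19-block file consumes 36.
print(allocated_blocks(1, 18))   # 18
print(allocated_blocks(19, 18))  # 36
```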
The default value for MAXIMUM_FILES is usually sufficient. It is determined from the size of the disk using the following algorithm:
MAXIMUM_FILES = (((DISK_SIZE_IN_BLOCKS + 4095)/4096) + 254)/255
If the disk is going to be used for many small files, this qualifier can be used to increase the value. Specifying a large value for MAXIMUM_FILES has very little impact on disk space.
When INDEXF.SYS is created on a freshly initialized volume, only some file headers are actually created; the default number is 16, and the /HEADERS qualifier changes that number. The default of 16 file headers is enough for about 10 files to be created, which is generally too small. Once those 16 file headers are in use, the next file create or extend must also expand INDEXF.SYS to make room for more file headers. INDEXF.SYS is limited to about 50 extents (the actual value is between 28 and 77). Once those 50 expansions are done, any further attempt to expand INDEXF.SYS (i.e. another file create) fails with the "SYSTEM-F-HEADERFULL, file header is full" error. The number of headers specified should be based on an estimate of the number of files and directories that will be on the disk. Each header requires one block of disk space (512 bytes). Specifying too many headers wastes disk space; specifying too few causes INDEXF.SYS expansion and fragmentation, which can impact performance and may lead to the HEADERFULL error.
Regards,
Ketan
05-24-2010 08:22 PM
Re: Disk Initialization Parameters for good performance
HEADERFULL is a typical symptom when the disk is heavily fragmented and files have many extents. If INDEXF.SYS can no longer be extended, then as a temporary workaround you can delete some unwanted or temporary files from the volume. This frees the headers associated with the deleted files for reuse, eliminating the immediate need to extend the index file. You can also use BACKUP to save and restore the volume, or defragment the volume, to get rid of the HEADERFULL.
Regards,
Ketan
05-24-2010 08:34 PM
Re: Disk Initialization Parameters for good performance
There is 40% free space right now, right?
Let's say you are willing to waste 5% in round-up.
That would be 671088640 / 20 / 736000 ≈ 45 blocks per file.
To me that would suggest a cluster size of at least 32, maybe 64.
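Hein's arithmetic above can be reproduced as a sketch (the 671088640-block disk size and the roughly 736,000 files are figures from this thread; dividing by 20 takes the 5% round-up budget):

```python
disk_blocks = 671088640       # total disk size in blocks (from the thread)
file_count = 736000           # approximate number of files (from the thread)

waste_budget = disk_blocks // 20           # 5% of the disk, in blocks
per_file = waste_budget // file_count      # acceptable round-up per file

print(waste_budget)  # 33554432
print(per_file)      # 45 -> suggests a cluster size of 32, maybe 64
```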
>> My question now is whether the initialization in this manner could be negatively affecting the performance, because currently we are having some slowness.
That's a big leap. I suspect you know a lot more to make you think in this direction. Please help us help you and explain some of your inputs/thinking.
High window turn rate?
Ugly MONI FCP, FILE pictures?
>> and the execution batch time has risen excessively.
Let us hear some more... Just execution time increase with similar CPU time and Direct IO counts? That could be slower IOs... or less efficient XFC cache. But maybe just something completely different changed over time. Like more XFC memory pressure and less files cached, or 'duplicate key chains' building in RMS indexed files, or...
>> highly fragmented above 90%
I like DFU to study that.
Commands like $ DFU REPORT DISK and $ DFU SEARCH/FRAG=MINI=500
What are you doing to minimize fragmentation at the source, rather than trying to fight it after the fact? File pre-allocation? SET RMS/EXTEN=XXX?
You may want to split the volume: one part for the smaller growing and shrinking files, where those can fight it out amongst themselves, and a second part to let the large (RMS indexed) files grow in peace with a large cluster size and good contiguous-best-try allocation and extends. The second drive would never need to be defragged. The first would be hopeless and can just be left alone, but if you want, a defragger will have an easy time with many smaller, un-opened files.
The volume is labeled 'RMS'. Does it hold large, growing files?
I like manually pre-extending them when they are almost out of allocated space. [I have some tools to quickly show space in use and free per area, and another tool to extend a specific area by a specific number of blocks (no 65K limit... I've pre-extended files in production from 40GB to 60GB).]
Good luck!
Regards,
Hein van den Heuvel
HvdH Performance Consulting
05-24-2010 10:12 PM
Re: Disk Initialization Parameters for good performance
You don't say how your performance is bad now, only that you had the HEADERFULL error.
Is the performance bad for reads or just writes? If it's just writes, are you sure that the cache batteries in your SAN array are still functional and holding charge?
Steve
05-25-2010 11:09 AM
Re: Disk Initialization Parameters for good performance
I've done a LOT of testing with various arrays. I use the DISK_BLOCK freeware tool to place differing loads on arrays. What I tend to find is that with most current SAN storage arrays (MSA arrays, EVA Storage Arrays, and XP disk arrays) it is better to use a cluster size that is a power of 2. So, 2, 4, 8, 16, 32, 64, 128 and so on.
Essentially by using a power of two as the cluster size, the OpenVMS disk cluster tends to be better aligned with the cache segments within the SAN storage array.
Additionally, if the volume will have a large I/O request rate, it is often better to use a large cluster size. Though this may "waste" space, as the size of physical drives increases and the cost per MB decreases, we can start to focus more on the performance gain over the cost. We need to continue to balance them, of course. But the cost has less of an impact in the equation.
Let's use two examples. If we have a user volume with a lot of small files and reports, then a lower cluster size (16) might make more sense, since the chance of wasting space is greater. But if we have a data volume with a lot of RMS files, we might want to consider a much larger cluster size (64 or 128) to help decrease the I/O request rate against this volume.
So, how does a larger cluster size help improve performance? Well, most of the current arrays are designed to handle a moderate number of large I/O requests. By moving to a larger cluster size we tend to influence things such as the size of the extents that are allocated, the default bucket sizes of RMS files, and so on.
And of course, we hit the point of diminishing returns quite quickly. But I would note that 64 is a much better number than 18: OpenVMS will often do one I/O request instead of 3.5 (plus change) I/O requests to move the same amount of data.
How much does all this matter? Well, in my testing, I tend to see about a 5 to 10 percent improvement in throughput by increasing the cluster size.
And of course it depends on the workload. But a rule of thumb would be to use a power of two for the cluster size and try to balance the need to decrease I/O requests against the cost of the "wasted" space.
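That rule of thumb can be sketched with a small helper (hypothetical, not from the thread): round a per-file estimate up to the next power of two, capped at a sensible maximum:

```python
def rule_of_thumb_cluster(blocks_per_file: int, cap: int = 128) -> int:
    """Smallest power-of-two cluster size >= the per-file estimate,
    capped (the thread suggests 64 or 128 for RMS data volumes)."""
    c = 2
    while c < blocks_per_file and c < cap:
        c *= 2
    return c

print(rule_of_thumb_cluster(45))   # 64
print(rule_of_thumb_cluster(500))  # 128 (capped)
```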
Also, the impact does depend on the technology of the SAN storage array. EVA storage arrays can sustain more I/O requests on a single host port than the XP disk array, but the XP disk array can handle FAR larger I/O requests (based on the size of the data transferred). So, the answer is very much array dependent.
So, while I agree with the answers in general: since OpenVMS no longer manages the storage directly and depends on the SAN storage controller, an awareness of what works best in that environment is needed to choose the correct cluster size for a specific volume.
Hope that helps.
05-26-2010 04:12 AM
Re: Disk Initialization Parameters for good performance
Using a cluster size that is a power of two does not directly impact the I/O transfer size. It only indirectly impacts the I/O transfer size for each I/O request.
What are some of the ways it impacts the I/O transfer size? It will directly impact the default RMS bucketsize. That will help RMS files, especially indexed files. But it will not impact much else. It will indirectly impact the fragmentation. Larger extents can help reduce fragmentation, though on a volume with MANY small files it will tend to waste space.
But Hein was correct to point out that it only indirectly impacts the I/O transfer size. If your application does single-block I/O requests, that is exactly what OpenVMS will do.
As in all things OpenVMS, your mileage will vary. I recommend testing this in your own environment. And of course, that can be easier said than done.
05-26-2010 05:23 AM
Re: Disk Initialization Parameters for good performance
Experience with previous similar questions leads me to be extremely skeptical that the disk volume structures have anything to do with the underlying performance problems.
Until statistics are collected and benchmarks are established, these sorts of "fix that" or "add an SSD" discussions are typically futile. The actual performance-limiting factors can lurk in most any spot within the typical complex application or operating system environment. (This is also why RT-ish OSs are starting to appear again in widespread use, too.)
With OpenVMS, you should have some T4 data.
And if you can't spin a few extra disks on your server here, go address that. A typical laptop can have an SSD in this capacity range, or can have HDD capacities well beyond 344 GB.
Why do I mention evaluating the configuration? Disk hardware vibration in a storage array can be a massive performance factor; even moderate vibration can really trash access and transfer speeds across a disk shelf. I've seen a full failure trash a full shelf, and retries caused by more subtle vibration can still slam throughput.
Measure.
Don't guess.
(Or guess, but then measure. And then compare.)
Go get that T4 data.
06-08-2010 07:35 AM
Re: Disk Initialization Parameters for good performance