Operating System - OpenVMS

Re: Do we need a disk optimization ?

 
Ofery
Occasional Advisor

Do we need a disk optimization ?

Hello,

 

We are running Alpha VMS 6.2 under the CHARON emulator, and the disk storage is virtual, hosted on NetApp.

Currently we have a third-party product, PerfectDisk, which runs weekly and optimizes all the users' disks.

 

I wonder if it's really effective to optimize (defragment) the disks in this case?

 

In the old days, when we had a real Alpha machine and real physical disks, it was very effective. But in our case, VMS runs under emulation and the disks are virtual.

 

Regards /Ofer

 

7 REPLIES
Volker Halle
Honored Contributor

Re: Do we need a disk optimization ?

Ofer,

 

whether you need disk optimization to reduce the run-time effect of disk fragmentation in your configuration is questionable. But if what your application needs is contiguous disk space, then that requirement is independent of your emulated and virtualized disk configuration and might still require defragmentation.
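As a hypothetical illustration of Volker's point (the device and file names here are made up), a contiguous copy of a badly fragmented file can be requested with the /CONTIGUOUS qualifier on COPY; the copy fails if the volume has no single free extent large enough, which is exactly the situation defragmentation addresses:

```
$ ! Attempt a contiguous copy of a file; this fails if the volume
$ ! lacks a large-enough contiguous free extent
$ COPY /CONTIGUOUS DKA100:[DATA]BIGFILE.DAT DKA100:[DATA]BIGFILE_C.DAT
```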

 

Volker.

Hoff
Honored Contributor

Re: Do we need a disk optimization ?

I'm going to guess there's a reason why you're asking about this.  Have you collected some performance data?  If not, start there.  Performance issues could be secondary to fragmentation, or it could be the emulator, the host system, or the underlying hardware that's limiting your performance.  Emulator environments are much more complex.  I once traced one down that turned out to be secondary to something going on in another fibre channel zone, in another operating system on another box, that happened to be sharing the same controller as the VMS emulation...  Get some data.  Look at the trends.  See if defrag is an issue or not.

 

Yes, a virtual Alpha with a virtual disk will have virtual fragmentation.

 

Whether that matters depends greatly on what you're up to: on your application activity, on your sensitivity to the overhead involved, and on how effective the emulator is at serving up the virtual storage.  The old metrics for looking at fragmentation are still in play; virtually.  But if the emulator hauls the whole virtual disk into host virtual memory (not always so good, should the emulator crash or the power be cut), then you'll run at near-memory speeds, even with a fragmented disk.  So... it depends.


If your virtual disks are big enough, and there's virtually no reason not to have big virtual disks with lots of virtual free space, then setting the file allocation and extent defaults will reduce the occurrence of fragmentation.
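One way to raise those defaults, sketched here with an arbitrary example value, is via SET RMS_DEFAULT; a larger extend quantity means files grow in fewer, larger extents, so fewer fragments accumulate:

```
$ ! Show the current RMS defaults, then raise the default extend
$ ! quantity (in blocks) system-wide; 2000 is an arbitrary example
$ SHOW RMS_DEFAULT
$ SET RMS_DEFAULT /EXTEND_QUANTITY=2000 /SYSTEM
```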

 

I'd probably just look for badly fragmented and heavily-accessed (hot) files and check those, and then BACKUP /IMAGE once in a while.  Maybe once a quarter or once a year, such as when you would normally create a whole-disk, verified, offline, complete, restorable fire-safe backup of your environment.
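A whole-disk image save along those lines might look like the following sketch (the device and save-set names are placeholders); note that restoring an image save set lays the files back down contiguously, which is itself a defragmentation of sorts:

```
$ ! Image backup of the user disk to a save set, with verification.
$ ! DKA100: and the save-set path are placeholders; ideally the
$ ! source volume is mounted privately or the system is quiesced.
$ BACKUP /IMAGE /VERIFY DKA100: MKA500:USERDISK.BCK /SAVE_SET
```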

 

I'd spend more time ensuring I had good backups, and on related tasks.

Ofery
Occasional Advisor

Re: Do we need a disk optimization ?

Hello,

 

The optimization procedure has been running weekly for years, probably since before we moved VMS from a real physical Alpha with real disks to the CHARON/NetApp environment.

So, in principle: is it effective to defragment disks in such an environment?

Or can we say that defragmenting disks in such an environment is needless because the storage is virtual, it will not improve disk performance, and it is a waste of time and resources?

 

Regards /Ofer

 

MarkOfAus
Valued Contributor

Re: Do we need a disk optimization ?

It's a virtual machine with a virtual disk on a host machine, so it would depend to a degree on the sort of storage used.

 

The true difference is in the host serving the virtual machine. We have such beasts and the disk caching done means a lot of IO is purely in memory.

 

In fact, if your virtual machine running Alpha emulation is using SSDs then defragmentation is not only useless, it's degrading performance for no reason.

 

abrsvc
Respected Contributor

Re: Do we need a disk optimization ?

In fact, if your virtual machine running Alpha emulation is using SSDs then defragmentation is not only useless, it's degrading performance for no reason.

 

For the most part, I would agree with this.  But my question would be this:

 

The "virtual" system mimics the real one, as we all agree.  That means that fragmentation will also occur.  Aside from the performance of the system, which I agree won't be a problem, I would argue that file fragmentation WILL be a problem from the standpoint of increased file header sizes and window pointers.  Also, wouldn't even the "virtual" disks have limits on the number of available disk clusters, etc.?
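The header growth Dan describes can be inspected directly.  This sketch (the file name is hypothetical) dumps a file header so the retrieval pointers in the map area are visible: each pointer is one fragment.  Enlarging the mapping window at mount time is one way to reduce window turns on fragmented files:

```
$ ! Dump only the file header (no data blocks) and look at the
$ ! map area: each retrieval pointer is one file fragment
$ DUMP /HEADER /BLOCK=COUNT=0 DKA100:[DATA]BIGFILE.DAT
$
$ ! Mount with a larger mapping window (the default is 7 pointers)
$ MOUNT /WINDOWS=25 DKA100: USERDISK
```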

 

Defragmentation has more to it than just performance.

 

Dan

MarkOfAus
Valued Contributor

Re: Do we need a disk optimization ?

The key here is the OP is not sure why it's running defragmentation. I think it would be sound policy to investigate IF this is needed - especially if you're running SSDs as your virtual disks.

 

But regardless of the disk design, if you are having issues with file extents, headers and so on, then as it's a virtual disk, design and initialize a new disk to meet your requirements.
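Designing a fresh volume as suggested might be sketched like this; the device name, label, cluster size, and header counts are all placeholders to be tuned to the actual file population:

```
$ ! Initialize a new virtual disk with a cluster size and header
$ ! count chosen for the expected file mix; values are examples only
$ INITIALIZE /CLUSTER_SIZE=16 /HEADERS=100000 /MAXIMUM_FILES=500000 -
        DKA200: NEWUSER
$ MOUNT /SYSTEM DKA200: NEWUSER
```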

 

It is silly to continue some disk schema that is no longer suitable just so you can run defragmentation.

The_Doc_Man
Advisor

Re: Do we need a disk optimization ?

Whether you need disk optimization depends on whether you need to create AND APPEND very often, and how big the files are that you would append.

 

On our systems, we create lots of small files that will (usually) fit into a single disk-cluster (Windows:  Think "Allocation Unit").   The odds are VERY HIGH that if we delete a file, another small file will find that slot quickly because VMS doesn't have a "wastebasket" for recently deleted files.

 

The only reason WE would need to defrag a disk went away some years ago when we started using ORACLE on a back-end SOLARIS box and just use some client software on our Integrity boxes.  We don't have huge files any more because our big files migrated to the back-end box.

 

You need to defrag when you have to append to files a lot (because the <close><wait-a-while><append><repeat> cycle just about guarantees a bunch of short fragments) or when you have to create a contiguous file.  If you are working on some of the larger SCSI-2 drives with 136 GB formatted, your allocation unit is on the order of 288 blocks, which is a LOT of text (about 144 KB); it will take you a while to fragment a file on such a disk.

 

Usually, if you have a defragger package, it includes an analysis tool.  Look for "largest contiguous free space" and compare it to the largest contiguous file you need to build.  Then use any performance analysis tool, or simply run MONITOR IO, to see whether, over the course of a day, you have a large number of "split I/O" counts.  A split I/O occurs when a file isn't contiguous and a read or write has to cross the allocation boundaries.
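A minimal way to gather the split-I/O trend described above might be the following sketch (the interval, duration, and summary file name are arbitrary choices):

```
$ ! Sample I/O statistics once a minute for a working day and
$ ! write a summary file; check the split-transfer (split I/O) rate
$ MONITOR IO /INTERVAL=60 /SUMMARY=MONIO.SUM /ENDING="+8:00" /NODISPLAY
```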

 

If you don't create contiguous files, don't create a lot of large files, and don't have a lot of split I/O, you probably don't need to defrag.  Even if your system "churns" a bunch of small files, you won't need to defrag.

 

Security+ Certified; HP OpenVMS CSA (v8)