
Rob Young_4
Frequent Advisor

Set volume/limit - real world experience


John Gillings in another thread writes:

> Note that there are circumstances where the
> command will legitimately fail with
>DEVICEFULL - that means there is insufficient
>contiguous space for the new BITMAP.SYS file.
>I imagine this would be rare in the real
>world.

Rare is relative. In a cluster with 190
mounted shadowsets, we were seeing it. Hence,
I created a procedure (see snippets in the other
thread) that used DFU to find the largest free
chunk and did the right thing.

Here is a peek at some disks that couldn't handle an outright set volume/limit:

$ set volume/limit=696320000 $1$DGA594:
$ set volume/limit=706560000 $1$DGA600:
$ set volume/limit=1239040000 $1$DGA552:
$ set volume/limit=1085440000 $1$DGA588:
$ set volume/limit=1239040000 $1$DGA474:
$ set volume/limit=1423360000 $1$DGA510:
$ set volume/limit=1239040000 $1$DGA522:
$ set volume/limit=527155200 $1$DGA61:
$ set volume/limit=394444800 $1$DGA72:

etc.

The problem, of course, is that in a limited
downtime window we can't afford to "fool
around" figuring out what a good fit is
(converting pre-7.3-2 disks). So an automated
method had to be created. The procedure
I created caught a volume that was so severely
fragmented (largest extent 1000 blocks) that
we thought it best to copy it over by hand and
set volume/limit on the new volume :-) ...

Rob


14 REPLIES
John Gillings
Honored Contributor

Re: Set volume/limit - real world experience

Rob,

The maximum possible size of BITMAP.SYS is 65535 blocks. That's for a cluster size of 8. So if you have at least one free extent larger than 65535 blocks, you can SET VOLUME/LIMIT to the maximum size. As your cluster size goes up, the required size of BITMAP.SYS goes down. A close enough calculation is:

65535*8/cluster-size

DFU or DEFRAG SHOW can tell you the size of the largest extent (the DEFRAG SHOW command does not require a license PAK).
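
In DCL terms, that back-of-the-envelope check might look something like this (just a sketch; the device name is only an example):

$! Roughly how large a contiguous free extent does SET VOLUME/LIMIT need here?
$ disk = "$1$DGA594:"                   ! example device
$ cluster = f$getdvi(disk,"CLUSTER")    ! volume cluster size in blocks
$ needed = 65535 * 8 / cluster          ! approx. size of the new BITMAP.SYS in blocks
$ write sys$output "Need roughly ''needed' contiguous free blocks"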

These days, I'd say that if you don't have a contiguous free extent larger than 65535 blocks on a disk, then the failure of SET VOLUME/LIMIT is not your highest priority!

Keep this in perspective. That's only 32MB - a tiny fragment on a modern disk. If you have to spend time worrying over a few cents worth of disk space there's something very wrong!

>(largest extent 1000 blocks),
>we thought it best to copy over by hand,
>set volume/limit on new volume :-)

I'd recommend working out the required extent size, then searching the disk for a file containing at least one extent that large. COPY the file somewhere else and DELETE the original. Now you can SET VOL/LIMIT, after which you may be able to copy the file back (but expect it to be more fragmented).
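
Once you've found a suitable victim file (DFU or DUMP/HEADER will show you a file's extents), the workaround itself is only a handful of commands; something along these lines, with purely made-up file and device names:

$! Hypothetical example: free a large contiguous extent, expand, copy back
$ copy $1$DGA61:[DATA]BIGFILE.DAT $1$DGA72:[HOLDING]   ! park the file elsewhere
$ delete $1$DGA61:[DATA]BIGFILE.DAT;*
$ set volume/limit $1$DGA61:                           ! default /LIMIT = maximum
$ copy $1$DGA72:[HOLDING]BIGFILE.DAT $1$DGA61:[DATA]   ! may come back more fragmented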

Of course, this issue will fade into oblivion over time because anyone INITIALIZEing a disk on V7.3-2 today is adding /LIMIT from the very beginning, right?...

A crucible of informative mistakes
Garry Fruth
Trusted Contributor

Re: Set volume/limit - real world experience

It would be nice if /LIMIT=maxvalue were the default with INITIALIZE.
John Gillings
Honored Contributor

Re: Set volume/limit - real world experience

re Garry,

>It would be nice if /LIMIT=maxvalue were the default with INITIALIZE.

The maximum possible value IS the default for both the INITIALIZE/LIMIT and SET VOLUME/LIMIT commands. HP recommends that you use the default.

Under "normal" conditions, I can't imagine a good reason to choose a lower limit. Worst case is to use a few MB of extra disk space. The potential benefits hugely outweigh the cost.

Rob apparently has some very badly fragmented disks, so he's chosen to set the volumes to lower limits.

Personally, I'd be fixing the underlying issue, but this is OpenVMS, so you have the freedom to choose what you think is best for your circumstances.
A crucible of informative mistakes
Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience

> DFU or DEFRAG SHOW can tell you the size of the largest extent
> (the DEFRAG SHOW command does not require a license PAK).

Right. And in Malcolm's thread I dropped this DCL
snippet to show how to go about getting that on the
fly:

$ get_largest_free_extent:
$ subroutine
$! Parses DFU's report for the size of the largest free extent.
$! Assumes DFU is defined as a foreign command; p1 = disk
$ dfu report 'p1' /output=dfu.tmp
$ pipe type/nopage dfu.tmp | search sys$pipe "largest free extent" | -
( read sys$input a ; define/job/nolog tlog &a)
$ large_fe_line = f$trnlnm("tlog")
$ large_fe_line = f$edit(large_fe_line,"TRIM,COMPRESS")
$ largest_free_extent == -
f$element(0," ",f$edit(f$element(1,":",large_fe_line),"TRIM"))
$ delete/nolog dfu.tmp;
$!
$ endsubroutine
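
Called something like this (example device), it leaves the size in blocks in the global symbol:

$ call get_largest_free_extent $1$DGA594:
$ write sys$output "Largest free extent: ''largest_free_extent' blocks"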


> These days, I'd say that if you don't have a contiguous extent larger than
> 65535 on a disk, then the failure of SET VOLUME/LIMIT is not your highest priority!

Fragmentation in some of our environments is a fact of life, and in most
cases inconsequential. And no, we don't run defraggers.

> Keep this in perspective. That's only 32MB - a tiny fragment on a modern disk.
> If you have to spend time worrying over a few cents worth of disk space there's
> something very wrong!

It isn't a worry about a few cents of disk space; it's that many of these disks
are pretty fragmented.

>>(largest extent 1000 blocks),
>>we thought it best to copy over by hand,
>>set volume/limit on new volume :-)

> I'd recommend working out the required extent size,
> then searching the disk for a file containing at least one
> extent that large. COPY the file somewhere else and DELETE the original.
> Now you can SET VOL/LIMIT, after which you may be able to copy the
> file back (but expect it to be more fragmented).

Impractical. We have a short downtime window and need to accomplish as
much as possible. We don't have time to "fool around" moving files around
to fix a problem. The fix was/is to create a procedure that would determine
the limit of /limit (ha). It took less than 5 minutes to SET VOLUME/LIMIT on
a large number of shadowsets running this procedure.


> Of course, this issue will fade into oblivion over time because anyone
> INITIALIZEing a disk on V7.3-2 today is adding /LIMIT from the very beginning,
> right?...

Of course. But that doesn't help someone with many hundreds of disks created pre-7.3-2.

Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience


> Rob apparently has some very badly
> fragmented disks, so he's chosen to set
> the volumes to lower limits.

Didn't have much of a choice. On a severely
fragmented disk, we know the problem.
Workarounds cost wall-clock time. Perhaps if
one only has to work their way through a few
dozen disks - have at it! I agree: find a file
with a large extent, move it, set volume/limit,
move it back.
The idea of a max_value isn't a bad one...

A numeric would do what a numeric should:

/LIMIT=numeric_goes_here

else

/LIMIT=MAXIMUM_OBTAINABLE

would trigger an algorithm that finds
the largest extent, does the right thing,
and then spits out an informational message
along the lines of:

SET-I-LIMIT "Limit set to 348435000"

or whatever. Wouldn't have to write a
script to handle set volume/limit on
badly fragmented disks.

Rob
Jan van den Ende
Honored Contributor

Re: Set volume/limit - real world experience

John Gillings wrote:


Of course, this issue will fade into oblivion over time because anyone INITIALIZEing a disk on V7.3-2 today is adding /LIMIT from the very beginning, right?...


Yes, of course, but...
even nowadays, questions from people who combined some disks into RAIDsets over 1 TB, and so hit 'the wall', are seen regularly.

And disk technology is still on fast-forward...

It is not too speculative to assume that in the foreseeable future single disks of > 1 TB will be coming, and then this whole /LIMIT scheme will look rather silly.

VMS is clearly not (yet?) prepared for > 1 TB disks.

Any chance that VMS will be there before the disk manufacturers are?

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
John Gillings
Honored Contributor

Re: Set volume/limit - real world experience

Rob,

>We have a short downtime window and need to accomplish as much as possible.

Maybe I'm missing something here. Non-destructive analysis of the disk doesn't need to be done during down time. It should be possible to work out what needs to be done on the live system, and even script it so that you can keep your downtime windows very short. Are you really THAT close to the limits of storage and processing power that you can't find 100K contiguous blocks?
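
For example, the analysis could run on the live system and simply emit the commands to be executed later in the window. A rough sketch (not Rob's actual procedure), reusing the get_largest_free_extent subroutine from the earlier post; the device wildcard and file name are just illustrative:

$! Pre-analysis pass on the live system: write the SET VOLUME/LIMIT commands
$! to a command file that can be executed during the downtime window.
$ open/write cmds sys$login:presized_limits.com
$ loop:
$   dev = f$device("DSA*")                  ! next shadow set
$   if dev .eqs. "" then goto done
$   if .not. f$getdvi(dev,"MNT") then goto loop
$   call get_largest_free_extent 'dev'
$   cluster = f$getdvi(dev,"CLUSTER")
$   needed = 65535 * 8 / cluster            ! bitmap blocks for the full limit
$   if largest_free_extent .ge. needed
$   then write cmds "$ set volume/limit ''dev'"    ! default /LIMIT = maximum
$   else
$     ! here largest_free_extent < needed, so the product stays below 2**31
$     limit = largest_free_extent * cluster * 4096
$     write cmds "$ set volume/limit=''limit' ''dev'"
$   endif
$   goto loop
$ done:
$ close cmds

During the window, the whole exercise then reduces to executing the generated command file.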

re: Jan

> Any chance that VMS will be there before the disk manufacturers are?

Too late for that! >1TB disks are here now. It will be a while before OpenVMS catches up.

However, note that the 1TB limit is per volume. It is already possible to have bound volume sets with total sizes over 1TB, so in theory you could have a >1TB disk in a storage subsystem, partition it into <1TB chunks then glue them back with bound volume sets (YUK! Two of my LEAST favorite storage concepts together - partitions and bound volumes).
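
Purely to illustrate what that gluing looks like (device names and labels are invented):

$! Illustrative only: two sub-TB units bound into one larger volume set
$ initialize/limit $1$DGA101: BIGVOL1
$ initialize/limit $1$DGA102: BIGVOL2
$ mount/system/bind=BIGSET $1$DGA101:,$1$DGA102: BIGVOL1,BIGVOL2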

Work is well underway on an OpenVMS file system that will support much larger storage volumes.
A crucible of informative mistakes
Robert Brooks_1
Honored Contributor

Re: Set volume/limit - real world experience

With respect to disk sizes in excess of 1 TB...

If you feel that it's important for VMS to support disk volumes larger than 1TB, please make your feelings known to VMS management.
Customer statements count much more than suggestions from VMS Engineering in this regard.

I can say with a high degree of certainty that unless a strong case is made, VMS will *not* support disks larger than 1TB.


-- Rob
Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience


> Are you really THAT close to the limits of
> storage and processing power that you can't
> find 100K contiguous blocks?

Yes. And in some cases we never will. I have
a number of archive disks with databases that
have used all of the disk but 15000-17000
blocks. So there is another reason why a tool
or method (/limit=maximum_obtainable) is, or
would be, handy.

It really is a matter of inconvenience. To
get around that inconvenience, because I don't
have time to "fool around", a homegrown tool
exists to work around size and fragmentation
issues with BITMAP.SYS. This tool (as
described elsewhere) works and takes a few
minutes to run.
Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience


Rob wrote:

> I can say with a high degree of certainty
> that unless a strong case is made,
> VMS will *not* support disks larger than 1TB.

This is probably not a bad thing at all, as VMS is an
Enterprise OS. Say you had a RAID5 set made up of 1 TByte disks:
maybe useful for archive, but for production a very bad thing,
as the wall-clock time to rebuild a very large RAID5 after a
hardware failure would be quite painful to deal with.
Likewise, imagine if that 1 TByte disk
had a number of LUNs carved out of it.
But am I against volumes > 1 TByte? No.


John wrote:

> However, note that the 1TB limit is per volume.
> It is already possible to have bound volume sets with total
> sizes over 1TB, so in theory you could have a >1TB disk in
> a storage subsystem, partition it into <1TB chunks then glue
> them back with bound volume sets (YUK! Two of my LEAST
> favorite storage concepts together - partitions and bound volumes).

> Work is well underway on an OpenVMS file system that will
> support much larger storage volumes.

This would be a good thing. My hope is it is something that can be
front-ended with a very nice GUI (it is 2005). My other hope is that it
includes RAID Software for OpenVMS - the new generation... such that:

- It is bundled (I wouldn't be surprised if it needs a shadowing
license to turn on. Paying back an investment).

- DPAs become much easier to set up (a GUI helps a lot here), and
with RAID 0+1 at the host and RAID at the storage level, we
would be very confident running 1, 2, 5 TByte volumes in
production (looking to the future of course).

- An RMS database for mount/dismount activity that can be manipulated
at the command line and/or the GUI. (I'm really tired of all the
homegrown mount and dismount procedures I've come across. Some are so
painfully obfuscated it takes a good deal of time to study and understand
them. All in the hope that you do the right thing when building
shadowsets or mounting disks on reboot. Sprinkled with f$getdvi,
"exists", "avail", "mount", etc.) Yes,
I'm aware of the OpenVMS management tool.
I'm thinking about an improvement on that
with DCL infrastructure and multiple
commands to manipulate the mount database.
Jan van den Ende
Honored Contributor

Re: Set volume/limit - real world experience

Rob,


This would be a good thing. My hope is it is something that can be
front-ended with a very nice GUI (it is 2005).


... as long as this does _NOT_ imply the use of M$ or *IX frontends to manipulate the disks!!

And on complex scripts for MOUNT functionality vs a GUI for it:

In my experience GUIs do fine for straightforward functionalities (including a sequence of tests and just a few different procedure roads), but as soon as serious complexity comes in (and take that to include _ANYTHING_ the GUI-builders did not build into the GUI!), then you are back to scripting, and _THEN_ that tends to be just more cumbersome than just plain scripting.

-- might well be attributable to my tendency to avoid GUIing, but it is what I also see with colleagues in the much more GUI-oriented Unix, and even in the highly GUI-based Mickey world.

fwiw,

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Ian Miller.
Honored Contributor

Re: Set volume/limit - real world experience

I agree with you about GUIs. They tend to encourage people who don't know what they are doing to do things they should not.

I've not seen a good case for a single volume >1Tb. There is normally a way to split up the data across volumes. Putting it all on a single volume is a way of avoiding the thought and planning involved in using multiple volumes.
____________________
Purely Personal Opinion
Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience


> ... as long as this does _NOT_ imply the
>use of M$ or *IX frontends to manipulate the >disks!!

Both of course. But a GUI for a number
of reasons:

1) Quick one-offs that don't require
recall of commands from a command line.

As an aside, to support the above... anyone
who has worked with SMIT (AIX's management
interface) and looked at the log files of
the actual command that ran - it is a
scary thing. A whole mish-mash of dashes,
upper- and lower-case characters, etc.

2) A visual - show me things and use color
so my eyeballs can pick out the obvious.

>And on complex scripts for MOUNT
>functionality vs a GUI for it:

>In my experience GUIs do fine for
>straightforward functionalities (including a
>sequence of tests and just a few different
>procedure roads), but as soon as serious
>complexity comes in (and take that to
>include _ANYTHING_ the GUI-builders did not
>build into the GUI!), then you are back to
>scripting, and _THEN_ that tends to be just
>more cumbersome than just plain scripting.

I'm sure there are many counter-examples to
the above - but the first that comes to
mind is storage software. You can do a lot
with command scripting from the host
side, but on the storage frame itself
(thinking of a particular vendor) your
interaction is via a GUI.

>-- might well be attributable to my tendency
>to avoid GUIing, but it is what I also see
>with colleagues in the much more GUI-oriented
>Unix, and even in the highly GUI-based
>Mickey world.

Databases are front-ended
by GUIs. Applications for the most part
are GUI front-ends (except for "legacy"
character cell apps). An OS needs a
scripting language as Microsoft found out.

Yes, you need a command line and scripting
to do things that are seriously outside the
scope of a GUI. The GUI would give you
something highly supportable (assuming your
day-to-day work doesn't drift outside it).
Rob Young_4
Frequent Advisor

Re: Set volume/limit - real world experience

> I agree with you about GUIs. They tend to encourage people
> who don't know what they are doing to do things they should not.

Bah. Thinking of storage software again - there are read-only
users. Badness isn't a GUI problem. Inept users can screw
things up regardless of interface (and if your access is
read-only, you can't screw things up at all).

> I've not seen a good case for a single volume >1Tb. There is normally
> a way to split up the data across volumes. Putting it all on a single volume
> is a way of avoiding the thought and planning involved in using multiple volumes.

At the rate data is growing for most of us, it becomes a management
headache. Some really old disks I have seen in the 2 GByte range are finally
going away. You can hardly even get 36 GByte drives with the storage I'm
thinking of... all this to say that what was common 10 years ago is laughable today.
Will TeraByte volumes be common 10 years from now? Maybe not, but I wouldn't be a bit
surprised if they were.