Operating System - OpenVMS

Re: SET VOLUME/LIMIT failing

 
Malcolm Wade
Valued Contributor

Re: SET VOLUME/LIMIT failing

Output from DFU is attached; it doesn't look fragmented to me!
Rob Young_4
Frequent Advisor

Re: SET VOLUME/LIMIT failing


Nope, not at all.

Bust out one shadow set member and try SET VOLUME/LIMIT with just the physical device mounted (like the documentation shows).
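
A sketch of that test (all device names here are hypothetical; adjust for your configuration):

$! Remove one member and test on the bare physical disk.
$ DISMOUNT $1$DGA101:                       ! drop one shadow set member
$ MOUNT/OVERRIDE=IDENTIFICATION $1$DGA101:  ! mount the physical device privately
$!   (you may also need /OVERRIDE=SHADOW_MEMBERSHIP on an ex-member)
$ SET VOLUME/LIMIT $1$DGA101:               ! try the limit change here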

Rob
Malcolm Wade
Valued Contributor

Re: SET VOLUME/LIMIT failing

Been there, done that, same problem!

HP is investigating.

Malcolm
John Gillings
Honored Contributor

Re: SET VOLUME/LIMIT failing

This case has been reported to engineering.

It looks like the expansion limit on this volume HAS been increased to somewhere in the right ballpark, given the volume and cluster sizes.

I suspect we've got a boundary condition in the arithmetic for this specific case that is resulting in a bogus message.

It was a "W" level message which means "the operation might have worked, but we're not certain". I'll update this thread when we find out for sure.
A crucible of informative mistakes
Rob Young_4
Frequent Advisor

Re: SET VOLUME/LIMIT failing


Ah, your SHOW DEV/FULL shows that the expansion size limit is already about 2.15 billion blocks. No wonder it is failing.

Total blocks         104857600   Sectors per track             128
Total cylinders           6400   Tracks per cylinder           128
Logical Volume Size  104857600   Expansion Size Limit   2150449152


Here is what I think you are trying to do: expand onto a larger drive. Is one of the two drives in that shadow set larger than the other?

If so, dismount the smaller one (and skip to set_vol: below).

If they are the same size, dismount one and add a larger drive. When the copy completes, dismount the remaining smaller drive.

Now expand the shadow set to the size of the remaining large drive (and watch the logical volume size go to the total block size):

$ set_vol:
$ SET VOLUME/SIZE DSAnnn:


Rob
Rob Young_4
Frequent Advisor

Re: SET VOLUME/LIMIT failing


Ah ... modify that last one; I was missing a point.

Just dismount a drive and add a larger one; when the copy completes, dismount the smaller one and do SET VOLUME/SIZE.
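
Put together, the corrected sequence might look like this (DSA224:, the member names and the label are all hypothetical):

$! $1$DGA101:/$1$DGA102: are the small members, $1$DGA103: the larger drive.
$ DISMOUNT $1$DGA101:                            ! drop one small member
$ MOUNT/SYSTEM DSA224:/SHADOW=($1$DGA103:) DATA  ! add the larger drive; a copy starts
$!   ... wait for the shadow copy to complete (watch SHOW DEVICE DSA224:) ...
$ DISMOUNT $1$DGA102:                            ! drop the remaining small member
$ SET VOLUME/SIZE DSA224:                        ! expand to the large drive's size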
Ian Miller.
Honored Contributor

Re: SET VOLUME/LIMIT failing

The disk cluster size is also relevant to this problem. Try initializing the disk with a smaller cluster size.
____________________
Purely Personal Opinion
Jan van den Ende
Honored Contributor

Re: SET VOLUME/LIMIT failing

Ian,

As I understand it, a _BIG_ clustersize is NOT an issue! Only clustersizes below 8 limit the LIMIT and SIZE values to (1/8 TB) * (clustersize).
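
As a worked check of that formula for a clustersize of 4 (a sketch; the constants follow from one bitmap block mapping 4096 clusters and a maximum BITMAP.SYS of 65536 blocks):

$ max_clusters = 65536 * 4096      ! max bitmap blocks * bits per block
$ max_blocks = max_clusters * 4    ! times cluster size 4
$ WRITE SYS$OUTPUT max_blocks      ! 1073741824 blocks = 0.5 TB = (1/8 TB) * 4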

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Ian Miller.
Honored Contributor

Re: SET VOLUME/LIMIT failing

I am told that if the cluster size is > 8 then there is a problem. Try with cluster size = 8.
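
For reference, re-initializing with an explicit cluster size looks like this (the device name and label are placeholders, and INITIALIZE erases the disk):

$! WARNING: INITIALIZE destroys all data on the volume.
$ INITIALIZE/CLUSTER_SIZE=8 $1$DGA100: MYVOL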
____________________
Purely Personal Opinion
John Gillings
Honored Contributor
Solution

Re: SET VOLUME/LIMIT failing

This has been reported to OpenVMS engineering, who have confirmed that it is a minor bug in the XQP. It is in the process of being fixed.

The bug is purely cosmetic - the message is bogus.

The first SET VOLUME/LIMIT on a volume with a cluster size greater than 8 works correctly and no message is issued. The second and subsequent SET VOLUME/LIMIT commands don't change anything, since the volume is already at the maximum expansion limit. However, the commands issue the message:

%SET-E-NOTSET, error modifying _DSA224:
-SYSTEM-W-DEVICEFULL, device full; allocation failure

The command should just return (as it does for cluster size 8 and lower).
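
In other words, on a volume with cluster size greater than 8 (DSA224: taken from the message above):

$ SET VOLUME/LIMIT DSA224:   ! first time: limit raised correctly, no message
$ SET VOLUME/LIMIT DSA224:   ! second time: already at maximum, so the bogus
$                            ! NOTSET/DEVICEFULL message is issued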

Note that there are circumstances where the command will legitimately fail with DEVICEFULL: it means there is insufficient contiguous space for the new BITMAP.SYS file. I imagine this would be rare in the real world. It's simple to check: do a SHOW DEV/FULL and look at the expansion limit. The exact value for a fully expanded volume will vary with cluster size, but for maximum expansion it should be a 10-digit number beginning with 21. If you're already there, you can ignore the message. If not, then BITMAP.SYS needs a contiguous extent larger than 65536*8/cluster-size blocks.
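
A quick way to compute that required extent (a sketch; DSA224: is the device from the message above):

$ clu = F$GETDVI("DSA224:","CLUSTER")   ! volume cluster size
$ needed = (65536 * 8) / clu            ! contiguous blocks BITMAP.SYS needs
$ WRITE SYS$OUTPUT "BITMAP.SYS needs ''needed' contiguous blocks"
$ SHOW DEVICE/FULL DSA224:              ! check "Expansion Size Limit" by eye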
A crucible of informative mistakes