Operating System - OpenVMS

Re: MSA1000 maximum units/Luns?

 
Jan van den Ende
Honored Contributor

Re: MSA1000 maximum units/Luns?

Re Steve:

>>>
Rooted logicals are supported, but can lead to a mess if you don't realize what you're doing.
<<<

I beg to differ on that !!!

Rooted logicals are a GREAT way to keep things organised and device-independent.
Look at the way VMS itself is implemented:
many different drive technologies have been used over the years, yet it has always been just SYS$SYSROOT & friends.
And any backup of a system disk can be rolled back on any drive and just work, pretty much independent of hardware on source and destination system.
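
For example, a rooted logical of your own might look something like this (the device, directory, and logical names here are just placeholders):

$ DEFINE/SYSTEM/EXEC/TRANSLATION=(CONCEALED,TERMINAL) APP_ROOT $1$DGA101:[APPDATA.]
$ ! Everything then goes through the root, so the physical device never
$ ! appears in command procedures or application code:
$ DIRECTORY APP_ROOT:[LOGS]
$ COPY APP_ROOT:[CONFIG]SETTINGS.DAT APP_ROOT:[ARCHIVE]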

I really LOVE rooted logicals!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Jur van der Burg
Respected Contributor

Re: MSA1000 maximum units/Luns?

> The LD Driver isn't supported by HP, it's there as an offering that may be of benefit if I remember rightly.

That's correct. But I know that LD is used to build VMS... so if there's a problem with it, it will be fixed in no time.

And if you ever find an issue then let me know and I'll fix it. But there have been very few issues with it.

Jur (LDdriver author).
Steve Reece_3
Trusted Contributor

Re: MSA1000 maximum units/Luns?

>>>Rooted logicals are a GREAT way to keep things organised and device-independent. <<<

I agree, provided that you realize that it's a rooted logical and don't try to do things like an image restore on your rooted logical. A case of know your environment and understand its limitations.
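
If you're not sure whether a given logical is rooted before you do something drastic, a quick check is (the logical name here is just an example):

$ SHOW LOGICAL/FULL APP_ROOT
$ ! A rooted logical shows the CONCEALED and TERMINAL translation attributes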

>>>That's correct. But I know that LD is used to build VMS... so if there's a problem with it, it will be fixed in no time.

And if you ever find an issue then let me know and I'll fix it. But there have been very few issues with it. <<<

I'm grateful to you for making it available Jur. It's dragged me out of the clag on several occasions. However, some environments are such that freeware just isn't allowed. It's supported software or nothing, even if it's really useful freeware.
In the case that the OP is describing, using the MSA1000 functionality to present the right sized disks for the purpose is possibly a far better route than having big disks ("because it can") and then splitting them up into the sizes required with the LD driver or rooted logicals.

Just because you can doesn't mean that you should...
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Unfortunately, it's a matter of adding more servers to access this MSA1000 and adding 10+ more LUNs (which apparently it can't handle).
I have to be careful of the cluster size of the disks (that's one concern of going to HUGE disks), and I can't go to ODS-5 until I can get them to test everything (a full battery of tests and burn-in) before they would even consider using ODS-5. The end users would also need to test all their scripts, etc., since they would then be going to a case-sensitive device. I can't use anything we haven't extensively tested in a test environment, or anything that wouldn't be supported by HP, as they are the main hands-on support for this offsite (overseas) environment. Rooted logicals could possibly work, but we would still have to contend with the disk cluster size (and would still have to get them to test in a test environment).

I have used RAID software, and even though it does work I would rather not add that. I really didn't like it; it added 20 minutes to a normal boot on one cluster we managed. It was great when we got EMC hooked to that cluster with properly sized devices.

Thanks for the input.
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Steve hit the nail on the head - supported software is a requirement. Anything we do with non-supported software, we have to leave the support to the end customer. So far it has bitten us in the backside a few times when trying to support them with non-supported software.

I just read up on the LD driver and it is great. I just played with it on a small 35 GB drive and created a small partition in no time. Now to try to get a spot on a test cluster and play with it between clustered nodes.
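
The basic sequence boils down to something like this (file, label, and device names here are just examples):

$ LD CREATE $1$DGA5:[LD_CONTAINERS]TEST.DSK /SIZE=2000000  ! container of ~1 GB (size is in blocks)
$ LD CONNECT $1$DGA5:[LD_CONTAINERS]TEST.DSK LDA1:
$ INITIALIZE LDA1: TESTVOL
$ MOUNT/SYSTEM LDA1: TESTVOL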

I wonder why it's not officially supported.
Great job on it.
Jan van den Ende
Honored Contributor

Re: MSA1000 maximum units/Luns?

Peter,

>>>
I have to be careful of the cluster size of the disks (that's one concern of going to HUGE disks)
<<<

But it looks like you are confusing two unrelated things. ODS-5 is NOT in play here!

The big-cluster-size-for-big-volumes requirement was lifted with VMS 7.2 (more than 10 years ago!). On 7.2+ the restriction for 1 TB is a cluster size >= 8. And with the MSA (as with all SANs) there are other reasons to go with multiples of 16 anyway, giving you a VMS-side limit of 2 TB (but AFAIK you need 8.4 to go over 1 TB anyway).
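
So on 7.2 or later you can just pick the cluster size when you initialize the presented disk; something like (device name and label are only examples):

$ INITIALIZE/CLUSTER_SIZE=16 $1$DGA110: BIGDATA   ! a multiple of 16, well above the minimum of 8
$ MOUNT/SYSTEM $1$DGA110: BIGDATA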

So, THIS is no argument against big 'presented' disks.

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Jon Pinkley
Honored Contributor

Re: MSA1000 maximum units/Luns?

Peter,

What version(s) of VMS are you using? As Jan said, any version of VMS .ge. 7.2 can handle extended bitmaps, which will let you use a cluster size of 8 on a 1 TB disk (or 4 on a 512 GB disk, or 2 on a 256 GB disk, or 1 on a 128 GB disk). So just the fact that you are using ODS-2 is not an issue, as long as you are using VMS 7.2 or higher.
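
A quick way to see what an existing volume is using (device name is just an example):

$ SHOW DEVICE/FULL $1$DGA110:
$ ! The volume information in the display includes the cluster size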

LDDRIVER and DFU are two utilities that in my opinion should be on every VMS system. There were plans at one time to include them as supported software, but those plans changed. Both tools were written by Digital employees as internal tools, and then made available on the freeware CDs. Now they are available from Jur's site http://www.digiater.nl

If you use LD devices, it is possible to have all "disks" used by a server on a single LUN, but there are fewer complications if you can use one LUN for the system disk and one for your user partitions (LD devices). For example, it makes it much easier to back up just the system disk when upgrading.

If you are going to use writable LD devices, and you don't want the disks to be rebuilt every time the system reboots, you will need to make sure they get dismounted before the device their container file is on. In other words, if lda101 has its container file on $1$dga2, then lda101 should be dismounted (and ideally disconnected) before $1$dga2 is dismounted.

Unfortunately, SYS$SYSTEM:SHUTDOWN.COM does not have a user callout for dismounting disks. SYSHUTDWN.COM can do this, but it is executed before user processes are stopped and installed images are removed, so either you need to put all that logic in your SYSHUTDWN.COM, or modify SYS$SYSTEM:SHUTDOWN.COM to do the dismounts or to call out to a command procedure of yours that will. Be aware that it is at least possible to create LD devices on other LD devices (like the cat in the hat), so if you want to be absolutely sure that devices are dismounted in the correct order, you will need to parse the output of $ ld show /all or write a program to do something similar.
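
A minimal sketch of what such a callout could do for the simple one-level case (device names are examples only, and a real procedure should check the status of each step):

$ ! dismount the LD device before the disk holding its container file
$ DISMOUNT $4$LDA101:
$ LD DISCONNECT $4$LDA101:
$ DISMOUNT $1$DGA2: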

We have an EVA, so the only thing I normally use LD devices for is to emulate CD-ROM devices. LD devices can be write-protected, with the protection saved as metadata in the container file header, so even if the device is mounted without the /nowrite qualifier, the disk will still be mounted write-locked (just as a real CD-ROM device would behave). For disks that are mounted /nowrite, you don't need to worry about the order in which the devices are dismounted.
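
If I remember the syntax correctly, making that write protection permanent looks something like this (device name is an example):

$ LD PROTECT LDA105: /PERMANENT
$ ! The protection is stored in the container file header, so it survives
$ ! a disconnect/reconnect and a MOUNT without /NOWRITE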

Here's an example of nested LD devices (just for demonstration, I don't recommend doing this)

$ pipe free | search/nowin sys$pipe lda ! free is Hunter Goatley's FREE disk space utility
$4$LDA101: LEVEL1 50344 (51%) 49656 (49%) 100000
$4$LDA102: LEVEL2 25329 (51%) 24671 (49%) 50000
$4$LDA103: LEVEL3 221 ( 1%) 24779 (99%) 25000
$ ld show /all
%LD-I-CONNECTED, Connected _$4$LDA101: to $1$DGA2210:[000000_LD_DSK]LEVEL1.DSK;1
%LD-I-CONNECTED, Connected _$4$LDA102: to $4$LDA101:[000000_LD_DSK]LEVEL2.DSK;1
%LD-I-CONNECTED, Connected _$4$LDA103: to $4$LDA102:[000000_LD_DSK]LEVEL3.DSK;1
$

For these to be cleanly dismounted, you must first dismount (and preferably disconnect) the device that has no active container files on it, in this case $4$lda103:, then $4$lda102:, and finally $4$lda101:.
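
In DCL that teardown would look roughly like this, following the nesting above:

$ DISMOUNT $4$LDA103:
$ LD DISCONNECT $4$LDA103:
$ DISMOUNT $4$LDA102:
$ LD DISCONNECT $4$LDA102:
$ DISMOUNT $4$LDA101:
$ LD DISCONNECT $4$LDA101: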

LD devices work fine in a cluster, but there is nothing analogous to the configure process to automatically create the devices on other cluster nodes. LD devices cannot be MSCP served, but if the device that their container file is created on is MSCP served, it has the same effect.

To create LD devices clusterwide, you can do something like the following:

$ mcr sysman set environment/cluster
SYSMAN> set profile /priv=all
SYSMAN> do ld connect $1$DGA2210:[000000_LD_DSK]LEVEL1.DSK lda101 /share
SYSMAN> do mount/system lda101 level1
SYSMAN> do ld connect $4$LDA101:[000000_LD_DSK]LEVEL2.DSK lda102 /share
SYSMAN> do mount/system lda102 level2
SYSMAN> do ld connect $4$LDA102:[000000_LD_DSK]LEVEL3.DSK lda103 /share
SYSMAN> do mount/system lda103 level3
SYSMAN> exit

I think you will find that LD devices are well worth the effort to learn to use them. They are extremely useful for testing, and when extra features like tracing are not active, they have low overhead as well.

Jon
it depends