Operating System - OpenVMS

MSA1000 maximum units/Luns?

 
SOLVED
Peter Zeiszler
Trusted Contributor

MSA1000 maximum units/Luns?

I am inheriting support of an environment that uses an MSA1000 for its storage. There are plans in the works to add 3 new servers to the MSA1000. The current environment is 2 VMS servers attached, with 31 units/LUNs presented to VMS.
Someone on a conference call last night said that the MSA1000 has a maximum of 32 units it can present.
I can't find this in the MSA1000 user guide or the CLI guide.
Does anyone know if this is true, and which document contains that information?

I'm looking into this because, if it is true, I will have to reconfigure the current environment to be able to add the new VMS servers.
16 REPLIES
Hein van den Heuvel
Honored Contributor
Solution

Re: MSA1000 maximum units/Luns?



There are probably several places where this is documented. My first hit was in the QUICKSPECS:


Drives supported: Up to 42 drives
Maximum capacity: 12 TB (42 drives x 300 GB)
Logical drives (LUNs): Up to 32 logical drives
Maximum logical drive size: 2.0 TB


http://h18000.www1.hp.com/products/quickspecs/11033_div/11033_div.html

Too often I see logical units being 'sized' on the quaint notion that 50 GB or 200 GB is a 'nice number' and that all LUNs shall have the same size. Why?
Embrace the striping! Make 'em big! Make 'em as you need them, not restricted by some odd number (2.0 TB, the max, is not odd :-)

fwiw,
Hein
Richard Brodie_1
Honored Contributor

Re: MSA1000 maximum units/Luns?

Bill Hall
Honored Contributor

Re: MSA1000 maximum units/Luns?

Peter,

The maximum number of LUNs is 32. You can find it in this Product Overview at http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c00358098&lang=en&cc=us&taskId=101&prodSeriesId=377751&prodTypeId=12169.

It's also in the SAN Design Reference Guide part 3, http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c00310436/c00310436.pdf.

Bill
Bill Hall
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Thanks guys.
Hoff
Honored Contributor

Re: MSA1000 maximum units/Luns?

You could also look at moving to DAS controllers (current-generation PCIe hardware is faster than the FC in this old SAN gear) and, where applicable, shadowing the shared storage with the other boxes or with this old MSA SAN box; this assumes you've got GbE or preferably 10 GbE NICs. If you're lacking PCIe, fast DAS, and 10 GbE here, you might well be able to scrounge an EVA for a small outlay; used EVA-class widgets are dropping in price. (The MSA1500 upgrade path won't help here, as that too is limited to 32 logical units.)
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Limited funds for the project. So one follow-up question, since I haven't had to do this before: if I create a HUGE disk (i.e. 1 TB), how do I partition that into smaller drives?
We still use ODS-2, so I have to be aware of block sizes (yes, I know, old stuff).
Bill Hall
Honored Contributor

Re: MSA1000 maximum units/Luns?

No-cost options would be to use rooted logicals (logical roots) or to use the LD driver. The some-cost option would be to use HP RAID Software for OpenVMS; I believe you can create a single-member array with up to 64 partitions.
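For reference, a concealed rooted logical needs nothing but plain DCL; the device, directory, and logical names below are made up for illustration:

$! Define a concealed rooted logical pointing at a directory tree
$! on one big presented disk (device and path are examples only)
$ DEFINE /SYSTEM /EXEC /TRANSLATION_ATTRIBUTES=CONCEALED -
      APP_ROOT $1$DGA101:[APPS.]
$!
$! Users and applications then treat APP_ROOT like a disk of its own:
$ SET DEFAULT APP_ROOT:[LOGS]
$ DIRECTORY APP_ROOT:[000000]

Note the trailing period in [APPS.]; that is what makes the definition rooted rather than a plain directory logical.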

Bill
Bill Hall
Hoff
Honored Contributor

Re: MSA1000 maximum units/Luns?

Presuming you're not on an ancient OpenVMS version, both ODS-2 and ODS-5 file systems will correctly address one-tebibyte disks.

It's not until you're looking to upgrade to 1.5 and 2.0 terabyte disk spindles (or present similarly-sized synthetic disks) that this one-tebibyte addressing limit is (again) a problem.

The upcoming V8.4 release will reportedly allow 2.0 terabyte disk spindles; I'd expect that addressing limit will probably technically be two tebibytes.

Within the one tebibyte spindles (real disks or synthetic), you can probably most easily use concealed rooted logical names. Or yes, you can host-partition using LD or such.
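If I remember the LD syntax correctly (verify against LD HELP on your system), carving a partition out of one big presented disk looks roughly like this; the size, file name, and unit number are examples only:

$! Create a container file on the big disk (size in blocks)
$ LD CREATE $1$DGA101:[LD_DSK]USER1.DSK /SIZE=50000000
$!
$! Connect it to a virtual disk unit, then treat it like any disk
$ LD CONNECT $1$DGA101:[LD_DSK]USER1.DSK LDA1:
$ INITIALIZE LDA1: USER1
$ MOUNT /SYSTEM LDA1: USER1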
Steve Reece_3
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Yes, it's limited to presenting 32 LUNs. Remember also that the maximum partition size with VMS 8.3 and below is 1TB - if you have a partition bigger than that VMS won't understand it properly.

You could use the LD driver to create smaller partitions and you could use rooted logicals. You could also use the SW-RAID layered product, but that adds to complexity, cost and CPU overhead in doing disk IO (since the CPU on the VMS box has to work out where to write the data.)

When I was using these things (MSA1000s) day in, day out, I tended to size things for what my clients needed. They didn't need three 1TB disks. They typically needed a disk to boot from (maybe 100GB), a disk for command procedures and logs (maybe 5GB) and the rest was split into the big volumes that they needed.

Whilst others may say, "don't be scared, embrace the striping/RAID and make big volumes", that's not the only way and may not be suitable for your site. I would advocate creating the right-size disks on the MSA1000 and not letting logical disk drivers, rooted logicals, and other such stuff get in the way. The LD driver isn't supported by HP; it's there as an offering that may be of benefit, if I remember rightly. Rooted logicals are supported, but can lead to a mess if you don't realize what you're doing. Use the MSA1000 for what it was intended, by presenting the right-size disks for the right job.

Steve
Jan van den Ende
Honored Contributor

Re: MSA1000 maximum units/Luns?

Re Steve:

>>>
Rooted logicals are supported, but can lead to a mess if you don't realize what you're doing.
<<<

I beg to differ on that !!!

Rooted logicals are a GREAT thing for keeping things organised and device-independent.
Look at the way VMS itself is implemented:
many different drive technologies have been used over the years, yet it has always been just SYS$SYSROOT & friends.
And any backup of a system disk can be restored onto any drive and just work, pretty much independent of the hardware on the source and destination systems.

I really LOVE rooted logicals!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Jur van der Burg
Respected Contributor

Re: MSA1000 maximum units/Luns?

> The LD Driver isn't supported by HP, it's there as an offering that may be of benefit if I remember rightly.

That's correct. But I know that LD is used to build VMS.... So if there's a problem with it it will be fixed in no time.

And if you ever find an issue then let me know and I'll fix it. But there have been very few issues with it.

Jur (LDdriver author).
Steve Reece_3
Trusted Contributor

Re: MSA1000 maximum units/Luns?

>>>Rooted logicals are a GREAT thing to have things organised, and device-independant. <<<

I agree, provided that you realize that it's a rooted logical and don't try and do things like an image restore on your rooted logical. A case of know your environment and understand its limitations.

>>>That's correct. But I know that LD is used to build VMS.... So if there's a problem with it it will be fixed in no time.

And if you ever find an issue then let me know and I'll fix it. But there have been very few issues with it. <<<

I'm grateful to you for making it available Jur. It's dragged me out of the clag on several occasions. However, some environments are such that freeware just isn't allowed. It's supported software or nothing, even if it's really useful freeware.
In the case that the OP is describing, using the MSA1000 functionality to present the right sized disks for the purpose is possibly a far better route than having big disks ("because it can") and then splitting them up into the sizes required with the LD driver or rooted logicals.

Just because you can doesn't mean that you should...
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Unfortunately it's a matter of adding more servers to access this MSA1000 and adding 10+ more LUNs (which apparently it can't handle).
I have to be careful of the cluster size of the disks (that's one concern about going to HUGE disks), and I can't go to ODS-5 until I can get them to test everything (a full battery of tests and burn-in) before they would even consider using ODS-5. The end users would also need to test all their scripts, etc., since they would then be going to a case-sensitive device. I can't use anything we haven't extensively tested in a test environment, or that wouldn't be supported by HP, as they are the main hands-on support; this is an offsite (overseas) environment. Rooted logicals could possibly work, but we would still have to contend with the disk cluster size (and still would have to get them to test in a test environment).

I have used the RAID software, and even though it does work, I would rather not add it. I really didn't like it: it added 20 minutes to a normal boot-up on one cluster we managed. It was great when we got EMC hooked to that cluster with properly sized devices.

Thanks for the input.
Peter Zeiszler
Trusted Contributor

Re: MSA1000 maximum units/Luns?

Steve hit the nail on the head: supported software is a requirement. Anything we do with non-supported software, we have to leave the support to the end customer. So far it has bitten us in the backside a few times, trying to support them with non-supported software.

I just read up on the LD driver, and it is great. I just played with it on a small 35 GB drive and created a small partition in no time. Now to try to get a spot on a test cluster and play with it between clustered nodes.

I wonder why it's not officially supported.
Great job on it.
Jan van den Ende
Honored Contributor

Re: MSA1000 maximum units/Luns?

Peter,

>>>
I have to be careful of the cluster size of the disks (thats one concern of going to HUGE disks)
<<<

But it looks like you are confusing two unrelated things here. ODS-5 is NOT in play here!

The big-cluster-size-for-big-volumes requirement was lifted with VMS 7.2 (more than 10 years ago!). The 7.2+ restriction for 1 TB is a cluster size >= 8. And with the MSA (as with all SAN storage) there are other reasons to go with multiples of 16 anyway, giving you a VMS-side limit of 2 TB (but AFAIK you need 8.4 to go over 1 TB anyway).

So, THIS is no argument against big 'presented' disks.

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Jon Pinkley
Honored Contributor

Re: MSA1000 maximum units/Luns?

Peter,

What version(s) of VMS are you using? As Jan said, any version of VMS >= 7.2 can handle extended bitmaps, which will let you use a cluster size of 8 on a 1 TB disk (or 4 on a 512 GB disk, or 2 on a 256 GB disk, or 1 on a 128 GB disk). So the mere fact that you are using ODS-2 is not an issue, as long as you are on VMS 7.2 or higher.

LDDRIVER and DFU are two utilities that, in my opinion, should be on every VMS system. There were plans at one time to include them as supported software, but those plans changed. Both tools were written by Digital employees as internal tools, and then made available on the freeware CDs. Now they are available from Jur's site http://www.digiater.nl

If you use LD devices, it is possible to have all "disks" used by a server on a single LUN, but there are fewer complications if you can use one LUN for the system disk and one for your user partitions (LD devices). For example, it makes it much easier to back up just the system disk when upgrading.

If you are going to use writable LD devices, and you don't want the disks to be rebuilt every time the system reboots, you will need to make sure they get dismounted before the device their container file is on. In other words, if LDA101 has its container file on $1$DGA2, then LDA101 should be dismounted (and ideally disconnected) before $1$DGA2 is dismounted.
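For that LDA101-on-$1$DGA2 example, the shutdown-time ordering would look something like this (a sketch only; device names taken from the example above):

$! Dismount the LD device before the disk holding its container file
$ DISMOUNT $4$LDA101:
$ LD DISCONNECT $4$LDA101:
$!
$! Only now is it safe to dismount the underlying MSA unit
$ DISMOUNT $1$DGA2: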

Unfortunately, SYS$SYSTEM:SHUTDOWN.COM does not have a user callout for dismounting disks. SYSHUTDWN.COM can do this, but it is executed before user processes are stopped and installed images are removed, so either you need to put all that logic in your SYSHUTDWN.COM, or modify SYS$SYSTEM:SHUTDOWN.COM to do the dismounts or to call out to a command procedure of yours that will. Be aware that it is at least possible to create LD devices on other LD devices (like the cat in the hat), so if you want to be absolutely sure that devices are dismounted in the correct order, you will need to parse the output of $ LD SHOW /ALL or write a program to do something similar.

We have an EVA, so the only thing I normally use LD devices for is to emulate CD-ROM devices. The LD devices can be write protected and saved in metadata in the container file header, so even if the device is mounted without the /nowrite qualifier, the disk will still be mounted write-locked (just like a real CD-ROM device would behave). For disks that are mounted /nowrite, you don't need to worry about the order in which the devices are dismounted.
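If I recall the LD command correctly (please verify against LD HELP; the device name and label here are examples), making that write protection permanent looks like this:

$! Store the write protection in the container file header,
$! so it survives disconnects and reconnects
$ LD PROTECT LDA101: /PERMANENT
$!
$! Later mounts come up write-locked even without /NOWRITE
$ MOUNT /SYSTEM LDA101: VMSCD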

Here's an example of nested LD devices (just for demonstration, I don't recommend doing this)

$ pipe free | search/nowin sys$pipe lda ! free is Hunter Goatley's FREE disk space
$4$LDA101: LEVEL1 50344 (51%) 49656 (49%) 100000
$4$LDA102: LEVEL2 25329 (51%) 24671 (49%) 50000
$4$LDA103: LEVEL3 221 ( 1%) 24779 (99%) 25000
$ ld show /all
%LD-I-CONNECTED, Connected _$4$LDA101: to $1$DGA2210:[000000_LD_DSK]LEVEL1.DSK;1
%LD-I-CONNECTED, Connected _$4$LDA102: to $4$LDA101:[000000_LD_DSK]LEVEL2.DSK;1
%LD-I-CONNECTED, Connected _$4$LDA103: to $4$LDA102:[000000_LD_DSK]LEVEL3.DSK;1
$

For these to be cleanly dismounted, you must first dismount and preferably disconnect the disk that has no active container files on it, in this case $4$lda103:, then $4$lda102, then finally $4$lda101:

LD devices work fine in a cluster, but there is nothing analogous to the configure process to automatically create the devices on other cluster nodes. LD devices cannot be MSCP served, but if the device their container file lives on is MSCP served, that has the same effect.

To create LD devices clusterwide, you can do something like the following:

$ mcr sysman set environment/cluster
SYSMAN> set profile /priv=all
SYSMAN> do ld connect $1$DGA2210:[000000_LD_DSK]LEVEL1.DSK lda101 /share
SYSMAN> do mount/system lda101 level1
SYSMAN> do ld connect $4$LDA101:[000000_LD_DSK]LEVEL2.DSK lda102 /share
SYSMAN> do mount/system lda102 level2
SYSMAN> do ld connect $4$LDA102:[000000_LD_DSK]LEVEL3.DSK lda103 /share
SYSMAN> do mount/system lda103 level3
SYSMAN> exit

I think you will find that LD devices are well worth the effort to learn to use them. They are extremely useful for testing, and when extra features like tracing are not active, they have low overhead as well.

Jon
it depends