Software RAID on VMS

SOLVED
Vladimir Fabecic
Honored Contributor

Software RAID on VMS

Hello
I have a question from a customer about software RAID for VMS. They would like to make RAID-0 stripe sets using volumes from an XP box.
I have no experience with that (I mean, not in VMS). As far as I know there are two ways: "bound VMS disk-sets" and "HP RAID Software for OpenVMS".
The VMS wizard does not recommend the use of bound-volume sets (http://h71000.www7.hp.com/wizard/wiz_9470.html).
HP RAID Software for OpenVMS costs money (licenses).
Does anybody have experience with that?
The OS should be 8.2 in a cluster environment.
In vino veritas, in VMS cluster
14 REPLIES
Joseph Huber_1
Honored Contributor

Re: Software RAID on VMS

Could you explain what "using volumes from XP box" means?

If you mean NFS- or Samba-mounted disks, then neither solution will work.

And VMS bound volumes are not stripe sets!
Bound volumes are a means of combining small disks to form a larger logical volume, not a stripe set.

(Today's disks are large, and one rather wants the opposite: smaller volumes on a large physical disk, i.e. partitions or container files.)
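For reference, a bound volume set is created with MOUNT/BIND; the device names and labels below are made-up examples, so check the MOUNT documentation before relying on this sketch:

```
$! Initialize two member volumes (hypothetical devices and labels)
$ INITIALIZE $1$DGA101: VOL1
$ INITIALIZE $1$DGA102: VOL2
$! Bind them into one logical volume named DATASET
$ MOUNT /BIND=DATASET /SYSTEM $1$DGA101:,$1$DGA102: VOL1,VOL2
```

Note that each file still lives wholly on one member or the other; nothing is striped across them.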
http://www.mpp.mpg.de/~huber
Vladimir Fabecic
Honored Contributor

Re: Software RAID on VMS

Let me explain. The customer has an EVA at the primary location. That means lots of I/O, since there are lots of disks in the disk group.
At the second location they have an XP 10000.
They can have a maximum of 8 disks per group on the XP 10000, and they want lots of I/O. So they are thinking about software RAID (stripes).
They want to have several logical disks from the XP box in a stripe set.
In vino veritas, in VMS cluster
Karl Rohwedder
Honored Contributor

Re: Software RAID on VMS

Hi,

we were using SW-RAID together with host-based shadowing on some DS10 systems (V7.3-2, V8.2) to build RAID-0 sets. It works without problems.

According to the SPD, the following drivers are supported with SW-RAID:

DUDRIVER For Digital Storage Architecture (DSA) disks, including MSCP-served Disks.
DKDRIVER For SCSI disks
DRDRIVER For StorageWorks RAID Array 200 series controllers, also known as SWXCR
DKQDRIVER For HGx connected disks

We tried SW-RAID on top of LD devices, which gives some errors from time to time (currently under investigation by the LD maintainer, but HP refused to look into it).
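For anyone unfamiliar with LD: container-file virtual disks are created roughly like this (the file name, directory, and size here are made-up examples; see the LD utility documentation for the details):

```
$! Create a 100000-block container file and connect it as virtual disk LDA1:
$ LD CREATE SYS$DISK:[CONTAINERS]MEMBER1.DSK /SIZE=100000
$ LD CONNECT SYS$DISK:[CONTAINERS]MEMBER1.DSK LDA1:
$! From here on LDA1: behaves like an ordinary disk
$ INITIALIZE LDA1: MEMBER1
$ MOUNT /SYSTEM LDA1: MEMBER1
```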
regards Kalle
Joseph Huber_1
Honored Contributor

Re: Software RAID on VMS

Vladimir, o.k., I thought the XP box was a Windows machine :-)

So, you really want stripe sets for performance; then bound volume sets are no alternative to (software) RAID.

I don't know the actual cost of the license plus software kit; maybe a hardware RAID controller does not cost more?
http://www.mpp.mpg.de/~huber
GuentherF
Trusted Contributor

Re: Software RAID on VMS

RE: Karl

"We tried SW-RAID on top of LD devices, which gives some errors from time to time (currently under investigation by the LD maintainer, but HP refused to look into it)."

"Mr. HP" may have refused. I discussed it (a problem with IO$_NOP) with Jur, and it has been fixed on both ends (LD and RAID SW). If there's more, I am all ears.

In general: RAID SW for OpenVMS is used at many sites to improve I/O performance across storage controllers. If the resulting stripe set is too large it can be partitioned down to smaller virtual disks which behave like 'real' disks to applications.

/Guenther
GeSha
Occasional Visitor

Re: Software RAID on VMS

What do you need to stripe XP LUNs for?
You must buy a license for the RAID software, and most likely you need to buy one for each node in the cluster; otherwise only one node will handle the stripe sets.
Perhaps you want higher performance; realistically you can roughly double it.
I don't know why you need to create a volume set from XP LUNs; you can create a LUN of a suitable size instead.
But in a cluster you will have higher aggregate performance with no RAID sets, because multiple nodes will access a single LUN through a number of FC cards and XP FC ports. However, this may cause performance degradation due to cache replication between the XP ports.
I did not find the 9470 wizard article, and I don't know why volume sets are not recommended; perhaps because of the known restrictions of volume sets.
Hein van den Heuvel
Honored Contributor
Solution

Re: Software RAID on VMS

Vladimir wrote "Does anybody have experience with that?
The OS should be 8.2 in a cluster environment."

I have experience with volume sets, software striping and hardware striping, and found that software striping often has the most performance potential, notably because multiple adapters/controllers can be involved on top of multiple disks.

Joseph wrote "I don't know the actual cost of the license plus software kit; maybe a hardware RAID controller does not cost more?"

An XP storage solution is a million-dollar, multiple-cabinet undertaking, a little more than 'just a RAID controller'.


OPENVMS HARDWARE wrote "I did not find the 9470 wizard article, and I don't know why volume sets are not recommended; perhaps because of the known restrictions of volume sets."

[OPENVMS HARDWARE... if you feel you need to use a fake name, then please be kind enough to sign off with a first name/country]

Well, mostly volume sets do not 'nicely' load balance, and backup/restore can get nasty. The only automatic load balancing done in volume sets is that new files are allocated on the volume with the most free space.
So, IF the volumes start out with similar free space, AND your application makes and uses lots of smallish files (like a program development environment, or an email/document solution), THEN volume sets give cheap and effective load balancing.

Volume sets also offer file placement, and even AREA placement within indexed files, for interesting manual load balancing opportunities.

Stripe sets just chop up the LBN space in chunks, alternating those over the volumes.
This is likely to give a near-perfect I/O balance over time. But whether it actually works well for a given application depends on that application. For example, single-stream applications may still work within only one chunk at a time, thus effectively touching only a single volume at a time. Also, you can have 'bad luck' when I/Os get split up over volumes where 'just a little more' from a single volume would have been faster. I suppose that picking a cluster size that divides nicely into the chunk size might help a little when many files are used.
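As a back-of-the-envelope illustration of that chunking (the chunk size and member count below are made-up numbers, not product defaults), the member a virtual LBN lands on is just integer arithmetic:

```
$! Hypothetical 4-member stripe set with 128-block chunks
$ chunk   = 128
$ members = 4
$ lbn     = 1000
$ stripe  = lbn / chunk                         ! integer division: chunk number 7
$ member  = stripe - (stripe/members)*members   ! modulo: lands on member 3
$ WRITE SYS$OUTPUT "LBN ''lbn' maps to member ''member'"
```

Consecutive chunks rotate across members, which is why random multi-stream loads balance well while a single stream may sit inside one chunk at a time.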

I still like stripe sets, don't get me wrong, but volume sets might also do the job just fine. As with every performance question, 'it depends'. Vladimir did not describe the intended use at all, just the tools. Without that information we can only speculate. The question is almost like 'which one is better, a single 5-pound hammer, or five 1-pound hammers?'.

Hope this helps some,
Hein van den Heuvel
HvdH Performance Consulting.
Sheldon Smith
Honored Contributor

Re: Software RAID on VMS

A VMS bound volume set is a balanced concatenation of drives; it is not striping. With a bound volume set, one disk is the master and contains the MFD for the set. All files are "written" to the master volume. When VMS creates a new file, it places it on the volume of the set with the most free space. The key thing here is that each file is contained on an individual drive.

Host-based "RAID Software for OpenVMS" provides the striping you seek. Alternatively, what about creating a LUSE on the XP array with LDEVs from separate parity groups?

Note: While I work for Hewlett Packard Enterprise, all of my comments (whether noted or not), are my own and are not any official representation of the company.
----------
If my post was useful, click on my KUDOS! thumb below!
Hein van den Heuvel
Honored Contributor

Re: Software RAID on VMS

>> With a bound volume set, one disk is the master and contains the MFD for the set.

YES.

>> All files are "written" to the master volume.

NO.

Directories, just like files, are allocated on the volume with the most free blocks at the time of creation.
Once a file or directory is allocated on a given volume (that is, its header is allocated in the INDEXF.SYS for that volume), growth will happen on that volume until it is full. When full, an extension header can be allocated on another volume and allocation for that file continues there.

>> The key thing here is that each file is contained on an individual drive.

Except for indexed files with multiple areas and explicit volume placement in the FDL during creation.

>> Alternatively, what about creating a LUSE on the XP array with the LDEVs from separate parity groups?

LUSE has potential, but it seems to stripe in very large chunks, much like VMS volume sets really. Here is a very readable article illustrating a lot of this:
http://h71028.www7.hp.com/ERC/downloads/5982-7883EN.pdf

hth,
Hein.
Vladimir Fabecic
Honored Contributor

Re: Software RAID on VMS

Thanks everybody.
Hein is right. I did not describe the intended use.
That cluster (2 x ES80 with 24 GB RAM each, and later some other machines) will have to run many (about 20) Oracle databases. So it will use large files and will need much I/O. And the customer demands that the file system be ODS-2 (maybe I will change that).
As I said, they have an EVA with 60 disks in the disk group at the other location. They would like performance like they have with the EVA.
In vino veritas, in VMS cluster
Bill Hall
Honored Contributor

Re: Software RAID on VMS

Vladimir,

We migrated off HSG80 and HSJ50 controllers to an XP1024 in February 2003 (VMS V7.3-1). At that time, only Open-E LDEVs (approx. 13-14 GB) were tested and documented as supported by the XP support group on VMS.

We had been using volumes of 60 to 100GB at the time. We could not go back to managing hundreds of small volumes (not to mention we have at least one data file that is typically in the 16GB+ size and wouldn't fit on an Open-E) so we bought RAID licenses for our systems.

Our XP1024 was configured with a large number of 7D+1P array groups. The array groups were broken into Open-Es (13-14 GB each). Most of our Open-Es are bound as 6-member RAID-0 sets. Each of the member Open-Es is from a different array group on the XP. RAID of course allows you to tailor your chunk size as you see fit.

We also have a "low end" XP10000. It was purchased with only one ACP, and we were told it only supports four-disk array groups. We were told we would have to purchase another ACP to use any of the eight-disk array groups.

Bill
Bill Hall
Robert Gezelter
Honored Contributor

Re: Software RAID on VMS

Vladimir,

While you are engaged in reconfiguring your disk farm, consider that the conversion can be done without interrupting availability.

Host Based Shadowing can be used to continue operations uninterrupted, a usage that I spoke on at the HP Technology Forum in Orlando, Florida last year. The slides from that presentation can be found at http://www.rlgsc.com/hptechnologyforum/2005/1146.html

Bound volume sets remain a useful feature, though they were far more compelling when physical volume sizes were much smaller.

- Bob Gezelter, http://www.rlgsc.com
Jur van der Burg
Respected Contributor

Re: Software RAID on VMS

Re Guenther

>We tried SW-RAID on top of LD devices, which
>gives some errors from time to time (currently
>under investigation by the LD maintainer,
>but HP refused to look into it).

>"Mr. HP" may have refused. I discussed it
>(problem with IO$_NOP) with Jur and it has
>been fixed on both ends (LD and RAID SW). If
>there's more I am all ears.

Karl is already running LD V8.2, which should fix these issues. I asked him for traces to get to the bottom of it. HP may not support LD, but I do. If host-based RAID is involved, I know where to find you....

Jur.
EdgarZamora_1
Respected Contributor

Re: Software RAID on VMS


We did some extensive performance testing (Rdb databases) on a GS1280 with an XP12000, using RAID software stripe sets on top of LD devices. This was recommended by our HP Platinum team. The bottom line was that performance was just a little bit less than what we were getting out of the EVA5Ks (RAID-1 on the back end). I would say the combination of HP RAID software and LD disks is a viable solution for storage coming out of an XP box.