
Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

 
Hein van den Heuvel
Honored Contributor
Solution

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

>> I don't know exactly what the test program does. As far as I know, it allocates 1 MB of memory and writes it to disk 100 times.

Fair enough, at some point you may want to find a source/details. No rush.
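
In the meantime, a minimal sketch of what such a test program might look like, assuming plain buffered C I/O. The file name, buffer fill, and timing method here are guesses, not details of the actual program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define CHUNK (1024 * 1024)   /* 1 MB per write */
#define COUNT 100             /* 100 chunks -> 100 MB total */

int main(void)
{
    char *buf = malloc(CHUNK);
    FILE *fp = fopen("testfile.dat", "wb");   /* hypothetical output file */
    struct timeval t0, t1;
    double secs;
    int i;

    if (buf == NULL || fp == NULL) {
        perror("setup");
        return EXIT_FAILURE;
    }
    memset(buf, 0xAA, CHUNK);                 /* touch the buffer once */

    gettimeofday(&t0, NULL);
    for (i = 0; i < COUNT; i++) {
        if (fwrite(buf, 1, CHUNK, fp) != CHUNK) {
            perror("fwrite");
            return EXIT_FAILURE;
        }
    }
    fclose(fp);                               /* flush before stopping the clock */
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d MB in %.3f s = %.1f MB/s\n", COUNT, secs, COUNT / secs);
    free(buf);
    return EXIT_SUCCESS;
}

Knowing whether the real program goes through RMS like this, or issues $QIOs directly, would change how the numbers should be read.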

>> The test program helps me get a quick first impression when I change something on the system.

That's a very good thing.
Now, did those write times change going from 8.3 to 8.4 on the same hardware?
That may be obvious to you, but it was not yet clear to me from what you wrote so far.

>> The results from this program are quite close to writes from backup image savesets, disk copies, file copies, and Oracle Rdb accesses.

Nice.


>> Our environment is several clusters based on rx7640 systems with OpenVMS 8.4 and the latest patches.

Clear. Can you still boot an 8.3 system in that environment?

>> All these systems are connected through two 4 Gb FC adapters via a Brocade switch to the EVA4400.

It doesn't come any faster.

Hein

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hein,

We have different hardware (an rx3600 single system) running VMS 8.3, connected to the same SAN but using FATA disks.
The speed is nearly the same (5-10% less) compared with our cluster running VMS 8.4 and FC disks.

As I mentioned earlier, the throughput drops to 50% if the target disk is a VMS shadow set (2 members). It seems that the overall I/O is limited somehow...

Thomas
Cass Witkowski
Trusted Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I'm looking at the throughput.

412,809 blocks (at 512 bytes each) is ~201 MB.
2,115,017 µs is 2.115017 seconds.

That is 95 MB per second, not 40 MB/s.

12,121 I/Os in 2.11 seconds is 5,731 I/Os per second.

I know that on an old EVA3000 I could get 30,000 1 KB blocks written per second, so the I/O rate seems low.
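
For anyone who wants to rerun the arithmetic, the conversion from the quoted figures (OpenVMS disk blocks are 512 bytes) is just:

#include <stdio.h>

int main(void)
{
    double blocks = 412809.0;           /* 512-byte disk blocks written */
    double ios    = 12121.0;            /* I/O operations issued        */
    double secs   = 2115017.0 / 1e6;    /* elapsed time, reported in us */
    double mbytes = blocks * 512.0 / (1024.0 * 1024.0);

    printf("%.1f MB in %.3f s = %.1f MB/s\n", mbytes, secs, mbytes / secs);
    printf("%.0f I/Os in %.3f s = %.0f I/Os per second\n", ios, secs, ios / secs);
    return 0;
}

This prints roughly 201.6 MB at 95.3 MB/s and 5,731 I/Os per second, matching the figures above.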

The test program is writing to a disk. Is the disk mounted foreign, or does it mount as an OpenVMS volume? If it is mounted as an OpenVMS volume, then you would need to be writing to a file. Was the file pre-created at a certain size? Is high water marking on the volume disabled? It is on by default.

If the disk is mounted foreign, then you are writing to the raw disk. What is the VRAID level of the disk? How many disks are in the disk group?

Cass


Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Cass,

You are right with your calculation; I was not aware of this difference.
I write 1 MB to disk 100 times (OpenVMS volume structure level 5); this comes to 100 MB. Why VMS issues I/O for 200 MB is not clear to me.

The space is not preallocated.
Of course I can tune some things you already mentioned: pre-allocation, high water marking, and maybe caching. But this is not the point. The overall disk I/O performance is quite bad.

EVA configuration: 45 FC disks (BF146DA47A) in the disk group. Tests with disks configured as VRAID5 (normal setup) and VRAID0 make no difference in speed. Write-back cache is enabled; otherwise performance is about 1 MByte/s :-(

Thomas




Robert Gezelter
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Thomas,

It may be elementary, but could you post a SHOW DEVICE/FULL on the subject disk volume (obviously, names can be blanked out).

A recent post noted that the space is not pre-allocated. Pre-allocation, high water marking, and other parameters can produce dramatically higher I/O overhead and consequently lower performance.

As an aside, many have heard my comment about the difference in BACKUP performance when the extend quantity is left at the default: even writing the save set over in-node DECnet (FAL honors the process defaults for RMS parameters set using SET RMS) can produce performance increases measured in orders of magnitude.

Details count. Without knowing precisely what the test program is doing, one does not fully know if one is measuring what one desires to measure (This is analogous to tare weight).

- Bob Gezelter, http://www.rlgsc.com

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I don't think that tuning will solve the problem. For sure it will help to increase performance, but not by the amount I expect.

I think some firmware setting of the FC adapters is not OK (queue depth, execution throttle, interrupt coalescing...).

Thank you so far,
Thomas
Steve Reece_3
Trusted Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hi Thomas,

I've not gone through your numbers, but have you checked that the cache batteries on your EVA are working and charged? Typically, cache batteries that are not charged will kick the disk array into writing directly to disk such that there is no risk of losing data. If your cache batteries are not charged you should, therefore, get some loss of throughput in general terms.

Steve

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Thank you Steve,

Yes, I checked that.
I turned some disks to write-through mode and the performance dropped to 1 MByte/s.
In fact, you can't use the EVA4400 without a working cache!

Hoff
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

That QLogic looks to be a 4 Gbps board, and your bandwidth looks to be 0.3125 Gbps. Bad, obviously.
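
The 0.3125 figure is presumably the originally reported 40 MB/s converted to bits; whether a 1024-based conversion was intended is an assumption, but it reproduces the number exactly:

#include <stdio.h>

int main(void)
{
    double mb_per_s = 40.0;                 /* originally reported rate      */
    double gbps = mb_per_s * 8.0 / 1024.0;  /* MB/s -> Gbps, 1024-based      */
    printf("%.1f MB/s = %.4f Gbps\n", mb_per_s, gbps);
    return 0;
}

which prints 0.3125 Gbps, against the 4 Gbps the adapter can signal.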

Some background on the EVA4400 FC SAN storage controller series performance:

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-8473ENW.pdf

The usual throttles with these things are the host and controller HBA speeds (and 8 Gbps adapters are current), and often out with the rotating rust; those widgets will get you roughly 200 IOPs and 150 IOPs (multiplied by however big your average transfers might be, for a gross bandwidth estimate) for 15K RPM and 10K RPM electro-mechanical fossil-vintage power-glutton archaic antique storage, respectively.

More speed means more spindles, or replacing the fossil-era hardware with outboard or with inboard solid-state storage. HP offers this storage as SFF modules on ProLiant boxes and as mezzanine boards for c-Class BladeSystems, though I don't know off-hand if these are supported for OpenVMS.

And for the configuration, have a look at the HP ISEE Configuration Collector (HPCC) stuff; that might get you a better view into your environment.

As for confirming the speed settings on the hosts, here is the brute-force approach for determining the link speed; pending an SDA extension or update, this is how you can see it:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1129186

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I had already found this thread for checking the speed of the FC interface. It shows the value 3 (4 Gb) for all interfaces.

Will check how the HPCC works, thanks!

Thomas