Operating System - OpenVMS
Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

 
SOLVED


Hello,

our new environment with FC disks in an EVA4400 shows a throughput of about 40 MByte/s
(using copy, backup, and Oracle/Rdb database access). I think that is too low and that there is a problem somewhere.
For the test, a simple connection from the VMS system to the switch and from the switch to the EVA is used. Speed is checked with evaperf and the monitor utility on the switch. There are no errors on the Fibre Channel.

Is it possible that something is going wrong with the Fibre Channel I/O?
23 REPLIES
labadie_1
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hello

use the FC SDA extension

$ ana/sys
fc perf ...

to do some measurements

SDA> fc

FibreChannel SDA Extension output is best viewed in at least 132 columns

Supported commands:

FC ADDRESS_LIST
FC KPB
FC NAME_LIST
FC PERFORMANCE [/CSV] [/NOCOMPRESS] [/RSCC | /SYSTIME] [/DISK] [/TAPE] [/ALL]
               [/CLEAR] [device-name]
FC PROBE_LIST
FC QUEUES [/LIST]
FC SCDT
FC SET DEVICE [device-name]
FC SET ERL /SIZE=entry-count [/ALL]
FC SET FILTER [/ASC[=list]] [/COMMAND[=list]] [/ESTATUS[=list]]
[/FCPSTATUS[=list]] [/WWID[=list-quoted-strings]]
[/MATCH={AND | OR | NOR}] [/APPEND] [/ALL]
FC SET RING_BUFFER /SIZE=entry-count
[/FCP|/SLOW|/ERROR|/INTERRUPTS|/MBX|/IOCB] [/ALL]
FC SET WTID /WWID=quoted-string [/CAP=cap-value] [/[NO]WAIT]
Press RETURN for more.
SDA>

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

This is what I get writing 100 * 1 MB:

$1$dga3005 (write)
Using EXE$GQ_SYSTIME to calculate the I/O time
accumulated write time = 2115017us
writes = 12809
total blocks = 412809

LBC     <2ms     <8ms    <16ms    <32ms   <128ms    Total
===  =======  =======  =======  =======  =======  =======
  1     3208        -        -        1        -     3209
 32     6378       16        4        -        2     6400
 64     3200        -        -        -        -     3200

       12786       16        4        1        2    12809

The performance is about 40 MByte/s with our test program. If the disk is VMS-shadowed, the speed is nearly halved...

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Same program writing to a shadowed disk:

FibreChannel Device Performance Data
------------------------------------

$1$dga3100 (write)
Using EXE$GQ_SYSTIME to calculate the I/O time
accumulated write time = 3552016us
writes = 15121
total blocks = 414962

LBC     <2ms     <4ms     <8ms    <16ms    <32ms    <64ms    Total
===  =======  =======  =======  =======  =======  =======  =======
  1     5360        -        -        -        2        -     5362
 16      188        -        1        -        -        -      189
 32     6298        1       77       17       10        1     6404
 64     3164        1        1        -        -        -     3166

       15010        2       79       17       12        1    15121


FibreChannel Device Performance Data
------------------------------------

$1$dga4100 (write)
Using EXE$GQ_SYSTIME to calculate the I/O time
accumulated write time = 3181009us
writes = 15121
total blocks = 414962

LBC     <2ms     <4ms     <8ms    <16ms    <32ms    <64ms    Total
===  =======  =======  =======  =======  =======  =======  =======
  1     5361        -        -        -        -        1     5362
 16      188        -        1        -        -        -      189
 32     6299       10       74       15        3        3     6404
 64     3165        -        -        1        -        -     3166

       15013       10       75       16        3        4    15121

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I did further tests with the local SCSI disks on our rx7640.
The write speed is about 1.2 MByte/s.
This leads me to think there is no FC-specific problem; I suspect a general disk I/O problem.

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Is it possible that VMS on Itanium does "Interrupt Coalescing for I/O Performance Gains"? If so, can I turn it off?
I read about this and tried it on our Alpha systems, but the application performance got quite bad when I changed the default values!
Jim_McKinney
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Regarding FC interrupt coalescing, see the output of

$ mc sys$etc:fc$cp

Early on, any interrupt coalescing was "off" by default - that may have changed in recent versions of VMS.

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

fc$cp does not work on Itanium...
Hein van den Heuvel
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hi Thomas,

Good work, trying to capture generically observed complaints with a concrete and reproducible test.
Please help me understand the test program/result matrix a little better.

I suspect that you use $QIO into a pre-allocated file, thus avoiding any (RMS) buffering and file allocation variables. Correct?

And the source data for the write is simple, predictable, pre-filled (or not-filled) memory buffers right? It is not freshly read data is it?

From the matrix it looks like you used a mix of IO sizes in the same run. Correct?

What about the write pattern? Simple linear?
I suspect you are using a randomly picked size from a list following a target distribution, written to a random target in range. Correct?

I like the result presentation in general, but given the observed time ranges, the <2ms bucket seems too coarse.
Perhaps also display an average time with a decimal fraction for each LBC group?

Have you compared with the XFC enabled for the target file(s)?
$ SET FILE /CACHING=NO_CACHING ...

Is this new with 8.4, or was it already observed under 8.3 and merely shown under 8.4 to make clear you are using the 'latest and greatest'?

Now, < 2ms is less than the rotational delay for a fast disk: 15K rpm means 60,000 / 15,000 = 4 ms per revolution. So you are apparently measuring write-back controller cache performance, not spindle performance.
That's fine but needs to be recognized.
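That rotational-delay arithmetic generalizes; a quick sketch in Python (ordinary disk math, not specific to this thread):

```python
# Rotational latency from spindle speed: one revolution at 15,000 rpm
# takes 60,000 ms / 15,000 = 4 ms. The average rotational latency is
# half a revolution, since on average the target sector is half a turn
# away when the head arrives.
def rotational_latency_ms(rpm):
    """Return (ms per revolution, average rotational latency in ms)."""
    ms_per_rev = 60_000 / rpm
    return ms_per_rev, ms_per_rev / 2

for rpm in (15_000, 10_000, 7_200):
    rev, avg = rotational_latency_ms(rpm)
    print(f"{rpm} rpm: {rev:.2f} ms/rev, ~{avg:.2f} ms avg rotational latency")
```

So any sustained write completing in under 2 ms on a 15K rpm spindle is being absorbed by cache, not by the platter.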

An aggregate write of 40 MB is interestingly close to per-logical-unit write cache sizes typically deployed by controller caches.
How many spindles behind the unit to 'suck up / sink ' the data from the cache?

What platform? Itanium/Alpha? RX7640?

Is this 1 Gb fibre, or faster?

For 1 Gb, that 40 MB/s makes a good dent in the maximum throughput.

Are the shadowed I/Os going over a single FC path or two?

Not sure what to make of the SCSI I/O number. It seems ridiculously low.

Hope this helps some,
Hein van den Heuvel
HvdH Performance Consulting

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hello Hein,

I don't know exactly what the test program does. As far as I know, it allocates 1 MB of memory and writes it 100 times to disk.

The test program helps me get a quick first impression when I change something on the system. Its results are quite close to writes from backup image savesets, disk copies, file copies, and Oracle Rdb accesses.

Our environment is several clusters based on rx7640 with OpenVMS 8.4 and the latest patches. All these systems are connected with two 4 Gb FC adapters through a Brocade switch to the EVA4400.

I'm currently checking the settings on the FC adapters (EFI -> Fibre Channel Driver).

Hein van den Heuvel
Honored Contributor
Solution

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

>> I don't know exactly what the testprogram does. As far as I know it allocates 1MB of memory and writes it 100 times to disk.

Fair enough, at some point you may want to find a source/details. No rush.

>> The testprogram helps me to get a quick first impression if I change something on the system.

That's a very good thing.
Now, did those write times change going from 8.3 to 8.4 with the same hardware?
That may be obvious to you, but it was not yet clear to me from what you wrote so far.

>> The result from this program is quite close to writes from backup image savesets, disk copies, copies, and oracle Rdb accesses.

Nice.


>> Our Environment are several clusters based on rx7640 with OpenVMS 8.4 and latest patches.

Clear. Can you still boot to an 8.3 system in that?

>> All these systems are connected with two 4 Gb FC adapters through a Brocade switch to the EVA 4400.

It doesn't come any faster.

Hein

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hein,

we have different HW (an rx3600 single system) running VMS 8.3, connected to the same SAN but using FATA disks.
The speed is nearly the same (5-10% less) compared with our cluster running VMS 8.4 and FC disks.

As I mentioned earlier, the throughput drops to 50% if the target disk is a VMS shadow set (2 members). It seems that the overall I/O is limited somehow...

Thomas
Cass Witkowski
Trusted Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I'm looking at the throughput.

412,809 blocks is ~201 MB
2,115,017 us is 2.115017 seconds

or ~95 MB per second. Not 40 MB/s.

12,809 I/Os in 2.115 seconds is ~6,056 I/Os per second.

I know on an old EVA3000 I could get 30,000 1 KB blocks written per second, so the I/O rate seems low.
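That arithmetic can be reproduced directly from the SDA counters quoted earlier in the thread; a small Python sketch (assuming 512-byte OpenVMS disk blocks):

```python
# Throughput and I/O rate from the SDA> FC PERFORMANCE counters for
# $1$dga3005 quoted earlier in the thread (512-byte blocks assumed).
total_blocks = 412_809        # "total blocks"
write_time_us = 2_115_017     # "accumulated write time"
writes = 12_809               # "writes"

seconds = write_time_us / 1e6
mib_written = total_blocks * 512 / 2**20
print(f"{mib_written:.0f} MiB in {seconds:.2f} s "
      f"-> {mib_written / seconds:.0f} MiB/s, "
      f"{writes / seconds:.0f} I/Os per second")
```

The device-level figure (~95 MiB/s) is more than double the ~40 MByte/s the test program reports at the application level, which is exactly the discrepancy worth explaining.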

The test program is writing to a disk. Is the disk mounted foreign, or is it mounted as an OpenVMS volume? If it is mounted as an OpenVMS volume, then you would need to be writing to a file. Was the file pre-created at a certain size? Is high-water marking on the volume disabled? It is on by default.

If the disk is mounted foreign, then you are writing to the raw disk. What is the VRAID level of the disk? How many disks are in the disk group?

Cass


Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Cass,

you are right with your calculation. I was not aware of this difference.
I write 100 times 1 MB to disk (OpenVMS Volume Structure Level 5); this amounts to 100 MB. Why VMS issues I/O for 200 MB is not clear to me.

The space is not preallocated.
Of course I can tune things, as you already mentioned: preallocation, high-water marking, and maybe caching. But that is not the point. The overall disk I/O performance is quite bad.

EVA configuration: 45 FC disks (BF146DA47A) in the disk group. Tests with disks configured as VRAID5 (normal setup) and VRAID0 make no difference in speed. Write-back cache is enabled; otherwise performance is about 1 MByte/s :-(

Thomas




Robert Gezelter
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Thomas,

It may be elementary, but could you post a SHOW DEVICE/FULL of the subject disk volume (obviously, names can be blanked out)?

A recent post noted that the space is not pre-allocated. Pre-allocation, high-water marking, and other parameters can produce dramatically higher I/O overhead and consequently lower performance.

As an aside, many have heard my comment about the difference in BACKUP performance when extend is left at the default: even writing the save set over in-node DECnet (FAL honors the process defaults for RMS parameters set using SET RMS) can produce performance increases measured in orders of magnitude.

Details count. Without knowing precisely what the test program is doing, one does not fully know whether one is measuring what one desires to measure (this is analogous to tare weight).

- Bob Gezelter, http://www.rlgsc.com

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I don't think that tuning adjustments will solve the problem. For sure they will help to increase performance, but not by the amount I expect.

I think some setting is not right in the firmware settings of the FC adapters (queue depth, execution throttle, interrupt coalescing...).

Thank you so far,
Thomas
Steve Reece_3
Trusted Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Hi Thomas,

I've not gone through your numbers, but have you checked that the cache batteries on your EVA are working and charged? Typically, cache batteries that are not charged will kick the disk array into writing directly to disk such that there is no risk of losing data. If your cache batteries are not charged you should, therefore, get some loss of throughput in general terms.

Steve

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Thank you Steve,

yes, I checked that.
I turned some disks into write-through mode and the performance dropped to 1 MByte/s.
In fact, you can't use the EVA4400 without a working cache!

Hoff
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

That QLogic looks to be a 4 Gbps board, and your bandwidth looks to be 0.3125 Gbps. Bad, obviously.

Some background on the EVA4400 FC SAN storage controller series performance:

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-8473ENW.pdf

The usual throttle with these things is the host and controller HBA speeds (8 Gbps adapters are current), and often the rotating rust: those widgets will get you roughly 200 IOPS (15K rpm) and 150 IOPS (10K rpm), multiplied by however big your average transfers might be, as a gross estimate, from that electro-mechanical fossil-vintage power-glutton archaic antique storage.
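As a rough sanity check of that spindle-count rule, a back-of-the-envelope sketch (the 45-disk group size comes from earlier in the thread; the 200 IOPS per spindle is the rough 15K rpm figure above; the 64 KiB average transfer size is purely an illustrative assumption):

```python
# Back-of-the-envelope array ceiling: spindles x per-spindle IOPS x
# average transfer size. 200 IOPS/spindle is the rough 15K rpm figure
# from the post; the 64 KiB transfer size is an illustrative assumption.
spindles = 45            # disk-group size mentioned earlier in the thread
iops_per_spindle = 200   # rough 15K rpm estimate
transfer_kib = 64        # assumed average transfer size

total_iops = spindles * iops_per_spindle
mib_per_s = total_iops * transfer_kib / 1024
print(f"~{total_iops} IOPS aggregate, ~{mib_per_s} MiB/s "
      f"at {transfer_kib} KiB per transfer")
```

Even with these crude numbers, the spindles behind the disk group should be able to sink far more than 40 MByte/s of sequential writes, which points away from the spindles themselves.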

More speed means more spindles, or replacing the fossil-era hardware with outboard or with inboard solid-state storage. HP offers this storage as SFF modules on ProLiant boxes and as mezzanine boards for c-Class BladeSystems, though I don't know off-hand if these are supported for OpenVMS.

And for the configuration, have a look at the HP ISEE Configuration Collector (HPCC) stuff; that might get you a better view into your environment.

As for confirming the speed settings on the hosts: pending an SDA extension or SDA update, here is the brute-force approach for determining the link speed:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1129186

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

I had already found that thread to check the speed of the FC interface. It shows the value 3 (4 Gb) for all interfaces.

Will check how HPCC works, thanks!

Thomas

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

What Cass wrote made me think.
I checked the test program, and it writes only 100 MB. I checked the FC counters again, and they show that 200 MB are written.

It turns out that this was caused by high-water marking being enabled.

Considering this, the throughput is not that bad.
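That factor of two is consistent with how high-water marking behaves: when a file grows past its high-water mark, the file system erases the newly allocated blocks before the application's data lands in them, so each extending write costs roughly two device writes. A toy accounting model (not VMS code, just the bookkeeping):

```python
# Toy model: device MB written for a sequential write into a
# non-preallocated file, with and without high-water marking (HWM).
# With HWM, each extend is preceded by an erase pass over the newly
# allocated blocks, roughly doubling the data sent to the device.
def device_mb(app_mb, hwm=True):
    """MB actually sent to the device for app_mb of extending writes."""
    return 2 * app_mb if hwm else app_mb

print(device_mb(100))             # 200 -- matches the FC counters above
print(device_mb(100, hwm=False))  # 100 -- the application's view
```

Pre-allocating the file (or disabling high-water marking on the volume) removes the erase pass, which is why the device-level counters then match what the application writes.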
Cass Witkowski
Trusted Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

So if you turn off high-water marking, does the I/O rate go up?

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

Yes, it doubles !
Hoff
Honored Contributor

Re: Poor performance with EVA4400 and OpenVMS 8.4 using QLogic ISP2422

If you really want to cry, run some tests with SSD. That stuff is screaming fast.