Operating System - OpenVMS

Re: DLM & RMS Multistreaming

 
Hein van den Heuvel
Honored Contributor

Re: DLM & RMS Multistreaming


hein>So Try GBC + MSE under 7.3 (or better!)
>Sorry, but the first time we saw the
>problem was after upgrading from 7.2-1 to 7.3.

Well that is really annoying because that's when it was supposed to get better.
You really _did_ have global buffers on, huh?
Did you verify with, say, INSTALL LIST?
Or with ANAL/SYST... SHOW PROC xxx/RMS=(FAB,GBH)?
How about switching on RMS stats: SET FILE/STAT
Followed by MONI RMS (or my rms_stats freeware)

Anyway, this all sounded sufficiently suspicious that I forwarded this topic to RMS Engineering, and here is the reply (thanks, Bob!)
------ begin -----
I took a look at the discussion, as well as the customer's
example code. His LCK1.C program which is setting the
MSE bit in order to allow global buffer use looks like it is
indeed utilizing the new (V7.3) system level fork lock for
concurrent reads as expected. In executing his program,
I find that there are no locks being accessed once the
initial system level CR locks are obtained (using MONITOR
DLOCK/LOCK).

Obviously, this only occurs if global buffers are enabled on
the file since the system level fork lock for concurrent reads
is only implemented on files with global buffers. Without the
global buffers enabled, I see the expected lock conversions
(5000+ per "monitor lock" sample interval on my system).

His other settings (NQL, RRL, NLK), in addition to MSE sharing and a
global buffer count large enough to contain the file, are necessary
as well. Without these settings, we see the typical enq/deq of the
query lock.

Is the customer certain that he has performed his testing with
global buffers set to a nonzero value? I had to manually set
the file since the program itself does not specify a GBC.

As far as I can tell, the new V7.3 features are working as designed
and locking is virtually eliminated with the No Query Lock and
system level concurrent read fork lock. I actually performed my
testing on an Opal release (X9VW) using his lck1.c program, but
I am not aware of any problems with the V7.3 or V7.3-1 releases
that would prevent this.
------ end ------
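Bob's checklist (MSE sharing, a global buffer count large enough to hold the file, and the NQL/RRL/NLK record options) can be sketched in RMS C roughly as follows. This is a minimal sketch, not the customer's lck1.c: the file name, record buffer size, and GBC value are illustrative assumptions, and error handling is reduced to bare status checks.

```c
/* Sketch: open a shared file read-only with MSE so global buffers are
 * used, and suppress query/record locking on $GET.
 * TEST.DAT and the GBC value of 512 are illustrative assumptions. */
#include <rms.h>
#include <starlet.h>    /* sys$open, sys$connect, sys$get */
#include <ssdef.h>
#include <string.h>

int main(void)
{
    static char name[] = "TEST.DAT";
    char buf[512];

    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;

    fab.fab$l_fna = name;
    fab.fab$b_fns = strlen(name);
    fab.fab$b_fac = FAB$M_GET;                  /* read-only access      */
    fab.fab$b_shr = FAB$M_SHRGET | FAB$M_MSE;   /* shared read + MSE     */
    fab.fab$w_gbc = 512;                        /* global buffer count,
                                                   enough for the whole
                                                   ~300-block file       */
    if (!(sys$open(&fab) & 1)) return SS$_ABORT;

    rab.rab$l_fab = &fab;
    rab.rab$l_rop = RAB$M_NQL     /* no query lock on $GET               */
                  | RAB$M_NLK     /* do not lock records                 */
                  | RAB$M_RRL;    /* read regardless of others' locks    */
    rab.rab$l_ubf = buf;
    rab.rab$w_usz = sizeof buf;
    if (!(sys$connect(&rab) & 1)) return SS$_ABORT;

    /* Sequential scan. With global buffers plus NQL/NLK/RRL, the reads
     * should ride the V7.3 system-level CR fork lock instead of doing
     * per-record enq/deq, as described above. */
    while (sys$get(&rab) & 1)
        ;
    return SS$_NORMAL;
}
```

Alternatively, the global buffer count can be set outside the program (SET FILE/GLOBAL_BUFFERS=512 TEST.DAT), which is what Bob had to do since lck1.c itself does not specify a GBC.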



Ruslan R. Laishev
Super Advisor

Re: DLM & RMS Multistreaming

Hi Hein, Bob.

Can you explain to me the difference in lock use between "multi-FAB" and "multi-RAB" access to a shared file?

Thanks.

Hein van den Heuvel
Honored Contributor

Re: DLM & RMS Multistreaming


Multi-FAB sharing within a single process is exactly like multiple processes sharing the same file. There will be no awareness that it is in fact the same program, so RMS will map the global buffers (and optional statistics section) multiple times, and so on.

Multi-RAB sharing allows RMS to share the same local and global buffers (and BDBs and so on) but have independent RABs (and internal IRABs) so as to retain independent context (notably for sequential $GET, updates, and deletes).

As an added twist, SHR=GET + FAC=GET would normally NOT use global sections, but toss in MSE and it will (even if there is really only a single stream).
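The two sharing styles can be contrasted in a minimal RMS C sketch. The file name is an illustrative assumption and status checking is omitted for brevity:

```c
/* Sketch contrasting multi-FAB vs multi-RAB sharing of one file. */
#include <rms.h>
#include <starlet.h>    /* sys$open, sys$connect */
#include <string.h>

static char name[] = "TEST.DAT";   /* illustrative file name */

/* Multi-FAB: two independent opens of the same file. RMS treats these
 * like two separate processes: each FAB gets its own mapping of the
 * global buffer (and statistics) section. */
void multi_fab(void)
{
    struct FAB fab1 = cc$rms_fab, fab2 = cc$rms_fab;
    struct RAB rab1 = cc$rms_rab, rab2 = cc$rms_rab;

    fab1.fab$l_fna = fab2.fab$l_fna = name;
    fab1.fab$b_fns = fab2.fab$b_fns = strlen(name);
    fab1.fab$b_fac = fab2.fab$b_fac = FAB$M_GET;
    fab1.fab$b_shr = fab2.fab$b_shr = FAB$M_SHRGET | FAB$M_MSE;
    sys$open(&fab1);  sys$open(&fab2);         /* two file opens   */

    rab1.rab$l_fab = &fab1;  rab2.rab$l_fab = &fab2;
    sys$connect(&rab1);  sys$connect(&rab2);
}

/* Multi-RAB: one open, two record streams. Both RABs share the same
 * local and global buffers (and BDBs); only per-stream context
 * (next-record position, current record for $UPDATE/$DELETE) is
 * held per RAB. MSE is what permits the second $CONNECT. */
void multi_rab(void)
{
    struct FAB fab  = cc$rms_fab;
    struct RAB rab1 = cc$rms_rab, rab2 = cc$rms_rab;

    fab.fab$l_fna = name;
    fab.fab$b_fns = strlen(name);
    fab.fab$b_fac = FAB$M_GET;
    fab.fab$b_shr = FAB$M_SHRGET | FAB$M_MSE;  /* MSE: multistream */
    sys$open(&fab);                            /* one file open    */

    rab1.rab$l_fab = &fab;  rab2.rab$l_fab = &fab;
    sys$connect(&rab1);  sys$connect(&rab2);   /* two streams      */
}
```

The visible difference is exactly what Hein describes: in `multi_fab` each `sys$open` maps the file's buffers independently, while in `multi_rab` the single FAB's buffers are shared and only the stream context is duplicated.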

Note: While I'd love to see a resolution posted here eventually, I believe we are now (back) at a point in the discussion where formal support will need to be engaged, if a problem remains.

In other words: this is all the 'free advice' you are going to get from me, Bob, ...

Cheers,

Hein.

Ruslan R. Laishev
Super Advisor

Re: DLM & RMS Multistreaming

I see some other interesting effect:
http://starlet.deltatel.ru/~laishev/lck.c - this program shows DIRIO:0 and BUFIO:0 counters after the first thread has run over the file. That looks as expected.

But http://starlet.deltatel.ru/~laishev/lck1.c permanently shows non-zero DIRIO. The test plain-text file is ~300 blocks; the global buffer count is 512. What is a possible reason for it to perform DIRIO?

Thanks, Hein.