04-11-2005 09:30 AM
Re: RWCLU
As I understand it, remastering happens at most once every 8 seconds. Right?
So if I take a snapshot every second, how can I miss 90% of the remasterings? Is the program doing something other than what is documented?
If I remember correctly, the maximum size of the lock trees was 4 blocks, so PE1 will not be very effective.
And no, this cluster is a 2-node production cluster (plus quorum station) with 2 equal-power nodes. Neither of the 2 has priority: in fact it is a 49%-51% cluster: 1 node is slightly more important for the business than the other (the node in the building where the users are).
Wim
04-11-2005 09:51 AM
Re: RWCLU
> If I remember correctly, the maximum size
> of the lock trees was 4 blocks. So PE1
> will not be very effective.
How did you determine the size of the lock trees?
It's true that VMS will use sequenced messages rather than block data transfers to remaster lock trees which are of size 5 or less. But PE1 still works. For example, if you set PE1 to 3, that would limit remastering to trees of size 3 or less. Now it's true that moving such small trees would be quick and unlikely to be noticed up at the user level.
Do you observe any real performance problems with locking, or are you just concerned with the high remastering counts?
> this cluster is a 2 node production
> cluster (plus quorum station) with 2
> equal power nodes.
Equal-powered nodes with matching shares of the same workload tend to have lock mastership move back and forth between them. Sometimes one node happens to run a bit faster at some point (a margin of only 10 more lock requests per second is needed) and takes over mastership of a busy lock tree; the additional load then slows it down just enough that the other node may now be 10 requests per second ahead on the same tree, and it takes mastership of the tree back.
> None of the 2 with
> priority : in fact it is a 49% - 51%
> cluster : 1 node is slightly more
> important for business than the other
> (the node in the building of the users).
With only 2 nodes, it is unlikely that one node could overwhelm the other with lock requests, so you could likely get away with, for example, giving one a LOCKDIRWT value of 1 and the other a LOCKDIRWT value of 2. The node with LOCKDIRWT=2 would tend to become and stay the lock master for any and all shared resource trees.
Alternatively, you could use a non-zero value for PE1 to discourage remastership thrashing between nodes.
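The back-and-forth dynamic described above can be illustrated with a toy simulation. To be clear, this is not the actual OpenVMS remastering algorithm: the 10-requests/second margin, the mastership "penalty", and the request rates are all made-up numbers chosen purely to show why two equally loaded nodes can ping-pong a busy lock tree.

```python
# Toy model of lock-mastership "ping-pong" between two equally loaded nodes.
# NOT the real OpenVMS remastering algorithm -- just an illustration of why
# a small activity margin can make mastership oscillate.

import random

MARGIN = 10    # illustrative: remaster when the other node is this many
               # lock requests/second ahead of the current master
PENALTY = 15   # illustrative: extra load the master carries for the tree

def simulate(seconds=600, seed=1):
    random.seed(seed)
    master = "A"
    moves = 0
    for _ in range(seconds):
        # Both nodes generate roughly equal request rates on the shared tree.
        rate = {"A": random.gauss(500, 20), "B": random.gauss(500, 20)}
        # Mastering the tree costs the master a little throughput.
        rate[master] -= PENALTY
        other = "B" if master == "A" else "A"
        if rate[other] - rate[master] > MARGIN:
            master = other   # the busier node takes the tree...
            moves += 1       # ...and now carries the penalty itself
    return moves

print("mastership moves in 10 simulated minutes:", simulate())
```

With equal average rates, the node currently paying the mastership penalty is routinely the "slower" one, so the tree moves frequently; a non-zero PE1 (or an asymmetric LOCKDIRWT) breaks exactly this symmetry.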
04-12-2005 06:09 PM
Re: RWCLU
I "think" the chain-distribution output of LCK STAT gives an idea of what the longest lock structures are. There is 1 of 4, 82 of 3, 920 of 2 and 6702 of 1.
I monitor the counters every 10 minutes and indeed found a maximum of about 35-40 remasterings per minute, per node, at the same time on both nodes. But once it was 790 in 10 minutes, well above the 5 per 8 seconds.
But my main question is: if I take a snapshot every second, how can I miss 90% of the remasterings?
Wim
04-13-2005 02:01 AM
Re: RWCLU
> of lck stat give an idea of what the
> longest lock structures are. There is 1 of
> 4, 82 of 3, 920 of 2 and 6702 of 1.
That actually shows the number of resource blocks (RSBs) chained off each entry in the Resource Hash Table. This table is used for quick lookups based on resource name. The actual resource trees could be (and most likely are) much larger. All the chain lengths show you is how well the SYSGEN parameter RESHASHTBL is sized compared with your total number of resources. To keep lookups quick, keeping the bulk of the chain lengths between 2 and 4 is considered a good range. (See Roy G. Davis' book VAXcluster Principles, starting at page 6-49.)
You can get an idea of the size of at least your most-active lock trees by looking at the "Tot Locks" column in the output of SDA> LCK SHOW ACTIVE.
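The relationship between RESHASHTBL and those chain lengths is ordinary hashing arithmetic: with n resources spread uniformly over m buckets, chain lengths are approximately Poisson-distributed with mean n/m. A small sketch (the two table sizes tried below are hypothetical, not the poster's actual RESHASHTBL setting):

```python
# Expected hash-chain length distribution for a resource hash table.
# With n resources spread uniformly over m buckets, chain lengths are
# approximately Poisson(n/m).  Generic hashing math, not OpenVMS internals.

from math import exp, factorial

def chain_distribution(n_resources, table_size, max_len=6):
    lam = n_resources / table_size
    # Expected number of buckets holding exactly k entries.
    return {k: table_size * exp(-lam) * lam**k / factorial(k)
            for k in range(max_len + 1)}

# The poster's counts imply 8792 RSBs in total (1*4 + 82*3 + 920*2 + 6702*1).
n = 1*4 + 82*3 + 920*2 + 6702*1
for m in (4096, 16384):          # hypothetical RESHASHTBL values
    d = chain_distribution(n, m)
    print(m, {k: round(v) for k, v in d.items()})
```

A table much smaller than the resource count pushes the bulk of the chains past length 4, which is when lookups start to slow down.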
> I monitor the counters every 10 minutes
> and found indeed a maximum of about 35-40
> remasterings per minute. Per node and at
> the same time on both nodes. But once it
> was 790 in 10 minutes, well above the 5
> per 8 seconds.
Could you share with us the specific output you're looking at for remastering counts?
04-13-2005 02:23 AM
Re: RWCLU
The top of "Tot Locks":

RSB Address        Tot Locks  SubRSB   Act  Node    Resource Name
-----------------  ---------  ------  ----  ------  -----------------------------
FFFFFFFF.7E229650       1447    2275  1447  SPVMX2  F11B$vDISK2
FFFFFFFF.7E3AC850        454     131   454  SPVMX2  F11B$vDISK10
FFFFFFFF.7E22AD50        290     674   290  SPVMX2  F11B$vCONF
FFFFFFFF.7E35B950        187     266   187  SPVMX2  F11B$vDISK1
FFFFFFFF.7E756450        116      14   116  SPVMX2  DSM$00001_LOCK_
FFFFFFFF.7E3AF750        101      29   101  SPVMX2  F11B$vDISK9
FFFFFFFF.7E736A50         85       0    85  SPVMX2  DSM$00001_ARB.....
FFFFFFFF.7E6C9250         83       0    83  SPVMX2  DSM$00001_GV$$ARB.
FFFFFFFF.7E3AEB50         75     140    75  SPVMX2  F11B$vDISK8
FFFFFFFF.7E242150         63       9    63  SPVCS2  RMS$l=.....CONF...
                                                    File: DISK$CONF:[CONFIG.AUTHORIZE]RIGHTSLIST.DAT;2
FFFFFFFF.7E267550         51       5    51  SPVMX2  RMS$n=.....CONF...
                                                    File: DISK$CONF:[CONFIG.AUTHORIZE]SYSUAF.DAT;2
FFFFFFFF.7E518950         44       0    44  SPVMX2  CACHE$cmDISK10 l...
FFFFFFFF.7E23D250         38     188    38  SPVMX2  F11B$vSPVMX2_SYS
FFFFFFFF.7E22D750         37       2    37  SPVMX2  SYS$SYS_ID.P....
04-13-2005 03:27 AM
Re: RWCLU
> output of show lock/sum and made the
> difference for "tree moved to this node"
> and "tree moved to other node"
Sounds reasonable to me. Do your calculations match the "Lock Tree Outbound Rate" and "Lock Tree Inbound Rate" as shown by MONITOR RLOCK?
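For what it's worth, that delta arithmetic can be sketched like this. The counter values below are fabricated; the only point is the conversion of cumulative "tree moved" counters, sampled periodically, into per-interval rates:

```python
# Sketch: turning periodic samples of the cumulative "tree moved to/from
# this node" counters (as in SHOW LOCK/SUMMARY) into per-interval remaster
# rates.  Sample data here is made up for illustration.

def remaster_rates(samples):
    """samples: list of (seconds, outbound_total, inbound_total) tuples,
    where the totals are cumulative counters as reported by the node."""
    rates = []
    for (t0, o0, i0), (t1, o1, i1) in zip(samples, samples[1:]):
        dt = t1 - t0
        rates.append(((o1 - o0) / dt, (i1 - i0) / dt))   # moves/second
    return rates

# Three samples 600 s apart (the poster's 10-minute cadence), fabricated;
# the second interval shows a 790-move spike like the one reported above.
samples = [(0, 1000, 980), (600, 1210, 1195), (1200, 2000, 1970)]
for out_rate, in_rate in remaster_rates(samples):
    print(f"outbound {out_rate:.3f}/s, inbound {in_rate:.3f}/s")
```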
> RSB Address        Tot Locks  SubRSB   Act  Node    Resource Name
> -----------------  ---------  ------  ----  ------  -----------------------------
> FFFFFFFF.7E229650       1447    2275  1447  SPVMX2  F11B$vDISK2
OK, so your most-active lock tree at this point had 1,447 locks in it.
Oops! On second glance, it looks like you have a version of the LCK extension which had a bug -- note the "Tot Locks" and "Act" fields are always the same. And the "SubRSB" count is higher than the "Tot Locks" field, which seems unlikely. (I don't know exactly what version this bug was fixed in, but I see it is corrected on an 8.2 system.) In the interim, the SubRSB field could at least give you a clue by giving you a lower bound on the number of locks.
An appendix in "The Book of Ruth" (Internals & Data Structures book) explains VMS lock use, and this resource name "F11B$vDISK2" indicates this is a Files-11 Volume Allocation lock for a disk with label DISK2 (so there is apparently a lot of file creation/extension activity taking place on that volume).
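A rough decoding table for the resource-name prefixes seen in the listing might look like the sketch below. Only F11B$v (volume allocation lock) is confirmed by this thread; the other entries are inferred from the listing and commonly documented conventions, and the table is far from complete.

```python
# A few OpenVMS lock resource-name prefixes.  Only F11B$v is confirmed in
# this thread; the rest are inferred/illustrative, and the table is very
# incomplete -- see the Internals & Data Structures appendix for the real list.

PREFIXES = {
    "F11B$v": "Files-11 volume allocation lock (volume label follows)",
    "RMS$":   "RMS lock on a file (SDA resolves the file spec)",
    "SYS$":   "system lock, e.g. the per-device Device Lock",
    "CACHE$": "cache lock for a volume (inferred from the listing)",
}

def describe(resource_name):
    # Try longer prefixes first so F11B$v wins over any shorter candidate.
    for prefix, meaning in sorted(PREFIXES.items(),
                                  key=lambda kv: len(kv[0]), reverse=True):
        if resource_name.startswith(prefix):
            return meaning
    return "unknown / application-defined (e.g. the DSM$ locks above)"

print(describe("F11B$vDISK2"))
```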
04-14-2005 02:02 AM
Re: RWCLU
Is it possible that remastering does not always move the complete lock tree but only a part of it? That would explain why I don't find them. E.g. not all the locks for disk2 move, but only the locks for directory wim on disk2, so a new lock tree is created and nothing is moved.
Wim
04-14-2005 05:00 AM
Re: RWCLU
But there are different RMS lock trees for the different files on a disk, as compared with the Files-11 locks for the disk itself -- they are not in a hierarchy, as your question implied.
For example, the Device Lock used to arbitrate access to a disk for mounting and so forth has a resource name of the form SYS$