MSCP performance.
04-12-2004 01:40 AM
MSCP_BUFFER parameter information:
Feedback information.
Old value was 312, New value is 312
MSCP server I/O rate: 2 I/Os per 10 sec.
I/Os that waited for buffer space: 10021
I/Os that fragmented into multiple transfers: 26916
I would think that with counts that high, it would have suggested a higher value for MSCP_BUFFER.
Of course the VAX is limited to 10Mb half-duplex network/disk access (the Alphas are 100Mb full-duplex), but it just seems very sluggish. Any tuning hints to help this situation? Yes, the systems have to remain on the VAX platform for now :-(
Thanks in advance,
Art
04-12-2004 02:33 AM
Re: MSCP performance.
Also, check SYSMWCNT to make sure you aren't doing any system page faults (look at $ MONITOR PAGE; system faults should average near zero: 0.1/sec or less).
If you are doing BACKUP with /BLOCK=32767, I wouldn't expect much more than 40 I/Os/sec (with approx 60-70 I/Os/sec being the max).
Of course, 128MB of RAM will make the above a little tight if too much memory needs to be consumed by various tasks.
john
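The 40 I/Os/sec estimate above follows directly from the wire speed. A back-of-the-envelope check (plain Python, purely illustrative; real MSCP and Ethernet framing add overhead, so the true ceiling is a bit lower):

```python
# How many 32767-byte BACKUP transfers fit through 10 Mb/s Ethernet per second?
LINE_SPEED_BPS = 10_000_000      # 10 Mb/s half-duplex Ethernet on the VAX
BLOCK_BYTES = 32767              # BACKUP /BLOCK=32767

bytes_per_sec = LINE_SPEED_BPS / 8            # ~1.25 MB/s raw wire capacity
max_ios_per_sec = bytes_per_sec / BLOCK_BYTES # ~38 I/Os/s, near the quoted 40

print(f"{bytes_per_sec / 1e6:.2f} MB/s raw, {max_ios_per_sec:.0f} I/Os/s max")
```

So the quoted 40 I/Os/sec is essentially the link saturated with maximum-size blocks, before any protocol overhead is subtracted.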
04-12-2004 03:02 AM
Re: MSCP performance.
Thanks again,
Art
04-12-2004 03:05 AM
Re: MSCP performance.
It's 1995 all over again,
Art
04-12-2004 04:27 AM
Re: MSCP performance.
Is the MSCP link somehow shared with a DECnet circuit, so that you can access the DECnet line counters on both nodes?
$ MCR NCP SHOW KNOWN LINE COUNTERS
In the past, when I thought the speed was too slow, I was able to detect problems in the network infrastructure (triple termination, exceeded cable length, ...) that way.
04-12-2004 04:37 AM
Re: MSCP performance.
$ mcr ncp show know line count
Known Line Counters as of 12-APR-2004 12:34:01
Line = ISA-0
>65534 Seconds since last zeroed
12124698 Data blocks received
1918395 Multicast blocks received
0 Receive failure
>4294967294 Bytes received
855675874 Multicast bytes received
0 Data overrun
18590116 Data blocks sent
124221 Multicast blocks sent
686907 Blocks sent, multiple collisions
4422802 Blocks sent, single collision
81400 Blocks sent, initially deferred
>4294967294 Bytes sent
12781118 Multicast bytes sent
378 Send failure, including:
Excessive collisions
>65534 Collision detect check failure
0 Unrecognized frame destination
0 System buffer unavailable
0 User buffer unavailable
04-12-2004 06:21 AM
Re: MSCP performance.
Another thing I sometimes try is to run a test with DTSEND on both nodes, testing both directions.
It might be necessary to assign a username/password to the DTR object on the remote node. Then I do:
$ mcr dtsend
_Test: DATA/NODENAME=remote/PRINT/SECONDS=60/SPEED=10000000
I am doing this from memory, but there is online help, and there are different tests available - see /TYPE=.
04-12-2004 06:58 AM
Re: MSCP performance.
You may want to check whether your ethernet has a hardware problem somewhere.
Personal opinion/guideline: one can see collisions and be just fine; one should never see excessive collisions. Maybe this is just a side effect of MSCP_BUFFER being too small.
> Ok thanks...I'll set MSCP_BUFFER on the Alpha's to 2048.
If you have the memory, I would do MIN_MSCP_BUFFER=2048 on the VAXes and MIN_MSCP_BUFFER=4096 on the Alphas. Both are overkill, but I think they are worth it. It saved my bacon once when a CI adaptor failed and a VAX started MSCP'ing over the ethernet.
> On sort of the same topic, is it possible to designate which node will do the serving?
The LAVc switches between all available ethernet controllers (actually any supported cluster interface) using the least busy path. Even though you have only one 10Mb/s card on the VAX, if your Alphas have two or more NICs, packets will be sent over the least busy path. There is a way to control traffic by NIC, but not by system (short of MSCP_SERVE_ALL=0, I haven't seen one). Personal opinion: if the max speed is limited to 10Mb/s, I would not worry about which system actually does the MSCP serving; the overhead is extremely low given your configuration.
FWIW - I have MSCP_SERVE_ALL turned on for all my nodes and clusters whether they need it or not (just make sure CLUSTER_AUTHORIZE.DAT is correctly set up). The overhead this causes is debatable (I've never seen it be a problem no matter what workload I throw at it -- your mileage may vary). The extra redundancy this gives is invaluable for me.
john
04-13-2004 04:33 AM
Re: MSCP performance.
You might also want to check for SCS credit waits on the SYSAP connection between the MSCP disk class driver on the VAX and the MSCP server in the Alphas. Use $ SHOW CLUSTER/CONTINUOUS with ADD CIRCUITS,CONNECTIONS,REM_PROC_NAME,CR_WAITS and look for large values in the CR_WAITS field that tend to increase over time.
> is it possible to designate which node will do the serving?
Yes. The MSCP_LOAD parameter can be used to control this. Originally MSCP_LOAD was a binary switch: 0 meant no serving and 1 meant enable serving. This was later expanded to retain these two original values but also allow you to specify a load capacity rating for a node; the units are nominal capacity in I/Os per second.
If you set the MSCP_LOAD parameter significantly higher on one node, it will tend to be preferred as the server. The default value of 1 corresponds to a fixed capacity of 340 on Alpha (for those with VMS source listings, this code is in file [MSCP.LIS]MSCP.LIS, routine LM_INIT_CAPACITY). Anything above 1 is used as the actual load capacity value, so a value of 2 is the lowest possible fixed value, and can be used on a node if you wish to avoid MSCP-serving (except as a last resort) on that node. To avoid any MSCP-serving on a node at all, ever, set MSCP_LOAD to zero.
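The MSCP_LOAD semantics described above (0 = never serve, 1 = platform default capacity, anything higher = explicit capacity rating) can be sketched as a small function. This is a toy illustration of the described behavior, not VMS source; the function name is mine, and only the Alpha default of 340 comes from the post:

```python
# Hedged sketch of how a node's MSCP serving capacity follows from MSCP_LOAD,
# per the thread: 0 = no serving, 1 = platform default (340 on Alpha),
# >= 2 = explicit capacity in I/Os per second.
ALPHA_DEFAULT_CAPACITY = 340  # stated default when MSCP_LOAD = 1 on Alpha

def load_capacity(mscp_load: int, default: int = ALPHA_DEFAULT_CAPACITY):
    """Return the nominal serving capacity, or None if serving is disabled."""
    if mscp_load == 0:
        return None          # node never MSCP-serves
    if mscp_load == 1:
        return default       # compatibility value: use the platform default
    return mscp_load         # explicit capacity rating

# A node with MSCP_LOAD=2 advertises the lowest possible fixed capacity,
# so it is picked as the server only as a last resort.
```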
04-13-2004 05:00 AM
Re: MSCP performance.
If you are using an ethernet switch, you need to make sure the duplex and speed of the switch ports the VAXes are connected to match those of the VAXes. If the VAX ethernet port is half duplex, make sure the switch port it is connected to is also half. You will see late collisions if the VAX is half and the switch is full.
Also, if you are using a cut-through switch, you could see a lot of runt or short packets that can chew up bandwidth.
If you are not using a switch at all but a repeater, you could be overloading the VAXes with all the network traffic.
04-14-2004 01:35 AM
Re: MSCP performance.
Uwe - DTSEND - somehow, I'm totally unfamiliar with this diagnostic!! I'll check into it. What protocol is it actually using to do the tests? DECnet?
Regarding setting MSCP_BUFFER on the Alphas AND the VAXes: does this setting come into play on the VAX side? The VAXes are not (actively) serving any disk; the only local storage is the system disk and a page/swap disk.
Network topology: I'm using CentreCom twisted-pair transceivers on the AUI ports of the 4000-105As, which are connected into an HP switch (the network folks say they have locked the ports at 10-Half). A Gig fiber uplink goes to a switch, then down another Gig fiber link to another HP switch to the Alphas running 100-Full. I wanted to get the VAXes onto the same switch as the Alphas, but there's a lack of free ports currently. There's an SQE test switch on the transceivers that is in the wrong position, which is why I see Collision Detect Check failures. In the past this has never really been a "problem", just a maxed counter.
Anyways, thanks again...I hope to be able to reboot the Alphas this weekend for the new MSCP sysgen settings, and hopefully also get the VAXes over to the other switch.
Cheers,
Art
04-14-2004 02:39 AM
Re: MSCP performance.
It is handy to serve your local disks on the VAXen so other nodes can have access to them. Makes it more convenient so you don't have to login to that node, or alternatively, use sysman, etc. I happen to prefer setting up my systems this way.
04-14-2004 02:43 AM
Re: MSCP performance.
See
http://h71000.www7.hp.com/doc/73final/documentation/pdf/DECNET_OVMS_NET_UTIL.PDF
chapter 4
Purely Personal Opinion
04-14-2004 04:12 AM
Re: MSCP performance.
Yes, it uses DECnet. I forgot to mention the /SIZE qualifier. It is a nice way to put load on a link and test the throughput without being limited by the speed of some underlying disks or tapes.
Ian, thank you for providing a pointer.
04-14-2004 04:19 AM
Re: MSCP performance.
I ran it against the node doing the MSCP serving:
_Test: data/print/stat/seconds=10/node=xxxxxx/size=512/type=seq
%NET-S-NORMAL, normal successful completion
Test Parameters:
Test duration (sec) 10
Target node "xxxxxx"
Line speed (baud) 1000000
Message size (bytes) 512
Summary statistics:
Total messages XMIT 14071 RECV 0
Total bytes XMIT 7204352
Messages per second 1407.10
Bytes per second 720435
Line thruput (baud) 5763480
%Line Utilization 576.348
I wish I could utilize my paycheque at 576% !!
Art
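The 576% figure isn't a measurement error: the test above was run without /SPEED, so DTSEND assumed the 1,000,000-baud line speed shown in its own output, and the measured bit rate simply exceeds that assumed capacity. A quick check of the numbers (plain Python arithmetic, just to illustrate):

```python
# Why DTSEND reported 576% line utilization: the assumed line speed
# (1,000,000 baud, the default shown in the output) is ten times lower
# than the real 10 Mb/s Ethernet.
bytes_per_sec = 720435               # from the DTSEND summary statistics
assumed_speed = 1_000_000            # line speed DTSEND used by default
actual_speed = 10_000_000            # the real 10 Mb/s Ethernet

thruput_baud = bytes_per_sec * 8                     # 5,763,480, as reported
print(f"{100 * thruput_baud / assumed_speed:.3f}%")  # 576.348%, as reported
print(f"{100 * thruput_baud / actual_speed:.3f}%")   # ~57.6% of real capacity
```

Against the real 10 Mb/s line this run was at roughly 58% utilization, which is consistent with the ~70% seen in the next run once /SPEED=10000000 is supplied.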
04-14-2004 04:28 AM
Re: MSCP performance.
_Test: data/print/stat/seconds=10/node=xxxxxx/size=512/type=seq/speed=10000000
%NET-S-NORMAL, normal successful completion
Test Parameters:
Test duration (sec) 10
Target node "xxxxxx"
Line speed (baud) 10000000
Message size (bytes) 512
Summary statistics:
Total messages XMIT 17134 RECV 0
Total bytes XMIT 8772608
Messages per second 1713.40
Bytes per second 877260
Line thruput (baud) 7018080
%Line Utilization 70.181
04-14-2004 04:39 AM
Re: MSCP performance.
%NET-S-NORMAL, normal successful completion
Test Parameters:
Test duration (sec) 10
Target node "yyyyyy"
Line speed (baud) 100000000
Message size (bytes) 512
Summary statistics:
Total messages XMIT 17957 RECV 0
Total bytes XMIT 9193984
Messages per second 1795.70
Bytes per second 919398
Line thruput (baud) 7355184
%Line Utilization 7.355
04-20-2004 06:29 AM
Re: MSCP performance.
No, it doesn't affect the client side of the connection.
> What is CR_WAIT, it looks like it's waiting alot. <
SCS uses credit-based flow control. When an SCS connection is set up, each side gives the other a certain number of credits. Each credit represents the ability to handle a received message. Messages can be sent until you run out of credits. Credits are returned when messages have been handled and acknowledged by the other node. CR_WAIT events are cases where you've had to hold off on sending another message because you have to wait for credit(s) to be returned.
If the credit wait is on the MSCP$DISK-to-VMS$DISK_CL_DRVR SYSAP connection, increase the MSCP_CREDITS parameter on the remote (MSCP-serving) node (the one with the MSCP$DISK SYSAP on its end).
If the credit wait is on the VMS$VAXcluster SYSAP (which handles lock requests, among other things), increase the CLUSTER_CREDITS parameter on the remote node.
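The credit mechanism described above can be sketched as a toy model (this is purely illustrative Python, not the VMS implementation; the class and method names are mine):

```python
# Toy model of SCS credit-based flow control: a sender holds credits,
# each send consumes one, a send with no credits stalls (a CR_WAIT event),
# and the peer returns a credit when it has handled and acknowledged a message.
class ScsConnection:
    def __init__(self, credits: int):
        self.credits = credits      # analogous to MSCP_CREDITS/CLUSTER_CREDITS
        self.cr_waits = 0           # analogous to the CR_WAITS counter

    def try_send(self) -> bool:
        """Send one sequenced message if a credit is available."""
        if self.credits == 0:
            self.cr_waits += 1      # must wait for a credit to be returned
            return False
        self.credits -= 1
        return True

    def credit_returned(self):
        """Peer acknowledged a message and returned its credit."""
        self.credits += 1

conn = ScsConnection(credits=2)
assert conn.try_send() and conn.try_send()
assert not conn.try_send()          # out of credits -> one CR_WAIT
conn.credit_returned()
assert conn.try_send()              # sending resumes once credit comes back
```

Raising the credit parameter on the serving node corresponds to constructing the connection with more initial credits, so bursts of messages don't stall.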
04-20-2004 10:14 AM
Re: MSCP performance.
> | COUNTERS |
> +--------+--------+------+
> | MSGS_S | MSGS_R | CR_W |
> +--------+--------+------+
> | 4 | 4 | 0 |
> | 25006 | 25006 | 0 |
> | 976324 | 976324 | **** |
> | ****** | ****** | 1 |
> +--------+--------+------+
By the way, to get rid of the asterisks and see the true counts, you can increase the width of these fields under SHOW CLUSTER/CONTINUOUS. For example:
SET CR_WAITS/WIDTH=8
SET MSGS_SENT/WIDTH=10
SET MSGS_RCVD/WID=10
04-21-2004 07:39 AM
Re: MSCP performance.
I successfully cleared up all the "problem" counters (no more waiting, fragmenting, etc.), but in the end the performance is the same.
I think the limitation is the 10Mb half-duplex ethernet on the VAX. My test backup (reading from one served DGA device and writing to another) is ~600MB in 24 minutes quite consistently. I guess you could say I'm doing twice that, reading 600MB and writing 600MB in those 24 minutes, which works out to ~0.83MB/sec ... not that far off from theoretical 10Mb ethernet.
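Checking that arithmetic (plain Python, using decimal megabytes; both the read and the write cross the same half-duplex wire):

```python
# 600 MB read + 600 MB written through one 10 Mb/s half-duplex link in 24 min.
mb_moved = 600 + 600                 # both directions share the wire
seconds = 24 * 60

mb_per_sec = mb_moved / seconds      # ~0.83 MB/s aggregate
wire_mbit = mb_per_sec * 8           # ~6.7 Mb/s on the wire
utilization = wire_mbit / 10         # fraction of the 10 Mb/s line

print(f"{mb_per_sec:.2f} MB/s = {wire_mbit:.1f} Mb/s "
      f"({utilization:.0%} of 10 Mb/s)")
```

About two thirds of the nominal line rate, which is roughly what sustained half-duplex Ethernet with MSCP and framing overhead can be expected to deliver, so the link does look like the bottleneck.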
Is that all there is? Was there ever a 100Mb ethernet card/module for VAX 4000-105A's?
Don't wanna be in 1995 anymore,
Art