<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MSCP performance. in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245424#M62545</link>
    <description>&amp;gt;  378 Send failure, including: Excessive collisions&lt;BR /&gt;You may want to check if your ethernet has a hardware problem somewhere.&lt;BR /&gt;Personal opinion/guideline: one can see collisions and be just fine; one should never see excessive collisions. Maybe this is just a side effect of mscp_buffer being too small.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Ok thanks...I'll set MSCP_BUFFER on the Alpha's to 2048.&lt;BR /&gt;If you have the memory, I would do a min_mscp_buffer=2048 on the VAXes and a min_mscp_buffer=4096 on the Alphas. Both are overkill, but I think they are worth it. Saved my bacon once when a CI adaptor failed and a VAX started MSCP'ing over the ethernet.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; On sort of the same topic, is it possible to designate which node will do the serving?&lt;BR /&gt;The LAVc switches between all available ethernet controllers (actually any supported cluster interface) using the least busy path. Even though you have only one 10Mb/s card on the VAX, if your Alphas have two or more NICs, packets will be sent over the least busy path. There is a way to control traffic by NIC, but not by system (short of mscp_serve_all=0, I haven't seen it). Personal opinion: if the max speed is limited to 10Mb/s, I would not worry about which system actually does the MSCP serving; the overhead is extremely low given your configuration.&lt;BR /&gt;&lt;BR /&gt;fwiw - I have mscp_serve_all turned on for all my nodes and clusters whether they need it or not (just make sure cluster_authorize.dat is correctly set up). The overhead this causes is debatable (I've never seen it be a problem no matter what workload I throw at it -- your mileage may vary). The extra redundancy this gives is invaluable to me.&lt;BR /&gt;&lt;BR /&gt;john</description>
    <pubDate>Mon, 12 Apr 2004 13:58:07 GMT</pubDate>
    <dc:creator>John Eerenberg</dc:creator>
    <dc:date>2004-04-12T13:58:07Z</dc:date>
    <item>
      <title>MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245417#M62538</link>
      <description>I'm adding two VAX 4000-105A's (128MB each, DSSI connection for common system disk - v6.2) into a cluster of 4 Alpha's (v7.2-2) and two other VAX's (also v6.2 but they have their own system disks).  The "new" VAX's will have the majority of their data storage on an HSG80 subsystem, MSCP served up through the Alpha's.  The disks presented are two 72GB RAID 3/5 arrays, each built from 3 x 36GB 15,000rpm drives.  I was doing some restoring and moving of data using one of the new VAX's and found that disk access seemed very "poor".  Backing up from one disk to the other maintained a steady rate of ~26 DIO's per second.  I looked at the Alpha doing the MSCP serving and found that the Extra Fragment, Fragmented and Buffer Wait Rates counters were very active.  I ran Autogen on the serving node (which really wasn't busy doing other things at the time) while the disk activity was going on, and it told me:&lt;BR /&gt;&lt;BR /&gt;MSCP_BUFFER parameter information:&lt;BR /&gt;        Feedback information.&lt;BR /&gt;           Old value was 312, New value is 312&lt;BR /&gt;           MSCP server I/O rate: 2 I/Os per 10 sec.&lt;BR /&gt;           I/Os that waited for buffer space: 10021&lt;BR /&gt;           I/Os that fragmented into multiple transfers: 26916&lt;BR /&gt;&lt;BR /&gt;I would think that with counts that high, it would have suggested a higher value for MSCP_BUFFER.&lt;BR /&gt;&lt;BR /&gt;Of course the VAX is limited to 10Mb/half network/disk access (Alpha's are 100Mb/full), but it just seems to be very sluggish.  Any tuning hints to help this situation?  Yes, the systems have to remain on the VAX platform for now :-(&lt;BR /&gt;&lt;BR /&gt;Thanks in advance,&lt;BR /&gt;Art</description>
      <pubDate>Mon, 12 Apr 2004 08:40:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245417#M62538</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-12T08:40:03Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245418#M62539</link>
      <description>Set mscp_buffer to a minimum of 2048. You could start lower, but I think this would solve your problem (it did for me). Then review it to make sure there are no more fragmented IO's (or increase mscp_buffer until the fragmented IO's are eliminated).&lt;BR /&gt;&lt;BR /&gt;Also, check sysmwcnt to make sure you aren't taking any system faults (look at $ monitor page; system faults should average near zero: 0.1/sec or less).&lt;BR /&gt;&lt;BR /&gt;If you are doing backup with /block=32767, I wouldn't expect much more than 40 IO's/sec (with approx 60-70 IO's/sec being the max).&lt;BR /&gt;&lt;BR /&gt;Of course, 128MB of RAM will make the above a little tight if too much memory needs to be consumed by various tasks.&lt;BR /&gt;&lt;BR /&gt;john</description>
      <pubDate>Mon, 12 Apr 2004 09:33:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245418#M62539</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-04-12T09:33:22Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245419#M62540</link>
      <description>Ok thanks...I'll set MSCP_BUFFER on the Alpha's to 2048.  On sort of the same topic, is it possible to designate which node will do the serving?  At first I thought I had control by mounting the disk clusterwide from the Alpha that I wanted to be the server, but this behavior isn't as consistent as I thought.  Short of turning MSCP_SERVE_ALL off on the other Alpha's (which isn't really what I want in case failover is needed), is there some way to provide "weighting" of one Alpha over the other?&lt;BR /&gt;&lt;BR /&gt;Thanks again,&lt;BR /&gt;Art</description>
      <pubDate>Mon, 12 Apr 2004 10:02:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245419#M62540</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-12T10:02:51Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245420#M62541</link>
      <description>Oh, and the backups/restores I was citing were after I had "exploded" the savesets from tape, i.e. I was moving directory trees around...blocksize doesn't enter into it.  The tape restore to the served disks was also very "painful", even though the savesets were blocksize=65024 from a TZ88.&lt;BR /&gt;&lt;BR /&gt;It's 1995 all over again,&lt;BR /&gt;Art</description>
      <pubDate>Mon, 12 Apr 2004 10:05:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245420#M62541</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-12T10:05:52Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245421#M62542</link>
      <description>Art,&lt;BR /&gt;is the MSCP link somehow shared with a DECnet circuit, so that you can access the DECnet line counters on both nodes?&lt;BR /&gt;$ MCR NCP SHOW KNOWN LINES COUNTERS&lt;BR /&gt;&lt;BR /&gt;In the past, when I thought that the speed was too slow, I was able to detect problems in the network infrastructure (triple termination, exceeded cable length, ...) that way.</description>
      <pubDate>Mon, 12 Apr 2004 11:27:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245421#M62542</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-12T11:27:51Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245422#M62543</link>
      <description>There's only one ethernet interface on my 4000's.  Other than not having heartbeat set correctly on the transceiver and a fair number of collisions, I don't see too many problems DECnet-wise:&lt;BR /&gt;&lt;BR /&gt;$ mcr ncp show know line count&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;Known Line Counters as of 12-APR-2004 12:34:01&lt;BR /&gt; &lt;BR /&gt;Line = ISA-0&lt;BR /&gt; &lt;BR /&gt;      &amp;gt;65534  Seconds since last zeroed&lt;BR /&gt;    12124698  Data blocks received&lt;BR /&gt;     1918395  Multicast blocks received&lt;BR /&gt;           0  Receive failure&lt;BR /&gt; &amp;gt;4294967294  Bytes received&lt;BR /&gt;   855675874  Multicast bytes received&lt;BR /&gt;           0  Data overrun&lt;BR /&gt;    18590116  Data blocks sent&lt;BR /&gt;      124221  Multicast blocks sent&lt;BR /&gt;      686907  Blocks sent, multiple collisions&lt;BR /&gt;     4422802  Blocks sent, single collision&lt;BR /&gt;       81400  Blocks sent, initially deferred&lt;BR /&gt; &amp;gt;4294967294  Bytes sent&lt;BR /&gt;    12781118  Multicast bytes sent&lt;BR /&gt;         378  Send failure, including:&lt;BR /&gt;                Excessive collisions&lt;BR /&gt;      &amp;gt;65534  Collision detect check failure&lt;BR /&gt;           0  Unrecognized frame destination&lt;BR /&gt;           0  System buffer unavailable&lt;BR /&gt;           0  User buffer unavailable</description>
      <pubDate>Mon, 12 Apr 2004 11:37:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245422#M62543</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-12T11:37:32Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245423#M62544</link>
      <description>Ah, OK.&lt;BR /&gt;Another thing I sometimes try is to run a test with DTSEND on both nodes - testing both directions.&lt;BR /&gt;&lt;BR /&gt;It might be necessary to assign a username/password to the DTR object on the remote node. Then I do:&lt;BR /&gt;$ mcr dtsend&lt;BR /&gt;_Test: DATA/NODENAME=remote/PRINT/SECONDS=60/SPEED=10000000&lt;BR /&gt;&lt;BR /&gt;I am doing this from memory, but there is online help and there are different tests available - see /TYPE=</description>
      <pubDate>Mon, 12 Apr 2004 13:21:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245423#M62544</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-12T13:21:44Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245424#M62545</link>
      <description>&amp;gt;  378 Send failure, including: Excessive collisions&lt;BR /&gt;You may want to check if your ethernet has a hardware problem somewhere.&lt;BR /&gt;Personal opinion/guideline: one can see collisions and be just fine; one should never see excessive collisions. Maybe this is just a side effect of mscp_buffer being too small.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Ok thanks...I'll set MSCP_BUFFER on the Alpha's to 2048.&lt;BR /&gt;If you have the memory, I would do a min_mscp_buffer=2048 on the VAXes and a min_mscp_buffer=4096 on the Alphas. Both are overkill, but I think they are worth it. Saved my bacon once when a CI adaptor failed and a VAX started MSCP'ing over the ethernet.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; On sort of the same topic, is it possible to designate which node will do the serving?&lt;BR /&gt;The LAVc switches between all available ethernet controllers (actually any supported cluster interface) using the least busy path. Even though you have only one 10Mb/s card on the VAX, if your Alphas have two or more NICs, packets will be sent over the least busy path. There is a way to control traffic by NIC, but not by system (short of mscp_serve_all=0, I haven't seen it). Personal opinion: if the max speed is limited to 10Mb/s, I would not worry about which system actually does the MSCP serving; the overhead is extremely low given your configuration.&lt;BR /&gt;&lt;BR /&gt;fwiw - I have mscp_serve_all turned on for all my nodes and clusters whether they need it or not (just make sure cluster_authorize.dat is correctly set up). The overhead this causes is debatable (I've never seen it be a problem no matter what workload I throw at it -- your mileage may vary). The extra redundancy this gives is invaluable to me.&lt;BR /&gt;&lt;BR /&gt;john</description>
      <pubDate>Mon, 12 Apr 2004 13:58:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245424#M62545</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-04-12T13:58:07Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245425#M62546</link>
      <description>You may find $MONITOR MSCP and $SHOW DEVICE/SERVED to be useful.&lt;BR /&gt;&lt;BR /&gt;You might also want to check for SCS credit waits on the SYSAP connection between the MSCP disk class driver on the VAX and the MSCP server in the Alphas. Use $SHOW CLUSTER/CONTINUOUS and ADD CIRCUITS,CONNECTIONS,REM_PROC_NAME,CR_WAITS&lt;BR /&gt;and look for large values which tend to increase over time in the CR_WAITS field.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; is it possible to designate which node will do the serving?&lt;BR /&gt;&lt;BR /&gt;Yes. The MSCP_LOAD parameter can be used to control this. Originally MSCP_LOAD was a binary switch: 0 meant no serving and 1 meant enable serving. This was expanded to retain these two original values but also allow you to specify a load capacity rating for a node. If you set the MSCP_LOAD parameter significantly higher on one node, it will tend to be preferred as the server. The units are in nominal capacity in I/Os per second. The default value of 1 corresponds to a fixed value of 340 on Alpha (for those with VMS source listings, this code is in file [MSCP.LIS]MSCP.LIS, routine LM_INIT_CAPACITY). Anything above 1 is used as the actual load capacity value, so a value of 2 is the lowest possible fixed value, and can be used on a node if you wish to avoid MSCP-serving (except as a last resort) on that node. To avoid any MSCP-serving on a node at all, ever, you would set MSCP_LOAD to zero.</description>
      <pubDate>Tue, 13 Apr 2004 11:33:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245425#M62546</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-04-13T11:33:20Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245426#M62547</link>
      <description>Can you describe your ethernet network in a little more detail?  When you talk about having a transceiver with the wrong heartbeat setting, are you using the H4000 with thickwire ethernet?&lt;BR /&gt;&lt;BR /&gt;If you are using an ethernet switch, you need to make sure the duplex and speed of the switch ports the VAXs are connected to match those of the VAXs.  If the VAX ethernet port is half duplex, make sure the switch port it is connected to is also half.  You will see late collisions if the VAX is half and the switch is full.&lt;BR /&gt;&lt;BR /&gt;Also, if you are using a cut-through switch, you could see a lot of runt or short packets that can chew up bandwidth.&lt;BR /&gt;&lt;BR /&gt;If you are not using a switch at all but a repeater, then you could be overloading the VAXs with all the network traffic.</description>
      <pubDate>Tue, 13 Apr 2004 12:00:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245426#M62547</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2004-04-13T12:00:19Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245427#M62548</link>
      <description>Thanks all...lots of good ideas for me to think about so far!&lt;BR /&gt;&lt;BR /&gt;Uwe - DTSEND - somehow, I'm totally unfamiliar with this diagnostic!!  I'll check into it.  What protocol is it actually using to do tests?  DECnet?&lt;BR /&gt;&lt;BR /&gt;Regarding setting MSCP_BUFFER on the Alpha's AND the VAX's, does this setting come into play on the VAX side?  The VAX's are not (actively) serving any disk; the only local storage is the system disk and a page/swap disk.&lt;BR /&gt;&lt;BR /&gt;Network topology - I'm using CentreCom twisted pair transceivers on the AUI ports of the 4000-105A's, which are connected into an HP switch (the network folks say they have locked the ports at 10-Half).  A Gig fiber uplink to a switch, down another Gig fiber link to another HP switch to the Alpha's running 100-Full.  I wanted to get the VAX's onto the same switch as the Alpha's, but there's a lack of free ports currently.  There's an SQE test switch on the transceivers that is in the wrong position, which is why I see Collision Detect Check failures.  In the past this has never really been a "problem", just a max'ed counter.&lt;BR /&gt;&lt;BR /&gt;Anyways, thanks again...I hope to be able to reboot the Alpha's this weekend for new MSCP sysgen settings, and hopefully also get the VAX's over to the other switch.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Wed, 14 Apr 2004 08:35:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245427#M62548</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-14T08:35:12Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245428#M62549</link>
      <description>&amp;gt; "Regarding setting MSCP_BUFFER on the Alpha's AND the VAX's, does this setting come into play on the VAX side? The VAX's are not (actively) serving any disk, the only local storage is the system disk and a page/swap disk."&lt;BR /&gt;&lt;BR /&gt;It is handy to serve your local disks on the VAXen so other nodes can have access to them. Makes it more convenient so you don't have to login to that node, or alternatively, use sysman, etc. I happen to prefer setting up my systems this way.</description>
      <pubDate>Wed, 14 Apr 2004 09:39:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245428#M62549</guid>
      <dc:creator>John Eerenberg</dc:creator>
      <dc:date>2004-04-14T09:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245429#M62550</link>
      <description>DTSEND uses DECnet task-to-task comms.&lt;BR /&gt;See&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/73final/documentation/pdf/DECNET_OVMS_NET_UTIL.PDF" target="_blank"&gt;http://h71000.www7.hp.com/doc/73final/documentation/pdf/DECNET_OVMS_NET_UTIL.PDF&lt;/A&gt;&lt;BR /&gt;chapter 4&lt;BR /&gt;</description>
      <pubDate>Wed, 14 Apr 2004 09:43:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245429#M62550</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-04-14T09:43:34Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245430#M62551</link>
      <description>Hello Art,&lt;BR /&gt;yes it uses DECnet. I forgot to mention the /SIZE qualifier. It is a nice way to put load onto a link and test the throughput without being limited by the speed of some underlying disks or tapes.&lt;BR /&gt;&lt;BR /&gt;Ian, thank you for providing a pointer.</description>
      <pubDate>Wed, 14 Apr 2004 11:12:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245430#M62551</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-14T11:12:13Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245431#M62552</link>
      <description>Thanks Ian and Uwe.  Now that I look at the manual I do remember reading about it/using it ... probably back in 1993 when the book was written!&lt;BR /&gt;&lt;BR /&gt;I ran it against the node doing the MSCP serving:&lt;BR /&gt;&lt;BR /&gt;_Test: data/print/stat/seconds=10/node=xxxxxx/size=512/type=seq&lt;BR /&gt;%NET-S-NORMAL, normal successful completion&lt;BR /&gt;&lt;BR /&gt;Test Parameters:&lt;BR /&gt;   Test duration (sec)  10&lt;BR /&gt;   Target node          "xxxxxx"&lt;BR /&gt;   Line speed (baud)    1000000&lt;BR /&gt;   Message size (bytes) 512&lt;BR /&gt;&lt;BR /&gt;Summary statistics:&lt;BR /&gt;   Total messages XMIT  14071  RECV 0&lt;BR /&gt;   Total bytes XMIT     7204352&lt;BR /&gt;   Messages per second  1407.10&lt;BR /&gt;   Bytes per second     720435&lt;BR /&gt;   Line thruput (baud)  5763480&lt;BR /&gt;   %Line Utilization    576.348&lt;BR /&gt;&lt;BR /&gt;I wish I could utilize my paycheque at 576% !!&lt;BR /&gt;&lt;BR /&gt;Art</description>
      <pubDate>Wed, 14 Apr 2004 11:19:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245431#M62552</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-14T11:19:26Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245432#M62553</link>
      <description>Well, I would use /SPEED=10000000 if this is a 10 MBit link on the VAX.</description>
      <pubDate>Wed, 14 Apr 2004 11:26:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245432#M62553</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-14T11:26:17Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245433#M62554</link>
      <description>Ahh...that's a bit better:&lt;BR /&gt;&lt;BR /&gt;_Test: data/print/stat/seconds=10/node=xxxxxx/size=512/type=seq/speed=10000000&lt;BR /&gt;%NET-S-NORMAL, normal successful completion&lt;BR /&gt;&lt;BR /&gt;Test Parameters:&lt;BR /&gt;   Test duration (sec)  10&lt;BR /&gt;   Target node          "xxxxxx"&lt;BR /&gt;   Line speed (baud)    10000000&lt;BR /&gt;   Message size (bytes) 512&lt;BR /&gt;&lt;BR /&gt;Summary statistics:&lt;BR /&gt;   Total messages XMIT  17134  RECV 0&lt;BR /&gt;   Total bytes XMIT     8772608&lt;BR /&gt;   Messages per second  1713.40&lt;BR /&gt;   Bytes per second     877260&lt;BR /&gt;   Line thruput (baud)  7018080&lt;BR /&gt;   %Line Utilization    70.181</description>
      <pubDate>Wed, 14 Apr 2004 11:28:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245433#M62554</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-14T11:28:52Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245434#M62555</link>
      <description>Yes, and what do you get the other way round?</description>
      <pubDate>Wed, 14 Apr 2004 11:33:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245434#M62555</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-14T11:33:50Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245435#M62556</link>
      <description>_Test: data/print/stat/seconds=10/node=yyyyyy/size=512/type=seq/speed=100000000&lt;BR /&gt;%NET-S-NORMAL, normal successful completion&lt;BR /&gt;&lt;BR /&gt;Test Parameters:&lt;BR /&gt;   Test duration (sec)  10&lt;BR /&gt;   Target node          "yyyyyy"&lt;BR /&gt;   Line speed (baud)    100000000&lt;BR /&gt;   Message size (bytes) 512&lt;BR /&gt;&lt;BR /&gt;Summary statistics:&lt;BR /&gt;   Total messages XMIT  17957  RECV 0&lt;BR /&gt;   Total bytes XMIT     9193984&lt;BR /&gt;   Messages per second  1795.70&lt;BR /&gt;   Bytes per second     919398&lt;BR /&gt;   Line thruput (baud)  7355184&lt;BR /&gt;   %Line Utilization    7.355</description>
      <pubDate>Wed, 14 Apr 2004 11:39:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245435#M62556</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2004-04-14T11:39:41Z</dc:date>
    </item>
    <item>
      <title>Re: MSCP performance.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245436#M62557</link>
      <description>So, the network looks OK to me. The 7% looks a bit silly, but of course the VAX cannot receive with 100 MBit/second.</description>
      <pubDate>Wed, 14 Apr 2004 11:43:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mscp-performance/m-p/3245436#M62557</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-04-14T11:43:15Z</dc:date>
    </item>
  </channel>
</rss>

