Operating System - OpenVMS

V8.3-1H1 Installed with Gigabit nics and slow net performance

 
Stuart Green
Frequent Advisor

V8.3-1H1 Installed with Gigabit nics and slow net performance

I get slow network performance on my new I64 box with gigabit nics.
I have them plugged into cisco switches. The switch ports are auto-auto gigabit.

Copying a file from a 1.0Gb machine to my Integrity server, I only get approx 5Mb/s transfer.

Should I be setting the OS to use jumbo frames, and should I also have the switch ports reconfigured?
I have seen some performance tips on the net but can't find anything on HP.com about setting this on the OS.

Note: This is a standalone box.

Thanks for any assistance.
21 REPLIES
Volker Halle
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

trying to judge NIC speed by copying files around involves too many other possible bottlenecks.

If you have another system running OpenVMS and DECnet, run a DTSEND test.

Or copy a file to the null device on OpenVMS, this eliminates possible issues with writing the file to disk:

FTP your-vms-system
...
FTP> PUT localfile NLA0:

Volker.
Robert Gezelter
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

As a start, I would look at the error counters on the switch ports and on both hosts involved in the transfer. Any significant number of collisions or retransmissions is a red flag.

Consider configuring the switch appropriately and using Wireshark to capture the entire conversation, then look at what is actually causing the delay.

- Bob Gezelter, http://www.rlgsc.com
Richard Whalen
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Setting the NICs up for jumbo frames could be what you need to do to realize the expected data rate from these NICs. Since the NICs have a high signalling rate, they can receive (or transmit) a packet very quickly; unfortunately, there is enough fixed per-packet overhead in getting a packet from the user application to the wire that throughput suffers unless you move to larger packets.
Steve Reece_3
Trusted Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Hi Stuart,

You say you copy a file, but you don't mention how you're doing the copy. Windows protocols and SMB carry a large overhead for data verification. With FTP, there's the overhead of moving data down the network stack and packaging it up for the network card before it ever gets to the wire.

Have you looked in LANCP at what the ports have actually configured themselves as? It could be that negotiation is the culprit here, and the Cisco switch and the Integrity have negotiated a much slower speed than you are expecting. There may also be contention on the wire.

Steve
Hein van den Heuvel
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

What is the copy command used? FTP?
What is on the other side?

Out-of-the-box OpenVMS is NOT set up for speedy FTPs.

What makes you think it is a network issue?
Was it any faster on 100Mb?

How are the disks and CPU doing during the transfer? They should hardly move.

Did you calibrate with:
- a local FTP over loopback?
- output to the NL: device?
- a disk-to-disk copy?
- a reverse-direction copy, from VMS to PC?

Did you measure disk activity?
Did you disable high-water marking on the output disk (SET VOLUME/NOHIGHWATER_MARKING)?
Did you check SHOW RMS_DEFAULT for buffer/blocks/extent?
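Those last two checks can be sketched in DCL; the qualifier values below are illustrative starting points, not recommendations, and DISK$DATA is a placeholder volume name:

```
$! Disable high-water marking on the output volume (placeholder name)
$ SET VOLUME/NOHIGHWATER_MARKING DISK$DATA
$! Inspect the current process-level RMS defaults
$ SHOW RMS_DEFAULT
$! Raise multibuffering, block count and extend quantity for this process
$ SET RMS_DEFAULT/BUFFER_COUNT=8/BLOCK_COUNT=32/EXTEND_QUANTITY=65535
```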

Check with Google for prior topics in this space. For example:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1341603


Hope this helps,
Hein van den Heuvel
John Gillings
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

I remember a case where a customer wasn't getting the expected network throughput copying files after upgrading from 10Mb to 100Mb.

The throughput was more than 10Mb/s, but only just. After spending quite some time checking all the network components in the path between the nodes, it turned out the bottleneck was the write performance of the destination disk drive!

Try to eliminate as many factors as possible when testing.
A crucible of informative mistakes
Cass Witkowski
Trusted Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

We experienced slower performance with jumbo frames enabled because the site's LAN switch did not support Jumbo frames.
Robert Gezelter
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

Before going too far down this investigation, please verify what RMS buffering parameters are in use on both sides.

In particular, you are interested in the /BLOCK, /BUFFER, and /EXTEND settings. Of the three, /EXTEND is often the most time-consuming.

Remember, an extend of 100 blocks will not last very long at 1Gb.

- Bob Gezelter, http://www.rlgsc.com
Hein van den Heuvel
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

Is the copy operation you used to verify the speed representative of the link's actual dominant usage?

If it is, then let's figure out an expectation and work out whether the link lives up to it, and perhaps why it does not.
Your expectation is probably 100 MB/sec.
Well, can you source that?
Can you sink it?

But if it isn't, then let's try to characterize the real load, and somehow measure that. Maybe latency is more relevant than throughput? Or packets/sec?

You may want to find, or write, a little tool to volley request/response packets back and forth with selectable concurrency and measure its performance.
I'm sure tools like that are out there, and I know that when I wrote one for a special test (mimicking SAP messages) it was immensely more valuable than the 'quick' FTP test we relied on before.

Knowing how, say, FTP or NFS behaved helped, but the real McCoy is the application itself, or something that closely mimics it (similar packet sizes, rates, active port counts, active IP addresses; it may all matter).


Cheers,
Hein van den Heuvel
Bob Blunt
Respected Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart, check the NIC and the switch to ensure that both are at the same speed and duplex. Make sure that the latest LAN patches are installed. Jumbo frames seem to work best in very limited environments and only with certain stacks and applications. Their implementation can make things worse in some situations.

There are also some configurations that have shown problems with specific NIC cards on certain machines. Knowing what sort of Integrity box and what kind of NIC could help a lot.

The HP IP stack has supposedly been fixed to eliminate the need to preallocate an entire file when using FTP for transfers (older versions worked best when you set up a logical name to preallocate the file, either in whole or in chunks; TCPIP$FTP_FILE_ALQ, I think). You'd need to set that to the size of the file you're moving in OpenVMS disk blocks, and you'll still be limited by how fast the destination disk can be written.
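If that logical name applies to your stack version (as the post above notes, the name is from memory, so verify it against your TCP/IP Services documentation), defining it system-wide for a roughly 3GB file might look like this; 6,000,000 blocks times 512 bytes per block is about 3GB:

```
$! Preallocate FTP output files at ~3GB (6,000,000 blocks x 512 bytes)
$! Logical name as recalled above; confirm against your TCP/IP Services version
$ DEFINE/SYSTEM TCPIP$FTP_FILE_ALQ 6000000
```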
Richard W Hunt
Valued Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

The last time we had this problem, it was that the auto-negotiate wasn't really working as we wanted. It somehow had settled on HDX rather than FDX. We tracked it back and forth for weeks before we found the REAL culprit.

In our particular case we solved the problem by setting some environment variables in the console for the appropriate network devices, because the CONSOLE is the first thing to touch the network cards after a reboot. The console's auto-negotiate was the problem. The variables WE had to use were based on the device names.

ewa0_mode FastFD
ewa1_mode FastFD etc. on the older machines

eia0_mode FastFD
eia1_mode FastFD etc. on the newer machines

You will need to do a console-level SHOW DEVICES to see your network device names and use the corresponding device names as shown in my examples.
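On an Alpha SRM console, the sequence might look like the sketch below; the device name ewa0 is just an example, so substitute whatever the SHOW DEVICE listing reports on your machine:

```
>>> SHOW DEVICE
>>> SET ewa0_mode FastFD
>>> SHOW ewa0_mode
```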

We chalked this up to the concept that sometimes you lose in negotiations - which is why you always explicitly ask for what you want and don't just settle for what you get.
Sr. Systems Janitor
Volker Halle
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Richard,

the advice you've given is OK for Alpha network interfaces. In this case it's an Itanium system.

Stuart,

use MC LANCP SHOW DEVICE/INTERNAL_COUNTERS and look at the LAN driver console messages (at the bottom of the display) to find out whether there have been any problems during auto-negotiation.
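For example, assuming the gigabit interface is EIA0 (check the LANCP device listing for the actual name on your box):

```
$ MCR LANCP
LANCP> SHOW DEVICE EIA0/INTERNAL_COUNTERS
```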

Volker.
Hoff
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Profile this system and this network environment for the bottleneck (don't guess!), and ensure that the NIC and the Cisco have negotiated speeds and feeds correctly. (OpenVMS I64 expects/requires auto-negotiation on both ends of the connection to the switch.) Anything else is premature.

ftp is an old and slow protocol in general; it's not known for delivering bandwidth under even the best of circumstances, and it's largely incompatible with modern IP network designs.

I'd use a different tool for testing network bandwidth and not ftp, not because ftp is poorly designed and insecure (it is), but because it is intrinsically tied to the performance of the file system and related; it covers too much to be a good basic performance test.
MarkOfAus
Valued Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,
What O/S are you copying to? If VMS, maybe try SET RMS_DEFAULT/EXTEND_QUANTITY=nnn on the receiving end.
You might also like to check this:
http://h71000.www7.hp.com/doc/83final/6048/6048pro_100.html

Specifically, look at the value of NPAGEDYN in MODPARAMS.DAT.

Regards,
Mark.
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Source Host (1): rx2660 (purchased 2010)
with 4Gb HBA and Gb NICs (no teaming)
Boot and Data disks all on the SAN

Storage: HP EVA 6400 w FC disks thru SAN switches with 4Gb SFP's. Ports speed set to match HBA's

Destination Host (2): Proliant DL360 G5
with 4Gb HBA and Gb NICs
Data disk on SAN

Method 1:
NFS export directory on host (2). Mount on host (1)

On (1) executed BACKUP command of data disk (3GB data) to the NFS mount point.
Time taken: 58 mins

Method 2:
sftp the resultant (3GB) .bck on (2) back to (1) and vice versa
Time taken: avg 6 min 30 sec

Method 3:
Using a second rx2660 (3) on the same architecture, use the COPY command to send 3GB between these two OpenVMS boxes
Time taken: 2 mins

DECnet Phase IV is in use.

So how can I get the same speed at the IP layer, since it is clearly achievable over DECnet (which I presume the COPY command was using)?
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Apologies, omitted that Host (2) is running Suse Linux 10 SP2
Jim_McKinney
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

>> So how can I get the same speed on IP layer? As it is achievable on DECNET which I presume COPY command was using.

Not really a question of "slow net performance", but more about the difference in protocols. COPY is very lightweight; SFTP is not. With SFTP you have all the overhead of encryption on one end and decryption on the other. I just ran a small test here: I copied a 20,000-block file from one node to another's NL: device using proxied DECnet access, and it completed in approximately 2 seconds; then I copied the same file over the same clean 100Mbps link using SFTP with keyed authentication to the same NL: device, and it took 13 seconds. The test script looks like this:

$ show time
$ copy/log junk.zip nodex::nl:
$ show time
$ sftp nodex
sftp> cd NL:
sftp> put junk.zip
sftp> exit
$ show time

FTP would be lots faster than SFTP. If you need or want the security of encryption, you'll never approximate COPY's performance.
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Thanks for response Jim, and others, interesting comments.

So there are marked differences between the methods used to transfer data across the wire. Do I just accept this as is?

Yes COPY file node"user pass"::disk:[000000] was used.

So where does the BACKUP command feature in the whole scheme of things?
I can run the BACKUP command to the same disk the data is on and it takes 5 minutes. Using an NFS mount point as the destination, it takes nearly 1 hour?

The same .bck file, Linux box to Linux box using sftp, with similar NIC and disk storage architecture, takes 2 minutes (same as the DECnet copy) but 6-7 mins on OpenVMS.


I suppose I am bewildered, hoping for consistently great network speeds no matter what tools I use, coming from a MicroVAX at half-duplex 10Mb speed.

Jim_McKinney
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Yes, each transfer tool differs. The more the tool does the slower it will be. COPY and FTP are slim and fast. BACKUP is a bit fatter and takes a bit longer. SFTP is slower still. And, my experiences with NFS find it to be exceedingly slow.

>> The same .bck file Linux box to Linux box using sftp, with similar NIC and disk storage architecture takes 2 minutes (same as DECNET copy) but 6-7 mins on OVMS.

All the prior respondents to this thread point to potential speed bumps, and to strategies to determine whether they exist and to cure some of them; any and all are possible. I would have expected your DECnet transfer to be quicker than any Linux SFTP transfer of the same file. Perhaps it's time to take a look at whether you have disk/file-oriented issues or NIC/wire/configuration issues. If there are intervening routers, perhaps there's even some profiling and prioritization of packets occurring (probably not likely, but possible)?

Or possibly some TCP tuning is required; a TCPDUMP of the transfer would help here. How big are the packets being transferred? Any packet fragmentation? Optimal window sizes? Is one side stalling? For that matter, a TCPDUMP would probably help regardless: you could at least determine whether one side is waiting on the other, and that'd be a start.
Robert Gezelter
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,

With all due respect, I must somewhat differ with Jim.

BACKUP is fatter, particularly if it is doing significant non-transfer scan processing (e.g., /INCREMENTAL, wildcard selections that are sparse).

For actual transfers, BACKUP and COPY should be comparable over DECnet. They both use RMS remote file access.

However, there is a difference. COPY has a good idea of how big the output file will be in advance; BACKUP does not. If your destination volume has a small extend size, this can be particularly painful. Five blocks (2,560 bytes, at 1Gb/sec) means very frequent file extensions, which are expensive in a number of ways.

Modifying the LOGIN.COM of the target account with SET RMS/BUFFER_COUNT=nn/BLOCK_COUNT=nn/EXTEND_QUANTITY=nn dependent on IF F$MODE() .EQS. "NETWORK" has quite a measurable impact in many cases.
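A minimal LOGIN.COM fragment along those lines might look like this; the specific nn values are illustrative only and should be tuned for your environment:

```
$! Bump RMS defaults only for network-mode (remote file access) processes
$ IF F$MODE() .EQS. "NETWORK" THEN -
    SET RMS_DEFAULT/BUFFER_COUNT=8/BLOCK_COUNT=32/EXTEND_QUANTITY=65535
```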

While I have not done timing tests recently, I would be unsurprised if SFTP and FTP benefited as well.

- Bob Gezelter, http://www.rlgsc.com
MarkOfAus
Valued Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Robert,

"While I have not done timing tests recently, I would be unsurprised if SFTP and FTP benefited as well. "

I can testify to that. We used SET RMS et al. as you detailed to vastly increase the transfer speed of a 500MB file transferred from Windows to VMS.

Regards
Mark