Operating System - OpenVMS

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

 
Bob Blunt
Respected Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart, check the NIC and the switch to ensure that both are at the same speed and duplex. Make sure that the latest LAN patches are installed. Jumbo frames seem to work best in very limited environments and only with certain stacks and applications. Their implementation can make things worse in some situations.

There are also some configurations that have shown problems with specific NIC cards on certain machines. Knowing what sort of Integrity box and what kind of NIC could help a lot.

The HP IP stack has supposedly been fixed to eliminate the need to preallocate an entire file when transferring with FTP (older versions worked best when you set up a logical name, TCPIP$FTP_FILE_ALQ I think, to preallocate the file either in whole or in chunks). You'd set that to the size of the file you're moving, in OpenVMS disk blocks, and you'll still be limited by how fast that destination disk can be written.
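
If you do want to try that logical on an older stack, a sketch would look like this (the 6000000 here is just a placeholder for your actual file size in disk blocks):

$ DEFINE/SYSTEM TCPIP$FTP_FILE_ALQ 6000000
$ ! ... run the FTP transfer, then clean up
$ DEASSIGN/SYSTEM TCPIP$FTP_FILE_ALQ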
Richard W Hunt
Valued Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

The last time we had this problem, it was that the auto-negotiate wasn't really working as we wanted. It somehow had settled on HDX rather than FDX. We tracked it back and forth for weeks before we found the REAL culprit.

In our particular case we solved the problem by setting some environment variables at the console for the appropriate network devices, because the CONSOLE is the first thing to touch the network cards after a reboot. The console's auto-negotiate was the problem. The variables WE had to use were based on the device names.

ewa0_mode FastFD
ewa1_mode FastFD etc. on the older machines

eia0_mode FastFD
eia1_mode FastFD etc. on the newer machines

You will need to do a console-level SHOW DEVICES to see your network device names and use the corresponding device names as shown in my examples.
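
For example, at an Alpha SRM console the sequence would look something like this (your device names will almost certainly differ):

>>> SHOW DEVICE
>>> SET ewa0_mode FastFD
>>> SHOW ewa0_mode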

We chalked this up to the concept that sometimes you lose in negotiations - which is why you always explicitly ask for what you want and don't just settle for what you get.
Sr. Systems Janitor
Volker Halle
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Richard,

The advice you've given is OK for Alpha network interfaces. In this case it's about an Itanium system.

Stuart,

use MC LANCP SHOW DEVICE/INTERNAL_COUNTERS and look at the LAN driver console messages (at the bottom of the display) to find out whether there have been any problems during auto-negotiation.

Volker.
Hoff
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Profile this system and this network environment for the bottleneck (don't guess!), and ensure that the NIC and the Cisco have negotiated speeds and feeds correctly. (OpenVMS I64 expects/requires auto-negotiation on both ends of the connection to the switch.) Anything else is premature.

ftp is an old and slow protocol in general; it's not known for delivering bandwidth under even the best of circumstances, and it's largely incompatible with modern IP network designs.

I'd use a different tool than ftp for testing network bandwidth; not because ftp is poorly designed and insecure (it is), but because it is intrinsically tied to the performance of the file system and related pieces. It covers too much to be a good basic performance test.
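
One quick way to take the destination file system out of the measurement, assuming the far end is an OpenVMS FTP server, is to send a large file to the NL: null device so only the wire and the stacks are exercised:

$ FTP nodex
FTP> PUT BIGFILE.DAT NL:
FTP> EXIT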
MarkOfAus
Valued Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Stuart,
What O/S are you copying to? If VMS, maybe try SET RMS_DEFAULT/EXTEND_QUANTITY=nnn on the receiving end.
You might also like to check this:
http://h71000.www7.hp.com/doc/83final/6048/6048pro_100.html

Specifically, look at the value of your NPAGEDYN in modparams.dat
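
To check both from DCL (SET RMS_DEFAULT changes only the process default unless you add /SYSTEM, and SYSGEN here is only reading the parameter, not changing it):

$ SET RMS_DEFAULT/EXTEND_QUANTITY=65535
$ MCR SYSGEN
SYSGEN> SHOW NPAGEDYN
SYSGEN> EXIT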

Regards,
Mark.
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Source Host (1): rx2660 (purchased 2010)
with 4Gb HBA and Gb NICs (no teaming)
Boot and data disks all on the SAN

Storage: HP EVA 6400 with FC disks through SAN switches with 4Gb SFPs. Port speeds set to match the HBAs

Destination Host (2): ProLiant DL360 G5
with 4Gb HBA and Gb NICs
Data disk on SAN

Method 1:
NFS export directory on host (2). Mount on host (1)

On (1) executed BACKUP command of data disk (3GB data) to the NFS mount point.
Time taken: 58 mins

Method 2:
sftp the resultant (3GB) .bck file from (2) back to (1) and vice versa
Time taken: avg 6 min 30 sec

Method 3:
Using a second rx2660 (3) of the same architecture, use the COPY command to send 3GB between the two OpenVMS boxes
Time taken: 2 mins

DECnet Phase IV is in use.

So how can I get the same speed at the IP layer?
It is achievable over DECnet, which I presume the COPY command was using.
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Apologies, I omitted to mention that Host (2) is running SuSE Linux 10 SP2
Jim_McKinney
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

>> So how can I get the same speed on IP layer? As it is achievable on DECNET which I presume COPY command was using.

Not really a question of "slow net performance" but more about the difference in protocols. COPY is very lightweight; SFTP is not. With SFTP you have all the overhead of encryption on one end and decryption on the other. I just ran a small test here where I copied a 20,000-block file from one node to another's NL: device using proxied DECnet access and found it completed in approximately 2 seconds. Then I copied the same file over the same clean 100 Mb/s link using SFTP with keyed authentication to the same NL: device and it took 13 seconds. The test script looks like this:

show time
copy/log junk.zip nodex::nl:
show time
sftp nodex
cd NL:
put junk.zip
exit
show time

FTP would be lots faster than SFTP - if you need/want the security of encryption you'll never approximate COPY's performance.
Stuart Green
Frequent Advisor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Thanks for response Jim, and others, interesting comments.

So there are marked differences depending on the method you use to transfer data across the wire. Do I just accept this as is?

Yes COPY file node"user pass"::disk:[000000] was used.

So where does the BACKUP command fit in the whole scheme of things?
I can run the BACKUP command to the same disk the data is on and it takes 5 minutes. Using an NFS mount point as the destination it takes nearly 1 hour?

The same .bck file from Linux box to Linux box using sftp, with similar NIC and disk storage architecture, takes 2 minutes (same as the DECnet copy) but 6-7 mins on OpenVMS.


I suppose I am bewildered, hoping for consistently great network speeds no matter what tools I use, coming from a MicroVAX at half-10 speed.

Jim_McKinney
Honored Contributor

Re: V8.3-1H1 Installed with Gigabit nics and slow net performance

Yes, each transfer tool differs. The more the tool does the slower it will be. COPY and FTP are slim and fast. BACKUP is a bit fatter and takes a bit longer. SFTP is slower still. And, my experiences with NFS find it to be exceedingly slow.

>> The same .bck file Linux box to Linux box using sftp, with similar NIC and disk storage architecture takes 2 minutes (same as DECNET copy) but 6-7 mins on OVMS.

All the prior respondents to this thread point to potential speed bumps and strategies to determine whether they exist and cure some of them; any and all are possible. I would have expected your DECnet transfer to be quicker than any Linux SFTP transfer of the same file. Perhaps it's time to take a look at whether you have disk/file-oriented issues or NIC/wire/configuration issues. If there are intervening routers, perhaps there's even some profiling and prioritization of packets occurring (probably not likely, but possible)?

Or possibly some TCP tuning is required; a TCPDUMP of the transfer would help here. How big are the packets being transferred? Any packet fragmentation? Optimal window sizes? Is one side stalling? For that matter, a TCPDUMP would probably help regardless; you could at least determine whether one side is waiting on the other. That'd be a start.
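
If your TCP/IP Services kit includes the bundled tcpdump image, it can be run as a foreign command; a sketch, assuming the interface is WE0 and the peer is NODEX (adjust both for your system):

$ tcpdump :== $SYS$SYSTEM:TCPIP$TCPDUMP.EXE
$ tcpdump -i we0 host nodex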