Operating System - OpenVMS

sftp and %TCPIP-F-SSH_ALLOC_ERROR

 
Volker Halle
Honored Contributor

sftp and %TCPIP-F-SSH_ALLOC_ERROR

A customer reported consistently seeing the following error when executing an 'sftp> ls filename' command:

 

%TCPIP-F-SSH_ALLOC_ERROR, ssh memory allocation error

 

A little research showed that the problem only happened on a directory with about 12500 files on a remote Linux system. Smaller directories were fine.

 

A little test on OpenVMS Alpha V8.3 with TCPIP V5.7 ECO 3 and a fairly recent TCPIP$SSH_SFTP2.EXE (V57-ECO3P, 21-MAR-2013) shows the following behaviour:

 

- created a directory with 1000 files (with names like xx-xxxxxxxxxxxxxxxx_20131031_nnn.CSV)

- invoked sftp "-C" user@vmshost and set default to that directory

- repeated 'ls' a couple of times until %TCPIP-F-SSH_ALLOC_ERROR also appeared

- closer examination of the process running TCPIP$SSH_SFTP2.EXE with SHOW PROC/ACC/ID=<pid> shows "Peak virtual size" increasing by about 105000 pagelets for each execution of the 'ls' command against the 1000 files (see the sketch after this list)

- with a pagefile quota (PGFLQUOTA) of about 500000 pagelets for the current user, the error could be reproduced on the 5th 'ls' command
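
For reference, here is a minimal DCL sketch of such a test setup (the directory, file names and the <pid> placeholder are only examples, not the exact ones used above):

$! 1) Create 1000 empty .CSV files on the VMS host, e.g. in a small
$!    command procedure:
$ CREATE/DIRECTORY [.SFTPTEST]
$ i = 1
$ LOOP:
$   fname = F$FAO("XX-TEST_20131031_!4ZL.CSV", i)
$   COPY NL: [.SFTPTEST]'fname'
$   i = i + 1
$   IF i .LE. 1000 THEN GOTO LOOP
$!
$! 2) Connect with the sftp client, set default to the test directory
$!    and repeat 'ls' a few times:
$ sftp "-C" user@vmshost
$!
$! 3) From a second session, watch "Peak virtual size" grow after each
$!    'ls' (<pid> is the process running TCPIP$SSH_SFTP2.EXE):
$ SHOW PROCESS/ACCOUNTING/ID=<pid>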

 

So to successfully issue an 'sftp> ls' command against a directory with 12500 files, the user would need a PGFLQUOTA of at least around 1.3 million pagelets.
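
The arithmetic behind that estimate, as a quick DCL sanity check (using the rough average of ~105 pagelets per remote file measured above):

$ per_file = 105000 / 1000          ! ~105 pagelets per file
$ needed   = 12500 * per_file       ! pagelets for 12500 files
$ mb       = (needed * 512) / (1024 * 1024)
$ SHOW SYMBOL needed                ! 1312500 - "at least around 1.3 million"
$ SHOW SYMBOL mb                    ! about 640 MB of PGFLQUOTA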

 

Volker.

1 REPLY
John Gillings
Honored Contributor

Re: sftp and %TCPIP-F-SSH_ALLOC_ERROR

Volker,

 

   Thanks for the experimental figures. It's always good to have hard numbers.

 

I'd just like to comment that 1.3M pagelets is just over 600 Mbytes. Ultimately, a PGFLQUOTA is just disk space. At today's prices, the cost of 600 MB is a fraction of a cent (even if it's commercial grade and shadowed a few times). Spending time diagnosing an error for want of such a paltry amount of resource is a waste.

 

  Please make sure your page files are huge, and your quotas high enough that you don't see this kind of problem.
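
As an illustration only (the account name and quota value below are placeholders, suitable privileges are needed to run AUTHORIZE, and the new quota only takes effect at the next login), checking and raising the relevant settings might look like this:

$ SHOW PROCESS/QUOTAS               ! remaining paging file quota of this process
$ SHOW MEMORY/FILES                 ! how full the system page/swap files are
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY SOMEUSER /PGFLQUOTA=2000000
UAF> EXIT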

 

On the other hand, this is already halfway to the P0 limit. Assuming these data structures haven't been explicitly moved to 64-bit P2 space, the next level is to slam into that architectural wall.
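
For a rough feel of that headroom (a back-of-the-envelope calculation; the 1312500 figure is the estimate from the first post, and P0 space is limited to 1 GB):

$ p0_limit = (1024 * 1024 * 1024) / 512   ! 2097152 pagelets in the 1 GB P0 region
$ needed   = 1312500                      ! estimate for the 12500-file directory
$ SHOW SYMBOL p0_limit                    ! compare with the estimate above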

A crucible of informative mistakes