05-12-2006 12:22 AM
poor FTP performance with large directories
We are experiencing poor ftp server performance when accessing a directory with many (30,000+) files.
This seems to be the case regardless of ftp client (tried several different Windows GUI clients, DOS command prompt ftp, VMS ftp client, Unix ftp client). This is also the case using the VMS ftp client directly on the ftp server.
The slowness is evident just in doing an ls or dir to list the directory contents. It takes about 15 minutes for the listing to finish. Strangely, the directory listing is fast at first, until it gets somewhere in the range of 10,000 - 15,000 files; then it slows drastically until the listing is complete.
I upped the ACP_DIRCACHE setting to 3072 (previously 1712), but there didn't seem to be any effect. I also defined the TCPIP$FTP_WNDSIZ logical to 8192, again with no noticeable effect.
For the sake of comparison, I set up a Pathworks share on this same directory; using a mapped drive, I can retrieve a directory listing in about 1 minute 20 seconds. I also exported the directory via NFS, and over a remote mount a directory listing takes about 8 minutes, which is slow but still roughly twice as fast as FTP. I monitored via TCPDUMP but didn't notice any obvious problems. I also installed the HGFTP server, and it exhibited the same problems. Is this simply a problem inherent in FTP with large directories, or are there other tuning parameters I should try?
HP TCP/IP Services for OpenVMS Alpha Version V5.4 - ECO 5 running OpenVMS V7.3-2.
Does anyone out there use FTP on large directories such as this? What kind of performance do you see? To avoid any changes to the application, we need to FTP from this directory as-is. The obvious solution would be to move the files of interest into a smaller directory and FTP from there, but I don't know if that will be possible.
Thank you for your help.
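For reference, the tuning steps described above look roughly like the following sketch (the parameter and logical names are those mentioned in the post; exact qualifiers may differ by system and TCP/IP Services version):

```
$! Inspect the XQP directory cache parameter (make persistent changes
$! via a MODPARAMS.DAT entry and AUTOGEN rather than writing ACTIVE)
$ MCR SYSGEN
SYSGEN> SHOW ACP_DIRCACHE
SYSGEN> EXIT
$
$! Enlarge the FTP server's TCP window via the logical named in the post
$ DEFINE/SYSTEM/EXECUTIVE_MODE TCPIP$FTP_WNDSIZ 8192
```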
05-12-2006 12:55 AM
Re: poor FTP performance with large directories
$ dir
Does the listing really need to display all 30,000 files locally?
Can you use a search list and split the 30,000 files into, say, 10 directories of 3,000 files each? How long does a DIR take to display 3,000 files?
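A search list is just a logical name with several translations, along these lines (the directory names here are hypothetical placeholders):

```
$! Hypothetical layout: the files split across several smaller directories
$ DEFINE BIGDIR DISK1:[FILES01], DISK1:[FILES02], DISK1:[FILES03]
$ DIRECTORY BIGDIR:*.*;*   ! each directory in the list is searched in turn
```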
05-12-2006 01:43 AM
Re: poor FTP performance with large directories
I happen to have a directory with 29549. files on our rx2600 on a USB disk.
FTP> LS takes about 2:45 minutes
FTP> DIR takes about 4:02 minutes
$ DIR node::dna0:
$ DIR/DAT=CRE/SIZ=ALL/OWN node::dna0:
Output is sent to a remote screen (-VPN-DSL-Powerterm) in all cases, so it's comparable. All tests have been done between the same pair of nodes.
CPU load increases on the FTP/DECnet server node over time during this operation: high kernel mode, caused by the file system (check MONI FILE,FCP).
So DECnet and FTP remote file lookups seem to perform comparably.
Volker.
05-12-2006 01:58 AM
Re: poor FTP performance with large directories
$ DIR dn0:
Output still goes to the screen.
The major difference between a local DIR and FTP/DECnet is that MONI FCP shows a File Lookup Rate of 0 in the local case. The File Lookup Rate is HIGH during the FTP/DECnet test; it then suddenly drops to about 50% while the Dir Data Attempt Rate (MONI FCP) jumps into the 200,000s, with a Hit Rate of 100%.
Volker.
05-12-2006 02:39 AM
Re: poor FTP performance with large directories
The FTP LIST command is equivalent to DIR/SIZE/DATE/OWNER/PROTECTION. This requires reading the directory file and then opening each file it references to get the specifics about that file.
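In DCL terms, the server-side work for a LIST is roughly the following (the directory spec is a placeholder):

```
$! One file-header read per file, in addition to reading the .DIR file itself
$ DIRECTORY/SIZE/DATE/OWNER/PROTECTION DISK1:[BIGDIR]*.*;*
```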
05-12-2006 02:48 AM
Re: poor FTP performance with large directories
$ DIR dn0:
$ DIR/OWN/SIZ=ALL/DAT=CRE dn0:
As soon as information has to be read from the file headers on disk, performance drops significantly. Still no File Lookup operations are performed for local access.
Both DECnet and FTP do lots of File Lookup operations (see MONI FCP). This seems to be the major difference.
Volker.
05-12-2006 03:10 AM
Re: poor FTP performance with large directories
For my
This seems to be a comparable operation with similar performance to what FTP server and FAL seem to be doing, less the actual transfer of the data over the net.
So this does not seem to have anything to do with FTP or DECnet. Handling of large directories seems to be the problem here...
Volker.
05-12-2006 10:34 AM
Re: poor FTP performance with large directories
Thank you for your replies. Here are the answers to some of the questions:
labadie: doing a local VMS DIR command is very fast; it takes less than 5 seconds to display all files (this is in a terminal emulator session over the network). I created some search-list logicals and tried using them via FTP. The first search list contained just over 5,000 files, and the ls command was fast. The second contained around 10,000 files and was fast at first, but slowed considerably part way through. It slowed at about the same point in the listing as an ls on the full directory does.
Richard: it appears that the DOS command prompt ftp uses NLST by default when issuing an ls command. At the end of an ls, the output says "226 NLST Directory transfer complete". So even with NLST I am seeing very slow performance.
Volker: I agree with your assessment. When doing MONITOR FILE during the FTP listing of the large directory, the "Dir Data" attempt rate is around 900 early on, while the listing is still quite fast. Then, when the listing slows down, the "Dir Data" attempt rate jumps to around 10,000! The Hit % also starts to drop from 99% to the low 80s. That is why I thought upping the ACP_DIRCACHE setting would help, but it didn't seem to have any impact. You seem to be getting much better performance than I am, even with about the same number of files. Would you mind comparing the attached listing of ACP settings to what you have set on your server and letting me know how they compare, or posting your ACP settings? Or maybe there is some other setting I should be considering?
Thank you again!
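For anyone wanting to make the same comparison, the ACP settings can be listed with standard SYSGEN usage:

```
$ MCR SYSGEN
SYSGEN> SHOW /ACP      ! lists all ACP_* parameters, including ACP_DIRCACHE
SYSGEN> EXIT
```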
05-12-2006 05:07 PM
Re: poor FTP performance with large directories
I've attached the ACP settings of our rx2600 V8.2 system for comparison. ACP_DIRCACHE is (explicitly) set to 8000 on our system, as I was testing SAMBA with large directories some time ago.
My
And again, a simple F$SEARCH loop provides the SAME load and results.
My gut feeling is that, once the dir cache is filled, the XQP spends a lot of CPU time searching the cached dir data.
I've created some nice T4 data (10-second sampling rate) comparing an F$SEARCH(large_dir) loop and a DIR node::
Volker.
05-12-2006 07:07 PM
Re: poor FTP performance with large directories
- write a little DCL procedure to do about 100 file lookups per second in a large directory (e.g. 29549. files in a 2780-block .DIR file in my case):
$ cnt = 0
$loop:
$ file = F$SEARCH("large_dir:*.*;*")  ! file spec was cut off in the original post; placeholder shown
$ IF file .EQS. "" THEN EXIT
$ WRITE SYS$OUTPUT "''cnt' - ''file'"
$ WAIT 0:0:0.01  ! wait 10 ms
$ cnt = cnt + 1
$ GOTO loop
- run the procedure (on an idle system) and watch with $ MONITOR FILE,FCP,MODE
In the beginning, you'll see about 80-90 File Lookups per second and about 90 Dir Data attempts/sec; kernel-mode time is low.
Then (after about 10,500 files; the exact point may vary with filename distribution, filename length, etc.), you'll see a sharp increase in kernel-mode time and a huge increase in Dir Data attempts/sec (up to 120,000 in my case). The Dir Data Hit Rate is still 100%, so all dir data is in memory, but the XQP seems to burn lots of CPU cycles to 'find' the next filename.
I assume the problem is the inefficiency of the directory index cache implementation for 'large' directories, which results in sequential searches through the directory data.
Volker.