tuning large filesystem
Operating System - HP-UX
10-30-2002 11:51 AM
I have an unusual filesystem: it has over 7 million small files and 5 million directories. Processes have a hard time searching through it, and backups time out on it while trying to build its data structures in memory.
I know there are ways to change block sizes for filesystems with many small files. Is that appropriate in this case? What else would you recommend for reorganizing such a filesystem? Best practices?
/opt/vista30 (/dev/vg130/vistafs):
    8192         file system block size        2048       fragment size
    97140736     total blocks                  1686464    total free blocks
    1633786      allocated free blocks         15431728   total i-nodes
    421610       total free i-nodes            421610     allocated free i-nodes
    1075249153   file system id                vxfs       file system type
    0x10         flags                         255        file system name length
    /opt/vista30 file system specific string
Thanks in advance,
Dimitry
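Before reorganizing a tree this size, it helps to know where the 7 million files actually live. A minimal POSIX shell sketch (the path argument is an example; substitute your own mount point) that counts files under each top-level subdirectory, largest first, so candidate split points stand out:

```shell
# Sketch: count regular files under each top-level subdirectory of a
# mount point, largest subtree first, to pick candidate split points.
survey_tree() {
    for d in "$1"/*/; do
        [ -d "$d" ] || continue                        # skip if glob matched nothing
        n=$(find "$d" -type f | wc -l | tr -d ' ')     # files under this subtree
        printf '%s %s\n' "$n" "$d"
    done | sort -rn                                    # biggest subtrees first
}

# Example (hypothetical): survey_tree /opt/vista30
```

On a tree with millions of entries this walk is itself slow, so run it once off-hours rather than repeatedly.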
Solved!
3 REPLIES
10-30-2002 12:19 PM
Re: tuning large filesystem
Hi Dimitry:
Divide and conquer. Smaller is better (read, faster).
Regards!
...JRF...
10-30-2002 12:26 PM
Solution
I doubt that any block-size tuning is really going to help at all. In your case I would divide this one filesystem into several mount points. I hope you have OnlineJFS, because another thing that would probably help is running fsadm -F vxfs -d -e on a regular basis to reorganize your directories and extents. I wouldn't dream of doing a reorg until you have split the filesystem.
If it ain't broke, I can fix that.
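The fsadm reorganization suggested above lends itself to a regular (e.g. nightly cron) run. A sketch, assuming HP OnlineJFS is installed; the mount points are hypothetical stand-ins for the split filesystems, and the script only prints the commands until the run line is uncommented:

```shell
# Sketch: build the OnlineJFS reorganization command for a mount point.
# fsadm -F vxfs -d -e reorganizes directories (-d) and extents (-e).
reorg_cmd() {
    printf 'fsadm -F vxfs -d -e %s\n' "$1"
}

# Hypothetical mount points after splitting the original filesystem:
for mp in /opt/vista30a /opt/vista30b; do
    reorg_cmd "$mp"               # print what would run
    # $(reorg_cmd "$mp")          # uncomment to actually run (root, OnlineJFS)
done
```

Printing first and executing only after review is deliberate: a dry run is cheap insurance before letting cron reorganize millions of directory entries unattended.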
10-30-2002 12:40 PM
Re: tuning large filesystem
Overall performance with millions of files and directories can only be improved with a solid-state disk. The problem is that directory searches (such as backups, or processes and scripts that search for files using a mask) are done completely in the kernel, as this is filesystem management. The sizes of the files are not important; it's the sheer number of entries that must be traversed to locate a specific file.
If you think we're saying this is a bad design for data structures, you're right! If this is a small mount point, say less than 2 GB, then a solid-state disk should cost less than $10,000 and will give you adequate performance, assuming you have fast (500 MHz) processors to handle the kernel code.
Otherwise, the directory structures cannot be optimized as they are scattered throughout the volume. The only fix is to use a real database rather than using a filesystem to keep track of individual items.
Bill Hassell, sysadmin
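A related mitigation, not raised in the thread but worth noting as a sketch: if the application can be changed, fanning files out into hashed subdirectories keeps any single directory from holding millions of entries, so each lookup traverses far fewer of them. The 256-bucket scheme below is purely illustrative; cksum's CRC is specified by POSIX, so the bucket for a given name is stable across runs:

```shell
# Sketch: map a file name to a two-level hashed subdirectory so entries
# spread across 256 buckets instead of one huge directory.
hashed_path() {
    sum=$(printf '%s' "$1" | cksum)                   # "CRC byte-count"
    bucket=$(printf '%02x' $(( ${sum%% *} % 256 )))   # 2-hex-digit bucket
    printf '%s/%s\n' "$bucket" "$1"
}

# Example (hypothetical name): hashed_path invoice-000123.dat
# -> "<two hex digits>/invoice-000123.dat"
```

The trade-off is that directory listings no longer group related files, so this suits applications that always open files by exact name.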