<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: file system size and system performance relationship in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375203#M347811</link>
    <description>The total space in the filesystem is not important, so there is no 'proper' size. What matters is the quantity of files and directories at a single level. You can create twenty 500GB files and the filesystem will be almost full, but accessing it with tools like ls or du will be very fast, just as with smaller filesystems. However, if there are thousands or millions of files in one directory, interactive tools like ls or find will appear to be very, very slow. They process file names just as fast as in a small directory, but there is 100 or even 1000 times more content, so reporting results takes much longer. And of course, using "*" as a filename filter, especially as input to a backup program like tar, will fail with "arg list too long".&lt;BR /&gt; &lt;BR /&gt;However, 1.2 GB files will be a good fit, especially when segregated into lots of directories, such as by project or user ID. Most commercial backup programs handle small and large files quite easily. In your example, the GB-sized files will be easy to back up.</description>
    <pubDate>Tue, 10 Mar 2009 00:17:17 GMT</pubDate>
    <dc:creator>Bill Hassell</dc:creator>
    <dc:date>2009-03-10T00:17:17Z</dc:date>
    <item>
      <title>file system size and system performance relationship</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375201#M347809</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Our system admin defined a huge 12 TB file system using an EMC NS40. I think it causes a system bottleneck when backing up data with the backup library, as well as when maintaining such a big file system.&lt;BR /&gt;&lt;BR /&gt;Does anyone have a good white paper on file systems and system performance? Or can someone give me some advice on the proper size of a filesystem?&lt;BR /&gt;&lt;BR /&gt;Most of our users use 1.2 GB flat files to design chips.&lt;BR /&gt;&lt;BR /&gt;Thank you in advance,&lt;BR /&gt;&lt;BR /&gt;Jeongbae</description>
      <pubDate>Mon, 09 Mar 2009 21:39:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375201#M347809</guid>
      <dc:creator>Jeongbae Min</dc:creator>
      <dc:date>2009-03-09T21:39:02Z</dc:date>
    </item>
    <item>
      <title>Re: file system size and system performance relationship</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375202#M347810</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;Large filesystems do not have performance problems per se.  Large numbers of files in a filesystem (regardless of its size), where searches ('find') span the whole extent of the filesystem's root directory, _are_ costly to performance.  &lt;BR /&gt;&lt;BR /&gt;The mount options you use and the way you use a filesystem (e.g. many additions and deletions; frequent metadata changes) can also help or hinder performance.  &lt;BR /&gt;&lt;BR /&gt;One nice white paper on JFS filesystem performance and tuning can be found here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.docs.hp.com/en/5576/JFS_Tuning.pdf" target="_blank"&gt;http://www.docs.hp.com/en/5576/JFS_Tuning.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Mon, 09 Mar 2009 23:28:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375202#M347810</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2009-03-09T23:28:27Z</dc:date>
    </item>
    <item>
      <title>Re: file system size and system performance relationship</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375203#M347811</link>
      <description>The total space in the filesystem is not important, so there is no 'proper' size. What matters is the quantity of files and directories at a single level. You can create twenty 500GB files and the filesystem will be almost full, but accessing it with tools like ls or du will be very fast, just as with smaller filesystems. However, if there are thousands or millions of files in one directory, interactive tools like ls or find will appear to be very, very slow. They process file names just as fast as in a small directory, but there is 100 or even 1000 times more content, so reporting results takes much longer. And of course, using "*" as a filename filter, especially as input to a backup program like tar, will fail with "arg list too long".&lt;BR /&gt; &lt;BR /&gt;However, 1.2 GB files will be a good fit, especially when segregated into lots of directories, such as by project or user ID. Most commercial backup programs handle small and large files quite easily. In your example, the GB-sized files will be easy to back up.</description>
      <pubDate>Tue, 10 Mar 2009 00:17:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/file-system-size-and-system-performance-relationship/m-p/4375203#M347811</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2009-03-10T00:17:17Z</dc:date>
    </item>
  </channel>
</rss>

