<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic little doubt... in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684687#M245952</link>
    <description>hi all,&lt;BR /&gt;&lt;BR /&gt;when I execute "rm -rf *" in a folder that contains 390,345 files, it gives an error that the parameter list cannot be handled. I think * cannot handle that huge a number of files; it must have some upper limit. Can somebody tell me up to what number it can accept?</description>
    <pubDate>Mon, 05 Dec 2005 09:49:35 GMT</pubDate>
    <dc:creator>sukumar maddela</dc:creator>
    <dc:date>2005-12-05T09:49:35Z</dc:date>
    <item>
      <title>little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684687#M245952</link>
      <description>hi all,&lt;BR /&gt;&lt;BR /&gt;when I execute "rm -rf *" in a folder that contains 390,345 files, it gives an error that the parameter list cannot be handled. I think * cannot handle that huge a number of files; it must have some upper limit. Can somebody tell me up to what number it can accept?</description>
      <pubDate>Mon, 05 Dec 2005 09:49:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684687#M245952</guid>
      <dc:creator>sukumar maddela</dc:creator>
      <dc:date>2005-12-05T09:49:35Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684688#M245953</link>
      <description>It's not a number but rather a total size, so that not only the number of entries but also the length of each one counts against the ARG_MAX value. Without knowing the version of UNIX you are running, it's not possible to tell you more. Some versions of UNIX have a kernel tunable such as argmax or ncargs that lets you adjust this value. In any event, having that many files in one directory is unwise. You can use the xargs command to process an essentially unlimited number of parameters by dividing the command into manageable chunks; see man xargs for details.</description>
      <pubDate>Mon, 05 Dec 2005 09:55:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684688#M245953</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2005-12-05T09:55:37Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684689#M245954</link>
      <description>Shalom sukumar.&lt;BR /&gt;&lt;BR /&gt;Too many files for one delete; too many arguments.&lt;BR /&gt;&lt;BR /&gt;All programs, even rm, have limits on the number of arguments. In this case * is actually expanded into roughly 390,000 arguments.&lt;BR /&gt;&lt;BR /&gt;Here is a workaround.&lt;BR /&gt;&lt;BR /&gt;ls -1 &amp;gt; filelist&lt;BR /&gt;while read -r filename&lt;BR /&gt;do&lt;BR /&gt;  rm -f "$filename"&lt;BR /&gt;done &amp;lt; filelist&lt;BR /&gt;&lt;BR /&gt;This will get it done.&lt;BR /&gt;&lt;BR /&gt;Also: restructure your storage to prevent this many files from accumulating in one folder. An ls command can take days to execute under these circumstances.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 05 Dec 2005 09:55:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684689#M245954</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2005-12-05T09:55:48Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684690#M245955</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Check Bill's answer in this thread.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=103792" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=103792&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You could use find and xargs.&lt;BR /&gt;&lt;BR /&gt;# find /folder -xdev -type f | xargs rm&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Robert-Jan</description>
      <pubDate>Mon, 05 Dec 2005 09:58:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684690#M245955</guid>
      <dc:creator>Robert-Jan Goossens</dc:creator>
      <dc:date>2005-12-05T09:58:40Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684691#M245956</link>
      <description>check "getconf ARG_MAX"&lt;BR /&gt;It can handle arguments up to this total size.&lt;BR /&gt;&lt;BR /&gt;Try using find with '-exec'; a new process is spawned for every file to remove.&lt;BR /&gt;&lt;BR /&gt;# find /your_dir -type f -exec rm {} \;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Sergejs</description>
      <pubDate>Mon, 05 Dec 2005 10:05:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684691#M245956</guid>
      <dc:creator>Sergejs Svitnevs</dc:creator>
      <dc:date>2005-12-05T10:05:36Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684692#M245957</link>
      <description>First I tried the script; it was using only 39% of the CPU and deleted only 100 files in 2 minutes. But when I deleted directly with rm on a limited number of files (around 2,000), it used 99% of the CPU and removed all 2,000 files in 5 minutes. Now I would like to know how to remove the files without any time delay.</description>
      <pubDate>Mon, 05 Dec 2005 10:38:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684692#M245957</guid>
      <dc:creator>sukumar maddela</dc:creator>
      <dc:date>2005-12-05T10:38:14Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684693#M245958</link>
      <description>If you want to have an empty directory with&lt;BR /&gt;minimal delay, try something like the&lt;BR /&gt;following.  You will still have a delay on releasing space from the old files.&lt;BR /&gt;This will also release the space required&lt;BR /&gt;by the directory.&lt;BR /&gt;&lt;BR /&gt;# Edit next three lines appropriately&lt;BR /&gt;DIRNAME=x&lt;BR /&gt;MODE=640&lt;BR /&gt;OWNER=owner:group&lt;BR /&gt;mkdir newdir.$$&lt;BR /&gt;chmod ${MODE}  newdir.$$&lt;BR /&gt;chown ${OWNER} newdir.$$&lt;BR /&gt;mv ${DIRNAME} ${DIRNAME}.$$&lt;BR /&gt;mv newdir.$$ ${DIRNAME}&lt;BR /&gt;find ${DIRNAME}.$$ -type f -print0 | xargs -0 rm&lt;BR /&gt;rmdir ${DIRNAME}.$$</description>
      <pubDate>Mon, 05 Dec 2005 10:46:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684693#M245958</guid>
      <dc:creator>Bill Thorsteinson</dc:creator>
      <dc:date>2005-12-05T10:46:38Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684694#M245959</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;It will never be instant, but you could try&lt;BR /&gt;&lt;BR /&gt;echo rm -r $(ls -1)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If it works, remove the echo and do&lt;BR /&gt;&lt;BR /&gt;rm -r $(ls -1)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Steve Steel</description>
      <pubDate>Mon, 05 Dec 2005 10:46:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684694#M245959</guid>
      <dc:creator>Steve Steel</dc:creator>
      <dc:date>2005-12-05T10:46:39Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684695#M245960</link>
      <description>HI:&lt;BR /&gt;&lt;BR /&gt;Well, nothing is free.  The most efficient removal is probably going to be achieved by leveraging 'xargs' to bundle groups of filenames for removal by 'rm'.&lt;BR /&gt;&lt;BR /&gt;If you use '-exec' with a 'find', you are going to spawn a new process for every file to remove -- certainly very resource intensive.&lt;BR /&gt;&lt;BR /&gt;Using 'rm' with one file at a time is probably going to be more costly than an 'xargs' solution too, since, again, a new process must be created for each file handled.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Mon, 05 Dec 2005 10:46:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684695#M245960</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2005-12-05T10:46:49Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684696#M245961</link>
      <description>Each unlink takes a finite time, so what you ask can't be done. Large directories are especially bad because, for each unlink, the directory must be locked and written to prevent other processes from updating it at the same time. You will find it far more efficient to rebuild the filesystem than to remove each file.</description>
      <pubDate>Mon, 05 Dec 2005 10:49:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684696#M245961</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2005-12-05T10:49:07Z</dc:date>
    </item>
    <item>
      <title>Re: little doubt...</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684697#M245962</link>
      <description>Hey;&lt;BR /&gt;&lt;BR /&gt;J. Ferguson had the correct answer.&lt;BR /&gt;&lt;BR /&gt;ls | xargs rm&lt;BR /&gt;&lt;BR /&gt;It'll take some time, but it will be much more efficient than looping through each file.&lt;BR /&gt;&lt;BR /&gt;HTH;&lt;BR /&gt;&lt;BR /&gt;Doug</description>
      <pubDate>Mon, 05 Dec 2005 10:54:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/little-doubt/m-p/3684697#M245962</guid>
      <dc:creator>Doug O'Leary</dc:creator>
      <dc:date>2005-12-05T10:54:12Z</dc:date>
    </item>
  </channel>
</rss>

