<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Cache blocked in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853693#M94243</link>
    <description>Hello Shahu.&lt;BR /&gt;&lt;BR /&gt;I would be more concerned about frame performance. You have introduced parity volumes to your stripe set. You may consider revisiting your stripe size to align it with the average I/O size on your volumes.&lt;BR /&gt;Crossing the same I/O channels and the same spindles may also introduce performance bottlenecks. If you have several partitions on one physical drive, that will impact performance of the whole disk and, eventually, performance of the partition you are writing to.&lt;BR /&gt;Small writes on RAID 5 can hurt performance because you are still writing the whole stripe set, effectively increasing 'busy' timeouts for the other partitions. Performance suffers even more because parity has to be recalculated even if you change a single bit.&lt;BR /&gt;So, my suggestion would be to check the layout of your partitions on the array and optimize it.&lt;BR /&gt;How you copy the data matters less here; switching between tar and cpio will not drastically improve your performance.&lt;BR /&gt;&lt;BR /&gt;Increasing dbc_max_pct might help, as mentioned above, as might the mount options.&lt;BR /&gt;&lt;BR /&gt;I would also check with your DB vendor whether you can use fs_async.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;0leg</description>
    <pubDate>Wed, 27 Nov 2002 19:01:31 GMT</pubDate>
    <dc:creator>Oleg Zieaev_1</dc:creator>
    <dc:date>2002-11-27T19:01:31Z</dc:date>
    <item>
      <title>Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853688#M94238</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I have a problem, and I hope someone here can offer a quick solution. On my N-class box we have Oracle loaded. I have a weekly script that shuts down the database and copies data from one filesystem to another for backup purposes. This script used to complete within 4 hours. Now, all of a sudden, the script is taking an extremely long time. The command I am using for copying is&lt;BR /&gt;&lt;BR /&gt;tar cvf - ./* | tar xvf - /&lt;DIR&gt;/.&lt;BR /&gt;&lt;BR /&gt;When I go into glance and check the details, I can see 90% blocked on cache. Does anyone have any suggestions?&lt;BR /&gt;&lt;BR /&gt;The only change we have made is on the array: some striped disks were changed to RAID 5. But that has nothing to do with these particular filesystems or VGs.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Shahul</description>
      <pubDate>Wed, 27 Nov 2002 17:10:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853688#M94238</guid>
      <dc:creator>Shahul</dc:creator>
      <dc:date>2002-11-27T17:10:06Z</dc:date>
    </item>
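    <!--
The tar pipeline quoted above relies on the extracting tar running inside the destination directory. A minimal, self-contained sketch of the idiom, using throwaway temporary directories rather than the poster's real filesystems:

```shell
set -e
SRC=$(mktemp -d)   # stand-in for the source filesystem
DEST=$(mktemp -d)  # stand-in for the destination filesystem
echo "table data" > "$SRC/datafile.dbf"

# Piping one tar into another avoids an intermediate archive file.
# Note: 'tar xf' extracts into the current directory, so the reading
# side must cd into the destination first.
(cd "$SRC" && tar cf - .) | (cd "$DEST" && tar xf -)

cat "$DEST/datafile.dbf"   # prints: table data
```

The same pattern applies to the real filesystems by replacing SRC and DEST with the actual mount points.
    -->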
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853689#M94239</link>
      <description>Do the disks you're tar'ing to and from share the same I/O chain as the disks that are now RAID 5? How has performance been for the RAID disks since the change?&lt;BR /&gt;&lt;BR /&gt;One thing that will help is to use cpio instead of tar. It's much quicker for disk-to-disk copies:&lt;BR /&gt;&lt;BR /&gt;$ cd /source_dir&lt;BR /&gt;$ find . | cpio -pudlmv /destination_dir&lt;BR /&gt;&lt;BR /&gt;Omit the 'v' (verbose) option from cpio for more speed.&lt;BR /&gt;&lt;BR /&gt;Darrell</description>
      <pubDate>Wed, 27 Nov 2002 17:23:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853689#M94239</guid>
      <dc:creator>Darrell Allen</dc:creator>
      <dc:date>2002-11-27T17:23:52Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853690#M94240</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Did you change the dbc_max_pct parameter in your kernel recently?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Justo.</description>
      <pubDate>Wed, 27 Nov 2002 17:25:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853690#M94240</guid>
      <dc:creator>Justo Exposito</dc:creator>
      <dc:date>2002-11-27T17:25:36Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853691#M94241</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Thanks for the replies. The changes I made were in some other VGs, not in this one; those were RAID 5 before and still are. That is why I mentioned it has nothing to do with this FS. Anyway, my source VG is this:&lt;BR /&gt;&lt;BR /&gt;/dev/vg07&lt;BR /&gt;&lt;BR /&gt;-------------------&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t1d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t2d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t0d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t0d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t1d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t1d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t1d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t2d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t3d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t2d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c15t3d0&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d1&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d2&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d3&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d4&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d5&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t3d6&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t3d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c4t2d7&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;And the destination VG is&lt;BR /&gt;&lt;BR /&gt;/dev/vg08&lt;BR /&gt;&lt;BR /&gt;-------------------&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c6t7d7&lt;BR /&gt;&lt;BR /&gt;/dev/dsk/c16t7d7&lt;BR /&gt;&lt;BR /&gt;This output is from /etc/lvmtab.&lt;BR /&gt;&lt;BR /&gt;As for kernel parameters, I have not changed any recently.&lt;BR /&gt;&lt;BR /&gt;I hope this helps.&lt;BR /&gt;&lt;BR /&gt;Rgds&lt;BR /&gt;Shahul&lt;BR /&gt;</description>
      <pubDate>Wed, 27 Nov 2002 17:38:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853691#M94241</guid>
      <dc:creator>Shahul</dc:creator>
      <dc:date>2002-11-27T17:38:55Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853692#M94242</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;What version of HP-UX are you running?  What are the mount options on the filesystems?  Are you using 'mincache=direct'?  How much buffer cache do you have configured?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Wed, 27 Nov 2002 18:19:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853692#M94242</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-11-27T18:19:05Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853693#M94243</link>
      <description>Hello Shahu.&lt;BR /&gt;&lt;BR /&gt;I would be more concerned about frame performance. You have introduced parity volumes to your stripe set. You may consider revisiting your stripe size to align it with the average I/O size on your volumes.&lt;BR /&gt;Crossing the same I/O channels and the same spindles may also introduce performance bottlenecks. If you have several partitions on one physical drive, that will impact performance of the whole disk and, eventually, performance of the partition you are writing to.&lt;BR /&gt;Small writes on RAID 5 can hurt performance because you are still writing the whole stripe set, effectively increasing 'busy' timeouts for the other partitions. Performance suffers even more because parity has to be recalculated even if you change a single bit.&lt;BR /&gt;So, my suggestion would be to check the layout of your partitions on the array and optimize it.&lt;BR /&gt;How you copy the data matters less here; switching between tar and cpio will not drastically improve your performance.&lt;BR /&gt;&lt;BR /&gt;Increasing dbc_max_pct might help, as mentioned above, as might the mount options.&lt;BR /&gt;&lt;BR /&gt;I would also check with your DB vendor whether you can use fs_async.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;0leg</description>
      <pubDate>Wed, 27 Nov 2002 19:01:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853693#M94243</guid>
      <dc:creator>Oleg Zieaev_1</dc:creator>
      <dc:date>2002-11-27T19:01:31Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853694#M94244</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Now I am trying cpio. It seems to make a small difference, but not as much as expected. dbc_max_pct is 15 and dbc_min_pct is 5; anyway, I have not changed those, and apart from that, an outage is a dream. 'Blocked on cache' is lower with cpio than with tar. Right now it is taking 17 minutes for 3.5 GB. Is that reasonable? I think it could be much faster than this, since it is within the same system and on different channels.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;&lt;BR /&gt;Shahul</description>
      <pubDate>Wed, 27 Nov 2002 19:10:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853694#M94244</guid>
      <dc:creator>Shahul</dc:creator>
      <dc:date>2002-11-27T19:10:05Z</dc:date>
    </item>
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853695#M94245</link>
      <description>I agree with Darrell about using cpio instead of tar (for disk-to-disk copies). The throughput you're getting is apparently pretty good, at a rate of about 210 MB/min. I tested on my local standalone UltraSCSI drives and got something like 35 MB in 20 seconds of cpio, which works out to about 105 MB/min. Down the road you may want to investigate a more efficient tool for frequent data-file synchronization. Here we use "rsync", which does a better job than any of the native tools.</description>
      <pubDate>Wed, 27 Nov 2002 21:11:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853695#M94245</guid>
      <dc:creator>S.K. Chan</dc:creator>
      <dc:date>2002-11-27T21:11:21Z</dc:date>
    </item>
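    <!--
The throughput figures in this thread can be sanity-checked with quick arithmetic (taking 1 GB = 1024 MB): Shahul's 3.5 GB in 17 minutes versus the 35 MB in 20 seconds measured above.

```shell
# Shahul: 3.5 GB copied in 17 minutes
awk 'BEGIN { printf "%.0f MB/min\n", 3.5 * 1024 / 17 }'   # about 211 MB/min

# S.K. Chan: 35 MB in 20 seconds of cpio on standalone UltraSCSI drives
awk 'BEGIN { printf "%.0f MB/min\n", 35 / 20 * 60 }'      # 105 MB/min
```

So the array copy is moving roughly twice as much data per minute as the standalone UltraSCSI test, which supports the "apparently pretty good" assessment.
    -->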
    <item>
      <title>Re: Cache blocked</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853696#M94246</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Thanks to all. I have achieved a little more improvement by recreating the whole target filesystem, because I found that the block size of the target filesystem was smaller than the source's. But I am still hungry for speed...&lt;BR /&gt;&lt;BR /&gt;Still searching..&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Shahul</description>
      <pubDate>Fri, 29 Nov 2002 10:26:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cache-blocked/m-p/2853696#M94246</guid>
      <dc:creator>Shahul</dc:creator>
      <dc:date>2002-11-29T10:26:38Z</dc:date>
    </item>
  </channel>
</rss>

