<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: rsync issue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998032#M423955</link>
    <description>Oh, and one other gotcha is sparse files. When sparse files are read, the "missing" bytes are filled in automatically by the read() system call with zeros, and the receiver dutifully writes them out as zeros because it has no way of knowing the file is sparse.</description>
    <pubDate>Thu, 17 Aug 2006 13:34:44 GMT</pubDate>
    <dc:creator>A. Clay Stephenson</dc:creator>
    <dc:date>2006-08-17T13:34:44Z</dc:date>
    <item>
      <title>rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998027#M423950</link>
      <description>We're trying to use rsync to synchronize a file system that resides on a VA7100 to a remote system with an AutoRAID. The VG, LV, and FS were all created with matching sizes (PE size, FS block size, and LV size). However, when we perform the rsync, utilization on the AutoRAID-housed file system is twice the utilization of the VA-housed file system. The one thing I found a little while ago is that the block length of the LUNs on the VA is 520 bytes while the block length of the LUNs on the AutoRAID is 512 bytes. So it seems to me that rsync is working at the physical block level and is sending over the 520-byte blocks, which require two 512-byte blocks on the receiving end. Does this sound like how rsync works?</description>
      <pubDate>Thu, 17 Aug 2006 13:12:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998027#M423950</guid>
      <dc:creator>Jeff_Traigle</dc:creator>
      <dc:date>2006-08-17T13:12:29Z</dc:date>
    </item>
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998028#M423951</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;No, this is not the proper function of rsync.&lt;BR /&gt;&lt;BR /&gt;What really matters is whether the same file ends up at both ends of the connection.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 17 Aug 2006 13:20:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998028#M423951</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-08-17T13:20:40Z</dc:date>
    </item>
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998029#M423952</link>
      <description>The file sizes must be the same, as well as the checksums; how the storage utilization of the underlying filesystem is handled is not rsync's problem. What are you using for your measurements? bdf? du? Or array-specific utilities?&lt;BR /&gt;</description>
      <pubDate>Thu, 17 Aug 2006 13:27:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998029#M423952</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2006-08-17T13:27:58Z</dc:date>
    </item>
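A quick way to see the distinction being drawn here: bdf (df on other systems) reports filesystem-level block usage, while du reports the blocks actually allocated to the files themselves, which is where sparse-file expansion shows up. A minimal sketch, using df/du as portable stand-ins for HP-UX's bdf (the /tmp path is illustrative):

```shell
# Filesystem-level utilization: what bdf reports on HP-UX.
df -k /tmp

# Blocks actually allocated to files under the path; a sparse file
# whose holes were materialized on the receiver shows up here even
# though its apparent size is unchanged.
du -sk /tmp
```

Comparing the two on source and destination narrows down whether the discrepancy is in the files or in the filesystem/array accounting.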
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998030#M423953</link>
      <description>bdf is what I used to compare utilization on each end.</description>
      <pubDate>Thu, 17 Aug 2006 13:30:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998030#M423953</guid>
      <dc:creator>Jeff_Traigle</dc:creator>
      <dc:date>2006-08-17T13:30:56Z</dc:date>
    </item>
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998031#M423954</link>
      <description>Taken from this document (which is a pretty good and not too technical read):&lt;BR /&gt;&lt;A href="http://rsync.samba.org/how-rsync-works.html" target="_blank"&gt;http://rsync.samba.org/how-rsync-works.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;"The block size and, in later versions, the size of the block checksum are calculated on a per file basis according to the size of that file."&lt;BR /&gt;&lt;BR /&gt;Thus, rsync's notion of a block (upon which both sender and receiver must agree) is data dependent, not tied to the physical block size of either array.&lt;BR /&gt;&lt;BR /&gt;I suspect that the source array and the destination array are running vastly different RAID levels (maybe that double-parity stuff on the VA) and you are seeing differences in array utilization, not the logical utilization as seen by the UNIX hosts themselves.&lt;BR /&gt;</description>
      <pubDate>Thu, 17 Aug 2006 13:32:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998031#M423954</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2006-08-17T13:32:39Z</dc:date>
    </item>
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998032#M423955</link>
      <description>Oh, and one other gotcha is sparse files. When sparse files are read, the "missing" bytes are filled in automatically by the read() system call with zeros, and the receiver dutifully writes them out as zeros because it has no way of knowing the file is sparse.</description>
      <pubDate>Thu, 17 Aug 2006 13:34:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998032#M423955</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2006-08-17T13:34:44Z</dc:date>
    </item>
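The sparse-file effect described above is easy to reproduce. A minimal sketch (paths are illustrative; rsync's -S/--sparse option is the documented mitigation):

```shell
# Write a single byte at offset 10 MB - 1: the file's apparent size
# becomes 10 MB, but only one block is actually allocated on disk.
dd if=/dev/zero of=/tmp/sparse.dat bs=1 count=1 seek=10485759 2>/dev/null

ls -l /tmp/sparse.dat   # apparent size: 10485760 bytes
du -k /tmp/sparse.dat   # allocated blocks: far less than 10 MB

# A plain copy materializes every zero byte on the receiving end;
# "rsync -S" asks the receiver to try to recreate the holes instead.
```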
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998033#M423956</link>
      <description>hi jeff,&lt;BR /&gt;&lt;BR /&gt;allow me to also contribute in saying that you can modify your rsync script to display statistics about the operations going on.&lt;BR /&gt;&lt;BR /&gt;e.g.&lt;BR /&gt;SERVER2: /home/yogeeraj/scripts&amp;gt;./rsync_server1.sh&lt;BR /&gt;receiving file list ...&lt;BR /&gt;2483 files to consider&lt;BR /&gt;applications/0 files...&lt;BR /&gt;application/scripts/nmon/observations/&lt;BR /&gt;application/scripts/logfiles/error-nmon.crn&lt;BR /&gt;          0 100%    0.00kB/s  148:11:24  (1, 60.7% of 2483)&lt;BR /&gt;application/scripts/logfiles/error-sr710rp1.crn&lt;BR /&gt;          0 100%    0.00kB/s  148:11:24  (2, 60.8% of 2483)&lt;BR /&gt;application/scripts/logfiles/output-nmon.crn&lt;BR /&gt;        194 100%  189.45kB/s    0:00:00  (3, 60.9% of 2483)&lt;BR /&gt;application/scripts/logfiles/output-sr710rp1.crn&lt;BR /&gt;          0 100%    0.00kB/s  148:11:24  (4, 61.1% of 2483)&lt;BR /&gt;application/scripts/nmon/observations/perf.nmon&lt;BR /&gt;     172084 100%    3.49MB/s    0:00:00  (5, 61.7% of 2483)&lt;BR /&gt;application/scripts/nmon/observations/perf.nmon.csv.Z&lt;BR /&gt;      54343 100%  541.52kB/s    0:00:00  (6, 61.8% of 2483)&lt;BR /&gt;&lt;BR /&gt;wrote 2526 bytes  read 121817 bytes  82895.33 bytes/sec&lt;BR /&gt;total size is 344642622  speedup is 2771.71&lt;BR /&gt;SERVER2: /home/yogeeraj/scripts&amp;gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Hence, you can do your test and verify where the problem really lies.&lt;BR /&gt;&lt;BR /&gt;hope this helps too!&lt;BR /&gt;&lt;BR /&gt;kind regards&lt;BR /&gt;yogeeraj</description>
      <pubDate>Fri, 18 Aug 2006 00:41:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998033#M423956</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2006-08-18T00:41:23Z</dc:date>
    </item>
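The statistics shown above come from rsync's verbose/stats output. A minimal local sketch of the same idea (directory names are made up for illustration):

```shell
# Set up a throwaway source tree.
mkdir -p /tmp/rsync_demo/src /tmp/rsync_demo/dst
echo "hello" > /tmp/rsync_demo/src/a.txt

# -a archive mode, -v per-file output, --stats for the transfer
# summary (bytes written/read, total size, speedup).
rsync -av --stats /tmp/rsync_demo/src/ /tmp/rsync_demo/dst/
```

The summary makes it easy to confirm how many bytes actually crossed the wire versus the total size of the file set.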
    <item>
      <title>Re: rsync issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998034#M423957</link>
      <description>Looks like sparse files were the culprit. Using the -S option on rsync kept the file system utilization where it was on the source system.&lt;BR /&gt;&lt;BR /&gt;I'll admit I was a little skeptical that could be the cause because I'd done fbackup/frecover of the file system and the file system utilization matched on both systems. The man page for frecover seems to indicate (for lack of any mention to the contrary) that the -s option is necessary to optimize disk usage for sparse files. It seems that was not the case, however, as I only used the -rv options.</description>
      <pubDate>Tue, 22 Aug 2006 09:08:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rsync-issue/m-p/4998034#M423957</guid>
      <dc:creator>Jeff_Traigle</dc:creator>
      <dc:date>2006-08-22T09:08:00Z</dc:date>
    </item>
  </channel>
</rss>

