<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Extreme performance difference between file backed and disk backed VMs in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117295#M521670</link>
    <description>These numbers do not seem too unexpected. A VM accesses a real disk or a raw logical volume much faster than a file used as a virtual disk. When a VM's disk is backed by a file, every read or write the guest issues must also pass through the host's file system overhead.&lt;BR /&gt;&lt;BR /&gt;I would think other hypervisors have the same type of issue. Most people assign real SAN-based disks to VMs, which also allows dynamic migration between hosts.&lt;BR /&gt;&lt;BR /&gt;I hope this makes sense.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Thu, 27 Jun 2013 14:05:41 GMT</pubDate>
    <dc:creator>Emil Velez_2</dc:creator>
    <dc:date>2013-06-27T14:05:41Z</dc:date>
    <item>
      <title>Extreme performance difference between file backed and disk backed VMs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6116095#M521667</link>
      <description>&lt;P&gt;I think this looks like a bug to me, but I wanted to see if anyone had thoughts on it.&amp;nbsp; We are running Integrity VM 6.1 on BL860c blades, some older and some newer i2 models.&amp;nbsp; The HP-UX installations are 11i v3 (Nov 2012), with the appropriate patches applied.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2 CPU, 8 GB RAM, file backed disk:&lt;/P&gt;&lt;P&gt;dd command writing a 4 GB file, all zeroes, using an 8 KB block size = 18 second runtime.&lt;/P&gt;&lt;P&gt;The same dd command, but overwriting the file, consistently runs between 1 and 2 minutes.&lt;/P&gt;&lt;P&gt;If I remove the ddfile, the dd command again runs in 18 seconds.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1 CPU, 4 GB RAM, file backed disk:&lt;/P&gt;&lt;P&gt;dd command writing a 4 GB file, all zeroes, using an 8 KB block size = consistently over 1 minute runtimes; when overwriting the file the time obviously goes way up.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any HP-UX physical server, Linux physical server, or Linux VM (VMware) completes the same task within 12-20 seconds, regardless of the number of CPUs (tested with a single CPU Linux VM), and regardless of whether the operation is overwriting an existing file.&amp;nbsp; In the case of an overwrite, it typically adds about 2 seconds anywhere other than Integrity VM.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The documentation alludes to some performance benefits from either logical volume or raw disk backed VMs.&amp;nbsp; It also talks about flexibility vs. performance, which is far from damning the file backed solution altogether.&amp;nbsp; It does not mention astronomical differences, nor does it advise that file backed storage is unusable.&amp;nbsp; A number of HP folks have recently advised us to use raw disks for our workloads, which are coming under pressure.&amp;nbsp; They are relatively light app servers; no high I/O databases or anything like that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I wanted to get thoughts or experiences from other Integrity VM customers on the disk backing options.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;EDIT: after passing a raw disk to the VM, I re-ran the tests.&lt;/P&gt;&lt;P&gt;2 CPU, 8 GB RAM: the dd test completes in 16 seconds, the subsequent overwrite test in 20 seconds (the file backed overwrite bug disappears).&lt;/P&gt;&lt;P&gt;1 CPU, 4 GB RAM: the dd test completes in 17 seconds, the subsequent overwrite test in 20 seconds (the file backed CPU contention bug disappears).&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jun 2013 03:34:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6116095#M521667</guid>
      <dc:creator>Ben Kinder</dc:creator>
      <dc:date>2013-06-27T03:34:45Z</dc:date>
    </item>
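The benchmark described in the post above can be sketched as a small shell script. The file path and the small default count are assumptions for illustration; the thread's actual test uses count=524288, i.e. 4 GB written in 8 KB blocks:

```shell
#!/bin/sh
# Sketch of the dd benchmark from the post. FILE and the small default
# COUNT are assumptions; the thread uses count=524288 (4 GB at bs=8k).
FILE=${FILE:-/tmp/ddfile}
COUNT=${COUNT:-1024}   # 1024 * 8 KB = 8 MB here; set COUNT=524288 for 4 GB

# First write: the file does not exist yet.
time dd if=/dev/zero of="$FILE" bs=8k count="$COUNT"

# Overwrite: the same command against the existing file -- the case
# reported to take 1-2 minutes on a file-backed Integrity VM guest.
time dd if=/dev/zero of="$FILE" bs=8k count="$COUNT"

# Removing the file restores first-write timings on the next run.
rm -f "$FILE"
```

Timing both the first write and the overwrite separately is what exposes the asymmetry the post reports; on the other platforms mentioned, the two runs differ by only a couple of seconds.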
    <item>
      <title>Re: Extreme performance difference between file backed and disk backed VMs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117179#M521668</link>
      <description>&lt;P&gt;The astronomical differences are the difference between writing the whole file to the UFC (file cache) or not. A sequential write to a file using dd is not a particularly good test of disk performance, especially if the file size and the amount of memory used in the tests are comparable. The difference between the first two tests you describe is the size of the file cache, which fits the whole file in the first case. Here you are measuring two file caches (the guest's and the host's, which is deliberately very small) and not the disk itself. Running dd against the raw dsf (/dev/rdisk/ ) would give you a better idea of the performance characteristics.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As far as I can say, file backing stores are rarely used; it has been known since the beginning that their performance was not good enough (which was because of the implementation). The implementation has changed a lot since, but most people simply stick to the more 'raw' (whole disk or lvol) backing stores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Stan&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jun 2013 12:38:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117179#M521668</guid>
      <dc:creator>Stan_M</dc:creator>
      <dc:date>2013-06-27T12:38:46Z</dc:date>
    </item>
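Stan's suggestion of timing the raw device special file rather than a file in a filesystem can be sketched as follows. The dsf name is an assumption; a read-only test (the device as `if=`, not `of=`) avoids any risk to data:

```shell
#!/bin/sh
# Raw-device read timing, per the advice above: bypass both the guest
# and host file caches and measure the disk itself. DEV is an
# assumption -- substitute an idle dsf on your own host.
DEV=${DEV:-/dev/rdisk/disk4}

if [ -e "$DEV" ]; then
    # Read 4 GB straight off the raw device (read-only, so no data risk).
    time dd if="$DEV" of=/dev/null bs=8k count=524288
else
    echo "raw device $DEV not present; set DEV to a dsf on your host"
fi
```

Because the character (raw) device bypasses the buffer cache, repeated runs should give stable timings, unlike the cached file writes in the original test.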
    <item>
      <title>Re: Extreme performance difference between file backed and disk backed VMs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117289#M521669</link>
      <description>&lt;P&gt;That is an interesting idea regarding the file cache.&amp;nbsp; I added more memory to the VM, which now has 16GB.&amp;nbsp; I also changed the kernel tuning to use 50% of it for both the minimum and maximum file cache.&amp;nbsp; The cache is now locked in at double the size of the test file (8GB).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Re-running the test with a single CPU: first write 1min 50sec (we still have the CPU contention problem).&lt;/P&gt;&lt;P&gt;Re-running the test with two CPUs: first write 15sec, overwrite now completes in 19sec (the overwrite problem appears to be improved via the file cache).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I still contend that this is bugged.&amp;nbsp; HP should document the fact that Integrity VM cannot be used with file backed stores.&amp;nbsp; The file cache does not explain why VMware is able to achieve near physical device performance in every conceivable configuration through VMDKs.&amp;nbsp; I think HP should be able to get closer to VMware performance.&lt;/P&gt;</description>
      <pubDate>Thu, 27 Jun 2013 14:04:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117289#M521669</guid>
      <dc:creator>Ben Kinder</dc:creator>
      <dc:date>2013-06-27T14:04:46Z</dc:date>
    </item>
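The file cache adjustment described above would typically be made with the HP-UX `kctune` command. This is a sketch, not a recommendation: the byte values assume the 16 GB guest mentioned in the post (50% = 8 GB), and should be adjusted to the actual memory size.

```shell
# Pin both the minimum and maximum file cache to 8 GB on an HP-UX 11i v3
# guest (8589934592 bytes; an assumption for a 16 GB VM).
kctune filecache_min=8589934592 filecache_max=8589934592

# Query the two tunables to confirm the new settings.
kctune filecache_min filecache_max
```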
    <item>
      <title>Re: Extreme performance difference between file backed and disk backed VMs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117295#M521670</link>
      <description>These numbers do not seem too unexpected. A VM accesses a real disk or a raw logical volume much faster than a file used as a virtual disk. When a VM's disk is backed by a file, every read or write the guest issues must also pass through the host's file system overhead.&lt;BR /&gt;&lt;BR /&gt;I would think other hypervisors have the same type of issue. Most people assign real SAN-based disks to VMs, which also allows dynamic migration between hosts.&lt;BR /&gt;&lt;BR /&gt;I hope this makes sense.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Jun 2013 14:05:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extreme-performance-difference-between-file-backed-and-disk/m-p/6117295#M521670</guid>
      <dc:creator>Emil Velez_2</dc:creator>
      <dc:date>2013-06-27T14:05:41Z</dc:date>
    </item>
  </channel>
</rss>

