<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic vCPU versus cpu entitlement in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286111#M521812</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Apologies if this is a basic question. I have to set up a number of HPVMs on an rx2800, and I am planning the best way to utilise the CPU resource. The rx2800 will have 8 cores to share among the HPVMs. The HPVMs will be used in a functional test environment, so I don't envisage a heavy workload on them.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My question: would it be better to configure the HPVMs with more vCPUs at a lower entitlement, or with a single vCPU at a greater percentage of CPU entitlement?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I guess the increased number of vCPUs would give a greater number of run queues but a reduced CPU time slice.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any pointers you can provide would be much appreciated.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Michael&lt;/P&gt;</description>
    <pubDate>Thu, 28 Nov 2013 19:41:14 GMT</pubDate>
    <dc:creator>michaelob</dc:creator>
    <dc:date>2013-11-28T19:41:14Z</dc:date>
    <item>
      <title>vCPU versus cpu entitlement</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286111#M521812</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Apologies if this is a basic question. I have to set up a number of HPVMs on an rx2800, and I am planning the best way to utilise the CPU resource. The rx2800 will have 8 cores to share among the HPVMs. The HPVMs will be used in a functional test environment, so I don't envisage a heavy workload on them.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My question: would it be better to configure the HPVMs with more vCPUs at a lower entitlement, or with a single vCPU at a greater percentage of CPU entitlement?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I guess the increased number of vCPUs would give a greater number of run queues but a reduced CPU time slice.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any pointers you can provide would be much appreciated.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Michael&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2013 19:41:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286111#M521812</guid>
      <dc:creator>michaelob</dc:creator>
      <dc:date>2013-11-28T19:41:14Z</dc:date>
    </item>
    <item>
      <title>Re: vCPU versus cpu entitlement</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286115#M521813</link>
      <description>&lt;P&gt;There is no good answer except: it depends...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It depends on how many guests you run, your sizing requirements, performance expectations, etc.&lt;/P&gt;&lt;P&gt;Generally, it makes sense to keep the guests as narrow (fewer vCPUs) as other requirements allow. Sharing a physical CPU among many different guests does have a performance impact. So maybe start with 2-vCPU guests; if needed you can always change the config. Trying a different config is only a reboot and one hpvmmodify command away.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HTH,&lt;/P&gt;&lt;P&gt;Stan&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2013 19:54:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286115#M521813</guid>
      <dc:creator>Stan_M</dc:creator>
      <dc:date>2013-11-28T19:54:03Z</dc:date>
    </item>
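    <!--
    Stan's reply above says resizing a guest is "only a reboot and one hpvmmodify command away". As an illustrative sketch only (the guest name "testvm1" is hypothetical, and the exact options should be confirmed against hpvmmodify(1M) on your system), growing a guest to 2 vCPUs at a 20% entitlement looks roughly like:

    ```
    hpvmstop -P testvm1               # stop the guest before changing its vCPU count
    hpvmmodify -P testvm1 -c 2 -e 20  # assumed options: 2 vCPUs, 20% CPU entitlement
    hpvmstart -P testvm1              # boot the guest with the new configuration
    ```
    -->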
    <item>
      <title>Re: vCPU versus cpu entitlement</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286157#M521814</link>
      <description>Thanks, Stan, for the reply.&lt;BR /&gt;&lt;BR /&gt;The environment I will be building will be used for functional testing, so performance initially wouldn't be a key requirement. Following functional testing I would move the applications onto right-sized hardware.&lt;BR /&gt;&lt;BR /&gt;I was planning on starting with a default config for each HPVM of 1 vCPU, 4GB RAM and the default 10% entitlement, then monitoring usage and performance and increasing accordingly. So, based on your reply, would I be best to increase the vCPU count first and then the % of entitlement?&lt;BR /&gt;&lt;BR /&gt;Ideally I would like to fit as many HPVMs onto the rx2800 i4 as possible. Some of the HPVMs will be running HP 9000 containers inside, as I have to move some apps currently running on PA-RISC hardware onto Integrity going forward.&lt;BR /&gt;&lt;BR /&gt;How many HPVMs could I run on an rx2800 i4 (8 cores), given I had enough memory to accommodate 4GB per HPVM? What deployment config would you recommend if the goal was to stack as many HPVMs onto as few rx2800 boxes as possible in a functional test environment?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Michael</description>
      <pubDate>Thu, 28 Nov 2013 21:39:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286157#M521814</guid>
      <dc:creator>michaelob</dc:creator>
      <dc:date>2013-11-28T21:39:17Z</dc:date>
    </item>
    <item>
      <title>Re: vCPU versus cpu entitlement</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286615#M521815</link>
      <description>&lt;P&gt;It really depends - for example, on whether you want to test on a uniprocessor or an MP system. You may well want both, since both UP- and MP-specific bugs exist. When you go to an MP config, keep it as narrow as possible (which is why I said 2 vCPUs).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With a minimum entitlement of 5%, you can theoretically have up to 20 x 8 = 160 guests. Memory will be the limiting factor.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can always adjust things as you go.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2013 08:36:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vcpu-versus-cpu-entitlement/m-p/6286615#M521815</guid>
      <dc:creator>Stan_M</dc:creator>
      <dc:date>2013-11-29T08:36:30Z</dc:date>
    </item>
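    <!--
    The entitlement arithmetic in Stan's last reply (a 5% minimum entitlement per guest across 8 cores allows up to 160 guests) can be sketched as a quick check; all the numbers below come straight from the thread:

    ```python
    # Sizing sketch for the thread's arithmetic: with a 5% minimum CPU
    # entitlement, each physical core can back up to 100 / 5 = 20
    # single-vCPU guests, so an 8-core rx2800 i4 tops out at 160 guests.
    # As Stan notes, memory (4GB per guest in Michael's plan) is the
    # real limiting factor long before this theoretical CPU ceiling.
    min_entitlement_pct = 5                       # minimum entitlement per guest
    cores = 8                                     # rx2800 i4 core count
    guests_per_core = 100 // min_entitlement_pct  # 20 guests per core
    max_guests = guests_per_core * cores          # 160 guests in total
    print(max_guests)  # 160
    ```
    -->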
  </channel>
</rss>

