<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: TPM Ratings in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985581#M923089</link>
    <description>Can you explain how the first app runs *across* 14 systems?&lt;BR /&gt;Also, what was the RAM count on those L1000s?&lt;BR /&gt;I believe the second scenario is safe. What happens with the OTHER 3 cells? Why not one app per cell? nPars are where the SD lives - vPars are inherently *more* overhead &amp;amp; you lose the benefit of independence - i.e. one down, all down.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff</description>
    <pubDate>Sat, 31 May 2003 02:09:13 GMT</pubDate>
    <dc:creator>Jeff Schussele</dc:creator>
    <dc:date>2003-05-31T02:09:13Z</dc:date>
    <item>
      <title>TPM Ratings</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985579#M923087</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;A proposal was put in front of me of running a 64-way Superdome and carving it up into 4 hard partitions, with one of those hard partitions having 2 vPars. The first vPar will have 8 CPUs and 16 GB RAM. The second vPar will have 4 CPUs and 8 GB RAM.&lt;BR /&gt;&lt;BR /&gt;What is proposed to run on vPar 1 currently runs on 14 x L1000s with 2 x 440 MHz CPUs each.&lt;BR /&gt;&lt;BR /&gt;What is proposed to run on vPar 2 currently runs on an N4000 with 6 x 550 MHz CPUs.&lt;BR /&gt;&lt;BR /&gt;My TPM rating spreadsheet tells me that what is currently running has quite a lot more CPU power.&lt;BR /&gt;&lt;BR /&gt;Can you guys please give me your opinions?&lt;BR /&gt;&lt;BR /&gt;Much appreciated.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Darren</description>
      <pubDate>Fri, 30 May 2003 23:54:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985579#M923087</guid>
      <dc:creator>Darren Murray_1</dc:creator>
      <dc:date>2003-05-30T23:54:14Z</dc:date>
    </item>
    <item>
      <title>Re: TPM Ratings</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985580#M923088</link>
      <description>Nope!&lt;BR /&gt;&lt;BR /&gt;But, I do have contacts who know how to pronounce Superdome.  I will query and respond if I can get any input.  Interesting!&lt;BR /&gt;&lt;BR /&gt;Pete</description>
      <pubDate>Sat, 31 May 2003 01:55:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985580#M923088</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-05-31T01:55:09Z</dc:date>
    </item>
    <item>
      <title>Re: TPM Ratings</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985581#M923089</link>
      <description>Can you explain how the first app runs *across* 14 systems?&lt;BR /&gt;Also, what was the RAM count on those L1000s?&lt;BR /&gt;I believe the second scenario is safe. What happens with the OTHER 3 cells? Why not one app per cell? nPars are where the SD lives - vPars are inherently *more* overhead &amp;amp; you lose the benefit of independence - i.e. one down, all down.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff</description>
      <pubDate>Sat, 31 May 2003 02:09:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985581#M923089</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2003-05-31T02:09:13Z</dc:date>
    </item>
    <item>
      <title>Re: TPM Ratings</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985582#M923090</link>
      <description>There is 2.5 GB of RAM in each of the L1000s.&lt;BR /&gt;&lt;BR /&gt;At the moment the L1000s form 5 MC/ServiceGuard clusters:&lt;BR /&gt;&lt;BR /&gt;4 x 3-node clusters and 1 x 2-node cluster.&lt;BR /&gt;&lt;BR /&gt;All servers are active.&lt;BR /&gt;&lt;BR /&gt;They all run a number (24) of separate Oracle databases.&lt;BR /&gt;&lt;BR /&gt;There will be other applications running on some other partitions, but I am only concerned about these 2 vPars. I'm assuming that I can discount their effect.&lt;BR /&gt;&lt;BR /&gt;I'm new to the Superdome though, so&lt;BR /&gt;&lt;BR /&gt;Cheers</description>
      <pubDate>Sat, 31 May 2003 02:26:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985582#M923090</guid>
      <dc:creator>Darren Murray_1</dc:creator>
      <dc:date>2003-05-31T02:26:32Z</dc:date>
    </item>
    <item>
      <title>Re: TPM Ratings</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985583#M923091</link>
      <description>Power, the I/O backplane, firmware upgrades, and I also believe the system clock are all considered Superdome SPOFs. Here is a good thread about it:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xac475c7609e9d61190050090279cd0f9,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xac475c7609e9d61190050090279cd0f9,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;My understanding is that Superdome is a more natural replacement for the V-Class server and is therefore a real gorilla for big kernels and for systems with lots and lots of processes. So Superdome complements V-Class environments. (I always thought partitioning kind of defeated this.)&lt;BR /&gt;&lt;BR /&gt;Also, memory cells and CPUs have their own oddities. For example, adding memory to one cell requires that you add memory to all the cells. Here is a good link:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/os/11i/superdome.pdf" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/os/11i/superdome.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I've also attached a Superdome Partitioning PDF.</description>
      <pubDate>Sat, 31 May 2003 02:58:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tpm-ratings/m-p/2985583#M923091</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2003-05-31T02:58:16Z</dc:date>
    </item>
  </channel>
</rss>

