<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Superdome partitioning in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835377#M90013</link>
    <description>Is there any single point of failure if we partition the Superdome, such as the power supply? Do all partitions get an individual power supply?&lt;BR /&gt;</description>
    <pubDate>Tue, 29 Oct 2002 22:01:23 GMT</pubDate>
    <dc:creator>Rajesh_17</dc:creator>
    <dc:date>2002-10-29T22:01:23Z</dc:date>
    <item>
      <title>Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835373#M90009</link>
      <description>Is it possible to set up clustering between multiple partitions on a Superdome?&lt;BR /&gt;&lt;BR /&gt;If the above is possible, are there any single points of failure in such a setup?&lt;BR /&gt;&lt;BR /&gt;What benefit would I get from partitioning a 64-CPU machine into 8 partitions (8 CPUs each) vs 8 small machines with 8 CPUs each?&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Oct 2002 17:50:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835373#M90009</guid>
      <dc:creator>Rajesh_17</dc:creator>
      <dc:date>2002-10-29T17:50:47Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835374#M90010</link>
      <description>I think it would be better to have 8 servers with 8 CPUs each, because if anything happens to your Superdome, everything will be down. That is also why HP does not support an MCSG configuration across different vPars. You have to look at it cost-wise as well.&lt;BR /&gt;&lt;BR /&gt;Sandip</description>
      <pubDate>Tue, 29 Oct 2002 17:57:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835374#M90010</guid>
      <dc:creator>Sandip Ghosh</dc:creator>
      <dc:date>2002-10-29T17:57:08Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835375#M90011</link>
      <description>Are the electrical connections separated between the vPars, or is there just one power supply for the Superdome? Can you please give me a single-point-of-failure example?&lt;BR /&gt;&lt;BR /&gt;The MCSG info was quite useful.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;rajesh</description>
      <pubDate>Tue, 29 Oct 2002 18:44:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835375#M90011</guid>
      <dc:creator>Rajesh_17</dc:creator>
      <dc:date>2002-10-29T18:44:28Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835376#M90012</link>
      <description>Yes, it is possible to set up clustering between multiple partitions.&lt;BR /&gt;&lt;BR /&gt;But I would suggest using hard partitions (nPars) instead of soft partitions (vPars), since with vPars there is always a possibility that someone can add, remove, or modify a resource.&lt;BR /&gt;&lt;BR /&gt;With nPars, it will be exactly like 8 different machines. You can bring down, modify, or service one nPar without affecting the other nPars. You can move packages back and forth between nPars. Each nPar is a different entity with its own independent resources.&lt;BR /&gt;&lt;BR /&gt;You can have vPars within nPars.&lt;BR /&gt;&lt;BR /&gt;The only disadvantage is the cost. The cost of one 8-CPU nPar will be 50% higher than that of a similar configuration on an rp8400.&lt;BR /&gt;&lt;BR /&gt;The benefit you get with the Superdome, however, is consolidation.</description>
      <pubDate>Tue, 29 Oct 2002 18:48:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835376#M90012</guid>
      <dc:creator>Ashwani Kashyap</dc:creator>
      <dc:date>2002-10-29T18:48:50Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835377#M90013</link>
      <description>Is there any single point of failure if we partition the Superdome, such as the power supply? Do all partitions get an individual power supply?&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Oct 2002 22:01:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835377#M90013</guid>
      <dc:creator>Rajesh_17</dc:creator>
      <dc:date>2002-10-29T22:01:23Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835378#M90014</link>
      <description>As I said, every hard partition is a single entity in itself, with separate resources of its own. So the answer to your question is yes. Based on the number of hard partitions you want, you will have to order the power supplies accordingly, or for that matter any other parts.&lt;BR /&gt;&lt;BR /&gt;The only single point of failure that I know of on a Superdome is a firmware upgrade. If for any reason you have to do a firmware upgrade on the Superdome, you have to bring down all the partitions. But I have seen people still running Superdomes on the same firmware they had when the Superdome was first released a couple of years back.</description>
      <pubDate>Tue, 29 Oct 2002 22:21:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835378#M90014</guid>
      <dc:creator>Ashwani Kashyap</dc:creator>
      <dc:date>2002-10-29T22:21:52Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835379#M90015</link>
      <description>rajesh,&lt;BR /&gt;&lt;BR /&gt;The factors YOU have to weigh:&lt;BR /&gt;&lt;BR /&gt;(1) COST. How much of a difference is there between the Superdome and 8 separate servers? And how much do the MAINTENANCE costs differ?&lt;BR /&gt;&lt;BR /&gt;(2) BANDWIDTH. HOW much IO do you need on the backplane?? The Superdome wins this race.&lt;BR /&gt;&lt;BR /&gt;(3) RESILIENCE. Superdome parts are hot-swappable and there is NO single point of failure unless a tank drives over it.&lt;BR /&gt;&lt;BR /&gt;(4) SCALABILITY. If you get into a Superdome, can you scale past the configuration?&lt;BR /&gt;&lt;BR /&gt;(5) HVAC. How much are you going to spend on electrical, redundancy, cooling, space??&lt;BR /&gt;&lt;BR /&gt;(6) STORAGE. Disk storage sub-system (hopefully a SAN). How big and how redundant is your SAN??&lt;BR /&gt;&lt;BR /&gt;good luck!&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry</description>
      <pubDate>Wed, 30 Oct 2002 00:33:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835379#M90015</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2002-10-30T00:33:40Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835380#M90016</link>
      <description>Yes, you can have an SG cluster within a Superdome complex, but there ARE SPOFs here, not least the box itself.&lt;BR /&gt;If you lose power to the box, then you lose everything.&lt;BR /&gt;There are also 2 or 3 other boards/parts in the SD that are single points of failure.&lt;BR /&gt;&lt;BR /&gt;You may want to read the white paper entitled:&lt;BR /&gt;ServiceGuard Cluster Configuration for Partitioned Systems&lt;BR /&gt;&lt;BR /&gt;available at:&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/B3936-90058/B3936-90058.html" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/B3936-90058/B3936-90058.html&lt;/A&gt;</description>
      <pubDate>Wed, 30 Oct 2002 09:26:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835380#M90016</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2002-10-30T09:26:40Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835381#M90017</link>
      <description>&amp;gt; Is it possible to set up &lt;BR /&gt;&amp;gt; clustering between multiple &lt;BR /&gt;&amp;gt; partitions on superdome. &lt;BR /&gt;&lt;BR /&gt;Yes, no problem.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; If the above is possible? &lt;BR /&gt;&amp;gt; are there any single point &lt;BR /&gt;&amp;gt; of failure in such a set up?. &lt;BR /&gt;&lt;BR /&gt;Make sure more nodes are outside the SD than inside the SD and you should be fine.&lt;BR /&gt;&lt;BR /&gt;Recall that if you've got more than 4 nodes, a cluster lock disk is not possible.&lt;BR /&gt;&lt;BR /&gt;You might want to set up the A as an arbitrator.&lt;BR /&gt;&lt;BR /&gt;If you are scheduling maintenance on one of your outside-the-SD nodes, recall that your SD could become a SPOF for the entire cluster, since there would then be more nodes inside the SD than outside it.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt; what benifit would I get if &lt;BR /&gt;&amp;gt; i partition a 64 CPU into 8 &lt;BR /&gt;&amp;gt; partitions (8 cpu each) vs &lt;BR /&gt;&amp;gt; 8 small machines with 8 cpu &lt;BR /&gt;&amp;gt; each. &lt;BR /&gt;&lt;BR /&gt;Flexibility, ease of management, fewer box SPOFs,&lt;BR /&gt;a more difficult upgrade...&lt;BR /&gt;all the reasons to sell an SD.&lt;BR /&gt;&lt;BR /&gt;However, having said that, I'd consider the N-class servers a better environment in your case... at least as long as you don't feel like repartitioning a lot.&lt;BR /&gt;&lt;BR /&gt;Later,&lt;BR /&gt;Bill&lt;BR /&gt;</description>
      <pubDate>Wed, 30 Oct 2002 09:37:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835381#M90017</guid>
      <dc:creator>Bill McNAMARA_1</dc:creator>
      <dc:date>2002-10-30T09:37:02Z</dc:date>
    </item>
    <item>
      <title>Re: Superdome partitioning</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835382#M90018</link>
      <description>Yes, you can partition the dome into different servers. Look at using the nPar option; this way each partition will be electrically isolated from the others.&lt;BR /&gt;&lt;BR /&gt;There are three single points of failure in a single dome cabinet, which is why HP does not recommend that you do failover within the same cabinet. These are the system clock, the system backplane, and the power monitor.&lt;BR /&gt;&lt;BR /&gt;If you want to configure a cluster with no single point of failure, then you will need to use more than one stand-alone dome, or a complex consisting of two cabinets.&lt;BR /&gt;&lt;BR /&gt;ryan</description>
      <pubDate>Wed, 30 Oct 2002 14:50:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/superdome-partitioning/m-p/2835382#M90018</guid>
      <dc:creator>Ryan Green</dc:creator>
      <dc:date>2002-10-30T14:50:56Z</dc:date>
    </item>
  </channel>
</rss>

