<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Question for cluster disk in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204906#M792709</link>
    <description>Provided the disks can be 'shared' by all the nodes...you can use whatever you want.&lt;BR /&gt;&lt;BR /&gt;Just remember that if the node/server holding the lock disk goes down, one of the remaining node(s) must be able to gain control of the lock disk...hence it should be on an array that all nodes can get to.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 27 Feb 2004 21:45:46 GMT</pubDate>
    <dc:creator>Rita C Workman</dc:creator>
    <dc:date>2004-02-27T21:45:46Z</dc:date>
    <item>
      <title>Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204905#M792708</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I want to make a cluster with about 6 nodes, but I do not want to use a SAN for the shared disks.&lt;BR /&gt;&lt;BR /&gt;Does anyone know of any other feasible alternatives (e.g. using an SC-10, etc.)?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Wendy</description>
      <pubDate>Fri, 27 Feb 2004 21:29:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204905#M792708</guid>
      <dc:creator>Wendy_9</dc:creator>
      <dc:date>2004-02-27T21:29:06Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204906#M792709</link>
      <description>Provided the disks can be 'shared' by all the nodes...you can use whatever you want.&lt;BR /&gt;&lt;BR /&gt;Just remember that if the node/server holding the lock disk goes down, one of the remaining node(s) must be able to gain control of the lock disk...hence it should be on an array that all nodes can get to.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 27 Feb 2004 21:45:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204906#M792709</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2004-02-27T21:45:46Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204907#M792710</link>
      <description>Hi Rita,&lt;BR /&gt;&lt;BR /&gt;Are there any good suggestions for this?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Wendy</description>
      <pubDate>Fri, 27 Feb 2004 22:08:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204907#M792710</guid>
      <dc:creator>Wendy_9</dc:creator>
      <dc:date>2004-02-27T22:08:08Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204908#M792711</link>
      <description>I'm not sure what you mean by 'good suggestions'...but you don't have to have a SAN to have a cluster, if that's what you're asking.&lt;BR /&gt;&lt;BR /&gt;Your nodes can be what is called host-based connected to your disk array.  The point is that all the nodes or servers have to have access to the same disks by means of some connection (i.e. directly connected between host &amp;amp; disk array, or via a SAN).&lt;BR /&gt;&lt;BR /&gt;To fail over to another node, each of those nodes has to be able to access the same disks.&lt;BR /&gt;For the cluster to re-form in the event of a failover, the lock disk has to be accessible by the other nodes so that one of them can become the owner.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 27 Feb 2004 22:49:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204908#M792711</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2004-02-27T22:49:24Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204909#M792712</link>
      <description>You can of course use a SCSI-based solution, but you will run unreasonably low on SCSI IDs (15, minus 6 for the host controllers).&lt;BR /&gt;Also, I'm not quite sure what the cabling would look like. HVD is a definite requirement, and you will have to somehow daisy-chain the nodes, which would be *very* error-prone.&lt;BR /&gt;&lt;BR /&gt;I'd either recommend using SSA (needs adapters and some kind of conversion, but it is a lot cheaper than Fibre Channel) or - preferably - consider an iSCSI-HVD bridge with a reasonably good disk array attached, which would be fast enough and probably the cheapest option, too.&lt;BR /&gt;&lt;BR /&gt;Or just Fibre Channel, which is proven and was made for &amp;gt;2-host cluster setups :)</description>
      <pubDate>Sat, 28 Feb 2004 09:47:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204909#M792712</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2004-02-28T09:47:48Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204910#M792713</link>
      <description>One point to note is that in a 6-node cluster you CANNOT have a cluster lock disc.&lt;BR /&gt;If you do wish to have a cluster locking mechanism for a 6-node cluster, you must use the free Quorum Server product on a node outside of the cluster.&lt;BR /&gt;The storage you use for each package must be shareable across all nodes that will be expected to run that package.&lt;BR /&gt;Therefore, if a package is only ever expected to run on just two of the six nodes, then the storage for that package does not need to be seen/connected on the other 4 nodes.&lt;BR /&gt;&lt;BR /&gt;For standard Fast/Wide SCSI, HP does (or did) sell a Y-cable, allowing you to link a disc JBOD across 3 or 4 nodes, but you have to keep the cable lengths under the maximum (25 metres for FW SCSI).&lt;BR /&gt;Another way is to have two 3-node clusters, which might make the design easier.&lt;BR /&gt;&lt;BR /&gt;It may be worth paying for some design consultancy if you are really worried about getting it right.</description>
      <pubDate>Sat, 28 Feb 2004 17:21:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204910#M792713</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2004-02-28T17:21:17Z</dc:date>
    </item>
    <item>
      <title>Re: Question for cluster disk</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204911#M792714</link>
      <description>Hi Wendy,&lt;BR /&gt;&lt;BR /&gt;On clusters with more than four nodes, a lock disk is not allowed, so you don't need to bother with a lock disk.&lt;BR /&gt;&lt;BR /&gt;If you want a locking mechanism at all, you can configure a Quorum Server. The Quorum Server shouldn't be one of the nodes in the cluster, but it can serve multiple clusters from the same system.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Sun, 29 Feb 2004 00:46:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/question-for-cluster-disk/m-p/3204911#M792714</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2004-02-29T00:46:03Z</dc:date>
    </item>
  </channel>
</rss>

