<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: HSG80 + Netware cluster in MSA Storage</title>
    <link>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708213#M2644</link>
    <description>We have updated to the latest QL2x00.ham drivers (6.90d by HP), replaced the cpqshd.cdm with the scsihd.cdm driver and adjusted some of the parameters on the QL driver in our startup.ncf:&lt;BR /&gt;; Use NetWare multipath IO&lt;BR /&gt;SET MULTI-PATH SUPPORT = ON&lt;BR /&gt;&lt;BR /&gt;; For servers with FC HBA's&lt;BR /&gt;LOAD QL2X00.HAM SLOT=3 /ALLPATHS /LUNS /CONSOLE&lt;BR /&gt;LOAD QL2X00.HAM SLOT=4 /ALLPATHS /LUNS /CONSOLE&lt;BR /&gt;&lt;BR /&gt;We do not load dosfat.nss as stated by QLogic for enabling multipath failover; we could not see any relation between path failover and the DOS volume.&lt;BR /&gt;&lt;BR /&gt;We have also moved the volume causing the most I/O's (a GroupWise PO) to an EVA4000.&lt;BR /&gt;Next to the enormous performance increase on this PO (GWcheck runs 2 times faster, a backup even 10 times! 24h -&amp;gt; 2.5h), other PO's left on the MA8000 performed somewhat better too.&lt;BR /&gt;&lt;BR /&gt;We now plan to move all our PO's to the EVA and leave some file shares on the MA8000 for now.&lt;BR /&gt;&lt;BR /&gt;We think that the cache on the controllers is simply too small: 256MB per controller leaves maybe 100MB for read and/or write cache. Combined with RAID5 (treat a GroupWise PO as a database), this was killing performance.&lt;BR /&gt;&lt;BR /&gt;We have submitted NSS code to Novell but have not yet received an answer.&lt;BR /&gt;Hope this helps a bit ;)&lt;BR /&gt;&lt;BR /&gt;Cheers!</description>
    <pubDate>Wed, 22 Nov 2006 02:03:02 GMT</pubDate>
    <dc:creator>M. Gerritsen</dc:creator>
    <dc:date>2006-11-22T02:03:02Z</dc:date>
    <item>
      <title>HSG80 + Netware cluster</title>
      <link>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708211#M2642</link>
      <description>Hi,&lt;BR /&gt;
We have a NetWare 6.5 SP5 cluster running on DL580 G3's (PSP 7.60) with 2 FCA2214's.&lt;BR /&gt;
These are attached to two Compaq SANSwitch 16's (1Gb Brocade 28xx's?) with firmware v2.6.2d, which in turn are connected to 2 HSG80 controllers (V8.8F). Behind this we have 6 43xx cabinets filled with 72.8GB disks and divided into RAID5 sets, where the first RAID set uses the first disk of each 43xx cabinet (6 disks in a RAID set, effective disk size of 360GB).&lt;BR /&gt;
We do not use Secure Path. Our startup.ncf has:&lt;BR /&gt;SET MULTI-PATH SUPPORT=OFF&lt;BR /&gt;
LOAD QL2300.HAM SLOT=3 /LUNS /xretry=12 /xtimeout=120&lt;BR /&gt;
LOAD CPQSHD.CDM&lt;BR /&gt;
We have a lot of problems with server restarts, abends and other bad stuff in our cluster.&lt;BR /&gt;
Googling found me this: &lt;A href="http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001783" target="_blank" rel="nofollow"&gt;http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001783&lt;/A&gt;&lt;BR /&gt;
I can't find any related HP documents. Can anyone help?&lt;BR /&gt;</description>
      <pubDate>Mon, 16 Oct 2006 05:33:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708211#M2642</guid>
      <dc:creator>M. Gerritsen</dc:creator>
      <dc:date>2006-10-16T05:33:17Z</dc:date>
    </item>
    <item>
      <title>Re: HSG80 + Netware cluster</title>
      <link>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708212#M2643</link>
      <description>Did you find out where the problem is?&lt;BR /&gt;&lt;BR /&gt;I have a similar problem on a DL580 G3, NetWare 6.5 and storage (EVA).&lt;BR /&gt;The DL580 G3 is in a cluster with a DL380 G4.&lt;BR /&gt;There are no problems with the DL380 G4.&lt;BR /&gt;Only the DL580 G3 freezes or falls into a NetWare sort of blue screen (I am not too familiar with NetWare).&lt;BR /&gt;&lt;BR /&gt;I replaced all the HW, but the error repeats itself.&lt;BR /&gt;But sometimes there is also a problem with the HP-shipped spare parts, which might arrive DOA or with an error.&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Nov 2006 10:23:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708212#M2643</guid>
      <dc:creator>BR839478</dc:creator>
      <dc:date>2006-11-21T10:23:06Z</dc:date>
    </item>
    <item>
      <title>Re: HSG80 + Netware cluster</title>
      <link>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708213#M2644</link>
      <description>We have updated to the latest QL2x00.ham drivers (6.90d by HP), replaced the cpqshd.cdm with the scsihd.cdm driver and adjusted some of the parameters on the QL driver in our startup.ncf:&lt;BR /&gt;; Use NetWare multipath IO&lt;BR /&gt;SET MULTI-PATH SUPPORT = ON&lt;BR /&gt;&lt;BR /&gt;; For servers with FC HBA's&lt;BR /&gt;LOAD QL2X00.HAM SLOT=3 /ALLPATHS /LUNS /CONSOLE&lt;BR /&gt;LOAD QL2X00.HAM SLOT=4 /ALLPATHS /LUNS /CONSOLE&lt;BR /&gt;&lt;BR /&gt;We do not load dosfat.nss as stated by QLogic for enabling multipath failover; we could not see any relation between path failover and the DOS volume.&lt;BR /&gt;&lt;BR /&gt;We have also moved the volume causing the most I/O's (a GroupWise PO) to an EVA4000.&lt;BR /&gt;Next to the enormous performance increase on this PO (GWcheck runs 2 times faster, a backup even 10 times! 24h -&amp;gt; 2.5h), other PO's left on the MA8000 performed somewhat better too.&lt;BR /&gt;&lt;BR /&gt;We now plan to move all our PO's to the EVA and leave some file shares on the MA8000 for now.&lt;BR /&gt;&lt;BR /&gt;We think that the cache on the controllers is simply too small: 256MB per controller leaves maybe 100MB for read and/or write cache. Combined with RAID5 (treat a GroupWise PO as a database), this was killing performance.&lt;BR /&gt;&lt;BR /&gt;We have submitted NSS code to Novell but have not yet received an answer.&lt;BR /&gt;Hope this helps a bit ;)&lt;BR /&gt;&lt;BR /&gt;Cheers!</description>
      <pubDate>Wed, 22 Nov 2006 02:03:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708213#M2644</guid>
      <dc:creator>M. Gerritsen</dc:creator>
      <dc:date>2006-11-22T02:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: HSG80 + Netware cluster</title>
      <link>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708214#M2645</link>
      <description>Thanks for your reply... it got us going.&lt;BR /&gt;&lt;BR /&gt;But we found out that we had a problem with PervasiveSQL and 4GB RAM. When we tried running the server with 2GB we had no problems, but not so with 4GB.&lt;BR /&gt;So after replacing almost all of the HW, we found this out on a new installation of PSQL on NetWare.&lt;BR /&gt;We are OK after we configured PSQL not to use more than 700MB of RAM (which is enough).&lt;BR /&gt;&lt;BR /&gt;-marko</description>
      <pubDate>Mon, 04 Dec 2006 13:34:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/hsg80-netware-cluster/m-p/708214#M2645</guid>
      <dc:creator>BR839478</dc:creator>
      <dc:date>2006-12-04T13:34:58Z</dc:date>
    </item>
  </channel>
</rss>