<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: LVM timeout and SANs in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737125#M613207</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;regarding Pat's post:&lt;BR /&gt;&lt;BR /&gt;- LVM mirroring / not mirroring has no effect on PV timeouts, except that the chance of a path/channel failure is 50% lower.&lt;BR /&gt;- pvchange has a -t option (see man pvchange) that sets the timeout applied to a specific physical volume under LVM control. Use&lt;BR /&gt;pvchange -t 180 /dev/rdsk/cXtYdZ to set a fairly high timeout value that will cover e.g. a full core switch reboot, so your disk accesses can sustain 'connectivity' during SAN issues.&lt;BR /&gt;&lt;BR /&gt;As already stated, mirroring is better done in hardware, the only exception being *some* D/R scenarios (e.g. when you have no SRDF ;)&lt;BR /&gt;&lt;BR /&gt;Pat also seems to be running an FC-AL (loop) config, which is unfortunately prone to trespassing storms, especially with HP-UX.&lt;BR /&gt;&lt;BR /&gt;Adjusting MAX_FCP_REQUESTS to below the vendor recommendation takes care of that and even *raises* performance for most people running FC-AL. :)</description>
    <pubDate>Tue, 28 Feb 2006 20:17:53 GMT</pubDate>
    <dc:creator>Florian Heigl (new acc)</dc:creator>
    <dc:date>2006-02-28T20:17:53Z</dc:date>
    <item>
      <title>LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737119#M613201</link>
      <description>This is a new thread carried over from:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=362426" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=362426&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;(I thought I should start a new one instead of hijacking that old thread! :))&lt;BR /&gt;&lt;BR /&gt;Pat, (or whoever else is knowledgeable in this area)&lt;BR /&gt;&lt;BR /&gt;I am just setting up my LUNs to be used on our servers and have been thinking about the LVM issue discussed above.&lt;BR /&gt;&lt;BR /&gt;Say I wanted to create a 100GB volume. I had intended to just create one 200GB raw LUN with RAID1 on the SAN, bringing the total usable space to 100GB, and then present it directly.&lt;BR /&gt;From your experiences I find that this could be hazardous, so I need to use LVM and increase the timeout value.&lt;BR /&gt;&lt;BR /&gt;Now, do I need to create two separate 100GB raw LUNs and then use LVM rather than the SAN to mirror them and give me the usable 100GB?&lt;BR /&gt;&lt;BR /&gt;Thanks as ever for your sage counsel&lt;BR /&gt;Guy&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 22 Feb 2006 09:52:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737119#M613201</guid>
      <dc:creator>Guy Humphreys</dc:creator>
      <dc:date>2006-02-22T09:52:57Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737120#M613202</link>
      <description>Guy,&lt;BR /&gt;&lt;BR /&gt;In my experience, hardware mirroring outperforms software mirroring hands down, so I would say no, you don't want to use LVM to do your mirroring.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete</description>
      <pubDate>Wed, 22 Feb 2006 09:55:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737120#M613202</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2006-02-22T09:55:25Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737121#M613203</link>
      <description>Guy.. if your lone SAN array is a controller-based one (i.e. an EVA) - then I suggest you do all your striping and RAIDing on the array itself. Whatever PV timeout value is recommended for the array - you need to set it.&lt;BR /&gt;&lt;BR /&gt;IF however you are using multiple "controller centric" arrays (again, say EVA) OR "cache centric" arrays (like the XP or Hitachi line) - then you will get the best performance by striping (not mirroring, since each LUN presented is already protected at the SAN/array level) across LUNs on different array controllers or pairs of arrays (EVAs). If you've got 4 EVAs, I'd stripe my lvol/volume on the host across 4 disks, with each disk coming from a different EVA. Or 4 or 8 disks if coming from an XP.&lt;BR /&gt;&lt;BR /&gt;Again for LVM - follow the recommended PV TOV values - if there are any.&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;</description>
      <pubDate>Wed, 22 Feb 2006 10:04:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737121#M613203</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-02-22T10:04:11Z</dc:date>
    </item>
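    <!-- Editor's sketch of the host-side striping Alzhy describes, using HP-UX LVM. All device files, the group-file minor number, and sizes are placeholders; this assumes one LUN presented from each of four arrays:

    ```shell
    # Create the volume group device directory and group file (minor number is a placeholder).
    mkdir /dev/vgstripe
    mknod /dev/vgstripe/group c 64 0x020000

    # Initialize each LUN as an LVM physical volume (raw device files, placeholders).
    pvcreate /dev/rdsk/c10t0d1
    pvcreate /dev/rdsk/c12t0d1
    pvcreate /dev/rdsk/c14t0d1
    pvcreate /dev/rdsk/c16t0d1

    # Build one volume group from all four LUNs (block device files).
    vgcreate /dev/vgstripe /dev/dsk/c10t0d1 /dev/dsk/c12t0d1 /dev/dsk/c14t0d1 /dev/dsk/c16t0d1

    # Stripe the logical volume across all 4 PVs:
    # -i 4 = 4 stripes, -I 64 = 64 KB stripe size, -L 102400 = 100 GB (size in MB).
    lvcreate -i 4 -I 64 -L 102400 -n lvdata /dev/vgstripe
    ```

    Since each LUN is already RAID-protected on its array, striping (rather than mirroring) at the host spreads IO across controllers, which is the performance point being made above. -->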
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737122#M613204</link>
      <description>Yes, 2 in 2 weeks is unfortunate; however 5 events in 5 years is better, and 2 of those events were on a model 1, one with VCS 1.21 and the other on VCS 2.003 or 2.005. I also have over a dozen now.&lt;BR /&gt;&lt;BR /&gt;Now to your question: the others really answered it, so I am left to summarize. Two EVAs with host mirroring is the safest, though I still say never use RAID0. I always use RAID5, don't have funds to mirror EVAs, and don't really care to do RAID1 within the EVA.&lt;BR /&gt;&lt;BR /&gt;Now of course all the above experience is with an EVA5K. I am demoing an EVA8K with AutoPath, and AutoPath has a timeout value which I think does the same as LVM's, but I'm not sure.&lt;BR /&gt;&lt;BR /&gt;I get my best performance creating two 100GB LUNs, preferring each to opposite controllers, and then host-striping them together into a 200GB volume and filesystem.</description>
      <pubDate>Wed, 22 Feb 2006 12:09:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737122#M613204</guid>
      <dc:creator>Pat Obrien_1</dc:creator>
      <dc:date>2006-02-22T12:09:14Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737123#M613205</link>
      <description>Thanks for the replies so far guys.&lt;BR /&gt;&lt;BR /&gt;I had always intended to use RAID 1 from the SAN (which is a lone EVA 5000 - we can't afford to mirror ours either!)&lt;BR /&gt;&lt;BR /&gt;What is tripping me up is Pat's mention of the timeout problem. This problem forces me to use LVM - not that I am averse to LVM per se, you understand - but I wanted to step away from the old and move to the new, the EVA.&lt;BR /&gt;&lt;BR /&gt;Can I get some clarification on what the problem is - just so I am doubly sure I understand it.&lt;BR /&gt;&lt;BR /&gt;As I see it, the problem is that if a disk fails in the SAN it can lead to loop failures, where the SAN tries to find the failed disk and do self-diagnostics - this process takes longer than the default SCSI timeout value on the host, and so data is lost!&lt;BR /&gt;&lt;BR /&gt;If I am wrong with this assumption please tell me.&lt;BR /&gt;&lt;BR /&gt;Now, what I am confused about is: will the SAN not recognise that the failed disk is part of a RAID1 set and just use the other mirrored disk automatically? Or is it that it has to go through these diagnostics BEFORE it swaps over to the other disk?&lt;BR /&gt;&lt;BR /&gt;cheers&lt;BR /&gt;Guy</description>
      <pubDate>Thu, 23 Feb 2006 04:36:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737123#M613205</guid>
      <dc:creator>Guy Humphreys</dc:creator>
      <dc:date>2006-02-23T04:36:00Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737124#M613206</link>
      <description>Create 1 LUN on the SAN; utilize LVM pvlinks/AutoPath/Secure Path, depending on the type of SAN, to provide HA.</description>
      <pubDate>Tue, 28 Feb 2006 15:37:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737124#M613206</guid>
      <dc:creator>Kevin Wright</dc:creator>
      <dc:date>2006-02-28T15:37:40Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737125#M613207</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;regarding Pat's post:&lt;BR /&gt;&lt;BR /&gt;- LVM mirroring / not mirroring has no effect on PV timeouts, except that the chance of a path/channel failure is 50% lower.&lt;BR /&gt;- pvchange has a -t option (see man pvchange) that sets the timeout applied to a specific physical volume under LVM control. Use&lt;BR /&gt;pvchange -t 180 /dev/rdsk/cXtYdZ to set a fairly high timeout value that will cover e.g. a full core switch reboot, so your disk accesses can sustain 'connectivity' during SAN issues.&lt;BR /&gt;&lt;BR /&gt;As already stated, mirroring is better done in hardware, the only exception being *some* D/R scenarios (e.g. when you have no SRDF ;)&lt;BR /&gt;&lt;BR /&gt;Pat also seems to be running an FC-AL (loop) config, which is unfortunately prone to trespassing storms, especially with HP-UX.&lt;BR /&gt;&lt;BR /&gt;Adjusting MAX_FCP_REQUESTS to below the vendor recommendation takes care of that and even *raises* performance for most people running FC-AL. :)</description>
      <pubDate>Tue, 28 Feb 2006 20:17:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737125#M613207</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2006-02-28T20:17:53Z</dc:date>
    </item>
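    <!-- Editor's sketch of the pvchange timeout suggestion above, end to end. The device file is a placeholder; the 180-second value should be sized to your SAN's worst-case recovery time, as the post explains:

    ```shell
    # Raise the LVM IO timeout to 180 seconds on one physical volume
    # (placeholder device file; repeat per PV in the volume group).
    pvchange -t 180 /dev/dsk/c4t0d1

    # Confirm the new value: pvdisplay reports the PV's IO Timeout field.
    pvdisplay /dev/dsk/c4t0d1
    ```

    With the timeout longer than a core switch reboot, LVM retries the IO instead of returning an error to the application during a transient SAN outage. -->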
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737126#M613208</link>
      <description>Florian, thank you for a very complete and concise answer.  My faith in EVA's is now restored&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Guy</description>
      <pubDate>Wed, 01 Mar 2006 04:41:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737126#M613208</guid>
      <dc:creator>Guy Humphreys</dc:creator>
      <dc:date>2006-03-01T04:41:23Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737127#M613209</link>
      <description>The timeout issue in its goriest detail - and this is a fabric, not a loop, as others stated:&lt;BR /&gt;&lt;BR /&gt;The rogue drive syndrome: a bad disk will start a storm on a backend loop from one of the ports on the defective disk drive. This storm causes the loop to fail, leaving you a single loop to half the shelves on this backend loop. At this point I have seen 2 scenarios:&lt;BR /&gt;1) The defective drive will then begin a storm on the remaining loop, causing this to fail also. There are several drives in each loop called quorum disks, and when quorum is lost, both controllers of the EVA will begin a reboot sequence. During this reboot, you will not have host access to the storage. Depending on firmware version, this may be seconds or minutes.&lt;BR /&gt;2) The defective disk will fail and disappear, at which point the EVA diagnostics perform a scan of the backend loops, find the defective disk, and attempt to take ownership, during which the drive will fail again. During this internal activity, host IO becomes spotty at best. It took 13 minutes for the EVA to mark this drive bad and stabilize itself. Some IO will happen and other IO will not during this time frame. From experience, Oracle was vastly upset.&lt;BR /&gt;The 13 minute delay has been reduced to 50 seconds in 3.020, and I understand it is better in more current versions.&lt;BR /&gt;&lt;BR /&gt;This behavior in the dual loop backend of the EVA5K is the nemesis of this product, which HP recognized when rebuilding it as the EVA8K. They have added "E-PBC-IO" chips to the shelf IO modules or the EMU to bypass a shelf with such a bad drive, and there have been drive firmware upgrades with just about every VCS update I have done, with the exception of VCS 1.21.&lt;BR /&gt;&lt;BR /&gt;Now, a few weeks ago I did have 2 different EVAs (VCS 3.020) lose host access (one for 2 seconds and the other for 12 seconds) because of a single bad drive in each case. I have almost finished converting non-LVM volumes to LVM volumes, and the systems attached to these EVAs logged a few EMS messages, but Oracle did not even know this occurred. Including these latest 2 events, I think I have seen 4-5 in about the same number of years, and I now have over a dozen EVA5Ks and 2 8Ks inbound.&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Mar 2006 09:36:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737127#M613209</guid>
      <dc:creator>Pat Obrien_1</dc:creator>
      <dc:date>2006-03-01T09:36:50Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737128#M613210</link>
      <description>Thanks again to Pat for some very good info - forewarned is forearmed, as they say.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Guy</description>
      <pubDate>Wed, 01 Mar 2006 09:59:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737128#M613210</guid>
      <dc:creator>Guy Humphreys</dc:creator>
      <dc:date>2006-03-01T09:59:14Z</dc:date>
    </item>
    <item>
      <title>Re: LVM timeout and SANs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737129#M613211</link>
      <description>Hello Guy,&lt;BR /&gt;&lt;BR /&gt;This info is quite late but may help someone else.&lt;BR /&gt;Regarding increasing the LUN size on the EVA-3000 and not being able to see the increase on HP-UX:&lt;BR /&gt;I talked to support in 2005 and they gave me a script, "vgmodify", to do just what you were asking for -&lt;BR /&gt;increase the LUN size on the EVA-3000&lt;BR /&gt;from 5G to 10G,&lt;BR /&gt;from the HP-UX command line:&lt;BR /&gt;# vgchange -a n vgtest&lt;BR /&gt;# ./vgmodify -d 10g /dev/vgtest&lt;BR /&gt;# vgchange -a y vgtest&lt;BR /&gt;It works very well for me and has saved me a ton of time.&lt;BR /&gt;Cheers,&lt;BR /&gt;Tom</description>
      <pubDate>Wed, 01 Nov 2006 19:18:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-timeout-and-sans/m-p/3737129#M613211</guid>
      <dc:creator>tom quach_1</dc:creator>
      <dc:date>2006-11-01T19:18:40Z</dc:date>
    </item>
  </channel>
</rss>

