<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: disk array, lvm,  POWERFAILED in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470373#M211660</link>
    <description>Hah!&lt;BR /&gt;&lt;BR /&gt;Seems I'm guessing right :)&lt;BR /&gt;&lt;BR /&gt;If there are no hardware issues to be seen with fcmsutil, follow the steps below.&lt;BR /&gt;Otherwise get rid of the hardware problems first and consider following them afterwards.&lt;BR /&gt;&lt;BR /&gt;In short:&lt;BR /&gt;change the kernel parameter&lt;BR /&gt;max_fcp_reqs                  512          512   Static    N/A&lt;BR /&gt;to something according to the formula I wrote above.&lt;BR /&gt;&lt;BR /&gt;To explain the issue:&lt;BR /&gt;&lt;BR /&gt;When HP-UX hits a timeout for a single device, for any reason, it will issue a full ioscan and check the availability of all devices the kernel knows about.&lt;BR /&gt;Now, the Clariion is kind of an active-passive array: even if a single device has an alternate path, only one of these paths is active at a time.&lt;BR /&gt;When the alternate path is checked by HP-UX, the Clariion needs to switch the LUN over to the other SP. This triggers the message 'CRU / UNIT shutdown for trespass'.&lt;BR /&gt;In a quickloop this is a bad situation, as the trespass will also cause all other hosts to switch, until (ad nauseam) there's so much switching that simple commands hitting many LUNs (lvlnboot, vgcfgbackup) will overload the storage processor.&lt;BR /&gt;&lt;BR /&gt;Specific to HP-UX is that an HP box allows a maximum of 512 requests / s / path without asking the Clariion how much it can take.&lt;BR /&gt;FC4500: 256 reqs / SP&lt;BR /&gt;FC4700: 512 reqs / SP&lt;BR /&gt;This number must (I think) even be divided by two as soon as the trespass occurs.&lt;BR /&gt;I halved it again as it counts per path.&lt;BR /&gt;So, if You have three HP-UX hosts connected to it:&lt;BR /&gt;512/2/3/2=42&lt;BR /&gt;42 is a good number, isn't it?&lt;BR /&gt;&lt;BR /&gt;In the end just set it lower than &lt;BR /&gt;There is no noticeable performance degradation and You follow emc^2 r</description>
    <pubDate>Wed, 26 Jan 2005 11:17:15 GMT</pubDate>
    <dc:creator>Florian Heigl (new acc)</dc:creator>
    <dc:date>2005-01-26T11:17:15Z</dc:date>
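    <!--
      A worked reading of the formula in the post above, assuming an FC4700
      (512 outstanding requests per storage processor) shared by three HP-UX
      hosts; the three-host count is the example from the post, the rest of
      the breakdown is an illustrative interpretation:

        512 reqs/SP   what one FC4700 storage processor can take
        / 2           headroom for when a trespass piles the LUNs onto one SP
        / 3           three HP-UX hosts sharing the loop
        / 2           each host drives two paths (primary + alternate), and the limit counts per path
        = 42          (512/2/3/2 = 42.67, rounded down)

      On HP-UX 11.x of this era a static parameter like max_fcp_reqs would be
      inspected and staged with kmtune, followed by a kernel rebuild and a
      reboot; a minimal sketch, assuming kmtune is available on this release:

        kmtune -q max_fcp_reqs        # query the current value
        kmtune -s max_fcp_reqs=42     # stage the new value for the next kernel build
    -->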
    <item>
      <title>disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470352#M211639</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have a `little' problem.&lt;BR /&gt;More than a week ago (2005.01.15), we changed the location of a few HP servers + a CLARIION disk array.&lt;BR /&gt;&lt;BR /&gt;The disk array is attached to an L1000 class machine (hostname=l1000db2) and mounted as two VGs (vg02 &amp;amp;&amp;amp; vg03).&lt;BR /&gt;&lt;BR /&gt;Yesterday a problem occurred; see the syslog file below.&lt;BR /&gt;&lt;BR /&gt;I have no idea if it is just a coincidence, but a few minutes before the first error message someone ran a script from HP, check.dat (hardware inspection), and it issued the command "lvlnboot -v"; you can see it in the syslog file:&lt;BR /&gt;" Jan 24 16:32:48 l1000db2 LVM[8338]: lvlnboot -v"&lt;BR /&gt;&lt;BR /&gt;Next, the syslog:&lt;BR /&gt;################&lt;BR /&gt;&lt;BR /&gt;Jan 19 16:46:40 l1000db2 pwgrd: Invalid entry found in passwd file /etc/passwd&lt;BR /&gt;Jan 19 16:49:59 l1000db2 : su : + ta root-vscsftp1&lt;BR /&gt;Jan 19 16:54:46 l1000db2 : su : + ta root-oracle&lt;BR /&gt;Jan 19 16:57:49 l1000db2 : su : + ta root-oracle&lt;BR /&gt;Jan 24 16:28:12 l1000db2 ftpd[7725]: FTP LOGIN FROM 172.16.10.31 [172.16.10.31], root&lt;BR /&gt;Jan 24 16:32:48 l1000db2 LVM[8338]: lvlnboot -v&lt;BR /&gt;Jan 24 16:37:15 l1000db2 vmunix: LVM: Recovered Path (device 0x1f060700) to PV 3 in VG 2.&lt;BR /&gt;Jan 24 16:37:15 l1000db2 vmunix: LVM: Restored PV 3 to VG 2.&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f071400) to PV 3 in VG 3 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070500) to PV 2 in VG 2 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070300) to PV 1 in VG 2 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070700) to PV 3 in VG 2 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: vg[2]: pvnum=1 (dev_t=0x1f060300) is POWERFAILED&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: vg[2]: pvnum=2 (dev_t=0x1f060500) is POWERFAILED&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: vg[2]: pvnum=3 (dev_t=0x1f060700) is POWERFAILED&lt;BR /&gt;Jan 24 16:37:47 l1000db2 vmunix: LVM: vg[3]: pvnum=3 (dev_t=0x1f061400) is POWERFAILED&lt;BR /&gt;&lt;BR /&gt;############&lt;BR /&gt;&lt;BR /&gt;I have no idea how to solve this problem now.&lt;BR /&gt;Should I try a restart of the server?!&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Tue, 25 Jan 2005 06:28:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470352#M211639</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-25T06:28:11Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470353#M211640</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;lvlnboot -v just shows the boot disk; there is no harm in using this command.&lt;BR /&gt;&lt;BR /&gt;The log shows one of the disks in VG2 has failed.&lt;BR /&gt;# ioscan -fnC disk&lt;BR /&gt;Check whether all the disks are CLAIMED or not.&lt;BR /&gt;You will probably have to replace the failed disk.</description>
      <pubDate>Tue, 25 Jan 2005 06:39:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470353#M211640</guid>
      <dc:creator>Ravi_8</dc:creator>
      <dc:date>2005-01-25T06:39:06Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470354#M211641</link>
      <description>&lt;BR /&gt;Hi Marcel,&lt;BR /&gt;&lt;BR /&gt;If you have PowerPath installed, then do:&lt;BR /&gt;&lt;BR /&gt;powermt config&lt;BR /&gt;powermt check&lt;BR /&gt;&lt;BR /&gt;then verify the dead devices with powermt display.&lt;BR /&gt;&lt;BR /&gt;You may consider a reboot if you are not able to clear the dead paths.&lt;BR /&gt;&lt;BR /&gt;EMC has recommendations for settings on HP-UX and Clariion arrays. Refer to the EMC web site.&lt;BR /&gt;&lt;BR /&gt;You have to change the PV timeout to 120 or more, if the PV timeout is at the default.&lt;BR /&gt;&lt;BR /&gt;Check with "pvdisplay -v /dev/rmt/c#t#d#"&lt;BR /&gt;&lt;BR /&gt;Set the PV timeout value with:&lt;BR /&gt;&lt;BR /&gt;pvchange -t 120 /dev/rmt/c#t#d#&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Baiju.</description>
      <pubDate>Tue, 25 Jan 2005 11:29:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470354#M211641</guid>
      <dc:creator>baiju_3</dc:creator>
      <dc:date>2005-01-25T11:29:05Z</dc:date>
    </item>
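    <!--
      A minimal sketch of the timeout check suggested above. The /dev/rmt/...
      paths in the post are tape device names on this box (as a later reply
      notes), so the LVM physical volumes would be addressed under /dev/dsk/;
      the c6t0d1 device below is taken from the vgdisplay output later in the
      thread and stands in for any PV:

        pvdisplay /dev/dsk/c6t0d1 | grep -i "IO Timeout"   # "default" means no explicit timeout is set
        pvchange -t 120 /dev/dsk/c6t0d1                    # set a 120-second IO timeout on the PV
    -->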
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470355#M211642</link>
      <description>If you have alternate paths to the LUNs, then you don't have to restart the server.&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v /dev/vg02&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v /dev/vg03&lt;BR /&gt;&lt;BR /&gt;Do you see "Alternate Link" for the PVs?</description>
      <pubDate>Tue, 25 Jan 2005 11:43:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470355#M211642</guid>
      <dc:creator>Sundar_7</dc:creator>
      <dc:date>2005-01-25T11:43:30Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470356#M211643</link>
      <description>The HP script either triggered the error or just made the OS aware of it; You have to find out which is the case.&lt;BR /&gt;&lt;BR /&gt;Check the Clariion's logs for 'array shutdown for trespass' messages:&lt;BR /&gt;&lt;BR /&gt;/opt/Navisphere/bin/navicli -h &lt;CLARIION&gt; getlog -2000&lt;BR /&gt;&lt;BR /&gt;If there are more than 10-20 of these errors, this is suspicious.&lt;BR /&gt;Are You running in a quickloop configuration?&lt;BR /&gt;&lt;BR /&gt;If You don't know, check it using /opt/fcms/bin/fcmsutil /dev/td&lt;HBA&gt; | grep Topology&lt;BR /&gt;&lt;BR /&gt;If it is a quickloop (plain FC-AL):&lt;BR /&gt;there are conditions where HP-UX overloads the Clariion's storage processor to a point I'd better not talk about.&lt;BR /&gt;&lt;BR /&gt;Check Your kernel parameter max_fcp_reqs and compare it to EMC's documentation.&lt;BR /&gt;A rule of thumb: max_fcp_reqs=(512/2/systems_in_loop)&lt;BR /&gt;Also, check the SCSI queue depth against what EMC wants.&lt;BR /&gt;&lt;BR /&gt;It seems like You don't have the Navisphere Agent installed - You should.&lt;BR /&gt;&lt;BR /&gt;(We had a lot of problems in our SAN until I stopped begging for permission and simply set that parameter to what it needed to be :)</description>
      <pubDate>Tue, 25 Jan 2005 12:25:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470356#M211643</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-25T12:25:55Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470357#M211644</link>
      <description>Hi,&lt;BR /&gt;sorry for my late response; other problems came up too :-(&lt;BR /&gt;&lt;BR /&gt;About "ioscan -fnC disk":&lt;BR /&gt;all disks are CLAIMED, like:&lt;BR /&gt;....&lt;BR /&gt;disk     28  0/7/0/0.8.0.1.0.0.1  sdisk    CLAIMED     DEVICE       DGC     C4400WDR1&lt;BR /&gt;                                 /dev/dsk/c7t0d1   /dev/rdsk/c7t0d1&lt;BR /&gt;....&lt;BR /&gt;&lt;BR /&gt;About the powermt command: no such command.&lt;BR /&gt;About "pvdisplay -v /dev/rmt/c#t#d#":&lt;BR /&gt;&lt;BR /&gt;Probably I don't have such devices.&lt;BR /&gt;ls -l /dev/rmt/c*&lt;BR /&gt;crw-rw-rw-   2 bin        bin        205 0x030000 Mar 16  2001 /dev/rmt/c3t0d0BEST&lt;BR /&gt;crw-rw-rw-   2 bin        bin        205 0x030080 Mar 16  2001 /dev/rmt/c3t0d0BESTb&lt;BR /&gt;crw-rw-rw-   2 bin        bin        205 0x030040 Apr 13  2001 /dev/rmt/c3t0d0BESTn&lt;BR /&gt;crw-rw-rw-   2 bin        bin        205 0x0300c0 Mar 16  2001 /dev/rmt/c3t0d0BESTnb&lt;BR /&gt;crw-rw-rw-   1 bin        bin        205 0x030001 Mar 16  2001 /dev/rmt/c3t0d0DDS&lt;BR /&gt;crw-rw-rw-   1 bin        bin        205 0x030081 Mar 16  2001 /dev/rmt/c3t0d0DDSb&lt;BR /&gt;crw-rw-rw-   1 bin        bin        205 0x030041 Mar 16  2001 /dev/rmt/c3t0d0DDSn&lt;BR /&gt;crw-rw-rw-   1 bin        bin        205 0x0300c1 Mar 16  2001 /dev/rmt/c3t0d0DDSnb&lt;BR /&gt;&lt;BR /&gt;They look to be just magnetic tapes.&lt;BR /&gt;&lt;BR /&gt;About vgdisplay:&lt;BR /&gt;&lt;BR /&gt;All disks have "Alternate Link".&lt;BR /&gt;&lt;BR /&gt;vgdisplay -v /dev/vg02&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg02&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;Open LV                     1&lt;BR /&gt;Max PV                      16&lt;BR /&gt;Cur PV                      4&lt;BR /&gt;Act PV                      4&lt;BR /&gt;Max PE per PV               13915&lt;BR /&gt;VGDA                        8&lt;BR /&gt;PE Size (Mbytes)            4&lt;BR /&gt;Total PE                    55652&lt;BR /&gt;Alloc PE                    55652&lt;BR /&gt;Free PE                     0&lt;BR /&gt;Total PVG                   0&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg02/lvol1&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            222608&lt;BR /&gt;   Current LE                  55652&lt;BR /&gt;   Allocated PE                55652&lt;BR /&gt;   Used PV                     4&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d1&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d1  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d3&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d3  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d5&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d5  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d7&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d7  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;On #vgdisplay -v /dev/vg03&lt;BR /&gt;there is a similar situation; all disks have "Alternate Link".&lt;BR /&gt;&lt;BR /&gt;About the topology:&lt;BR /&gt;l1000db2 # /opt/fcms/bin/fcmsutil /dev/td0 | grep Topology&lt;BR /&gt;                               Topology = PRIVATE_LOOP&lt;BR /&gt;l1000db2 # /opt/fcms/bin/fcmsutil /dev/td1 | grep Topology&lt;BR /&gt;                               Topology = PRIVATE_LOOP&lt;BR /&gt;&lt;BR /&gt;I was trying:&lt;BR /&gt;&lt;BR /&gt;l1000db2 # /opt/Navisphere/bin/navicli getlog&lt;BR /&gt;ABORT instruction (core dumped)&lt;BR /&gt;&lt;BR /&gt;The Clariion is connected to the L1000 server via optical fibre, so I didn't know what IP to use as the "-h host" option :-(&lt;BR /&gt;&lt;BR /&gt;BTW: I'm not an expert in HP-UX, in any case not in disk arrays and Navisphere.&lt;BR /&gt;Also I'm about 2000 km away from the servers.&lt;BR /&gt;&lt;BR /&gt;Thanks for your time;&lt;BR /&gt;I'll let you know what happens.&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Tue, 25 Jan 2005 13:55:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470357#M211644</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-25T13:55:01Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470358#M211645</link>
      <description>If everything is CLAIMED and the PV Status is "available" too, then I'd say there is no need to reboot.&lt;BR /&gt;&lt;BR /&gt;The Clariion has two storage processors which are usually connected to the site LAN, but if noone can tell You the address, that won't do any good.&lt;BR /&gt;&lt;BR /&gt;Either have someone get You that IP address or hostname, or search through the disks - there is another parameter (documented in a file called agent.config that is *somewhere*) to use a SCSI device for communication.&lt;BR /&gt;This definitely works if You see any LUNs on 0/7/0 with a capacity of 0 or 8 MB.&lt;BR /&gt;&lt;BR /&gt;Just use /opt/Navisphere/bin/navicli and hit enter for a list of commands.&lt;BR /&gt;&lt;BR /&gt;I kind of think You're having the same problem we had, but I can't tell for sure.&lt;BR /&gt;&lt;BR /&gt;First find a way to reproduce the problem at a low-risk time, then - if You wish - try changing the kernel parameter (ask either HP or EMC^2 about it; they both have the relevant EMC^2 document on what to change to make HP-UX a 'supported' client in loop configurations) and see if the problem persists. (I'd bet it doesn't.)</description>
      <pubDate>Tue, 25 Jan 2005 18:52:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470358#M211645</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-25T18:52:21Z</dc:date>
    </item>
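    <!--
      A sketch of the agent.config entries being described, modelled on the
      sample lines quoted later in this thread ("# device c0t0d0 NAVISPHERE-1
      ..." and "device auto auto"); the device name and the user line syntax
      are assumptions for illustration, not from the thread:

        # /etc/Navisphere/agent.config (a symlink to /opt/Navisphere/bin/agent.config on this box)
        user root@localhost            # allow root on this host to manage the array (assumed syntax)
        device auto auto               # or explicitly: device c7t0d0 CLARIION-1 "CLARIION-1"

      Then restart the agent as described in the next post:
        /sbin/init.d/agent stop
        /sbin/init.d/agent start
    -->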
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470359#M211646</link>
      <description>The Navisphere CLI manual is called 069001038-1 at EMC^2, and I'll copy a bit, but You might want to search for a PDF...:&lt;BR /&gt;&lt;BR /&gt;navicli [-d device] [-h hostname] {-np} [-m] [-p] [-t timeout] [-v|-q] command&lt;BR /&gt;&lt;BR /&gt;(-m suppresses output, -np doesn't poll the box, which sometimes may help the performance of a command, -p appears to be preview mode, -q suppresses *error* messages, -t sets the timeout for the command - from my experience this is passed to the box, not to the cli program - and -v gives extensive error descriptions)&lt;BR /&gt;&lt;BR /&gt;So this could be:&lt;BR /&gt;navicli -d /dev/dsk/c7t0d0 getlog&lt;BR /&gt;&lt;BR /&gt;I have never tried using a *data* LUN for this, and I'm not sure if it'll work. If this goes somewhere through the SCSI mode pages it might, but maybe it just breaks everything :))&lt;BR /&gt;&lt;BR /&gt;So better find someone who knows the IP; afterwards search for the EMC^2 config file agent.config and edit it to allow access for root@localhost and to set how to access the Clariion.&lt;BR /&gt;&lt;BR /&gt;Afterwards run /sbin/init.d/agent stop and /sbin/init.d/agent start, and from that moment on You'll get a lot of output on every single soft SCSI error and whatever else the Clariion notices right in Your syslog.&lt;BR /&gt;&lt;BR /&gt;:)</description>
      <pubDate>Tue, 25 Jan 2005 19:08:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470359#M211646</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-25T19:08:22Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470360#M211647</link>
      <description>Hi Marcel.&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; Jan 24 16:32:48 l1000db2 LVM[8338]: lvlnboot is OK; it does no harm to your server.&lt;BR /&gt;But&lt;BR /&gt;=========&lt;BR /&gt;Jan 24 16:37:15 l1000db2 vmunix: LVM: Recovered Path (device 0x1f060700) to PV 3 in VG 2.&lt;BR /&gt;Jan 24 16:37:15 l1000db2 vmunix: LVM: Restored PV 3 to VG 2.&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f071400) to PV 3 in VG 3 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070500) to PV 2 in VG 2 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070300) to PV 1 in VG 2 Failed!&lt;BR /&gt;Jan 24 16:37:44 l1000db2 vmunix: LVM: Path (device 0x1f070700) to PV 3 in VG 2 Failed!&lt;BR /&gt;===&lt;BR /&gt;are a problem.&lt;BR /&gt;Now issue these commands to diagnose:&lt;BR /&gt;#vgdisplay -v vg02&lt;BR /&gt;#vgdisplay -v vg03&lt;BR /&gt;Pls post the results for me.&lt;BR /&gt;tienna</description>
      <pubDate>Tue, 25 Jan 2005 20:48:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470360#M211647</guid>
      <dc:creator>Nguyen Anh Tien</dc:creator>
      <dc:date>2005-01-25T20:48:13Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470361#M211648</link>
      <description>Hi Florian,&lt;BR /&gt;&lt;BR /&gt;I think it's a problem with Navisphere too.&lt;BR /&gt;&lt;BR /&gt;l1000db2 # /opt/Navisphere/bin/navicli&lt;BR /&gt;ABORT instruction (core dumped)&lt;BR /&gt;&lt;BR /&gt;l1000db2 # /sbin/init.d/agent stop&lt;BR /&gt;Navisphere Agent is not running&lt;BR /&gt;l1000db2 # /sbin/init.d/agent start&lt;BR /&gt;Starting Navisphere Agent&lt;BR /&gt;/sbin/init.d/agent[37]: 26412 Abort(coredump)&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Wed, 26 Jan 2005 02:44:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470361#M211648</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T02:44:22Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470362#M211649</link>
      <description>Hi Nguyen,&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;the output of the commands:&lt;BR /&gt;l1000db2 # vgdisplay -v vg02&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg02&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;Open LV                     1&lt;BR /&gt;Max PV                      16&lt;BR /&gt;Cur PV                      4&lt;BR /&gt;Act PV                      4&lt;BR /&gt;Max PE per PV               13915&lt;BR /&gt;VGDA                        8&lt;BR /&gt;PE Size (Mbytes)            4&lt;BR /&gt;Total PE                    55652&lt;BR /&gt;Alloc PE                    55652&lt;BR /&gt;Free PE                     0&lt;BR /&gt;Total PVG                   0&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg02/lvol1&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            222608&lt;BR /&gt;   Current LE                  55652&lt;BR /&gt;   Allocated PE                55652&lt;BR /&gt;   Used PV                     4&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d1&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d1  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d3&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d3  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d5&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d5  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t0d7&lt;BR /&gt;   PV Name                     /dev/dsk/c7t0d7  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    13913&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR
/&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;____________________________________&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;l1000db2 # vgdisplay -v vg03&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg03&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;Open LV                     1&lt;BR /&gt;Max PV                      16&lt;BR /&gt;Cur PV                      12&lt;BR /&gt;Act PV                      12&lt;BR /&gt;Max PE per PV               26592&lt;BR /&gt;VGDA                        24&lt;BR /&gt;PE Size (Mbytes)            32&lt;BR /&gt;Total PE                    33964&lt;BR /&gt;Alloc PE                    33964&lt;BR /&gt;Free PE                     0&lt;BR /&gt;Total PVG                   0&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg03/lvol1&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            1086848&lt;BR /&gt;   Current LE                  33964&lt;BR /&gt;   Allocated PE                33964&lt;BR /&gt;   Used PV                     12&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d1&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d1  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2399&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d2&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d2  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d3&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d3  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d4&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d4  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d5&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d5  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d6&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d6  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    3323&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t1d7&lt;BR /&gt;   PV Name                     /dev/dsk/c7t1d7  Alternate Link&lt;BR /&gt;   PV Status                   
available&lt;BR /&gt;   Total PE                    2399&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t2d0&lt;BR /&gt;   PV Name                     /dev/dsk/c7t2d0  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t2d1&lt;BR /&gt;   PV Name                     /dev/dsk/c7t2d1  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t2d2&lt;BR /&gt;   PV Name                     /dev/dsk/c7t2d2  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t2d3&lt;BR /&gt;   PV Name                     /dev/dsk/c7t2d3  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    2815&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c6t2d4&lt;BR /&gt;   PV Name                     /dev/dsk/c7t2d4  Alternate Link&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    3323&lt;BR /&gt;   Free PE                     0&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 26 Jan 2005 02:47:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470362#M211649</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T02:47:56Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470363#M211650</link>
      <description>OFF TOPIC.&lt;BR /&gt;Good day, dear Mr. Marcel.&lt;BR /&gt;My name is Roxana Colta and I am from Brasov.&lt;BR /&gt;With due apologies, I am taking the liberty of approaching you with a subject different from your problem.&lt;BR /&gt;Since you are, for now, the only Romanian with HP-UX server experience on this forum, I would like to expressly ask your opinion on the following problem: I have some older DTCs (models HP 2345A and HP 2340A) which I use to connect terminals to an HP e3000 server running MPE/iX 7.0; I have purchased an Integrity rx2600 server with HP-UX 11iv2 which I am about to install and configure; can I use the old DTCs with this server - does HP-UX 11i recognize them during configuration?&lt;BR /&gt;&lt;BR /&gt;Could we find another way to communicate directly? My address is roxana@engineer.com . Thank you in advance, and my apologies for the boldness of approaching you on this forum.&lt;BR /&gt;&lt;BR /&gt;Roxana</description>
      <pubDate>Wed, 26 Jan 2005 03:57:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470363#M211650</guid>
      <dc:creator>Roxana_4</dc:creator>
      <dc:date>2005-01-26T03:57:41Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470364#M211651</link>
      <description>Hi Marcel,&lt;BR /&gt;&lt;BR /&gt;I don't know what Roxana said before this. But I can see it is not a single disk (LUN) failure. If you go through the syslog.log messages, you can see that PV1, PV2 and PV3 of VG2 have all issued this message.&lt;BR /&gt;&lt;BR /&gt;If more than one LUN is causing such messages, I would suspect something is wrong with the controller on the Clariion, the HBA on the system, or anything between them.&lt;BR /&gt;&lt;BR /&gt;If you are experiencing performance problems along with this message, then it may be an intermittent problem. That is, the path may become available for some time, then go off-line, and then become available again. This may go on until the problem is fixed. There may be considerable performance degradation due to this.&lt;BR /&gt;&lt;BR /&gt;Let us know if this is the case. You can also provide the syslog.log in its entirety.&lt;BR /&gt;&lt;BR /&gt;With regards,&lt;BR /&gt;Mohan.</description>
      <pubDate>Wed, 26 Jan 2005 06:52:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470364#M211651</guid>
      <dc:creator>Mohanasundaram_1</dc:creator>
      <dc:date>2005-01-26T06:52:03Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470365#M211652</link>
      <description>Marcel - if navicli is broken, then find the person responsible or the service contract, and have an EMC technician come to the site and check the event logs.</description>
      <pubDate>Wed, 26 Jan 2005 07:26:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470365#M211652</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-26T07:26:05Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470366#M211653</link>
      <description>Hi again,&lt;BR /&gt;&lt;BR /&gt;I'll try to contact the person who can contact tech support; the maintenance contract is no longer active.&lt;BR /&gt;&lt;BR /&gt;In a few minutes I'll post the syslog file.&lt;BR /&gt;&lt;BR /&gt;Thanks for your time.&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Wed, 26 Jan 2005 07:44:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470366#M211653</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T07:44:05Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470367#M211654</link>
      <description>The entire syslog file.&lt;BR /&gt;&lt;BR /&gt;If some of you have some time to read it...&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Wed, 26 Jan 2005 08:10:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470367#M211654</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T08:10:15Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470368#M211655</link>
      <description>Hi Marcel,&lt;BR /&gt;&lt;BR /&gt;I can see the POWERFAILED and recovered messages for PVs in VG2 and VG3. So the problem must be as I was guessing: one of the paths is going on and off frequently. Trace the entire path and isolate the problem.&lt;BR /&gt;&lt;BR /&gt;Contrary to what you indicated in the first post, vg3 seems to be the one most battered by this problem. Your problem lies in one of the components: HBA, cable, switch, or Clariion controller.&lt;BR /&gt;&lt;BR /&gt;With regards,&lt;BR /&gt;Mohan.</description>
      <pubDate>Wed, 26 Jan 2005 08:26:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470368#M211655</guid>
      <dc:creator>Mohanasundaram_1</dc:creator>
      <dc:date>2005-01-26T08:26:44Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470369#M211656</link>
      <description>Hi again,&lt;BR /&gt;&lt;BR /&gt;thanks for the reply, Mohan.&lt;BR /&gt;&lt;BR /&gt;Some good news:&lt;BR /&gt;on another server (which was down till today) I have found a Navisphere installation which seems to work (no core dump).&lt;BR /&gt;But I still can't find the IP of the Clariion array, if it exists.&lt;BR /&gt;&lt;BR /&gt;Maybe some of you could help me find it in a config file.&lt;BR /&gt;I can't figure out which of these files it could be:&lt;BR /&gt;&lt;BR /&gt;l1000db1 # pwd&lt;BR /&gt;/etc/Navisphere&lt;BR /&gt;l1000db1 # ls -ltr&lt;BR /&gt;total 4&lt;BR /&gt;drwxr-xr-x   2 root       sys             96 Mar 27  2001 logs&lt;BR /&gt;drwxr-xr-x   2 root       sys             96 Mar 27  2001 backup&lt;BR /&gt;lrwxr-xr-x   1 root       sys             32 Mar 27  2001 agent.config -&amp;gt; /opt/Navisphere/bin/agent.config&lt;BR /&gt;lrwxr-xr-x   1 root       sys             30 Mar 27  2001 clsendtrap -&amp;gt; /opt/Navisphere/bin/clsendtrap&lt;BR /&gt;drwxrwxrwx   2 root       sys             96 Mar 27  2001 log&lt;BR /&gt;lrwxr-xr-x   1 root       sys             31 Mar 30  2001 Navimon.cfg -&amp;gt; /opt/Navisphere/bin/Navimon.cfg&lt;BR /&gt;drwxr-xr-x   2 root       sys           2048 Jan 26 15:47 messages&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Marcel</description>
      <pubDate>Wed, 26 Jan 2005 10:40:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470369#M211656</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T10:40:18Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470370#M211657</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;looking in /etc/Navisphere/agent.config&lt;BR /&gt;I found 2 devices: c6t0d0 &amp;amp; c7t0d2.&lt;BR /&gt;&lt;BR /&gt;Using those:&lt;BR /&gt;&lt;BR /&gt;l1000db1 # navicli -d c6t0d0 getlog -2000 | grep -i trespass&lt;BR /&gt;01/26/2005 16:42:04 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba9220   14318b0&lt;BR /&gt;01/26/2005 16:42:06 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba89cc   14318b0&lt;BR /&gt;01/26/2005 16:42:08 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba89cc   14318b0&lt;BR /&gt;01/26/2005 16:42:11 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba89cc   14318b0&lt;BR /&gt;01/26/2005 16:42:13 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba89cc   14318b0&lt;BR /&gt;01/26/2005 16:42:15 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba8408   14318b0&lt;BR /&gt;01/26/2005 16:42:17 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   ba8408   14318b0&lt;BR /&gt;l1000db1 # navicli -d c7t0d2 getlog -2000 | grep -i trespass&lt;BR /&gt;01/26/2005 16:42:25 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0b]   baa4b4   14318b0&lt;BR /&gt;01/26/2005 16:42:26 Bus 0 Enclosure 0 Disk 0(606) Unit Shutdown for Trespass              [0x01]   ba82c0   14318b0&lt;BR /&gt;01/26/2005 16:42:28 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x11]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:28 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x09]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:29 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0d]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:30 Bus 0 Enclosure 0 Disk 2(606) Unit Shutdown for Trespass              [0x03]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:30 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0a]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:30 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x10]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:32 Bus 0 Enclosure 0 Disk 4(606) Unit Shutdown for Trespass              [0x05]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:34 Bus 0 Enclosure 0 Disk 6(606) Unit Shutdown for Trespass              [0x07]   ba97e4   14318b0&lt;BR /&gt;01/26/2005 16:42:34 Bus 0 Enclosure 0 Disk 4(606) Unit Shutdown for Trespass              [0x05]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:34 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0c]   ba84ac   14318b0&lt;BR /&gt;01/26/2005 16:42:35 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x0e]   ba9d04   14318b0&lt;BR /&gt;01/26/2005 16:42:35 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x0f]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:35 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x12]   baa038   14318b0&lt;BR /&gt;01/26/2005 16:42:37 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x13]   baa224   14318b0&lt;BR /&gt;01/26/2005 16:42:37 Bus 0 Enclosure 0 Disk 0(606) Unit Shutdown for Trespass              [0x01]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:37 Bus 0 Enclosure 0 Disk 2(606) Unit Shutdown for Trespass              [0x03]   184db0   14318a8&lt;BR /&gt;01/26/2005 16:42:37 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x14]   826d30   ba8884&lt;BR /&gt;01/26/2005 16:42:37 Bus 0 Enclosure 1 Disk 0(606) Unit Shutdown for Trespass              [0x09]   826d30   ba8884&lt;BR /&gt;01/26/2005 16:42:38 Bus 0 Enclosure 2 Disk 0(606) Unit Shutdown for Trespass              [0x10]   826d30   ba8884&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;What the hell could be the problem?&lt;BR /&gt;Unfortunately I cannot verify the cables, or the hardware in general; I'm off site.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks, Marcel</description>
      <pubDate>Wed, 26 Jan 2005 10:53:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470370#M211657</guid>
      <dc:creator>Marcel  Preda</dc:creator>
      <dc:date>2005-01-26T10:53:05Z</dc:date>
    </item>
    <item>
      <title>Re: disk array, lvm,  POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470371#M211658</link>
      <description>Our agent config reads like this:&lt;BR /&gt;# SAMPLES:&lt;BR /&gt;# device c0t0d0 NAVISPHERE-1 "NAVISPHERE-1"&lt;BR /&gt;&lt;BR /&gt;# automatically detect manageable devices&lt;BR /&gt;device auto auto&lt;BR /&gt;&lt;BR /&gt;Maybe that's enough, but I'm not very convinced:&lt;BR /&gt;&lt;BR /&gt;root@xxxxx:/opt/Navisphere/bin&amp;gt;./navicli getlog -100&lt;BR /&gt;Error: getlog command failed&lt;BR /&gt;Cannot access device&lt;BR /&gt;&lt;BR /&gt;In the meantime You should use fcmsutil to check for adapter/cabling errors...&lt;BR /&gt;&lt;BR /&gt;/opt/fcms/bin/fcmsutil /dev/tdN stat&lt;BR /&gt;&lt;BR /&gt;root@xxxxx:/opt/Navisphere/bin&amp;gt;/opt/fcms/bin/fcmsutil /dev/td2 stat&lt;BR /&gt;Wed Jan 26 16:53:47 2005&lt;BR /&gt;Channel Statistics&lt;BR /&gt;&lt;BR /&gt;Statistics From Link Status Registers ...&lt;BR /&gt;Loss of signal                    22     Bad Rx Char                     2379&lt;BR /&gt;Loss of Sync                      46     Link Fail                         41&lt;BR /&gt;Received EOFa                      0     Discarded Frame                    0&lt;BR /&gt;Bad CRC                            0     Protocol Error                     0&lt;BR /&gt;&lt;BR /&gt;Do a reset on the counters and see if Loss of signal, Bad Rx Char or similar counters increase; if yes, there might be a hardware issue.&lt;BR /&gt;&lt;BR /&gt;These are also interesting:&lt;BR /&gt;Storm Statistics ...&lt;BR /&gt;Elastic Store Error Storm                                   0&lt;BR /&gt;Link Fail storm .                                           0&lt;BR /&gt;LIP(f8, xx) storm .                                         0&lt;BR /&gt;Loss Of Signal Storm                                        0&lt;BR /&gt;Out Of Sync Storm                                           0&lt;BR /&gt;Link Fault Storm                                            0&lt;BR /&gt;NOS_OLS Storm                                               1</description>
      <pubDate>Wed, 26 Jan 2005 10:57:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-array-lvm-powerfailed/m-p/3470371#M211658</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-26T10:57:55Z</dc:date>
    </item>
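    <!--
      The counter check described above, done without resetting anything:
      sample the link status registers twice and diff the readings. Only the
      fcmsutil invocation shown in the post is used; the device names and the
      five-minute interval are illustrative:

        for hba in /dev/td0 /dev/td1
        do
            /opt/fcms/bin/fcmsutil $hba stat > /tmp/fcstat.before
            sleep 300
            /opt/fcms/bin/fcmsutil $hba stat > /tmp/fcstat.after
            echo "=== $hba: counters that moved in 5 minutes ==="
            diff /tmp/fcstat.before /tmp/fcstat.after   # the leading timestamp line always differs
        done

      Steadily climbing Loss of signal / Bad Rx Char / Link Fail counts point
      at cabling or HBA trouble, per the advice in the post above.
    -->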
  </channel>
</rss>

