<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: EMC powerpath v5.0 in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933483#M22706</link>
    <description>You have to install the NaviSphere agent in order to let PowerPath control the EMC CLARiiON disks. The disks are CLARiiON, not Symmetrix. Make sure agent.config is right, i.e. no AutoTrespass. The install does not require a reboot and takes about 3 minutes.</description>
    <pubDate>Wed, 31 Jan 2007 12:35:21 GMT</pubDate>
    <dc:creator>John Guster</dc:creator>
    <dc:date>2007-01-31T12:35:21Z</dc:date>
    <item>
      <title>EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933476#M22699</link>
      <description>Help, I don't know why, but the EMC powermt command does not 'see' the devices.&lt;BR /&gt;&lt;BR /&gt;I've got EMC PowerPath installed.&lt;BR /&gt;atom:/root $ swlist -l product | grep -i emc&lt;BR /&gt;  EMCpower              HP.5.0.0_b132  PowerPath  &lt;BR /&gt;I've got the license installed correctly.&lt;BR /&gt;powermt version&lt;BR /&gt;EMC powermt for PowerPath (c) Version 5.0.0 (build 132)&lt;BR /&gt;&lt;BR /&gt;The problem is: #powermt config &lt;BR /&gt;fails with "Error: device(s) not found"&lt;BR /&gt;&lt;BR /&gt;But ioscan -fnC disk&lt;BR /&gt;shows four CLAIMED devices. One is the primary path and there is one alternate path which I defined for the LVM volume group. The other two paths I'm not using, but I think they point to the same device. I am using the device successfully. It's a 100 GB disk.&lt;BR /&gt;&lt;BR /&gt;Although the disk storage is working, I'm just concerned that there is no failover path defined.&lt;BR /&gt;</description>
      <pubDate>Fri, 26 Jan 2007 12:01:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933476#M22699</guid>
      <dc:creator>john flores</dc:creator>
      <dc:date>2007-01-26T12:01:32Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933477#M22700</link>
      <description>You may want to try running powercf -q,&lt;BR /&gt;then powermt check and powermt config.</description>
      <pubDate>Fri, 26 Jan 2007 12:58:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933477#M22700</guid>
      <dc:creator>Taylor Lewick_2</dc:creator>
      <dc:date>2007-01-26T12:58:00Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933478#M22701</link>
      <description>I tried doing what you suggested but it didn't make a difference.&lt;BR /&gt;atom:/root $ powercf -q&lt;BR /&gt;atom:/root $ powermt check &lt;BR /&gt;Device(s) not found.&lt;BR /&gt;atom:/root $ powermt config&lt;BR /&gt;atom:/root $ powermt display dev=all&lt;BR /&gt;Device(s) not found.&lt;BR /&gt;&lt;BR /&gt;Just the same I appreciate your input.&lt;BR /&gt;&lt;BR /&gt;I have a case with EMC open, so hopefully they can tell me what to do.&lt;BR /&gt;Thank you.</description>
      <pubDate>Fri, 26 Jan 2007 14:08:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933478#M22701</guid>
      <dc:creator>john flores</dc:creator>
      <dc:date>2007-01-26T14:08:06Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933479#M22702</link>
      <description>ioscan is not enough. You need to create device files for each EMC LUN by executing insf -e.&lt;BR /&gt;After that, run ioscan -fnC disk to check that each h/w path has its device file. Once they are there, you can run powermt config/display/save....</description>
      <pubDate>Mon, 29 Jan 2007 15:54:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933479#M22702</guid>
      <dc:creator>John Guster</dc:creator>
      <dc:date>2007-01-29T15:54:41Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933480#M22703</link>
      <description>Hope you can read the following output from the ioscan -fnC disk command. It shows I have device files for 4 "DGC" devices. The first one, c4t0d0, is what I'm using as the primary link, and c8t0d0 is the alternate link in my VG01. I'm actually using the 100 GB fibre disk. So my only problem is that the powermt command doesn't see these devices. Also, I don't know where to look for the LUN number. I've rebooted many times; the last time was this morning. I've tried insf -e and it doesn't do anything new. Got any other ideas? Also, the EMC processes are running.&lt;BR /&gt;&lt;BR /&gt;disk      3  0/2/1/0.100.11.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5&lt;BR /&gt;                            /dev/dsk/c4t0d0   /dev/rdsk/c4t0d0&lt;BR /&gt;disk      4  0/2/1/0.100.27.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5&lt;BR /&gt;                            /dev/dsk/c6t0d0   /dev/rdsk/c6t0d0&lt;BR /&gt;disk      5  0/3/1/0.100.11.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5&lt;BR /&gt;                            /dev/dsk/c8t0d0   /dev/rdsk/c8t0d0&lt;BR /&gt;disk      6  0/3/1/0.100.27.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5&lt;BR /&gt;                            /dev/dsk/c10t0d0   /dev/rdsk/c10t0d0&lt;BR /&gt;atom:/root $ &lt;BR /&gt;</description>
      <pubDate>Mon, 29 Jan 2007 16:29:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933480#M22703</guid>
      <dc:creator>john flores</dc:creator>
      <dc:date>2007-01-29T16:29:59Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933481#M22704</link>
      <description>Check the file /etc/Navisphere/agent.config&lt;BR /&gt;for the entry # OptionsSupported AutoTrespass&lt;BR /&gt;&lt;BR /&gt;The entry should have the # (i.e. be commented out), as above.&lt;BR /&gt;&lt;BR /&gt;Restart the Navisphere agent.&lt;BR /&gt;&lt;BR /&gt;Then run&lt;BR /&gt;# powermt config&lt;BR /&gt;# powermt save&lt;BR /&gt;# powermt display&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Jan 2007 03:24:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933481#M22704</guid>
      <dc:creator>Sameer_Nirmal</dc:creator>
      <dc:date>2007-01-30T03:24:40Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933482#M22705</link>
      <description>I do not have Navisphere installed. I was trying to avoid installing it until this problem is solved.</description>
      <pubDate>Tue, 30 Jan 2007 09:30:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933482#M22705</guid>
      <dc:creator>john flores</dc:creator>
      <dc:date>2007-01-30T09:30:03Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933483#M22706</link>
      <description>You have to install the NaviSphere agent in order to let PowerPath control the EMC CLARiiON disks. The disks are CLARiiON, not Symmetrix. Make sure agent.config is right, i.e. no AutoTrespass. The install does not require a reboot and takes about 3 minutes.</description>
      <pubDate>Wed, 31 Jan 2007 12:35:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933483#M22706</guid>
      <dc:creator>John Guster</dc:creator>
      <dc:date>2007-01-31T12:35:21Z</dc:date>
    </item>
    <item>
      <title>Re: EMC powerpath v5.0</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933484#M22707</link>
      <description>I finally got it. EMC told me.&lt;BR /&gt;&lt;BR /&gt;I ran the following command:&lt;BR /&gt;powermt manage class=clariion &lt;BR /&gt;which told me that "clariion devices are not managed but will be managed after I&lt;BR /&gt;reboot." After rebooting I got the following: &lt;BR /&gt;&lt;BR /&gt;atom:/root $ powermt manage class=clariion &lt;BR /&gt;Warning: class Clariion already managed. &lt;BR /&gt;atom:/root $ powermt display dev=all &lt;BR /&gt;CLARiiON ID=APM00042002404 [SG_Atom] &lt;BR /&gt;Logical device ID=600601602B091100849A2BB4B04CDA11 [LUN 40] &lt;BR /&gt;state=alive; policy=CLAROpt; priority=0; queued-IOs=0 &lt;BR /&gt;Owner: default=SP A, current=SP A &lt;BR /&gt;==============================================================================&lt;BR /&gt;&lt;BR /&gt;---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats --- &lt;BR /&gt;###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors &lt;BR /&gt;==============================================================================&lt;BR /&gt;&lt;BR /&gt;  10 0/3/1/0.100.27.0.0.0.0    c10t0d0   SP B0     active  alive      0      0 &lt;BR /&gt;   4 0/2/1/0.100.11.0.0.0.0    c4t0d0    SP A1     active  alive      0      0 &lt;BR /&gt;   6 0/2/1/0.100.27.0.0.0.0    c6t0d0    SP B1     active  alive      0      0 &lt;BR /&gt;   8 0/3/1/0.100.11.0.0.0.0    c8t0d0    SP A0     active  alive      0      0 &lt;BR /&gt;&lt;BR /&gt;I ran the rest of the commands EMC support recommended: &lt;BR /&gt;&lt;BR /&gt;atom:/root $ powermt save &lt;BR /&gt;atom:/root $ powercf -q &lt;BR /&gt;Unexpected error occurred. 
&lt;BR /&gt;atom:/root $ ioscan -fnC disk &lt;BR /&gt;Class     I  H/W Path        Driver   S/W State   H/W Type     Description &lt;BR /&gt;=========================================================================== &lt;BR /&gt;disk      0  0/0/2/0.0.0.0   sdisk    CLAIMED     DEVICE       TEAC    DV-28E-C &lt;BR /&gt;                            /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0 &lt;BR /&gt;disk      1  0/1/1/0.0.0     sdisk    CLAIMED     DEVICE       HP 73.4GMAS3735NC &lt;BR /&gt;                            /dev/dsk/c2t0d0   /dev/rdsk/c2t0d0 &lt;BR /&gt;disk      2  0/1/1/0.1.0     sdisk    CLAIMED     DEVICE       HP 73.4GMAS3735NC &lt;BR /&gt;                            /dev/dsk/c2t1d0   /dev/rdsk/c2t1d0 &lt;BR /&gt;disk      3  0/2/1/0.100.11.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5 &lt;BR /&gt;                            /dev/dsk/c4t0d0   /dev/rdsk/c4t0d0 &lt;BR /&gt;disk      4  0/2/1/0.100.27.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5 &lt;BR /&gt;                            /dev/dsk/c6t0d0   /dev/rdsk/c6t0d0 &lt;BR /&gt;disk      5  0/3/1/0.100.11.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5 &lt;BR /&gt;                            /dev/dsk/c8t0d0   /dev/rdsk/c8t0d0 &lt;BR /&gt;disk      6  0/3/1/0.100.27.0.0.0.0  sdisk    CLAIMED     DEVICE       DGC     CX700WDR5 &lt;BR /&gt;                            /dev/dsk/c10t0d0   /dev/rdsk/c10t0d0 &lt;BR /&gt;atom:/root $ insf -eC disk &lt;BR /&gt;insf: Installing special files for sdisk instance 0 address 0/0/2/0.0.0.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 1 address 0/1/1/0.0.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 2 address 0/1/1/0.1.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 3 address 0/2/1/0.100.11.0.0.0.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 4 address 0/2/1/0.100.27.0.0.0.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 5 address 
0/3/1/0.100.11.0.0.0.0 &lt;BR /&gt;insf: Installing special files for sdisk instance 6 address 0/3/1/0.100.27.0.0.0.0 &lt;BR /&gt;atom:/root $ powermt config &lt;BR /&gt;atom:/root $ powermt display dev=all &lt;BR /&gt;CLARiiON ID=APM00042002404 [SG_Atom] &lt;BR /&gt;Logical device ID=600601602B091100849A2BB4B04CDA11 [LUN 40] &lt;BR /&gt;state=alive; policy=CLAROpt; priority=0; queued-IOs=0 &lt;BR /&gt;Owner: default=SP A, current=SP A &lt;BR /&gt;==============================================================================&lt;BR /&gt;&lt;BR /&gt;---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats --- &lt;BR /&gt;###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors &lt;BR /&gt;==============================================================================&lt;BR /&gt;&lt;BR /&gt;  10 0/3/1/0.100.27.0.0.0.0    c10t0d0   SP B0     active  alive      0      0 &lt;BR /&gt;   4 0/2/1/0.100.11.0.0.0.0    c4t0d0    SP A1     active  alive      0      0 &lt;BR /&gt;   6 0/2/1/0.100.27.0.0.0.0    c6t0d0    SP B1     active  alive      0      0 &lt;BR /&gt;   8 0/3/1/0.100.11.0.0.0.0    c8t0d0    SP A0     active  alive      0      0 &lt;BR /&gt;</description>
      <pubDate>Tue, 13 Feb 2007 16:58:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-powerpath-v5-0/m-p/3933484#M22707</guid>
      <dc:creator>john flores</dc:creator>
      <dc:date>2007-02-13T16:58:03Z</dc:date>
    </item>
  </channel>
</rss>