<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVMS Cluster, Fibrechannel and the WWID in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790400#M9737</link>
    <description>We distribute the paths evenly over the controllers at boot time. So, the 1st and 5th disks get path 1, the 2nd and 6th disks get path 2, etc.&lt;BR /&gt;This is because our load changes all the time due to the nature of the applications.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Fri, 19 May 2006 03:22:52 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2006-05-19T03:22:52Z</dc:date>
    <item>
      <title>OpenVMS Cluster, Fibrechannel and the WWID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790396#M9733</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I have a cluster of an rx7620 and an rx4640 with dual-port QLogic FC cards installed in each. These are connected to a SAN.&lt;BR /&gt;&lt;BR /&gt;If I do a SHOW DEV FG, it shows me the FC port name and FC node name for each card :-&lt;BR /&gt;&lt;BR /&gt;%SYSMAN-I-OUTPUT, command execution on node ITP002&lt;BR /&gt;Device FGA0:, device type QLogic ISP23xx FibreChannel, is online, shareable,&lt;BR /&gt;    error logging is enabled.&lt;BR /&gt;    Error count                    0    Operations completed                 95&lt;BR /&gt;    Owner process                 ""    Owner UIC                      [SYSTEM]&lt;BR /&gt;    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W&lt;BR /&gt;    Reference count                0    Default buffer size                   0&lt;BR /&gt;    Current preferred CPU Id       1    Fastpath                              1&lt;BR /&gt;    Current Interrupt CPU Id       1&lt;BR /&gt;    FC Port Name 5006-0B00-0038-A3A4    FC Node Name        5006-0B00-0038-A3A5&lt;BR /&gt;Device FGB0:, device type QLogic ISP23xx FibreChannel, is online, shareable,&lt;BR /&gt;    error logging is enabled.&lt;BR /&gt;    Error count                    0    Operations completed                 95&lt;BR /&gt;    Owner process                 ""    Owner UIC                      [SYSTEM]&lt;BR /&gt;    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W&lt;BR /&gt;    Reference count                0    Default buffer size                   0&lt;BR /&gt;    Current preferred CPU Id       0    Fastpath                              1&lt;BR /&gt;    Current Interrupt CPU Id       0&lt;BR /&gt;    FC Port Name 5006-0B00-0038-A3A6    FC Node Name        5006-0B00-0038-A3A7&lt;BR /&gt;%SYSMAN-I-OUTPUT, command execution on node ITP001&lt;BR /&gt;Device FGA0:, device type QLogic ISP23xx FibreChannel, is online, 
shareable,&lt;BR /&gt;    error logging is enabled.&lt;BR /&gt;    Error count                    0    Operations completed                 95&lt;BR /&gt;    Owner process                 ""    Owner UIC                      [SYSTEM]&lt;BR /&gt;    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W&lt;BR /&gt;    Reference count                0    Default buffer size                   0&lt;BR /&gt;    Current preferred CPU Id       3    Fastpath                              1&lt;BR /&gt;    Current Interrupt CPU Id       3&lt;BR /&gt;    FC Port Name 5006-0B00-0032-9914    FC Node Name        5006-0B00-0032-9915&lt;BR /&gt;Device FGB0:, device type QLogic ISP23xx FibreChannel, is online, shareable,&lt;BR /&gt;    error logging is enabled.&lt;BR /&gt;    Error count                    0    Operations completed                 95&lt;BR /&gt;    Owner process                 ""    Owner UIC                      [SYSTEM]&lt;BR /&gt;    Owner process ID        00000000    Dev Prot              S:RWPL,O:RWPL,G,W&lt;BR /&gt;    Reference count                0    Default buffer size                   0&lt;BR /&gt;    Current preferred CPU Id       2    Fastpath                              1&lt;BR /&gt;    Current Interrupt CPU Id       2&lt;BR /&gt;    FC Port Name 5006-0B00-0032-9916    FC Node Name        5006-0B00-0032-9917&lt;BR /&gt;SYSMAN&amp;gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If I do a SHOW DEVICE from either system, it displays the Fibre Channel disks and the path to the disks :-&lt;BR /&gt;&lt;BR /&gt;ITP002&amp;gt;&amp;gt;sh dev d/multi&lt;BR /&gt;Device                  Device           Error         Current&lt;BR /&gt; Name                   Status           Count  Paths    path&lt;BR /&gt;$1$DGA2:      (ITP002)  Online               0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA131:    (ITP002)  Online               0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA143:    (ITP002)  Online       
        0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA155:    (ITP002)  Online               0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA167:    (ITP002)  Online               0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA1371:   (ITP002)  ShadowSetMember      0   3/ 3  FGB0.5006-0484-49AE-31F2&lt;BR /&gt;$1$DGA1389:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1407:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1425:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1443:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1449:   (ITP002)  Mounted              0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1455:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;$1$DGA1461:   (ITP002)  ShadowSetMember      0   3/ 3  FGA0.5006-0484-49AE-31DD&lt;BR /&gt;&lt;BR /&gt;My questions are:-&lt;BR /&gt;&lt;BR /&gt;The WWIDs of the current paths on both nodes of the cluster are the same, but I can set different paths to access different devices on a per-node basis.&lt;BR /&gt;&lt;BR /&gt;Does this mean all the FC traffic for a particular path is going across 1 HBA for the entire cluster?&lt;BR /&gt;&lt;BR /&gt;How come the WWID of the current path in the SHOW DEVICE display does not match either the port name or node name of the SHOW DEVICE/FULL FG command?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 18 May 2006 06:49:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790396#M9733</guid>
      <dc:creator>Andrew Rycroft1</dc:creator>
      <dc:date>2006-05-18T06:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster, Fibrechannel and the WWID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790397#M9734</link>
      <description>&amp;gt; Does this mean all the FC traffic for a particular path is going across 1 HBA for the entire cluster?&lt;BR /&gt;&lt;BR /&gt;No. Each node will route its own traffic to a single virtual disk via one path. The traffic for different devices can go through different paths.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; does not match either the port name or node name&lt;BR /&gt;&lt;BR /&gt;'SHOW DEVICE/MULTIPATH' displays the WWPNs of the storage controller ports (the targets), while 'SHOW DEVICE/FULL FG' displays the WWPNs and WWNNs of the server's Fibre Channel adapter ports (the initiators).</description>
      <pubDate>Thu, 18 May 2006 07:35:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790397#M9734</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-05-18T07:35:40Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster, Fibrechannel and the WWID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790398#M9735</link>
      <description>-- Does this mean all the FC traffic for a particular path is going across 1 HBA for the entire cluster?&lt;BR /&gt;&lt;BR /&gt;The HBA listed in the path is for that node and that device.  All the traffic for that path on that node will go through that HBA for that device.  It is not a cluster-wide setting.  AFAIK you have to manually decide which path you want for each device if you don't want the default path when the system starts up.  So for traffic-balancing purposes you would need a bit of knowledge about the device usage.  Of course, VMS will automatically switch paths if something should go wrong on any given path and the device becomes unreachable.&lt;BR /&gt;&lt;BR /&gt;-- How come the WWID of the current path in the SHOW DEVICE display does not match either the port name or node name of the SHOW DEVICE/FULL FG command?&lt;BR /&gt;&lt;BR /&gt;The WWID of the current path in the SHOW DEVICE display for the disk is that of the storage element, not the HBA on each node.  In my case (I have a pair of HSG80s) my SHOW DEVICE for the disk shows the WWID of the port on the HSG that the LUN is available on - not the WWID of the HBA.</description>
      <pubDate>Thu, 18 May 2006 07:40:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790398#M9735</guid>
      <dc:creator>John H. Reinhardt</dc:creator>
      <dc:date>2006-05-18T07:40:11Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster, Fibrechannel and the WWID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790399#M9736</link>
      <description>Andrew, the attached document may help you to understand how this works a little better.&lt;BR /&gt;&lt;BR /&gt;Rob.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 19 May 2006 02:53:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790399#M9736</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2006-05-19T02:53:14Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster, Fibrechannel and the WWID</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790400#M9737</link>
      <description>We distribute the paths evenly over the controllers at boot time. So, the 1st and 5th disks get path 1, the 2nd and 6th disks get path 2, etc.&lt;BR /&gt;This is because our load changes all the time due to the nature of the applications.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 19 May 2006 03:22:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-fibrechannel-and-the-wwid/m-p/3790400#M9737</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-05-19T03:22:52Z</dc:date>
    </item>
  </channel>
</rss>

