<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Live editing Network settings Question in Array Setup and Networking</title>
    <link>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186205#M3175</link>
    <description>&lt;P&gt;Hello Sheldon,&lt;/P&gt;&lt;P&gt;The iSCSI ports go to a redundant, dedicated two-switch iSCSI stack, and the management ports go to a separate redundant two-switch management stack. Both are so isolated that they could realistically be merged.&lt;/P&gt;&lt;P&gt;The thought process was that they wanted to protect against NIC failure (built-in or 4-port NIC), controller failure, and network equipment failure, since our systems interface with heavy production equipment. The only reason I don't have a second Nimble is that it was not in the budget.&lt;/P&gt;&lt;P&gt;All hosts are running NCM 7.0.2. When I tried this on the site that was being commissioned, I saw high latency but no system dropout, though I don't know whether that was just a reporting blip as connections and the network were adjusted.&lt;/P&gt;&lt;P&gt;Thank you for your reply. I'll ask HPE directly and post results (most likely in a week, once an equipment stand-down can be scheduled).&lt;/P&gt;</description>
    <pubDate>Tue, 11 Apr 2023 14:28:48 GMT</pubDate>
    <dc:creator>Darkzadow</dc:creator>
    <dc:date>2023-04-11T14:28:48Z</dc:date>
    <item>
      <title>Live editing Network settings Question</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186193#M3173</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I have an HF40 already up and running with many (80) production systems on it, connected to a 5-host VMware cluster. For reasons unknown, at commissioning they only used 2 of the 4 available iSCSI ports. I would like to add the unused ports to my iSCSI interfaces. Will this tank my system?&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Brandon&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2023-04-11 090423.png" style="width: 1368px;"&gt;&lt;img src="https://community.hpe.com/t5/image/serverpage/image-id/134579i20180423FDE74FC9/image-size/large?v=v2&amp;amp;px=2000" role="button" title="Screenshot 2023-04-11 090423.png" alt="Screenshot 2023-04-11 090423.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Apr 2023 04:20:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186193#M3173</guid>
      <dc:creator>Darkzadow</dc:creator>
      <dc:date>2023-04-13T04:20:11Z</dc:date>
    </item>
    <item>
      <title>Re: Live editing Network settings Question</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186201#M3174</link>
      <description>&lt;P&gt;It &lt;EM&gt;should&lt;/EM&gt; be no problem at all. "Any networking port can be used for any purpose."&lt;BR /&gt;Assuming the ESXi hosts each have the latest &lt;EM&gt;HPE Storage Connection Manager &lt;STRONG&gt;(NCM) &lt;/STRONG&gt;for VMware&lt;/EM&gt; installed, they will simply have more array data ports from which the discovery port can draw.&lt;BR /&gt;You may want to contact Nimble Support and have a support engineer in a virtual room while you enable the two remaining ports.&lt;/P&gt;&lt;P&gt;That's an interesting port layout. HPE Best Practices would typically have eth0A and eth0B both doing management, preferably using &lt;U&gt;two&lt;/U&gt; switches. In such a layout, all ports ending in "A" or "C" would go to switch 1, and all ending in "B" or "D" to switch 2. This spreads the communications in a way that reduces or eliminates single points of failure. Of course, if everything is connected to a single switch, then it's academic.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2023 14:06:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186201#M3174</guid>
      <dc:creator>Sheldon Smith</dc:creator>
      <dc:date>2023-04-11T14:06:10Z</dc:date>
    </item>
    <item>
      <title>Re: Live editing Network settings Question</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186205#M3175</link>
      <description>&lt;P&gt;Hello Sheldon,&lt;/P&gt;&lt;P&gt;The iSCSI ports go to a redundant, dedicated two-switch iSCSI stack, and the management ports go to a separate redundant two-switch management stack. Both are so isolated that they could realistically be merged.&lt;/P&gt;&lt;P&gt;The thought process was that they wanted to protect against NIC failure (built-in or 4-port NIC), controller failure, and network equipment failure, since our systems interface with heavy production equipment. The only reason I don't have a second Nimble is that it was not in the budget.&lt;/P&gt;&lt;P&gt;All hosts are running NCM 7.0.2. When I tried this on the site that was being commissioned, I saw high latency but no system dropout, though I don't know whether that was just a reporting blip as connections and the network were adjusted.&lt;/P&gt;&lt;P&gt;Thank you for your reply. I'll ask HPE directly and post results (most likely in a week, once an equipment stand-down can be scheduled).&lt;/P&gt;</description>
      <pubDate>Tue, 11 Apr 2023 14:28:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/live-editing-network-settings-question/m-p/7186205#M3175</guid>
      <dc:creator>Darkzadow</dc:creator>
      <dc:date>2023-04-11T14:28:48Z</dc:date>
    </item>
  </channel>
</rss>

