<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: C Class enclosure interconnect slots in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524339#M7983</link>
    <description>Quad port mezz card?&lt;BR /&gt;&lt;BR /&gt;Are you satisfied with the VC performance with so many systems using it?  VC with only 4 x 4 Gbps is not what I would consider enough for heavy usage.  I admit that it's probably unlikely that all 16 servers would be pushing past 1 Gbps at the same time, but I don't want that hassle if I can design something else.&lt;BR /&gt;&lt;BR /&gt;VC looks great, and if it could use 8 x 4 Gbps FC ports (or even 4 x 8 Gbps), it would sell me.  However, our network people would rather lose a limb than use VC.  I don't know why, and it gives me the impression it's about a loss of "individuality".&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Sun, 01 Nov 2009 03:57:41 GMT</pubDate>
    <dc:creator>S Petersen</dc:creator>
    <dc:date>2009-11-01T03:57:41Z</dc:date>
    <item>
      <title>C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524332#M7976</link>
      <description>Greetings,&lt;BR /&gt;&lt;BR /&gt;I found that we have been given some Cisco 9124e switches, so I thought I should give them a go.  We already have SAN pass-through modules in interconnects 3 and 4.  I read some vague document that suggested you could have a mixture of SAN pass-through modules, 9124e switches and Virtual Connect modules and they could all work.&lt;BR /&gt;&lt;BR /&gt;So, I put the 9124e's into interconnects 7 and 8 in one enclosure with little in it, and nothing untoward happened.  I then found out that that particular enclosure has a network problem, and I needed to move the 9124e's to another enclosure, into interconnects 7 and 8.  As soon as they went in, a 680C complained about a port mismatch.  So I moved them to interconnects 5 and 6, and another server complained about a mismatch.&lt;BR /&gt;&lt;BR /&gt;Some of the system log says:&lt;BR /&gt;&lt;BR /&gt;A port mismatch was found with server blade bay 8 and interconnect bay 6.&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;Blade in bay #8 status changed from OK to Degraded.&lt;BR /&gt;Mismatching I/O was detected on Blade 8, Mezz card 2, Port 3.&lt;BR /&gt;&lt;BR /&gt;Once I withdrew the two 9124e's, all seemed to be OK again.  The actual servers did not notice anything, as far as I can see.&lt;BR /&gt;&lt;BR /&gt;The dumb thing about this is that Mezz card 2 does not have anything in it.  However, looking at the Graphical View, there is a tick in Mezz slot 2.  The ports that, in my opinion, are not even there say they are using bays 5, 6, 7 and 8.&lt;BR /&gt;&lt;BR /&gt;The other blade in another slot did the same thing.  It complained about a mezz card that is not there.&lt;BR /&gt;&lt;BR /&gt;What could be the problem?  Is the solution just unticking a box in Port Mapping?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Stephen</description>
      <pubDate>Fri, 30 Oct 2009 05:03:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524332#M7976</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-10-30T05:03:54Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524333#M7977</link>
      <description>Do you know the c-Class port mapping basics?&lt;BR /&gt;&lt;BR /&gt;What mezzanine cards do you have in the server, and in which slots?</description>
      <pubDate>Fri, 30 Oct 2009 06:41:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524333#M7977</guid>
      <dc:creator>JKytsi</dc:creator>
      <dc:date>2009-10-30T06:41:17Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524334#M7978</link>
      <description>---- Do you know the c-Class port mapping basics?&lt;BR /&gt;&lt;BR /&gt;I expect that by that you mean that the onboard NICs go to interconnect bays 1 and 2.&lt;BR /&gt;&lt;BR /&gt;Mezz card 1 goes to interconnects 3 and 4.  That leaves mezz card 2 going to 5 and 6, and 7 and 8.&lt;BR /&gt;&lt;BR /&gt;---- What mezzanine cards do you have in the server, and in which slots?&lt;BR /&gt;&lt;BR /&gt;All our servers have mezz cards in slot 1.  Generally, for normal usage, we only have interconnect bays 1, 2, 3 and 4 in use.&lt;BR /&gt;&lt;BR /&gt;I had a look at another of the blades in the enclosure that did not complain.  In OA, it clearly states that Mezz slot 2 has no card present.  The two blades that complained look different in OA.  The Mezz slot 2 tick box is ticked, and the greyed-out ports are mapped to the right bays.  What I did not realise is that two of the blades actually have cards there.  Table view shows very clearly that Mezz slot 2 has some quad port 1 Gb NICs.&lt;BR /&gt;&lt;BR /&gt;This changes my plans a lot.  The servers that complained are Hyper-V and obviously need these extra NICs.&lt;BR /&gt;&lt;BR /&gt;Thanks for making me look in the right spot.&lt;BR /&gt;&lt;BR /&gt;Stephen</description>
      <pubDate>Fri, 30 Oct 2009 08:32:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524334#M7978</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-10-30T08:32:08Z</dc:date>
    </item>
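    <!-- Editor's note: a minimal sketch (Python) of the half-height c-Class port mapping
         recapped above, assuming the wiring the thread describes: onboard NICs to bays
         1/2, Mezz 1 to bays 3/4, Mezz 2 ports 1/2 to bays 5/6 and ports 3/4 to bays 7/8.
         The names here are illustrative only, not an HP tool or API.

    # Which interconnect bay a half-height blade port is hard-wired to.
    HALF_HEIGHT_MAP = {
        ("onboard", 1): 1,  # embedded NIC 1 maps to interconnect bay 1
        ("onboard", 2): 2,  # embedded NIC 2 maps to interconnect bay 2
        ("mezz1", 1): 3,    # Mezz 1 port 1 maps to bay 3
        ("mezz1", 2): 4,    # Mezz 1 port 2 maps to bay 4
        ("mezz2", 1): 5,    # Mezz 2 port 1 maps to bay 5
        ("mezz2", 2): 6,    # Mezz 2 port 2 maps to bay 6
        ("mezz2", 3): 7,    # Mezz 2 port 3 maps to bay 7 (quad port cards only)
        ("mezz2", 4): 8,    # Mezz 2 port 4 maps to bay 8 (quad port cards only)
    }

    def interconnect_bay(slot, port):
        """Return the interconnect bay a given (slot, port) pair is wired to."""
        return HALF_HEIGHT_MAP[(slot, port)]

    # Example: a quad port NIC in Mezz 2 lands its third port in bay 7.
    assert interconnect_bay("mezz2", 3) == 7
    -->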
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524335#M7979</link>
      <description>The 9124 SAN switches in a c7000 enclosure in interconnects 7&amp;amp;8 will only connect to full-height blades if you have an HBA in Mezz 3 and a 2 port NIC in Mezz 2.&lt;BR /&gt;&lt;BR /&gt;They will not connect to half-height blades at all.</description>
      <pubDate>Fri, 30 Oct 2009 11:46:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524335#M7979</guid>
      <dc:creator>Adrian Clint</dc:creator>
      <dc:date>2009-10-30T11:46:32Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524336#M7980</link>
      <description>&lt;BR /&gt;--- The 9124 SAN switches in a c7000 enclosure in interconnects 7&amp;amp;8 will only connect to full-height blades if you have an HBA in Mezz 3 and a 2 port NIC in Mezz 2.&lt;BR /&gt;&lt;BR /&gt;--- They will not connect to half-height blades at all.&lt;BR /&gt;&lt;BR /&gt;Going by the port mapping diagram for the half-height servers, Mezz 2 gives an indication that it can connect to 7/8.  The OA admin guide shows that a Mezz 2 that uses a PCIe x8 (interesting) slot can address interconnects 5 and 6 as well as 7 and 8?  The full-height blades are even more interesting.&lt;BR /&gt;&lt;BR /&gt;My main error was that, as there was nothing physically in interconnect bays 5 and 6, I assumed they were not being used.  Someone has put NC326m Dual Port 1 Gb NIC for c-Class BladeSystem cards in some of the servers.  They use interconnects 5 and 6 (half height; 7 and 8 for full height) even though nothing can be seen in the interconnect bays.&lt;BR /&gt;&lt;BR /&gt;Anyway, I have learnt a valuable lesson from this, and it changes our SAN architecture design a lot.  VC just can't offer the bandwidth for virtualisation systems, especially in a full C Class running hot little 490's.  This was not so much about the 9124e; it was about using the interconnects.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 30 Oct 2009 21:20:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524336#M7980</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-10-30T21:20:40Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524337#M7981</link>
      <description>We have two enclosures full of 490 servers, and we don't see any problems with VC.&lt;BR /&gt;&lt;BR /&gt;You need to have a quad-port mezz card to reach interconnect bays 7/8 from a half-height machine.</description>
      <pubDate>Sat, 31 Oct 2009 17:48:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524337#M7981</guid>
      <dc:creator>JKytsi</dc:creator>
      <dc:date>2009-10-31T17:48:41Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524338#M7982</link>
      <description>As Jarkko said (and the c7000 wiring diagram indicates),&lt;BR /&gt;&lt;BR /&gt;The only way a half-height blade can make use of IC bays 7 and 8 is if there is a quad port NIC in Mezz 2, in which case NIC port 3 maps to bay 7 and NIC port 4 maps to bay 8.&lt;BR /&gt;&lt;BR /&gt;Dave.</description>
      <pubDate>Sun, 01 Nov 2009 03:51:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524338#M7982</guid>
      <dc:creator>The Brit</dc:creator>
      <dc:date>2009-11-01T03:51:32Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524339#M7983</link>
      <description>Quad port mezz card?&lt;BR /&gt;&lt;BR /&gt;Are you satisfied with the VC performance with so many systems using it?  VC with only 4 x 4 Gbps is not what I would consider enough for heavy usage.  I admit that it's probably unlikely that all 16 servers would be pushing past 1 Gbps at the same time, but I don't want that hassle if I can design something else.&lt;BR /&gt;&lt;BR /&gt;VC looks great, and if it could use 8 x 4 Gbps FC ports (or even 4 x 8 Gbps), it would sell me.  However, our network people would rather lose a limb than use VC.  I don't know why, and it gives me the impression it's about a loss of "individuality".&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 01 Nov 2009 03:57:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524339#M7983</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-11-01T03:57:41Z</dc:date>
    </item>
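    <!-- Editor's note: a rough worked example (Python) of the oversubscription concern in
         the post above, under the thread's stated figures: a VC FC module with 4 x 4 Gbps
         uplinks shared by 16 half-height blades. Illustrative arithmetic, not a measurement.

    uplinks, uplink_gbps = 4, 4       # module uplink ports and per-port speed
    servers, hba_gbps = 16, 4         # blades in a c7000 and per-blade HBA speed

    aggregate_uplink = uplinks * uplink_gbps     # 16 Gbps out of the enclosure
    worst_case_demand = servers * hba_gbps       # 64 Gbps if every HBA runs flat out

    print(worst_case_demand / aggregate_uplink)  # 4.0, i.e. 4:1 oversubscription
    # At ~1 Gbps sustained per blade (the case conceded above), demand is 16 Gbps
    # and the four uplinks are exactly fully subscribed.
    -->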
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524340#M7984</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;--- The only way a half-height blade can make use of IC bays 7 and 8 is if there is a quad port NIC in Mezz 2, in which case NIC port 3 maps to bay 7 and NIC port 4 maps to bay 8.&lt;BR /&gt;&lt;BR /&gt;Are you saying that in that scenario, the card in Mezz 1 would see the 9124e in interconnects 5 and 6?&lt;BR /&gt;&lt;BR /&gt;I have a 460 and a 680 in this rack that first alerted me to this problem.  When I put the 9124e into interconnects 7 and 8, the 680 complained.  It has an NC325m Quad port NIC in Mezz slot 2, and it uses interconnects 5, 6, 7 and 8.&lt;BR /&gt;&lt;BR /&gt;I then put the 9124e into interconnects 5 and 6, and the 460 also complained.  It has an NC326 Dual port NIC in Mezz slot 2, and it uses interconnects 5 and 6.&lt;BR /&gt;&lt;BR /&gt;I guess the two systems made for an interesting day.  I just don't have a 460 with a quad port NIC in Mezz slot 2 to see what happens.&lt;BR /&gt;&lt;BR /&gt;I really have to reread those manuals.&lt;BR /&gt;&lt;BR /&gt;Thanks for your help.&lt;BR /&gt;&lt;BR /&gt;Stephen</description>
      <pubDate>Sun, 01 Nov 2009 04:47:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524340#M7984</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-11-01T04:47:59Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524341#M7985</link>
      <description>Well... we have 8 Gb VC FC modules (8 Gb FC HBAs and 8 Gb director core switches), but so far we are using only 1 uplink per module and not seeing any performance problems.  But that of course depends on usage (we are running ESX vSphere).</description>
      <pubDate>Sun, 01 Nov 2009 07:56:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524341#M7985</guid>
      <dc:creator>JKytsi</dc:creator>
      <dc:date>2009-11-01T07:56:33Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524342#M7986</link>
      <description>I see that the 8 Gbps module has recently been released, and it's about time.  Thanks for your comments.</description>
      <pubDate>Sun, 01 Nov 2009 08:25:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524342#M7986</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-11-01T08:25:50Z</dc:date>
    </item>
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524343#M7987</link>
      <description>Hi Stephen,&lt;BR /&gt;&lt;BR /&gt;If you look at the attachment you will see that:&lt;BR /&gt;&lt;BR /&gt;a)  Mezz 1 can only handle dual port cards (NICs or HBAs), and the two ports map to interconnect bays 3 and 4.  This mapping is hard-wired, and you have no control over it.&lt;BR /&gt;&lt;BR /&gt;b)  Mezz slot 2 can take either a dual or a quad port card.  In either case, ports 1 &amp;amp; 2 map to IC bays 5 &amp;amp; 6.  If it is a quad, ports 3 and 4 map to IC bays 7 &amp;amp; 8.  These mappings are all hard-wired, and beyond your control.&lt;BR /&gt;&lt;BR /&gt;The port mismatch that you mentioned in your original post usually happens because you have an Ethernet port on the blade mapped to an FC module in the interconnect bay (or vice versa).&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Dave.</description>
      <pubDate>Mon, 02 Nov 2009 13:30:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524343#M7987</guid>
      <dc:creator>The Brit</dc:creator>
      <dc:date>2009-11-02T13:30:15Z</dc:date>
    </item>
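    <!-- Editor's note: a minimal sketch (Python) of the mismatch rule Dave describes,
         assuming the OA degrades a blade when a port's fabric type (Ethernet vs Fibre
         Channel) differs from the module type in the bay it is hard-wired to. The names
         and types here are illustrative, not the actual OA logic.

    BAY_MODULE = {5: "FC", 6: "FC"}   # e.g. 9124e FC switches fitted in bays 5 and 6

    def port_mismatch(port_type, bay):
        """True when a blade port's fabric type differs from the installed module."""
        module = BAY_MODULE.get(bay)
        return module is not None and module != port_type

    # A dual port 1 Gb NIC in Mezz 2 lands its ports in bays 5 and 6, which now
    # hold FC switches: this is the mismatch reported in the original post.
    print(port_mismatch("Ethernet", 5))   # True
    -->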
    <item>
      <title>Re: C Class enclosure interconnect slots</title>
      <link>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524344#M7988</link>
      <description>Thanks Dave.  I didn't know about the quad port cards and to tell the truth, I doubt our server operations people did either.  I am just a poor SAN guy trying to sort stuff out.. :)&lt;BR /&gt;&lt;BR /&gt;I am not really interested in VC with those 24 port 8 Gpbs cards.</description>
      <pubDate>Tue, 03 Nov 2009 09:31:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/c-class-enclosure-interconnect-slots/m-p/4524344#M7988</guid>
      <dc:creator>S Petersen</dc:creator>
      <dc:date>2009-11-03T09:31:34Z</dc:date>
    </item>
  </channel>
</rss>