<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Multicast and Blade systems in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/multicast-and-blade-systems/m-p/4305557#M4388</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We have experienced some strange issues when running multicast tests with two setups:&lt;BR /&gt;&lt;BR /&gt;1. C7000 enclosure, BL685c blades, GbE2c interconnect switches. Red Hat 5.2; one of the servers has mrouted installed as an IGMP routing daemon. (I understand that the GbE2c switches can't act as one.)&lt;BR /&gt;&lt;BR /&gt;2. Same as above, but with Virtual Connect modules instead of GbE2c, connected to Extreme switches which act as IGMP routers.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Generally, we have noticed that the servers that send UDP traffic have very high CPU load, but the receivers seem to be fine.&lt;BR /&gt;&lt;BR /&gt;The strangest issue, which is our real problem, occurred when we ran tests on configuration 1 above.&lt;BR /&gt;When doing multicast tests, we can't get a total throughput higher than 100 Mbit/s within the enclosure.&lt;BR /&gt;&lt;BR /&gt;We first tried sending multicast from server A to server B.&lt;BR /&gt;Server A transmitted at 100 Mbit/s, and the listener received at the same rate.&lt;BR /&gt;&lt;BR /&gt;We then tried sending multicast from server C to server D at the same time.&lt;BR /&gt;The result was that server A and server C each dropped to a transfer rate of 50 Mbit/s.&lt;BR /&gt;&lt;BR /&gt;Is there some limitation within the enclosure/switches that is capping this for us?</description>
    <pubDate>Thu, 13 Nov 2008 11:21:48 GMT</pubDate>
    <dc:creator>Linus Hedström</dc:creator>
    <dc:date>2008-11-13T11:21:48Z</dc:date>
    <item>
      <title>Multicast and Blade systems</title>
      <link>https://community.hpe.com/t5/bladesystem-general/multicast-and-blade-systems/m-p/4305557#M4388</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We have experienced some strange issues when running multicast tests with two setups:&lt;BR /&gt;&lt;BR /&gt;1. C7000 enclosure, BL685c blades, GbE2c interconnect switches. Red Hat 5.2; one of the servers has mrouted installed as an IGMP routing daemon. (I understand that the GbE2c switches can't act as one.)&lt;BR /&gt;&lt;BR /&gt;2. Same as above, but with Virtual Connect modules instead of GbE2c, connected to Extreme switches which act as IGMP routers.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Generally, we have noticed that the servers that send UDP traffic have very high CPU load, but the receivers seem to be fine.&lt;BR /&gt;&lt;BR /&gt;The strangest issue, which is our real problem, occurred when we ran tests on configuration 1 above.&lt;BR /&gt;When doing multicast tests, we can't get a total throughput higher than 100 Mbit/s within the enclosure.&lt;BR /&gt;&lt;BR /&gt;We first tried sending multicast from server A to server B.&lt;BR /&gt;Server A transmitted at 100 Mbit/s, and the listener received at the same rate.&lt;BR /&gt;&lt;BR /&gt;We then tried sending multicast from server C to server D at the same time.&lt;BR /&gt;The result was that server A and server C each dropped to a transfer rate of 50 Mbit/s.&lt;BR /&gt;&lt;BR /&gt;Is there some limitation within the enclosure/switches that is capping this for us?</description>
      <pubDate>Thu, 13 Nov 2008 11:21:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/multicast-and-blade-systems/m-p/4305557#M4388</guid>
      <dc:creator>Linus Hedström</dc:creator>
      <dc:date>2008-11-13T11:21:48Z</dc:date>
    </item>
  </channel>
</rss>