<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: sshd identification string within enclosures in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195729#M32516</link>
    <description>Just because there is strength in numbers--I have the very same problem with our blade servers...</description>
    <pubDate>Thu, 20 Nov 2008 12:18:38 GMT</pubDate>
    <dc:creator>Mark Galata</dc:creator>
    <dc:date>2008-11-20T12:18:38Z</dc:date>
    <item>
      <title>sshd identification string within enclosures</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195725#M32512</link>
      <description>I have a couple of blade enclosures with eight HP ProLiant blades in each. In each of the blades this shows up in the messages log:&lt;BR /&gt;&amp;lt;TIMESTAMP&amp;gt; &amp;lt;HOSTNAME&amp;gt; sshd[xxxx]: Did not receive identification string from &amp;lt;IP address of 1st blade in the enclosure&amp;gt;&lt;BR /&gt;&lt;BR /&gt;So in every blade of the first enclosure it's the IP address of the first blade in that enclosure that is logged, including in the first blade itself. In the second enclosure it's the same, except it's the IP address of the first blade of the 2nd enclosure that shows up.&lt;BR /&gt;&lt;BR /&gt;They are all in the same subnet, and none of the blades has a connection to the Internet. There is no problem logging in with ssh from one node to another. The OS is SLES 10 SP1. This issue is not really a problem as such; it just generates a lot of logging. Is it supposed to be like that, or is there anything I can do to stop the logging? (Aside from configuring syslog-ng to filter it out.)</description>
      <pubDate>Mon, 12 May 2008 08:05:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195725#M32512</guid>
      <dc:creator>Fialia</dc:creator>
      <dc:date>2008-05-12T08:05:01Z</dc:date>
    </item>
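    <item>
      <title>[Editor's sketch] syslog-ng filter for the workaround the poster mentions</title>
      <description>If filtering in syslog-ng turns out to be the answer, a filter along these lines would drop the entries. This is a sketch only: the names f_no_sshd_ident, src and messages are placeholders and must be adapted to the source and destination statements already present in /etc/syslog-ng/syslog-ng.conf on SLES 10.&lt;BR /&gt;&lt;BR /&gt;<![CDATA[

# Sketch of a syslog-ng filter to suppress these log entries.
# f_no_sshd_ident, src and messages are placeholder names; match them
# to the existing statements in /etc/syslog-ng/syslog-ng.conf.
filter f_no_sshd_ident {
    not match("Did not receive identification string");
};

log {
    source(src);
    filter(f_no_sshd_ident);
    destination(messages);
};

]]></description>
    </item>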
    <item>
      <title>Re: sshd identification string within enclosures</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195726#M32513</link>
      <description>Have you set up any monitoring system like Nagios/ZenOSS on blade 1? That message is usually generated when you do something like: telnet &amp;lt;IP&amp;gt; 22 and then disconnect before the client sends its SSH identification string.&lt;BR /&gt;</description>
      <pubDate>Mon, 12 May 2008 08:36:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195726#M32513</guid>
      <dc:creator>Goncalo Gomes</dc:creator>
      <dc:date>2008-05-12T08:36:29Z</dc:date>
    </item>
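    <item>
      <title>[Editor's sketch] what a connect-and-close probe looks like to sshd</title>
      <description>The mechanism Goncalo describes can be sketched with a plain Python socket pair on localhost (an ordinary socket stands in for sshd here): a port-only health check opens the TCP connection and closes it without ever sending the SSH-2.0 client identification string, which is exactly the condition sshd logs.&lt;BR /&gt;&lt;BR /&gt;<![CDATA[

```python
# Simulate what a port-only health check (or a quick `telnet <IP> 22`
# followed by an immediate disconnect) looks like to sshd: the client
# connects and closes without sending its "SSH-2.0-..." identification
# string. A plain socket stands in for sshd, which would log
# "Did not receive identification string from <client IP>".
import socket
import threading

results = {}
ready = threading.Event()

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))                # ephemeral port on localhost
    srv.listen(1)
    results["port"] = srv.getsockname()[1]
    ready.set()
    conn, _ = srv.accept()
    results["client_sent"] = conn.recv(1024)  # b"" -> client sent nothing
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
ready.wait()

# The "probe": connect, then disconnect immediately, sending no data.
probe = socket.create_connection(("127.0.0.1", results["port"]))
probe.close()
t.join()

print(repr(results["client_sent"]))  # b'' -- no identification string arrived
```

]]></description>
    </item>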
    <item>
      <title>Re: sshd identification string within enclosures</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195727#M32514</link>
      <description>It may happen if the first blade of each enclosure tries to scan sshd on every blade. See here for an example:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.spry.com/showthread.php?p=608" target="_blank"&gt;http://forums.spry.com/showthread.php?p=608&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Now, I'm not saying that your blades are going through an ssh brute-force scan, but it could be some sort of management software (either HP or non-HP) that does this scanning to keep an eye on the blades.&lt;BR /&gt;&lt;BR /&gt;To find out, run a sniffer on one of your blades (e.g. Ethereal or tcpdump) and watch for traffic on port 22 (the sshd default). Once you discover the source port on the remote server (your first blade in the enclosure) that generates this traffic, go to the first blade and find the process using that port (with lsof).</description>
      <pubDate>Mon, 12 May 2008 10:03:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195727#M32514</guid>
      <dc:creator>Zeev Schultz</dc:creator>
      <dc:date>2008-05-12T10:03:45Z</dc:date>
    </item>
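    <item>
      <title>[Editor's sketch] mapping a port to its process, as lsof does</title>
      <description>The lsof step Zeev suggests can be sketched by hand. This is roughly what lsof -i :PORT does on Linux: map a listening TCP port to a socket inode via /proc/net/tcp, then find which process holds a file descriptor for that inode. A Linux-only sketch that checks our own process's listening socket as a stand-in for the scanner on the first blade.&lt;BR /&gt;&lt;BR /&gt;<![CDATA[

```python
# Roughly what `lsof -i :PORT` does on Linux: look up the socket inode
# for a listening TCP port in /proc/net/tcp, then scan a process's fd
# table for socket:[inode]. We check our own process as a stand-in.
import os
import socket

def inode_for_listen_port(port):
    """Return the socket inode listening on `port`, per /proc/net/tcp."""
    with open("/proc/net/tcp") as f:
        next(f)                                           # skip header row
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)  # hex port
            if local_port == port and fields[3] == "0A":   # 0A = LISTEN
                return fields[9]                           # inode column
    return None

def pid_owns_socket(pid, inode):
    """True if process `pid` has a file descriptor for socket:[inode]."""
    fd_dir = "/proc/%d/fd" % pid
    for fd in os.listdir(fd_dir):
        try:
            if os.readlink(os.path.join(fd_dir, fd)) == "socket:[%s]" % inode:
                return True
        except OSError:
            continue                    # fd closed while we were looking
    return False

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

found = pid_owns_socket(os.getpid(), inode_for_listen_port(port))
print(found)   # True: our own pid holds the listening socket
srv.close()
```

]]></description>
    </item>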
    <item>
      <title>Re: sshd identification string within enclosures</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195728#M32515</link>
      <description>OK, so this is basically generated by some kind of scanning software whose connections never complete?&lt;BR /&gt;There is scanning software on all of them that probably generates these messages. As far as I know, though, the first blade should be no different from the others in that respect. Is there anything in the blade/enclosure setup that singles out the first blade in this way?</description>
      <pubDate>Wed, 14 May 2008 08:07:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195728#M32515</guid>
      <dc:creator>Fialia</dc:creator>
      <dc:date>2008-05-14T08:07:25Z</dc:date>
    </item>
    <item>
      <title>Re: sshd identification string within enclosures</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195729#M32516</link>
      <description>Just because there is strength in numbers--I have the very same problem with our blade servers...</description>
      <pubDate>Thu, 20 Nov 2008 12:18:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sshd-identification-string-within-enclosures/m-p/4195729#M32516</guid>
      <dc:creator>Mark Galata</dc:creator>
      <dc:date>2008-11-20T12:18:38Z</dc:date>
    </item>
  </channel>
</rss>

