
Re: SSH - Known Hosts - how to sync?

 
SOLVED
Geoff Wild
Honored Contributor

SSH - Known Hosts - how to sync?

We are now using ssh with keys for our backup server. Unfortunately, when we move the package to the other node, the first backup run fails: ssh sits waiting for an answer - something like "man in the middle attack - are you sure you want to continue?" - because we connect to the floating IP address. So we have to manually connect to the server the first time to accept adding the host key to the known_hosts file.

The other option I thought of was to keep two different versions of the known_hosts file, and when we fail over, make an ssh connection from the node in the cluster to the backup server and copy the known_hosts.node2 file to known_hosts.

Anyone know a better workaround?

Rgds...Geoff

Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Oviwan
Honored Contributor

Re: SSH - Known Hosts - how to sync?

Hi

You can also run a script that copies the known_hosts file when you move the package. There are two functions, "customer_defined_run_cmds" and "customer_defined_halt_cmds", in the packagename.cntl file.
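
A minimal sketch of what that hook could look like (the directory, owner, and per-node file suffix are assumptions, not taken from a real control file; the real packagename.cntl declares the hook with ksh's `function` keyword):

```shell
# Sketch for packagename.cntl: on package start, activate the known_hosts
# file prepared for whichever node the package lands on.
# KH_DIR and the ".<hostname>" suffix are illustrative assumptions.
customer_defined_run_cmds()
{
    KH_DIR=${KH_DIR:-/home/backup/.ssh}
    NODE=$(hostname)
    if [ -f "$KH_DIR/known_hosts.$NODE" ]; then
        cp "$KH_DIR/known_hosts.$NODE" "$KH_DIR/known_hosts"
    fi
}
```

A matching customer_defined_halt_cmds could restore the original file, though the copy above is idempotent either way.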

Regards
Ralph Grothe
Honored Contributor

Re: SSH - Known Hosts - how to sync?

Hi Geoff,

for cluster nodes that essentially form a single host, I would use the same SSH host keys on all of the nodes that belong to that cluster.
The host keys should reside in /opt/ssh/etc.
However, if you don't fancy that idea, you could also make use of the HostKeyAlias option, like

ssh -o hostkeyalias=nodeA nodeC

when your client actually reaches nodeB (presumably owing to a package failover) instead of the nodeA it claims via the HostKeyAlias.
But this is an ugly kludge, one that I usually use when doing port forwarding through firewalled routes.
In my opinion a common clusterwide host key is the better choice.




Madness, thy name is system administration
Geoff Wild
Honored Contributor

Re: SSH - Known Hosts - how to sync?

As I said, we connect to the floating IP (not the real host names), as that is needed for Oracle.

The issue is, when you ssh to the floating IP, the client records a host key for it.

When you ssh to the same floating IP - but now answered by another host - it sees that the host key has changed.

HostKeyAlias is for making multiple hosts (e.g. in VSE) present the same identity under one name.

But, looking through the ssh_config man page, I see this option:

CheckHostIP
If this flag is set to ``yes'', ssh will additionally check the
host IP address in the known_hosts file. This allows ssh to
detect if a host key changed due to DNS spoofing. If the option
is set to ``no'', the check will not be executed. The default is
``yes''.


Maybe I'll try that.
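
For the record, a client-side sketch of that setting (the Host pattern is a placeholder; note CheckHostIP only skips the extra IP check - the key found under the name you connect with must still match):

```
# ~/.ssh/config on the backup client
Host floatip
    CheckHostIP no
```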


Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Steven E. Protter
Exalted Contributor

Re: SSH - Known Hosts - how to sync?

Shalom,

You can build into the cluster startup script a command that copies the active node's known_hosts file to the proper location.

You may be able to play with the backup client's /etc/hosts file and make this go away. The problem, of course, is due to the hostname not matching.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Denver Osborn
Honored Contributor

Re: SSH - Known Hosts - how to sync?

I'd add "-o StrictHostKeyChecking=no" and "-o PreferredAuthentications=publickey" to your syntax.

The other option is to keep the HostKey the same between all nodes in the cluster. ITRC Doc KBRC00011982 mentions HostKeys being copied from one node to another.
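
The copy itself is short - a sketch assuming HP-UX Secure Shell's default key location and a working trusted-host rcp between the nodes (key file names vary by version):

```shell
# Run as root on nodeB: take over nodeA's host keys, then restart sshd
rcp nodeA:/opt/ssh/etc/ssh_host_rsa_key* /opt/ssh/etc/
rcp nodeA:/opt/ssh/etc/ssh_host_dsa_key* /opt/ssh/etc/
/sbin/init.d/secsh stop
/sbin/init.d/secsh start
```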

-denver
Olivier Masse
Honored Contributor

Re: SSH - Known Hosts - how to sync?

One thing you can try is having a second ssh daemon that listens to another port on each of your nodes (no need for a package for this), and have it use the same host key on each server in your cluster. That's what I'm planning to do soon.

Let's say you decide to use port 822. When you connect to server:22, you access the normal ssh service of the node. But when you connect to floating:822, you access a high-availability ssh service of some sort, for which the host key will be consistent across your cluster.

I suggest you restrict the service on this port heavily, for instance make it allow ONLY public-key logins, and use AllowUsers to further limit who can access it.
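
A sketch of what that second daemon's configuration might contain (the port from the example above; the paths, key name, and user are assumptions - start it with something like /opt/ssh/sbin/sshd -f /opt/ssh/etc/sshd_config.ha):

```
# /opt/ssh/etc/sshd_config.ha - restricted high-availability instance
Port 822
HostKey /opt/ssh/etc/ssh_host_rsa_key.cluster
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers backup
```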

Another thing I've done in the past is manually adding multiple host keys to known_hosts for the same floating address. AFAIK it worked two years ago with OpenSSH derivatives, but I had to forget about it with other third-party SSH tools (WinSCP, etc.). However, I'm not sure it still works with more recent versions of HP-UX Secure Shell.

Good luck
Ermin Borovac
Honored Contributor

Re: SSH - Known Hosts - how to sync?

Here is another option.

For example, if your SG cluster consists of two hosts, you might have the following two lines in your known_hosts file (the angle-bracketed fields are placeholders for the real host names, IPs, and keys):

<nodeA>,<ipA> ssh-rsa <keyA>
<nodeB>,<ipB> ssh-rsa <keyB>

Edit the known_hosts file, adding the SG package name and package IP to both lines as shown below.

<package>,<packageIP>,<nodeA>,<ipA> ssh-rsa <keyA>
<package>,<packageIP>,<nodeB>,<ipB> ssh-rsa <keyB>

Also make sure to remove any other lines in the known_hosts file that contain <package> or <packageIP>.
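
That edit can be scripted as part of the failover - a minimal sketch (the function name and all arguments are placeholders), which prefixes the package aliases onto each node's existing entry:

```shell
# add_pkg_alias FILE PKG FLOATIP NODE...
# Prefix "PKG,FLOATIP," to each named node's known_hosts line, so one
# entry matches whichever node currently answers on the package address.
add_pkg_alias()
{
    file=$1; pkg=$2; float=$3; shift 3
    for node in "$@"; do
        sed "s/^$node,/$pkg,$float,$node,/" "$file" > "$file.new" &&
        mv "$file.new" "$file"
    done
}
```

Usage would be something like: add_pkg_alias ~/.ssh/known_hosts packageA 123.123.123.123 nodeA nodeB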

Ralph Grothe
Honored Contributor
Solution

Re: SSH - Known Hosts - how to sync?

Hi Geoff,

I can't share your difficulties.
The HostKeyAlias option for the ssh client is provided exactly for cases like yours, I believe.
As said, I regularly make use of it whenever I tunnel some service via ssh over our admin server, e.g. to avoid in-between copying because the copying source host isn't allowed to make direct connections to the destination host (so-called local port forwarding).
Naturally, in a situation like that I would get a clash between my client's known_hosts entry and the host key of the deviating destination host the connection is forwarded to.

Admittedly my earlier explanation was wrong, because I mixed up whose host keys clashed.

So I reconstructed your case with one of our clusters (IP addresses spoofed and fingerprints removed here for paranoia's sake).
Let my admin host (i.e. where the ssh client connects from) be 10.10.10.10, and my package's "floating" IP be 123.123.123.123, which before the switchover runs on nodeA.
I deleted the host keys for IP 123.123.123.123 from user me's known_hosts on 10.10.10.10 prior to the tests, which is why I am asked on first connection to confirm nodeA's host key:

$ ssh me@123.123.123.123 hostname
The authenticity of host '123.123.123.123 (123.123.123.123)' can't be established.
RSA key fingerprint is
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '123.123.123.123' (RSA) to the list of known hosts.
Last login: Thu Dec 7 14:11:11 2006 from 10.10.10.10
(c)Copyright 1983-2000 Hewlett-Packard Co., All Rights Reserved.
(c)Copyright 1979, 1980, 1983, 1985-1993 The Regents of the Univ. of California
(c)Copyright 1980, 1984, 1986 Novell, Inc.
(c)Copyright 1986-1992 Sun Microsystems, Inc.

nodeA

Now nodeA's host key for IP 123.123.123.123 resides in me's known_hosts.

I then log in on cluster nodeB as root and switch packageA over from nodeA to nodeB


[root@nodeB:/root]
# cmhaltpkg packageA && cmrunpkg -n nodeB packageA
One or more packages has been halted and will not be started automatically. To start these packages, enable AUTO_RUN via cmmodpkg -e .
cmhaltpkg : Completed successfully on all packages specified.
cmrunpkg : Completed successfully on all packages specified.
[root@nodeB:/root]
# cmviewcl -p packageA

PACKAGE STATUS STATE AUTO_RUN NODE
packageA up running disabled nodeB


and verify that IP 123.123.123.123 is now active on nodeB


[root@nodeB:/root]
# netstat -in|grep '123.123.123.123'
lan0:3 1500 123.123.123.0 123.123.123.123 6 0 0 0 0


When I now retry the connection to that IP from the management host 10.10.10.10 as before, SSH discovers that the host key entry in me's known_hosts for this IP isn't the one that nodeB is now advertising, and rightly refuses to continue.


$ ssh me@123.123.123.123 hostname
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is

Please contact your system administrator.
Add correct host key in /home/me/.ssh/known_hosts to get rid of this message
.
Offending key in /home/me/.ssh/known_hosts:949
RSA host key for 123.123.123.123 has changed and you have requested strict checking.

Host key verification failed.


But knowing that 123.123.123.123 is now bound to nodeB, I can tell the ssh client to instead refer to the host key entry of nodeB, so that it can continue with RSA public key authentication and execute the command.


$ ssh -o hostkeyalias=nodeB me@123.123.123.123 hostname
nodeB


If you want to make sure that your command doesn't hang because of possible host key clashes or other reasons, you can always add -o batchmode=yes to your ssh command, or simply -B if you are doing scp.

You can also put these, or any other options, in your client's $HOME/.ssh/config file so that you need not supply clumsy command lines.
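
For example, a per-host block in $HOME/.ssh/config might look like this (the addresses are the spoofed ones from above; note the alias must name whichever node currently holds the package):

```
# $HOME/.ssh/config on the management host
Host 123.123.123.123
    HostKeyAlias nodeB
    BatchMode yes
```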


Nevertheless, I would still maintain that using just a single host key on all nodes of a cluster avoids all this.
On the other hand, you usually also allow root's remsh and rcp commands (trusted host environment) between cluster nodes, which is merely another manifestation of the "one host" concept.

Regards
Ralph





Madness, thy name is system administration
Florian Heigl (new acc)
Honored Contributor

Re: SSH - Known Hosts - how to sync?

Hi Geoff,

You should attach a dedicated sshd instance to the cluster package.
It's the only clean solution :)

Regards,
Florian
yesterday I stood at the edge. Today I'm one step ahead.