OpenSSH on 2 servers in cluster (fail over mode) - SSH



Thread: OpenSSH on 2 servers in cluster (fail over mode)

  1. OpenSSH on 2 servers in cluster (fail over mode)

    Hello,

    I've got a little problem with my ssh connections to a cluster of
    servers.

    To be brief: I'm doing an SSH connection from a Unix server A to a pair of
    Unix servers (B and C).
    The cluster is a "failover" type, meaning there is a virtual IP (VIP)
    shared between servers B and C, with a unique DNS name on this VIP.
    This DNS name is used by A to connect to node B (or to C, in case of a B
    failure).
    B and C have the same hardware, OS, and OpenSSH version.

    The problem I have is that when I do an ssh from A to B, A stores the
    fingerprint of B in its own known_hosts file. When B fails, C becomes
    active, and the VIP moves from B to C. For A, it is the same IP and the
    same DNS name.
    But on the next connection, A complains that the host keys have
    changed. This is quite normal.
    I've tried to copy all ssh_host_* files from B to C, in order to avoid
    the problem. But even after restarting sshd on C, the fingerprint is
    different from B's.

    My question is: is there a way to make this work without modifying the
    known_hosts file of A (because A does NOT know whether B or C is
    active)?

    Second question (only for my understanding): how is the fingerprint
    defined?
    Many thanks.
    Castor

  2. Re: OpenSSH on 2 servers in cluster (fail over mode)

    In the original article, Castor writes:
    >
    >The problem I have is that, when I do a ssh from A to B, A stores the
    >fingerprint of B in its own known_hosts file. When B fails, C becomes
    >active, and the VIP goes from B to C. For A, it is the same IP and the
    >same DNS name.
    >But, on the next connection, A complains that the host keys have
    >changed. This is quite normal.
    >I've tried to copy all ssh_host_* files from B to C, in order to avoid
    >the problem. But, even after restarting sshd on C, the fingerprint is
    >different from the one of B....


    This should work, so you must have made some mistake - e.g. copied from
    or to the wrong directory, forgot to restart sshd, or some such. Also be
    sure to preserve ownership/mode of the key files, and verify that the
    file names agree with sshd_config. As has been pointed out here before,
    doing it this way has a somewhat negative effect on security - I think
    the primary argument was that if any of the hosts in such a cluster gets
    compromised, the attacker can impersonate all of them.
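    As a rough sketch of that procedure (the host name nodeC, the paths, and
    the restart command are assumptions - check the HostKey lines in
    sshd_config on both nodes), the copy-and-verify steps might look like
    this, with a local demonstration that a faithfully copied key keeps its
    fingerprint:

    ```shell
    # Remote steps (run as root; "nodeC" is a hypothetical name for server C):
    #   scp -p /etc/ssh/ssh_host_* root@nodeC:/etc/ssh/   # -p preserves mode/times
    #   ssh root@nodeC '/etc/init.d/sshd restart'         # or however sshd is managed
    #   ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub      # run on B and on C, compare

    # Local demonstration: a key file copied with its bits intact yields the
    # exact same fingerprint, so identical output on B and C means the copy worked.
    tmp=$(mktemp -d)
    ssh-keygen -q -t rsa -N '' -f "$tmp/ssh_host_rsa_key"     # stand-in for B's key
    cp -p "$tmp/ssh_host_rsa_key.pub" "$tmp/copy_of_key.pub"  # "copy to C"
    fp_b=$(ssh-keygen -lf "$tmp/ssh_host_rsa_key.pub" | awk '{print $2}')
    fp_c=$(ssh-keygen -lf "$tmp/copy_of_key.pub" | awk '{print $2}')
    echo "B: $fp_b"
    echo "C: $fp_c"
    rm -rf "$tmp"
    ```

    If the fingerprints on B and C still differ after this, the likely culprit
    is that sshd on C is reading keys from somewhere other than the copied
    files.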

    >My question is : is there a way to make this work without modifying the
    >known_host file of A (because A does NOT know if whether B or C is
    >active) ?


    You can do as above, or you can have multiple keys associated with a
    given host name in known_hosts (i.e. you could have the VIP/name
    associated with the keys for both B and C). The latter would have to be
    arranged manually though, and thinking about it, I'm not sure it
    improves security all that much. It would be an improvement for the case
    where you connect "directly" to one of the hosts for maintenance or
    whatever, I guess.
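    A sketch of that known_hosts arrangement (all names here -
    vip.example.com, nodeB, nodeC - are hypothetical, and the key lines below
    are fake stand-ins for real ssh-keyscan output): the trick is to rewrite
    the host field so both nodes' keys appear under the VIP's name, since ssh
    accepts a connection when the presented key matches any line for that
    name.

    ```shell
    # From A, while both nodes are reachable, the real data would come from:
    #   ssh-keyscan nodeB nodeC 2>/dev/null > "$scan"
    scan=$(mktemp)                       # stand-in for ssh-keyscan output
    printf 'nodeB ssh-rsa AAAAB3...keyB\nnodeC ssh-rsa AAAAB3...keyC\n' > "$scan"
    # Replace the leading host field on each line with the VIP's DNS name:
    keys=$(sed 's/^[^ ]*/vip.example.com/' "$scan")
    echo "$keys"                         # append with: echo "$keys" >> ~/.ssh/known_hosts
    rm -f "$scan"
    ```

    After the append, known_hosts holds two lines for the same name, one per
    node, and failover no longer trips the changed-key warning.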

    >Second question (only for my understanding) : how is the fingerprint
    >defined ?


    I don't know exactly off-hand, but generally it's enough to know that a
    given key will always have the same fingerprint, and that it's
    "extremely unlikely" (but obviously possible) for two different keys to
    produce the same fingerprint.
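    For what it's worth, the fingerprint is a hash of the raw public-key
    blob (the base64 field of the .pub file): MD5 rendered as hex pairs in
    older OpenSSH, SHA-256 rendered as base64 in OpenSSH 6.8 and later. A
    sketch that recomputes a modern fingerprint by hand (assumes an
    ssh-keygen new enough to have -E, plus openssl on the PATH):

    ```shell
    tmp=$(mktemp -d)
    ssh-keygen -q -t rsa -N '' -f "$tmp/demo_key"
    # What ssh-keygen reports, e.g. "SHA256:xxxx":
    fp=$(ssh-keygen -E sha256 -lf "$tmp/demo_key.pub" | awk '{print $2}')
    # The same value computed by hand: sha256 over the decoded key blob,
    # base64-encoded, with the trailing '=' padding stripped:
    manual=$(awk '{print $2}' "$tmp/demo_key.pub" \
             | base64 -d | openssl dgst -sha256 -binary | base64 | tr -d '=')
    echo "$fp"
    echo "SHA256:$manual"
    rm -rf "$tmp"
    ```

    The two lines printed should be identical, which also illustrates the
    point above: the fingerprint depends only on the key, not on the host
    that serves it.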

    --Per Hedeland
    per@hedeland.org
