This is not an SSH problem. This is the standard built-in behaviour of
TCP. The TCP connection is timing out, not the SSH session. You are not
going to be able to get around this the way you want. Your "sshd" processes
are dying because their TCP connections are getting timeout errors and are
being forced to close.

If you are not doing any port forwarding (i.e. just using shell access), then
I would suggest using "screen".

"screen" will keep the active shells running in the background upon disconnection.
You can then reconnect to the backgrounded screen session upon relogin.
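For instance, something like the following (the session name "work" is
arbitrary, and assumes screen is installed on A):

```shell
# On A, start a named screen session to hold your shells
screen -S work

# ...work as usual. If the TCP connection dies, the session is
# automatically detached and keeps running on A.

# After logging back in to A, list any detached sessions
screen -ls

# Reattach; -d first detaches any stale attachment left by the dead link
screen -d -r work
```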

If you are using X forwarding, you will not be able to do this anymore.
A workaround for X applications would be to run something like TightVNC.

You can then log in via ssh, start a "vncserver", and port forward the client
from your B box. Upon client disconnection, the vncserver will still be running
and maintaining an X framebuffer for the X applications. You would then,
upon re-sshing into A, tunnel a new vncclient connection to the vncserver, and
it will repaint the framebuffer on your screen. Then you are back in business.
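A rough sketch of that workflow (display :1 corresponds to TCP port 5901;
the user name "userA" is a placeholder, and exact options vary by VNC version):

```shell
# On A (inside an ssh session): start a VNC server that keeps
# an X framebuffer alive independently of any TCP connection
vncserver :1

# On B: tunnel the VNC port over ssh, then point a viewer at it
ssh -L 5901:localhost:5901 userA@A
vncviewer localhost:1

# If B is suspended and the tunnel dies, the vncserver on A keeps
# running; re-open the tunnel and reconnect the viewer to resume.
```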

On 6/17/05, JCA <> wrote:
> I inquired about this in, with limited success;
> hopefully somebody in this mailing list can provide further feedback.
> I have two Linux boxes, A and B, running OpenSSH versions 3.6.1p2
> and 3.9p1, respectively. I connect from B to A by means of ssh. That
> is, on B I am running X windows, and I open xterms that create a shell
> on A by means of the ssh command.
> The thing is, I have to bring B down on a regular basis. "Bring
> down" means suspend my running session to disk, by means of the
> suspend software available in When I wake B up,
> after being suspended for a few hours, I retrieve my setup all right,
> including the xterms where I have my ssh connections to A. However, as
> soon as I press any key when the pointer is in any such windows, the
> following message is printed out:
> Read from remote host Connection reset by peer
> Connection to closed.
> And the xterm window disappears (it had been created as xterm -e ssh
>, so it just exits when the ssh connection terminates.)
> In an attempt to get around this, I added the following lines to
> the global sshd_config file on
> A:
> KeepAlive no
> ClientAliveInterval 30
> ClientAliveCountMax 1540
> My intention here was to make sure that the OpenSSH daemon on A would
> maintain connections open for up to 12 hours, regardless of what the
> clients are doing. As it happens, after such changes (and after
> restarting the OpenSSH daemon on A) I noticed that the sshd daemons
> forked in A for each ssh connection from B stay alive for just a few
> minutes (between 2 and 30; I hope I'll be able to measure this more
> accurately soon) after B gets suspended to disk.
> Somebody suggested to use autossh on B. With this, when B is
> brought back from hibernation, pressing a key on any of the ssh xterms
> on B elicits the same message as above, but a new ssh connection is
> automatically opened instantly. This is fine, but if I had something
> like, say, a debugging session running within the original ssh
> connection, that debugging session gets lost.
> The bottom line is, is there a way to coax A to keep its OpenSSH
> forked processes alive for a prespecified time, no matter what their
> matching ssh clients in B are doing? Why is it the case that the three
> lines I added to the sshd_config file do not pull it off? Something to
> do with the TCP stack overriding them perhaps?