I inquired about this in comp.security.ssh, with limited success;
hopefully somebody on this mailing list can provide further feedback.

I have two Linux boxes, A and B, running OpenSSH versions 3.6.1p2
and 3.9p1, respectively. I connect from B to A via ssh: on B I am
running X, and I open xterms that each run the ssh command to give me
a shell on A.

The thing is, I have to bring B down on a regular basis. "Bring
down" here means suspending my running session to disk with the
suspend software available at www.suspend2.net. When I wake B up
after it has been suspended for a few hours, I get my setup back all
right, including the xterms holding my ssh connections to A. However,
as soon as I press any key while the pointer is in one of those
windows, the following message is printed:

Read from remote host xxx.xxx.xxx: Connection reset by peer
Connection to xxx.xxx.xxx closed.

And the xterm window disappears (it had been created as xterm -e ssh
xxx.xxx.xxx, so it simply exits when the ssh connection terminates).

In an attempt to get around this, I added the following lines to
the global sshd_config file on A:

KeepAlive no
ClientAliveInterval 30
ClientAliveCountMax 1540

My intention here was to make sure that the OpenSSH daemon on A
would maintain connections open for up to about 12 hours, regardless
of what the clients are doing. As it happens, after making these
changes (and restarting the OpenSSH daemon on A) I noticed that the
sshd processes forked on A for each ssh connection from B stay alive
for only a few minutes after B gets suspended to disk (somewhere
between 2 and 30 minutes; I hope to measure this more accurately
soon).
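
For completeness, this is roughly how I applied the change on A, plus
the arithmetic behind the intended timeout (the init script path is
the one my distribution uses, so it may differ elsewhere):

# on A: sanity-check the config file, then restart sshd so the new
# ClientAlive* values take effect
/usr/sbin/sshd -t && /etc/init.d/sshd restart

# intended idle allowance = ClientAliveInterval x ClientAliveCountMax
#   30 s x 1540 = 46200 s, i.e. a bit under 13 hours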

Somebody suggested using autossh on B. With this, when B comes
back from hibernation, pressing a key in any of the ssh xterms on B
elicits the same message as above, but a new ssh connection is opened
automatically right away. This is fine as far as it goes, but if I
had something like, say, a debugging session running inside the
original ssh connection, that session gets lost.
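
For reference, the autossh invocation I tried looks roughly like this
(the -M monitoring port is arbitrary; I just picked one that was
free):

# on B: wrap the connection in autossh instead of plain ssh
xterm -e autossh -M 20000 xxx.xxx.xxx &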

The bottom line is: is there a way to coax A into keeping its
forked sshd processes alive for a prespecified amount of time, no
matter what their matching ssh clients on B are doing? Why don't the
three lines I added to the sshd_config file pull it off? Could it be
that the TCP stack is overriding them?
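
In case it helps whoever answers, these are the kernel-level TCP
keepalive knobs I would look at on A (standard Linux sysctl names; I
am not sure whether they even come into play here, which is part of
what I am asking):

# on A: current kernel TCP keepalive parameters
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes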