I agree. In the meantime, I'm going to go ahead and increase the value
to 128 for the developers that I'm supporting. I have noticed that the
CVS repository has changed the fatal() to error() in clientloop.c.
But I did notice something during my testing and have opened a Bugzilla
ticket: if the file-handle limit is reached during the allocation of
stdin/stdout/stderr from the recvmsg(), the early return leaves the
client_fd open (which is no longer accessible) and leaves the "slave"
ssh process in a blocking state. I've attached a simple addition of a
close() which allows the slave ssh process to exit instead of blocking;
I didn't know if there was a better way to do that in the code. The
bugzilla ticket
From: Damien Miller [mailto:firstname.lastname@example.org]
Sent: Saturday, December 08, 2007 5:15 PM
To: Shively, Gregory
Subject: Re: MAX_SESSIONS Increase Impact
On Fri, 7 Dec 2007, Shively, Gregory wrote:
> I'm hoping this is the right location for this question....
> I've been working with some developers who want to use the
> multiplexing capabilities of OpenSSH, but have hit the default limit
> of 10. I've seen some previous discussion of increasing this default
> limit and have noticed that some of the Linux distros have increased
> the number to 64; I'm curious about the impact. The developers that
> I'm working with have discussed approx 90-ish slave processes under
> the master ssh process, so I've been thinking about using 128 to
> allow for possible future increases by this group.
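For context, the cap under discussion was, as far as I recall, a compile-time constant in the client's clientloop.c in OpenSSH of that era (before the mux code moved out), so raising it means editing the source and rebuilding. Approximately:

```c
/* clientloop.c (OpenSSH ~4.x) -- approximate, from memory, not a
 * verbatim quote of the source */
#define MAX_SESSIONS	10	/* raise to 128 as discussed above */
```

Each multiplexed "slave" consumes this budget on the master, which is why a team planning ~90 concurrent sessions needs headroom beyond the shipped default.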
I'd like it to be uncapped and purely dynamic, but we need to audit
the server to make sure it doesn't fatal() when it hits a fd limit in
an unexpected place.
openssh-unix-dev mailing list