Re: SSL connections in persistent TCP connection - OpenSSL
What you are describing is a performance issue. You are
assuming that the bottleneck is OpenSSL, but do you have
proof? Are your sessions autonomous, or do the clients
manage them? If each client searches through a linked list,
then that is a likely source of the problem.
You need data. Compiling with gcc's -pg option and viewing
the results with gprof will only track the main thread, but
that might still be useful. And here is a PDF that discusses
profiling multi-threaded apps:
If that doesn't work, then try forking new child processes
and running their sessions in threads, and see how each child
performs using gprof. Or else, add performance counters to
your threads yourself, such as (time_at_exit - time_at_entry)
accumulated in a counter for each major function (look at
gprof's output to get an idea of what to collect). If you DIY,
you can even collect the time spent in calls to OpenSSL or any
other system function.
Also, memory management could be contributing to the
performance overhead. Are you having the system allocate
variable-sized buffers, causing fragmentation of the address
space? Is the system running out of RAM and starting to swap
heavily under load?
Later . . . Jim
Prabhu S wrote:
> On 2/20/08, *David Schwartz* wrote:
> > But, the application code tries to clear out/shut down the existing
> > SSL session with orderly bidirectional alerts. Once it is shut down,
> > it creates a new SSL object 'ssl' [ssl = SSL_new(ctx)]
> > for the next session in the persistent connection.
> This is nearly impossible to do. It's possible that you did it,
> but very unlikely. The basic problem is this -- when you call
> 'read' to get the last message of the first session, how do you
> make sure you don't also get all or part of the first message of
> the second session?
> I do not think it is very difficult. The application initiates
> SSL sessions sequentially in an established socket connection. One
> cycle of SSL_connect - DataExchange - SSL_shutdown is followed by
> another cycle of SSL_connect - DataExchange - SSL_shutdown. As such,
> there shouldn't be an issue of session mix-up. At least that is what
> is observed with, say, 400-500 clients connecting to the server
> simultaneously.
> > When the app simulates a limited number of clients, say 100, each
> > client makes hundreds of unique SSL sessions successfully in the
> > persistent connection. It is under the stress of ~800 clients that
> > I run into issues.
> > Also, the bidirectional alerts do not always complete under
> > high stress. Could this be the reason? A possible session data
> > mix-up?
> Either your code properly separates the sessions or it doesn't. My bet
> is that it doesn't, because this is very hard to do right.
> Yes, I believe so. I am able to establish hundreds of cycles of new
> sessions in a persistent connection. The trouble is that under high
> stress, sessions fail, as indicated by the Ethereal trace. Sometimes
> the server complains of a Bad_MAC error on receiving the Finished
> message from the client.
> Why do you do things this way? It's just plain wrong. Either layer
> on top of SSL or don't, but splitting the difference and "sort of"
> layering SSL and TCP is just plain crazy.
> Multiple sessions are tried in a single TCP connection to reduce the
> overhead of TCP handshake and termination when the client wishes to
> make multiple 'new' SSL connects to the server.
> Prabhu. S
OpenSSL Project http://www.openssl.org
User Support Mailing List email@example.com
Automated List Manager firstname.lastname@example.org