I was wondering if someone would be able to answer some quick
questions on the RST timeout values.

Toward the end of a session between the client and the server, the
server sends a FIN and the client ACKs it, leaving the connection in a
half-closed state (the server's side is closed, the client's is still open).

The client never sends a FIN to close its side of the connection;
instead it later sends an RST, which is perfectly acceptable.
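For context, here is a minimal Python sketch of one way a client ends up
sending an RST instead of a FIN, assuming the application aborts the
connection with SO_LINGER set to zero; the host and port are made up:

    import socket
    import struct

    HOST, PORT = "server.example.com", 8080   # hypothetical endpoint

    sock = socket.create_connection((HOST, PORT))
    try:
        # Read until the server half-closes; an empty recv() means its FIN arrived.
        while sock.recv(4096):
            pass
        # SO_LINGER with a zero timeout makes close() abort the connection,
        # so the kernel emits an RST instead of the normal FIN exchange.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
    finally:
        sock.close()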

The CP sends this RST packet through to the server.

What happens next is that the application attempts to reuse the same
source port in less than 60 seconds, BUT the new connection is rejected
by the CP and logged as a SYN attack.
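For illustration, a minimal Python sketch of the kind of reuse that hits
this, assuming the application explicitly binds a fixed source port
before reconnecting; the port and server address are made up:

    import socket

    LOCAL_PORT = 40000                       # hypothetical fixed source port
    SERVER = ("server.example.com", 8080)    # hypothetical endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LOCAL_PORT))   # same source port as the old connection
    sock.connect(SERVER)          # new SYN with the same 4-tuple as before
    # If this SYN arrives while the firewall still holds the old connection
    # entry (the RST timer hasn't expired), it can be dropped and logged.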

I appreciate that the RST timeout value hasn't been reached yet, but
this is causing us serious problems.

Why is a timeout value applied after an RST packet, and why isn't the
connection entry simply dropped?

Why does the CP treat an RST like a FIN and apply such a timer?

Any advice on this is greatly appreciated.

TIA