This is an interesting one! We are running some tests, injecting
different latencies to see what FTP throughput we get.

Server: WinXP SP2 with Cerberus FTP server (64 KB buffer size)
Client: multiple command-line clients (Linux, WinXP)

Let's take a link with 35 ms of injected latency.

Client - Server using iperf gets about 13.5 Mbps throughput,
symmetrically.
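
For context, the ceiling for a single TCP stream is window/RTT; with a
64 KB window over 35 ms that is just under 15 Mbps, so the iperf number
looks like a full window in flight. A quick back-of-the-envelope (the
64 KB window is my assumption, based on the server's buffer size):

    # Throughput ceiling for one TCP stream: window / RTT
    window_bytes = 64 * 1024              # assumed: full 64 KB window advertised
    rtt = 0.035                           # 35 ms injected latency
    print(window_bytes * 8 / rtt / 1e6)   # ~15.0 Mbps, close to the 13.5 from iperf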

Client - Server GET of a 50 MB zip file takes about 230 seconds. In this
case it's a Linux ftp command-line client, but the same results are seen
with WinXP. It looks like the window size on the receiver (client) grows
from the default of 8640 to the full 64 KB, yet at most six 1514-byte
packets are consistently transmitted before the acknowledgements come
out.
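
If the receiver really only lets six segments fly per RTT, the effective
window is 6 x 1448 = 8688 bytes of payload, suspiciously close to that
8640-byte default, and it predicts roughly the transfer time we see
(1448 is the TCP payload of a 1514-byte frame with timestamp options;
the arithmetic below is mine):

    mss = 1448                        # payload per 1514-byte frame (TCP timestamps on)
    rtt = 0.035
    eff_window = 6 * mss              # 8688 bytes, ~= the 8640-byte default window
    rate = eff_window / rtt           # bytes per second
    print(rate * 8 / 1e6)             # ~1.99 Mbps
    print(50 * 1024 * 1024 / rate)    # ~211 s, in the ballpark of the observed 230 s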


Client - Server PUT of a 50 MB zip file takes about 30 seconds with the
same 35 ms latency. The difference I see is that the number of packets
in flight goes as high as 18 at a time before the ACKs.
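
For what it's worth, 50 MB in 30 seconds works out to about 14 Mbps,
essentially the iperf ceiling, so on the PUT the window appears to open
fully; the 18-segment bursts are probably just how the traffic clumps,
not the cap (again, my arithmetic):

    print(50 * 1024 * 1024 * 8 / 30 / 1e6)   # ~14.0 Mbps, basically the iperf number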

The question is: does the command-line FTP client (Linux/WinXP) act
differently during a GET? I.e., is there a specific socket buffer, used
while it writes to disk, that forces ACKs after six 1448-byte segments?
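
One way I could test that hypothesis is to pull the same file over a raw
socket with an explicit SO_RCVBUF and watch whether the burst size tracks
the buffer. A minimal Python sketch; the host and data port are
placeholders, and a real test would have to drive the FTP control channel
to set up the data connection first:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so it affects the window advertised in the handshake
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
    s.connect(("server.example.com", 50000))   # hypothetical FTP data connection
    total = 0
    while True:
        chunk = s.recv(64 * 1024)
        if not chunk:
            break
        total += len(chunk)
    s.close()
    print(total, "bytes received")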

Has anybody else done testing on this? Quick responses would be
highly appreciated.

Thanks, EB