Re: Optional 'test' or benchmark cipher - openssh
Chris Rapier wrote:
> It's a patch against OpenSSH with support for all of the recent releases.
> Its primary goal is to open up the receive window limitation that is
> imposed by the multiplexing. If the BDP of your path is greater than 64k
> in the case of 4.6 (or less) or around 1.25MB for 4.7 it can increase
> your throughput assuming the network and end hosts are well tuned.
> I'm not sure if you mentioned this or not but are you pegging your CPU?
> As an aside, using a multicore machine won't help you much as SSH is
> limited to using one core.
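For reference, the receive-window math in the quote above works out like this (the link speed and RTT here are assumed example values, not measurements from this thread):

```shell
# Back-of-the-envelope BDP check (assumed values: gigabit link, 1 ms RTT).
MBITS=1000   # link speed, Mbit/s
RTT_MS=1     # round-trip time, ms
# BDP (bytes) = bandwidth in bytes/s * RTT in seconds
BDP=$(( MBITS * 1000000 / 8 * RTT_MS / 1000 ))
echo "BDP: $BDP bytes"
# 125000 bytes -- already past the 64k window of 4.6, but well inside 4.7's 1.25MB
```

Even a 1 ms RTT on a gigabit LAN is enough to exceed the pre-4.7 64k window.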
One of the machines is pegged: an aging 2x1GHz PIII. It's hard to
say what is happening where, since I'm working with 3 different machine types
(4 if you count one running WinXP vs. another running Linux).
[much stuff about network tuning deleted]
The file transfer speed via scp to a CPU-limited
target ("ish", in the table below) is 10.4 MB/s. The same file transferred
over CIFS, a file-system protocol, runs at 28.7 MB/s. Network tuning isn't
the issue, though "end-to-end" latency caused by ssh may be. Someone
asked why I wanted to put a null cipher into the default product:
One reason -- if I'm transferring a CD .iso or DVD (8.5GB) image,
where either comes from one of my own CDs or DVDs, I don't need encryption
for the data. I would still want the password encrypted as a matter
of policy. I figure that "ssh" with a null cipher "has" to be better
than "SMB/CIFS" or NFS.
The "oddest" result is pairing my two top-performing workstations
with each other (one running Windows ("ath"), the other 64-bit SuSE ("asa")).
Scp from the Windows box -> the Linux box is ***horrible*** (and I'm not
pointing fingers at ssh; I just thought it might be a fairly good
"tcp-stream" based transfer if it weren't slowed down unnecessarily by
slow encryption). Currently I have no "sshd" running on "ath" (Win),
so "copies" to or from it are initiated from the Windows box. Copying
to "asa" uses less than 1% of the CPU on either box, but is limited to
<1 MB/s. Turning the same connection around -- initiating a copy on the
winbox (ath), pulling from "asa" to the Windows box -- yields the
"highest" ssh throughput (expected, since both are the faster machines)
at 16.7 MB/s. Max CPU use is 33% on "asa" (and less on the Win-box,
"ath") -- so neither of those operations is CPU-limited.
I'm NOT asking you to figure out what is going on there -- it's
obviously some hardware/software mismatch, and ssh could only be a
minuscule contributor -- though the 16.7 MB/s worries me a bit in that
neither machine is CPU-bound. But note: using one of the CPU-bound
machines (ish), I got 10.4 MB/s upstream and 12.5 MB/s downstream with
ssh, while SMB/CIFS from the same machine ran at 28.7 MB/s -- likely near
the fastest I'll get with that protocol due to latency.
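One way to take disk out of such comparisons is to stream generated data through ssh (a sketch; the host name "asa" is from the table at the end, and the 15-second figure below is an invented example, not a measurement):

```shell
# Stream zeros through ssh so neither side touches disk; dd prints the
# byte count and elapsed time when it finishes:
#   dd if=/dev/zero bs=1M count=250 | ssh asa 'cat > /dev/null'
# Rough MB/s from dd's numbers (integer math is plenty for this):
throughput_mbs() {   # usage: throughput_mbs <megabytes> <seconds>
    echo $(( $1 / $2 ))
}
throughput_mbs 250 15   # e.g. 250 MB in 15 s
```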
> Start by doing baseline measurements of the network speed. If you
> haven't looked at it yet Iperf is your best solution for that. Since it
> doesn't rely on pipes, the filesystem, or anything like that its your
> lightest weight option. Since it can also run in UDP mode it can even
> give you information on any unexpected packet loss. Then rerun the tests
> using a sample file to see how much of a bottleneck disk I/O is. At that
> point you can determine how much of an impact the application is having.
Haven't found a source for Iperf yet. But I get nearly
2x the SSH performance just using SMB. I doubt disk is playing
much of a role (note that the 250MB file I'm transferring fits in the
buffer cache on 3 of the 4 machines).
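For whenever I do track Iperf down, the usual invocations look something like this (iperf v2 syntax; "asa" stands in for whichever host is under test):

```
# on the receiver:
iperf -s
# on the sender: 30-second TCP test, then UDP to spot packet loss
iperf -c asa -t 30
iperf -c asa -u -b 100M
```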
> And pretty much that's all been implemented in the HPN-SSH patch. You
> can't embed the NoneEnabled directive in a Match block at this time. It
> may or may not be possible but it's a good idea and something I'll be
> looking into over the next few weeks.
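For reference, a sketch of what the HPN-SSH directives mentioned above look like in the config files, based on the patch's documented NoneEnabled/NoneSwitch options (exact spelling may vary by patch version):

```
# sshd_config on an HPN-patched server:
NoneEnabled yes

# client side (~/.ssh/config, or -o options on the scp command line):
NoneEnabled yes
NoneSwitch  yes    # drop to the none cipher after authentication
```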
I was preferring to have standard ssh support it. Obviously,
I have the sources and can hack something with or without a patch, but
I don't want to get into modifying or doing a "special compile" of yet
another piece of software (I've done it with a few, and eventually I tend
to give up my 'special mods' for simplicity).
>> That should spell it out clearly enough to prevent
>> "accidents". What security holes would I be missing?
> If you enable NONE switching for interactive data you do actually open a
> security hole. People *should* have the expectation that anything they
> type into a secure shell should be, well, secure. I do not think that
> necessarily applies to bulk data transfer though - with proper
> safeguards in place.
Well, one of the "requirements" in my proposal was to force
the user (interactive or not) to include a switch on the ssh/scp command
line spelling out that encryption was turned off for the data. If
the user expects security when choosing a --use-no-encryption option, well,
Unix has traditionally allowed people to shoot themselves in the foot if
they want it badly enough. :-) Additionally, I don't want
non-encryption turned on by default (which I might get if the options
were in the config files).
> But really, I'm not entirely sure what you want.
I want to be able to turn off encryption in the 'normal' openssh
product under specific circumstances, including benchmarking to see
what openssh's "end-to-end" latency is, but also for transferring large
files over an internal network. While CIFS works well when one of the
computers is Windows, it _seems_ to have lower performance between
the Linux machines.
> If you *just* want to
> benchmark things with and without encryption it's not that hard to hack
> the none cipher back into the cipher lists. If you want a more general
> solution that can be used in a production environment then you should
> probably just use the HPN-SSH patch.
What is "HPN"? I don't recognize it as a cipher. As for "production"...
if I wanted this at a company, I'd tend to believe I was "insane".
"Deliberately disabling encryption at the server and client in a
production environment"? Simply CYA politics should put someone off
doing that -- at least over any public network. But the places where I'd
want to use it are on "internal networks", where I still wouldn't want
my password "open", but for running bulk transfers it can "likely" be
a good speed boost. Nearly all of the "internal networks" I've
been on in the past 7-9 years or so have used some form of ssh --
usually (AFAIK) Openssh.
FWIW, the machines I was testing between:

  Mach                                         Speed  Mem   Disk    Ossh   Ossl
  ID   "OS"                  Processor         GHz    GB    MB/s    ver    ver
  Ath  Cygwin/WinXP2         2 x Core2-Duo     3.2    3      78.67  4.7p1  0.9.8g
  Ish  SuS9.3+2.6.23.12      2 x PIII          1      1      55.56  3.9p1  0.9.7g
  Ast  SuS10.3+2.6.23.12     Celeron           0.9    0.5    27.05  4.6p1  0.9.8e
  Asa  SuS10.3+2.6.23.12-64  2 x Core2-Duo     2      4     170.60  4.6p1  0.9.8e

  OS   - Linux machines run the same (vanilla) kernel version: 2.6.23.12;
         Asa uses the 64-bit build, the others 32-bit
  Proc - Hyperthreading OFF; ia32 or x86-64
  Mem  - usable memory as seen by the OS
  Disk - as measured by "hdparm --direct -t"; fastest of 5 runs
openssh-unix-dev mailing list