Hello,
I have a problem understanding the performance impact of setting
swappiness. In short, the theory is "low swappiness, good interactive
response, as applications tend to stay in memory longer ... high
swappiness, good system throughput, as unused memory is paged out to
make room for more useful things like buffers". However, this doesn't
seem to be the case.
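For reference, the knob in question is the vm.swappiness sysctl. A quick sketch of how one would inspect it before such a test (the value 100 below is just an example, and changing it requires root):

```shell
# Inspect the current swappiness setting (0-100 on kernels of this era)
cat /proc/sys/vm/swappiness

# To change it for a test run (root required), one would do e.g.:
#   sysctl -w vm.swappiness=100
# or, equivalently:
#   echo 100 > /proc/sys/vm/swappiness
```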

For example, look at this page: http://lwn.net/Articles/100978/
At the very bottom are the results of testing the impact of swappiness
on Altix systems. As expected, when swappiness increases, so does the
amount of process memory that is paged out ... however, the I/O
bandwidth decreases.

Why is that? Shouldn't the dd processes (the test measures I/O
bandwidth with dd copies) run faster when a lot of memory suddenly
becomes available for buffers?