Jim Seymour wrote:

> There is a structured systems design book I have (I think that's the
> one, anyway) that recommends input be conditioned as early in the data
> flow as possible so it's done and over with, and you can not have to
> worry about unconditioned data floating around in the system, being
> (similarly) conditioned in multiple places (code redundancy), etc.
> Similar concept.

Sorry about the late reply. Been buried on a project for a while.

While I agree in principle with the above sentiment, I think that there
is a significant caveat. The issue is data that traverses trust
boundaries. IMHO, the data should be conditioned, canonicalized,
scrubbed, etc. whenever it is received from a source that is not in the
same trust domain as the destination. Web applications, for instance.
I recently did an audit on a Web application in which the user was
allowed to enter some data into a form and POST it. On the way in, the
data was passed through the gamut of filters one would expect to find on
a well-designed Web application. Then the changes were sent back to the
browser for verification, passing through stringent output filtering.
The gotcha was that the changed data were sent out in form fields which
could easily be manipulated with one's choice of Web MITM package . . .
Tamper Data, WebScarab, Paros, etc. Only this time, the server did not
do any kind of checking on the returned data. Once again, the First
Commandment of input processing: "Do not trust the data unless it comes
from sources within the trust domain of the server . . . and then trust
but verify."
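One way to close that particular hole is to re-run the same input filters on
the verification POST, or to sign the echoed fields so any client-side
tampering is detectable when they come back. A minimal sketch in Python using
an HMAC over the outbound fields (the key name and field layout here are
illustrative, not from the audited application):

```python
import hmac
import hashlib

# Hypothetical server-side secret; never sent to the browser.
SECRET_KEY = b"server-side-secret"

def sign_fields(fields: dict) -> str:
    """Compute an HMAC over a canonical encoding of the field values."""
    canonical = "\x00".join(f"{k}={fields[k]}" for k in sorted(fields))
    return hmac.new(SECRET_KEY, canonical.encode(), hashlib.sha256).hexdigest()

def verify_fields(fields: dict, signature: str) -> bool:
    """True only if the echoed fields match what the server sent out."""
    return hmac.compare_digest(sign_fields(fields), signature)

# Round trip: the server emits the fields plus a signature in the form;
# on the verification POST, it checks the signature before trusting anything.
sent = {"name": "alice", "amount": "10.00"}
sig = sign_fields(sent)

print(verify_fields(sent, sig))                      # untampered round trip
print(verify_fields(dict(sent, amount="9999.00"), sig))  # MITM edit detected
```

Even with a scheme like this, the returned data should still pass back
through the same validation gauntlet as any other input: the signature only
proves the bytes are unchanged, not that they were ever safe.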

Here endeth the lesson.


firewall-wizards mailing list