william fitzgerald wrote:
>Are firewalls obsolete in a world involving enterprise Webservice SOA?

Short answer: No

Longer answer: That's like asking if brains are obsolete in a world where
everyone is busy blowing theirs out. It's kind of one of those "does not
compute" kind of questions. Firewalls will continue to be useful to the
increasingly small population of people who actually understand anything
about security.

More importantly, your question is hard to answer because you choose to
attempt (unsuccessfully, because I am not going to let you get away with
it) to redefine established terminology. For example:

>I use the term firewall loosely to mean network access control. That is,
>its a mechanism to prevent unwanted packets.

Considering that firewalls were first and foremost layer-7 devices, until
they devolved to being packet-centric in the mid 1990s, you're using the
term wrong. A firewall is a system (or collection of systems) that sits
between two networks at different levels of trust and enforces a security
policy on communications between them. If you actually use the definition of
"firewall" that many of us have been using since the late 1980s, such a
system will never be obsolete as long as there are untrustworthy systems.

>The current information I have found (web service orientated!) tends to
>say firewalls are obsolete when talking about enterprise SOA given that
>once port 80 and 443 is open on the firewall the SOS services are
>exposed and hence protection happens at the application layer.

What you have done is rediscovered the "incoming traffic problem" -
which is a primary property of firewalls that has been well-understood
since the early 1990s. You're correct that many firewalls (especially
the packet-oriented ones or the so-called 'stateful' ones) don't do
anything useful at layer-7, and serve primarily to force traffic to an
application service, which needs to be tough enough to withstand
direct attacks specific to that service. And, yes, with things like
"everything tunnelled over web services" remote procedure calls,
the complete set of protocol options at layer-7 is too large to be
controlled, enumerated, or understood - which means you are
effectively doomed to intermittent epic failures.
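To make the point concrete, here is a toy sketch (not any real firewall's API - the names and packet shape are made up) of what a purely port-oriented filter actually gets to decide on. The layer-7 payload never enters into the verdict, so an arbitrary RPC tunnelled over port 443 is indistinguishable from a plain page fetch:

```python
# Toy port-based packet filter, purely illustrative. It decides on
# addresses and ports alone; the payload is invisible to it.

ALLOWED_PORTS = {80, 443}  # "open up web, we're done" - the stance in question

def port_filter_allows(packet):
    """Return True if the destination port is open. Payload is never examined."""
    return packet["dst_port"] in ALLOWED_PORTS

benign = {"dst_port": 443, "payload": "GET /index.html HTTP/1.1"}
tunnelled_rpc = {"dst_port": 443,
                 "payload": "<soap:Envelope><DeleteAllRecords/></soap:Envelope>"}

# Both get the identical verdict: the filter cannot tell them apart.
print(port_filter_allows(benign), port_filter_allows(tunnelled_rpc))  # True True
```

Anything that does distinguish those two connections is, by definition, doing layer-7 work - which is the job the packet-centric boxes abandoned.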

Back in the old days, we had similar situations and they amounted
to "block everything except incoming telnet" - well, of course you
can do anything over telnet, just like you can over these newfangled
web frobozzes.

The XML firewalls and whatnot that people are scratching their
heads about are an attempt to regain control over the set of
options being allowed back and forth through the firewall.
At this point, because the options-set is pretty much beyond
enumeration, you're doomed to epic fail, because the only perceived
alternative is to enumerate the dangerous options ('default permit'
and 'enumerating badness' - the antivirus approach) rather than
enumerating the accepted options ('default deny' and 'enumerating
goodness') - essentially ceding the initiative to the bad guys.
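The asymmetry between the two stances can be sketched in a few lines. This is a toy policy engine, with made-up option names, showing why the novel, not-yet-enumerated option is exactly where they diverge:

```python
# Toy policy engine contrasting the two stances. Option names are
# invented for illustration only.

ENUMERATED_GOODNESS = {"plain-get", "plain-post"}    # what we meant to allow
ENUMERATED_BADNESS  = {"known-exploit-of-the-week"}  # what we've seen go wrong

def default_deny(option):
    """Allow only what is explicitly enumerated as good."""
    return option in ENUMERATED_GOODNESS

def default_permit(option):
    """Block only what is explicitly enumerated as bad."""
    return option not in ENUMERATED_BADNESS

# A novel option nobody has enumerated yet - i.e., next week's attack:
novel = "shiny-new-rpc-verb"
print(default_deny(novel))    # False: blocked until someone argues it in
print(default_permit(novel))  # True: waved through until someone notices
```

Under default deny, the unknown is the defender's problem to approve; under default permit, it is the defender's problem to discover - after the fact.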

So - to summarize - your question is PREDICATED on the intent
to go about the problem all wrong. That is, of course, an accurate
reflection of how the industry chooses to approach the problem,
and how "web services" are currently designed. So my response
to that is that it's an approach that is doomed to endless
small failures and lots and lots of maintenance. Because, by
choosing the path of duct tape and endless patching, that failure
is now inherent in the design. By the way, I use the word "design"
loosely; my observation is that there's very little that can
be called "design" about web/web2 services.

None of this should be taken (please) as an attack on you.
It's frustration because the ideas you're expressing are bad
ideas that many of us have fought a long rearguard action
against, knowing we'd fail from the beginning.


firewall-wizards mailing list