interesting to read :-)
source:
http://www-106.ibm.com/developerwork...r-lnxw01Cranky

Where does all the processing speed go?

Level: Introductory

Peter Seebach (crankyuser@seebs.plethora.net)
Freelance writer
02 Feb 2005

Computers are getting faster all the time, or so they tell us. But,
in fact, the user experience of performance hasn't improved much over
the past 15 years. Peter looks at where all the processor time and
memory are going.

About 10 years ago, I remember people complaining that Microsoft Word
was too slow on the Mac: you could type faster than such a large
application could process the input. Imagine my disappointment when I
recently discovered that the same thing still holds true. Similarly, my
first computer with a hard drive loaded a small command-line utility in
under a second and a large graphics program in perhaps half a minute.
Those are good specs, but isn't it kind of sad that they haven't changed
much in the past 15 years?

So the question is, where is all the CPU power going? How is it possible
that a machine with a full gigabyte of memory can run out of room to run
applications just as quickly as a machine with six megabytes of memory
did 15 years ago? In this month's The cranky user, I'll get to the
bottom of this big mystery. But first, I want to revisit an old adage
and see where it stands today.

Moore's Law revisited
Moore's Law is easily one of the top five misquoted claims ever. The
simplified version preferred by most pundits is that computers will
become twice as fast every 18 months. In fact, Intel co-founder Gordon
Moore observed in 1965 that the complexity of chips with the lowest cost
per component was roughly doubling every year, and predicted that this
trend would continue for about 10 years. So Moore's Law doesn't
necessarily mean that performance will double; merely that the number of
components in the most cost-effective designs will. Moore's prediction
was borne out reasonably well by the data, and in 1975 he revised his
estimate to about 18 months, which is the number most people cite today.
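
To put the two doubling periods in perspective, here's a quick
back-of-the-envelope sketch in Python. The starting component count and
the time span are arbitrary assumptions, chosen only to show how quickly
the two curves diverge.

    # Back-of-the-envelope comparison of the two doubling periods.
    # Starting count and time span are arbitrary, for illustration only.
    start_components = 2_300          # assumed starting component count
    years = 15                        # assumed time span

    for period_months in (12, 18):    # doubling yearly vs. every 18 months
        doublings = (years * 12) / period_months
        total = start_components * 2 ** doublings
        print(f"Doubling every {period_months} months: "
              f"{doublings:.1f} doublings, about {total:,.0f} components")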

Interestingly, Moore's Law says less about clock speeds than it does
about the complexity of system designs. Modern CPUs do a lot of things
differently from older CPUs: for instance, they can execute multiple
instructions simultaneously. It also bears noting that whether or not
processing speed is doubling at any predictable rate, it has certainly
increased quite a bit.

What's fascinating is that, for most users, performance isn't noticeably
better today than it was 15 years ago. What's the computer doing
with all this processing time? One Usenet poster, commenting on OS X's
animated rainbow beachball cursor, hypothesized that the processor, like
the user, is busy watching the hypnotic spinning beachball.

Jokes aside, computers are, in fact, doing more than they used to. A lot
of the things computers do are fairly subtle, happening beneath the
radar of a user's perception. Many functions are automatic and, as
discussed in last month's column, you could probably do without some of
them. But still, a lot is going on. Take a look at some of the primary
uses of computer processing power today.

Graphical displays
Many modern systems use antialiasing to render text. This makes text
easier to read and can substantially improve the usability of
low-resolution monitors and LCD displays. On the downside, antialiasing
sucks up a lot of processing power. Visual effects like drop shadows
behind windows and menus, transparency, and other real-time effects
also consume a lot of processing power. The catch for computer
makers is that most users expect them.
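
To see where the cycles go, here's a minimal sketch of supersampled
antialiasing for a single pixel: instead of one coverage test per pixel,
the renderer does several and averages them. The covered() function is a
made-up stand-in for a real shape or glyph test, not any actual
rendering API.

    # Minimal supersampling sketch: each output pixel is averaged from
    # several sub-pixel samples, multiplying the per-pixel work.
    def covered(x, y):
        # Hypothetical shape: everything below the diagonal edge x + y = 100.
        return x + y <= 100

    def pixel_value(px, py, samples_per_axis=2):
        hits = 0
        for i in range(samples_per_axis):
            for j in range(samples_per_axis):
                # Sample at sub-pixel offsets instead of only the pixel center.
                sx = px + (i + 0.5) / samples_per_axis
                sy = py + (j + 0.5) / samples_per_axis
                hits += covered(sx, sy)
        # Gray level from 0 (outside) to 255 (fully inside).
        return round(255 * hits / samples_per_axis ** 2)

    # One sample per pixel is all-or-nothing; four samples give a partial gray.
    print(pixel_value(49, 50, samples_per_axis=1))   # 255
    print(pixel_value(49, 50, samples_per_axis=2))   # 191

With 2x2 supersampling the renderer already does four times the coverage
work per pixel, and real systems layer hinting and blending on top of that.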

For example, older systems used bitmap fonts, which rendered quickly at
the provided size but looked ugly at any other size. Most modern systems
render outline fonts in real time, which users are now used to. Even
with some caching involved, font rendering adds one more layer of
processor overhead -- but no vendor would dare release an interface with
bitmap fonts today.
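
The caching mentioned above usually means remembering rendered glyphs so
the expensive rasterization happens once per character and size. Here's a
sketch of that idea only; render_outline() is a hypothetical placeholder,
not how any particular system implements it.

    # Sketch of a glyph cache: rasterize each (character, size) pair once,
    # then reuse the bitmap. render_outline() stands in for the costly step.
    glyph_cache = {}

    def render_outline(char, size):
        # Placeholder for real outline rasterization (hinting, antialiasing, ...).
        return f"<bitmap of {char!r} at {size}pt>"

    def get_glyph(char, size):
        key = (char, size)
        if key not in glyph_cache:
            glyph_cache[key] = render_outline(char, size)   # pay the cost once
        return glyph_cache[key]                             # cheap on reuse

    text = "hello world"
    for ch in text:
        get_glyph(ch, 12)
    print(len(glyph_cache), "glyphs rendered for", len(text), "characters drawn")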

The current Mac operating system gets around some graphical overhead by
having the rendering hardware of video cards do additional work. The
video card essentially becomes a second processor, which cuts down on
the graphics processing time. Of course, knowing this makes it even more
disturbing when your computer interface turns sluggish.

Word processing
As I previously mentioned, most users feel a little put out when they
can type faster than a word processor can process words. The worst days
of this trend seem to be behind us now: most word processing programs
started to keep up with even good typists somewhere around the 1-GHz
clock-speed mark. These days, it's the automatic features on these
programs that can slow down your system.

Automatic checking is a default behavior on most word processing
applications. Some simply underline misspelled words and questionable
grammar while others automatically correct as you type. Not only are
these corrections occasionally inaccurate (many writers turn this
feature off), but the behavior also requires a lot of additional processing.
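
As a rough illustration of why check-as-you-type costs cycles, here's a
minimal sketch: every keystroke triggers another lookup of the word under
the cursor. The tiny word list is made up; real checkers consult much
larger dictionaries and add grammar rules on top.

    # Minimal check-as-you-type sketch: every keystroke means another lookup.
    DICTIONARY = {"the", "quick", "brown", "fox"}    # made-up toy word list

    checks = 0

    def check_current_word(text):
        """Re-check the word under the cursor; called on every keystroke."""
        global checks
        checks += 1
        words = text.split()
        current = words[-1].lower() if words else ""
        return current in DICTIONARY        # False -> underline it in red

    buffer = ""
    for ch in "the quikc brown fox":        # note the typo being typed
        buffer += ch
        ok = check_current_word(buffer)

    print(checks, "dictionary lookups for", len(buffer), "keystrokes")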

Other automatic features in the "do I really need this?" category
include one company's famous Office Assistant, as well as many varieties
of formatting and workflow automation.

Safety and security
A certain amount of your system's processing power goes to improved
safety and security features for your applications. Many of these
features come in the form of critical security patches, since the
original code was written without enough attention to sanity checking.
The problem with patches is that they add up over time: each one on its
own only marginally affects performance, but taken together they can
amount to a decent time sink.

Virus scanners are a more serious power hog than patches. As viruses
become a more significant component of the daily user experience,
developers are spending more energy (and processing power) trying to
fight them. Most virus scanners update themselves regularly, which makes
for a small, but noticeable, amount of background activity. They also
scan a lot more files than they once did.

The existence of macro viruses for word processors means that virus
scanners have to scan data files, not just executables. As a result, a
single file might be scanned several times before you access it. Any
file you download is scanned first and, if it's an archive, so are
its contents. You open the archive in your archive utility, which scans
everything in the archive. You extract the file to disk, which causes it
to be scanned. Then you open it, and it's scanned again.

All this scanning chews up a lot of processing time, which affects every
program running on a system. For instance, a video game that uses a lot
of graphical files and loads them on the fly could require the same 20
MB file to be scanned a dozen times during an hour's play.
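
The arithmetic adds up quickly. Here's a minimal sketch of that example,
assuming a made-up scanner throughput of 50 MB per second rather than any
measured figure.

    # Rough cost of rescanning the same file repeatedly.
    file_size_mb = 20                 # the game's graphics file
    scans_per_hour = 12               # rescanned every time it is loaded
    scan_rate_mb_per_s = 50           # assumed throughput, not a measured figure

    total_mb = file_size_mb * scans_per_hour
    seconds = total_mb / scan_rate_mb_per_s
    print(f"{total_mb} MB scanned per hour, "
          f"roughly {seconds:.1f} seconds of pure scanning")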

All that said, security is a worthy and necessary use of processing
power, and the alternatives are worse: spyware and viruses can consume
incredible amounts of time. Another common cause of slow computers, at
least for Windows users, is an accumulation of any number of programs
that snoop on traffic, pop up advertisements, or otherwise make
themselves indispensable to a marketer somewhere.

Program complexity
Program complexity is probably the biggest culprit when your supposedly
speedy processor still runs slow. As applications become more complex, a
certain amount of their code (and thus your processing power) goes into
making them more manageable. This code, which I'll call support code,
actually makes programs easier for developers to write. A very large
program might incorporate nested layers of support code. For instance, a
Linux build of Mozilla might link to 30 or so different pieces of
support code -- including support code for the support code that Mozilla
uses directly.
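
If you're curious what that layering looks like on your own Linux system,
the standard ldd tool lists a binary's shared-library dependencies,
transitive ones included. Here's a small Python wrapper as a sketch; the
Mozilla path is only an example and will differ between installs.

    # Count the shared libraries a Linux binary is linked against, via ldd.
    import subprocess
    import sys

    def shared_libs(binary):
        """List the shared objects a binary is linked against, using ldd."""
        result = subprocess.run(["ldd", binary], capture_output=True, text=True)
        # ldd prints one dependency per line.
        return [line.strip() for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        # Example path only; point this at whatever browser your system has.
        binary = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/mozilla/mozilla-bin"
        libs = shared_libs(binary)
        print(f"{binary} links against {len(libs)} shared objects")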

The code itself is typically very efficient for its task, and it does
make the job of developing large-scale applications much easier. But the
code that enables all these small pieces of code to interact in a
predictable manner adds a small runtime cost. Once again, a small cost
repeated many times adds up to a significant performance hit.

A few of the programs I use on Windows run special programs at system
startup. Each of these programs pre-loads its own shared libraries,
which in turn allows the program to launch more quickly later. At one
point, the delay from the initial appearance of my desktop to my system
being responsive enough for me to start using it was up to about five
minutes. Why? Because it was running a dozen or so programs to make
programs load faster. The irony didn't seem funny at the time, but it
does now.

In conclusion
A certain amount of code bloat is inherent in all this modern
complexity, and I've talked about some of the worst offenders this
month: applications that require processors to do extra work that isn't
really useful to the user; support code that no one really understands
(and that the system may not even require), but that works as long as
you leave it alone; over-patched code resulting from too much emphasis on rapid
development and not enough on sanity checking. Whatever the problem,
overemphasis on time to market is probably a contributing factor.

Another factor is the so-called second-system effect, first discussed by
Frederick Brooks in The Mythical Man-Month: when developers do a second
project, they want so badly to make it better that they often include
ill-considered features that render the resulting system bloated and
unusable. Unfortunately, a lot of the systems in widespread use today
are second systems. Worse yet, the bloat introduced by a second-system
design is often preserved in future revisions to preserve compatibility.

Luckily, the worst is probably over. Around the time 800-MHz processors
came out, users stopped feeling a driving need to upgrade constantly.
Most users today can complete their work without waiting
hours for the computer to perform its tasks.

This week's action item: Launch a few applications simultaneously and
time their start-ups. Try it again in five years to see whether the time
has improved.
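
If you want to actually put numbers on it, here's a small sketch that
times how long a command takes to start and exit. The command list is a
placeholder, so substitute applications you actually use; note that this
measures run-to-exit time, which is only a rough proxy for launch time.

    # Time how long a few commands take to run to completion.
    import subprocess
    import time

    # Placeholder command list; substitute applications you actually use.
    COMMANDS = [
        ["python", "--version"],      # example: a small command-line tool
        # ["soffice", "--help"],      # example: a much larger application
    ]

    for cmd in COMMANDS:
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        elapsed = time.perf_counter() - start
        print(f"{' '.join(cmd):<30} {elapsed:.2f} s")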

Resources

* Read about the phenomenon of the overactive user interface in
Peter's column from last month, Everything's automated! (developerWorks,
January 2005).

* Focus on macro viruses -- the central topic of Peter's column on
Usability vs. security (developerWorks, August 2002).

* Catch the predictions of developerWorks readers on the evolution
of Moore's Law in A peek through the veil at 2005 (developerWorks,
January 2005).

* Learn about the contributions of IBM to Mooresian complexity and
modern-day processing with Nora Mikes' History of chipmaking at IBM
(developerWorks, March 2004).

* In Reducing the user interface by Mark Molander, discover some
practical ways to cut the data bloat on your UIs as Mark notes that many
applications have far more data and functions than users ever need
(developerWorks, June 2001).

* Bet you didn't know that Wikipedia maintains a page about Moore's
Law.

* Visit The Jargon File for a good description of the second-system
effect.

* Find out how the IBM Global Services Usability Engineering team
can help you improve your products and make them easier to use.

* Also, avail yourself of these valuable resources on developerWorks:
o The Web Architecture zone specializes in articles covering
various Web-based solutions.
o The Developer Bookstore presents a comprehensive listing of
technical books, including hundreds of Web-related titles.

About the author
Peter Seebach has been using computers for years
and is gradually becoming acclimated. He still doesn't know why mice
need to be cleaned so often, though. You can contact Peter at
crankyuser@seebs.plethora.net.
--
C:\>