WE WILL ULTIMATELY LOSE THE WAR
One aspect of performance that can be counterintuitive (and depressing) is that the performance
of every application can be expected to decrease over time—meaning over new release cycles of
the application. Often, that performance difference is not noticed, since hardware improvements
make it possible to run the new programs at acceptable speeds.
Think what it would be like to run the Windows Aero interface on the same computer that used to run Windows 95. My favorite computer ever was a Mac Quadra 950, but it couldn't run Mac OS X (and if it did, it would be so very, very slow compared to Mac OS 7.5). On a smaller level, it may seem that Firefox 23.0 is faster than Firefox 22.0, but those are essentially minor release versions. With its tabbed browsing and synced scrolling and security features, Firefox is far more powerful than Mosaic ever was, but Mosaic can load basic HTML files located on my hard disk about 50% faster than Firefox 23.0.
Of course, Mosaic cannot load actual URLs from almost any popular website; it is no longer possible to use Mosaic as a primary browser. That is also part of the general point here: particularly
between minor releases, code may be optimized and run faster. As performance engineers, that's
what we can focus on, and if we are good at our job, we can win the battle. That is a good and
valuable thing; my argument isn't that we shouldn't work to improve the performance of existing
applications.
But the irony remains: as new features are added and new standards adopted (a requirement to match competing programs), programs can be expected to get larger and slower.
I think of this as the “death by 1,000 cuts” principle. Developers will argue that they are just adding a very small feature and it will take no time at all (especially if the feature isn't used). And then other developers on the same project make the same claim, and suddenly the performance has regressed by a few percent. The cycle is repeated in the next release, and now program performance has regressed by 10%. A couple of times during the process, performance testing may hit some resource threshold: a critical point in memory use, or a code cache overflow, or something like that. In those cases, regular performance tests will catch that particular condition and the performance team can fix what appears to be a major regression. But over time, as the small regressions creep in, it will be harder and harder to fix them.
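The arithmetic behind those cuts is worth seeing once. Here is a minimal sketch of how individually negligible regressions compound across releases; the feature counts and the 1% per-feature cost are illustrative assumptions, not measurements from any real project:

    public class DeathByAThousandCuts {
        public static void main(String[] args) {
            double performance = 1.0;      // relative to the current release
            int featuresPerRelease = 5;    // hypothetical feature count
            double costPerFeature = 0.01;  // hypothetical 1% slowdown per feature
            for (int release = 1; release <= 3; release++) {
                for (int f = 0; f < featuresPerRelease; f++) {
                    performance *= 1.0 - costPerFeature;
                }
                System.out.printf("After release %d: %.1f%% of original performance%n",
                        release, performance * 100);
            }
        }
    }

This prints roughly 95.1%, 90.4%, and 86.0%: no single feature ever looked significant, but after three such releases the program runs about 14% slower.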
I'm not advocating here that you should never add a new feature or new code to your
product; clearly there are benefits as programs are enhanced. But be aware of the trade-offs
you are making, and when you can, streamline.