• Run these tests often, so that we quickly become aware of any deterioration in performance. We might consider incorporating these tests into a continuous delivery build pipeline, failing the build if the test results exceed a certain threshold.
• Run these tests in-process on a single thread. There's no need to simulate multiple
clients at this stage: if the performance is poor for a single client, it's unlikely to
improve for multiple clients. Even though they are not, strictly speaking, unit tests,
we can drive them using the same unit testing framework we use to develop our
unit tests.
• Run each query many times, picking starting nodes at random each time, so that we can see the effect of starting from a cold cache, which is then gradually warmed as multiple queries execute. (A sketch of such a test follows this list.)
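To make this concrete, here is a minimal sketch of such a query performance test, written as a JUnit test against the Neo4j 3.x embedded Java API. The Cypher query, the 1,000-iteration count, the 200 ms threshold, and the randomPersonId helper are all illustrative assumptions, not fixtures from the text:

import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

import org.junit.Test;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.graphdb.Transaction;

public class QueryPerformanceTest {

    // Assumed to be initialized elsewhere (e.g., in a @Before method)
    // against a representatively sized dataset.
    private GraphDatabaseService db;

    @Test
    public void friendOfFriendQueryShouldMeetThreshold() {
        String cypher =
            "MATCH (p:Person {id:$id})-[:FRIEND]-()-[:FRIEND]-(fof) RETURN count(fof)";
        Random random = new Random();
        List<Long> timingsMs = new ArrayList<>();

        // Many runs, with a random start node each time: the first runs hit
        // a cold cache, which warms gradually as the queries execute.
        for (int i = 0; i < 1000; i++) {
            Map<String, Object> params = new HashMap<>();
            params.put("id", randomPersonId(random));
            long start = System.nanoTime();
            try (Transaction tx = db.beginTx()) {
                Result result = db.execute(cypher, params);
                while (result.hasNext()) {
                    result.next(); // exhaust the result so the query really runs
                }
                tx.success();
            }
            timingsMs.add((System.nanoTime() - start) / 1_000_000);
        }

        Collections.sort(timingsMs);
        long p98 = timingsMs.get((int) Math.ceil(timingsMs.size() * 0.98) - 1);
        // A failing assertion fails the build, flagging the deterioration.
        assertTrue("98th percentile was " + p98 + " ms", p98 <= 200);
    }

    // Hypothetical helper: a real test would sample ids known to exist
    // in the test dataset.
    private long randomPersonId(Random random) {
        return random.nextInt(1_000_000);
    }
}

Because the test runs in-process on a single thread, the timings reflect the query itself rather than client concurrency or network effects.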
Application performance tests
Application performance tests, as distinct from query performance tests, test the performance of the entire application under representative production usage scenarios. As with query performance tests, we recommend that this kind of performance testing is done as part of everyday development, side by side with the development of application features, rather than as a distinct project phase.⁵ To facilitate application performance testing early in the project life cycle, it is often necessary to develop a “walking skeleton,” an end-to-end slice through the entire system, which can be accessed and exercised by performance test clients. By developing a walking skeleton, we not only provide for performance testing, but we also establish the architectural context for the graph database part of our solution. This enables us to verify our application architecture, and identify layers and abstractions that allow for discrete testing of individual components.
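As an illustration only, a walking skeleton for a social application might be as thin as a single HTTP endpoint wired through to the embedded database. Everything in the sketch below, including the endpoint path, the query, and the use of the JDK's built-in HttpServer, is an assumption made for the example, not a prescription from the text:

import java.io.File;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.Map;

import com.sun.net.httpserver.HttpServer;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Result;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class WalkingSkeleton {

    public static void main(String[] args) throws Exception {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase(new File("data/graph.db"));

        // One end-to-end slice: HTTP request in, Cypher query against the
        // graph, plain-text response out. Performance test clients can
        // exercise this path long before the full feature set exists.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/friends-of-friends", exchange -> {
            String id = exchange.getRequestURI().getQuery().replace("id=", "");
            Map<String, Object> params = Collections.singletonMap("id", id);
            String body;
            try (Transaction tx = db.beginTx()) {
                Result result = db.execute(
                        "MATCH (p:Person {id:$id})-[:FRIEND]-()-[:FRIEND]-(fof) " +
                        "RETURN count(fof) AS c", params);
                body = result.hasNext() ? result.next().get("c").toString() : "0";
                tx.success();
            }
            byte[] bytes = body.getBytes();
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}

Thin as it is, this slice exercises every layer a production request would touch, which is what makes it a useful target for early performance testing.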
Performance tests serve two purposes: they demonstrate how the system will perform
when used in production, and they drive out the operational affordances that make it
easier to diagnose performance issues, incorrect behavior, and bugs. What we learn in
creating and maintaining a performance test environment will prove invaluable when
it comes to deploying and operating the system for real.
When drawing up the criteria for a performance test, we recommend specifying percentiles rather than averages. Never assume a normal distribution of response times: the real world doesn't work like that. For some applications we may want to ensure that all requests return within a certain time period. In rare circumstances it may be important for the very first request to be as quick as when the caches have been warmed. But in the majority of cases, we will want to ensure that the majority of requests return within a certain timeframe.
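As a toy illustration of why percentiles beat averages (the numbers below are synthetic, not from the text), consider a latency sample with a long tail: the average looks healthy while the 98th percentile exposes the slow requests.

import java.util.Arrays;

public class PercentileVsAverage {

    // nth percentile by the nearest-rank method, on a sorted copy of the data
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // 95 fast responses and 5 slow stragglers: a long tail,
        // not a normal distribution
        long[] latenciesMs = new long[100];
        Arrays.fill(latenciesMs, 0, 95, 20L);     // 20 ms
        Arrays.fill(latenciesMs, 95, 100, 2000L); // 2 s outliers

        double average = Arrays.stream(latenciesMs).average().orElse(0);
        System.out.println("average: " + average + " ms");                  // 119.0 ms: looks fine
        System.out.println("p98: " + percentile(latenciesMs, 98) + " ms"); // 2000 ms: tail exposed
    }
}

Asserting on the 98th percentile, as the earlier test sketch does, catches exactly this kind of tail, which an average-based criterion would wave through.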
5. A thorough discussion of agile performance testing can be found in Alistair Jones and Patrick Kua, “Extreme
Performance Testing,” The ThoughtWorks Anthology, Volume 2 (Pragmatic Bookshelf, 2012).