Fear and Loathing of Complexity: Although most infovis papers do not
have detailed proofs of complexity, technique papers that focus on accelerating
performance should usually include some statement of algorithm complexity.
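Such a statement can often be a single sentence. As a hypothetical illustration (the algorithm and bounds here are ours, not drawn from this text), a technique paper on accelerated force-directed layout might write:

```latex
% Illustrative complexity statement; the algorithm and bounds are
% an assumed example, not taken from the source.
Computing repulsive forces naively costs $O(n^2)$ time per iteration
for $n$ nodes; a Barnes--Hut approximation reduces this to
$O(n \log n)$, for $O(k\,n \log n)$ total over $k$ iterations.
```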
Straw Man Comparison: When comparing your technique to previous work, compare against state-of-the-art approaches rather than outdated work. For example, authors unaware of recent work in multilevel approaches to force-directed graph drawing [10] sometimes compare against very naive implementations of spring systems. At the lower level, if you compare benchmarks of your implementation to performance figures quoted from a previous publication and your hardware configuration is more powerful, you should explicitly discuss the difference in capabilities. Better yet, rerun the benchmarks for the competing algorithms on the same machine you use to test your own.
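The same-machine point can be made concrete with a small harness. The following is a minimal sketch; the two workload functions are illustrative stand-ins, not real layout algorithms:

```python
# Minimal sketch of a fair same-machine benchmark. The two functions
# below are placeholder workloads standing in for a naive and an
# accelerated technique; they are assumptions for illustration only.
import random
import statistics
import time

def naive_pairwise(points):
    # O(n^2) all-pairs pass, standing in for a naive spring system.
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(points[i] - points[j])
    return total

def sorted_sweep(points):
    # O(n log n) pass, standing in for an accelerated technique.
    pts = sorted(points)
    return sum(b - a for a, b in zip(pts, pts[1:]))

def benchmark(fn, data, runs=5):
    """Median wall-clock time over several runs, to damp timing noise."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Identical inputs, identical machine, identical timing method:
# any difference now reflects the algorithms, not the hardware.
data = [random.random() for _ in range(2000)]
for fn in (naive_pairwise, sorted_sweep):
    print(f"{fn.__name__}: {benchmark(fn, data):.4f}s")
```

Reporting the median of several runs damps scheduler noise, and timing both candidates on identical inputs in the same process removes hardware as a confound.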
Tiny Toy Datasets: Avoid using only tiny toy datasets in technique papers that refine previously proposed visual encodings. While small synthetic benchmarks can be useful for expository purposes, your validation should include datasets of the same size as those used by state-of-the-art approaches. Similarly, you should use datasets characteristic of those in your target application.
On the other hand, relatively small datasets may well be appropriate for a
user study, if they are carefully chosen in conjunction with some specific target
task and this choice is explained and justified.
But My Friends Liked It: Positive informal evaluation of a new infovis system by a few of your infovis-expert labmates is not very compelling evidence that a new technique is useful for novices or scientists in other domains. While the guerrilla/discount methodology is great for finding usability problems with products [27], a stronger approach would be informal evaluation with more representative subjects, or formal evaluation with rigorous methodology.
Unjustified Tasks: Beware of running a user study where the tasks are not
justified. A study is not very interesting if it shows a nice result for a task
that nobody will ever actually do, or a task much less common or important
than some other task. You need to convince the reader that your tasks are a
reasonable abstraction of the real-world tasks done by your target users. If you
are the designer of one of the systems studied, be particularly careful to make a
convincing case that you did not cherry-pick tasks with a bias to the strengths
of your own system.
5 Final Pitfalls: Style and Submission
After you have a full paper draft, you should check for the final-stage pitfalls.
5.1 Writing Style Pitfalls
Several lower-level pitfalls pertain to writing style.