Authors: Mershack Okoe, Radu Jianu
Abstract: Evaluating visualizations can be time-consuming. We present a design that automates quantitative and qualitative evaluation of graph visualizations by leveraging crowdsourcing and a set of predefined evaluation modules based on a graph task taxonomy. Specifically, we allow designers to quickly set up a user study with representative graph tasks, measurable metrics, and evaluation methods. Our system then uses a thin-client architecture to automatically generate a web-accessible user study from our desktop visualization, places the study on Mechanical Turk, and uses a statistical package to automatically process incoming results. To evaluate our system, we performed three concrete evaluation studies, each of which was configured and deployed in less than an hour. We discuss how our system can be used to automatically evaluate interactive graph visualizations, how it can facilitate the comparison of alternative designs during iterative design processes, and how it could be used to find good default configurations for graph visualizations.