Authors: Connor C. Gramazio, Jeff Huang, David H. Laidlaw
Abstract: We show how mouse interaction log classification can help visualization toolsmiths understand how their tools are used “in the wild” through an evaluation of MAGI, a cancer genomics visualization tool. Our primary contribution is an evaluation of twelve visual analysis task classifiers, which compares predictions to task inferences made by pairs of genomics and visualization experts. Our evaluation uses common classifiers that are accessible to most visualization evaluators: k-nearest neighbors, linear support vector machines, and random forests. By comparing classifier predictions to visual analysis task inferences made by experts, we show that simple automated task classification can have up to 73% accuracy and can separate meaningful logs from “junk” logs with up to 91% accuracy. Our second contribution is an exploration of common MAGI interaction trends using classification predictions, which expands current knowledge about ecological cancer genomics visualization tasks. Our third contribution is a discussion of how automated task classification can inform iterative tool design. These contributions suggest that mouse interaction log analysis is a viable method for (1) evaluating task requirements of client-side-focused tools, (2) allowing researchers to study experts on larger scales than is typically possible with in-lab observation, and (3) highlighting potential tool evaluation bias.
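To make the approach concrete, the sketch below shows the general shape of one of the classifiers named in the abstract: a k-nearest-neighbors model that maps interaction-log features to task labels, including a "junk" label for logs without meaningful activity. The feature set (click/hover/scroll counts), the task labels, and the toy data are all illustrative assumptions, not the paper's actual features or labels.

```python
# Hedged sketch: a minimal k-nearest-neighbors task classifier over
# hypothetical mouse-interaction-log features (click, hover, scroll counts).
# Feature choices, labels, and training data are illustrative only and do
# not come from the MAGI study.
from collections import Counter
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, task_label) pairs; query: feature_vector.
    Returns the majority task label among the k nearest training sessions."""
    nearest = sorted(train, key=lambda pair: euclidean(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy logged sessions: (clicks, hovers, scrolls) -> hypothetical task label.
train = [
    ((40, 120, 5),  "explore"),
    ((35, 100, 8),  "explore"),
    ((5,  10,  60), "navigate"),
    ((8,  15,  55), "navigate"),
    ((2,  3,   2),  "junk"),
    ((1,  2,   1),  "junk"),
]

print(knn_predict(train, (38, 110, 6)))  # → explore
print(knn_predict(train, (1, 2, 2)))     # → junk
```

In the paper's setting the same pattern would apply with linear support vector machines or random forests in place of k-NN, with predictions scored against the expert-inferred task labels to obtain the reported accuracies.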