Poster (InfoVis Honorable Mention)
Authors: Chufan Lai, Zhixian Lin, Can Liu, Yun Han, Ruike Jiang, Xiaoru Yuan
Abstract: In this paper, we propose a technique for automatically annotating visualizations based on the user's textual descriptions. In our approach, the annotation task is fulfilled by performing a series of automatic visual searches. First, the description of the visualization is parsed into search requests for certain visual entities. In parallel, all visual entities present in the visualization, along with their visual properties, are extracted using object detection techniques based on Mask R-CNN models. Knowing both what is present and what to look for, we then execute the generated search requests to anchor each descriptive sentence to the focal areas it describes. In the final step, the corresponding annotations can be crafted efficiently. We have built a prototype tool that allows the user to upload a visualization image with its descriptions and generates customized annotations.
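The abstract describes a pipeline of parsing sentences into search requests and matching them against detected visual entities. The sketch below illustrates that matching step only, under loud assumptions: the `VisualEntity` structure, the keyword-based parser, and the hard-coded detections stand in for the paper's NLP parsing and Mask R-CNN output, none of which are specified in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class VisualEntity:
    """Hypothetical stand-in for one detected chart element."""
    kind: str                      # e.g. "bar", "line", "point"
    bbox: tuple                    # (x, y, w, h) in image coordinates
    properties: dict = field(default_factory=dict)

def parse_description(sentence):
    """Toy parser: turn a sentence into a search request.
    A real system would use NLP; simple keyword matching is an assumption here."""
    text = sentence.lower()
    request = {"kind": None, "filters": {}}
    for kind in ("bar", "line", "point"):
        if kind in text:
            request["kind"] = kind
    for color in ("red", "blue", "green"):
        if color in text:
            request["filters"]["color"] = color
    return request

def visual_search(request, entities):
    """Return the entities matching the request's kind and property filters.
    Each hit's bbox is the focal area the sentence would be anchored to."""
    hits = []
    for e in entities:
        if request["kind"] and e.kind != request["kind"]:
            continue
        if all(e.properties.get(k) == v for k, v in request["filters"].items()):
            hits.append(e)
    return hits

# Mocked detector output standing in for Mask R-CNN results.
entities = [
    VisualEntity("bar", (10, 40, 20, 60), {"color": "red"}),
    VisualEntity("bar", (40, 20, 20, 80), {"color": "blue"}),
]

request = parse_description("The red bar shows the highest sales.")
anchors = visual_search(request, entities)
```

With the mocked data above, the sentence is anchored to the single red bar; an annotation generator could then place a callout at that bar's bounding box.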