Accessing images online is often difficult for users with vision impairments. This population relies on text descriptions of images that vary based on website authors’ accessibility practices. Where one author might provide a descriptive caption for an image, another might provide no caption for the same image, leading to inconsistent experiences. In this work, we present the Caption Crawler system, which uses reverse image search to find existing captions on the web and make them accessible to a user’s screen reader. We report our system’s performance on a set of 481 websites from alexa.com’s list of most popular sites to estimate caption coverage and latency, and also report blind and sighted users’ ratings of our system’s output quality. Finally, we conducted a user study with fourteen screen reader users to examine how the system might be used for personal browsing.
Presented by: Darren Guinness, Edward Cutrell, and Meredith Ringel Morris
Affiliations: University of Colorado Boulder, and Microsoft Research
Paper URL: cs.stanford.edu/~merrie/papers/captioncrawler.pdf
Guinness, D., Cutrell, E., & Morris, M. R. (2018, May). Caption Crawler: Enabling Reusable Alternative Text Descriptions Using Reverse Image Search. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM.
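The core idea in the abstract — find other pages that embed the same image via reverse image search, harvest their alt text or figure captions, and surface a caption to the screen reader — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `pages` list stands in for results from a reverse-image-search API, and picking the longest candidate is a simplistic stand-in for whatever ranking the real system uses.

```python
from html.parser import HTMLParser


class AltTextExtractor(HTMLParser):
    """Collect alt attributes and figcaption text for one target image URL."""

    def __init__(self, image_url):
        super().__init__()
        self.image_url = image_url
        self.captions = []
        self._in_figcaption = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src") == self.image_url:
            alt = (attrs.get("alt") or "").strip()
            if alt:  # ignore empty alt="" (decorative-image convention)
                self.captions.append(alt)
        elif tag == "figcaption":
            self._in_figcaption = True

    def handle_endtag(self, tag):
        if tag == "figcaption":
            self._in_figcaption = False

    def handle_data(self, data):
        if self._in_figcaption and data.strip():
            self.captions.append(data.strip())


def best_caption(pages, image_url):
    """Pool candidate captions from all pages embedding the image and
    return the longest one (a crude proxy for descriptiveness)."""
    candidates = []
    for html in pages:
        parser = AltTextExtractor(image_url)
        parser.feed(html)
        candidates.extend(parser.captions)
    return max(candidates, key=len, default=None)


# Hypothetical search results: two pages that embed the same image,
# one with a terse alt text and one with a richer figure caption.
pages = [
    '<img src="cat.jpg" alt="A cat">',
    '<figure><img src="cat.jpg" alt="">'
    '<figcaption>A tabby cat napping on a windowsill</figcaption></figure>',
]
print(best_caption(pages, "cat.jpg"))  # -> A tabby cat napping on a windowsill
```

In the deployed system this harvested caption would be injected into the page's accessibility tree so the user's screen reader announces it in place of a missing or empty description.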