We are surrounded by human-authored visual communication: graphic design, photographs, illustrations, and more. While appreciation of visual content is certainly subjective, there is a surprising amount of agreement in what humans find beautiful and effective. In this talk I will discuss three ongoing and unpublished projects that model human aesthetic taste using crowdsourcing and machine learning; we then employ these models in interfaces that help users create better visual content. Our first project tries to build better interfaces for selecting fonts than the standard linear menu by understanding how humans perceive font attributes (e.g., is a font ‘dramatic’ or ‘legible’?). Second, we try to model similarity of visual style in vector illustration, so that users can search online clip art repositories by visual style when creating clip art mashups. Third, we build models of how others perceive our expressions in portrait photographs, so that we can be practiced and ready when someone points a camera at us.
Bio: I am a principal scientist at Adobe Systems, Inc., and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in June 2006, advised by David Salesin. I spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. I completed my Masters and Bachelors at MIT, majoring in computer science; while there I was a research assistant in the Computer Graphics Group and an intern at the Mitsubishi Electric Research Laboratory (MERL). My Ph.D. dissertation won an honorable mention for the 2006 ACM Doctoral Dissertation Award. My areas of research are digital imaging, computer graphics, computer vision, and data-driven design. My research can be found in multiple products, including Microsoft Photo Gallery, Adobe Photoshop, Adobe Premiere Pro, and Adobe After Effects.