We are surrounded by human-authored visual communication: graphic design, photographs, illustrations, and more. While appreciation of visual content is certainly subjective, there is a surprising amount of agreement in what humans find beautiful and effective. In this talk I will discuss three ongoing and unpublished projects that model human aesthetic taste using crowdsourcing and machine learning; we then employ these models in interfaces that help users create better visual content. Our first project tries to build better interfaces for selecting fonts than the standard linear menu by understanding how humans perceive font attributes (e.g., is a font ‘dramatic’ or ‘legible’?). Second, we try to model similarity of visual style in vector illustration, so that users can search online clip art repositories by visual style when creating clip art mashups. Third, we build models of how others perceive our expressions in portrait photographs, so that we can be practiced and ready when someone points a camera at us.
Bio: I am a principal scientist at Adobe Systems, Inc., and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in June 2006 advised by David Salesin. I spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. I completed my Masters and Bachelors at MIT majoring in computer science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research Laboratory (MERL). My Ph.D. dissertation won an honorable mention for the 2006 ACM Doctoral Dissertation Award. My areas of research are digital imaging, computer graphics, computer vision, and data-driven design. My research can be found in multiple products, including Microsoft Photo Gallery, Adobe Photoshop, Adobe Premiere Pro, and Adobe After Effects.
Chances are your phone spends more time in a pocket than in your hand (e.g., in your pants, jacket, or purse). While we might typically think of all this time as “not using our phone,” it represents an important opportunity for human-computer interaction. In this talk, I’ll describe work we have been doing on context sensing, context sharing, context APIs, and even touch input while your phone is in your pocket.
Bio: I am a Researcher in the Computational User Experiences (CUE) group at Microsoft Research. My general research interests are Human-Computer Interaction (HCI) and Ubiquitous Computing (UbiComp). I spend most of my time creating new human-computer input and output techniques. I also write my bios in the first person. The broad goal of my work is enabling computing to aid people throughout every aspect of their lives. My focus toward this goal is the concept of Always-Available Computing: the idea that computing can and should be at our fingertips no matter where we are or what we are doing. In 2010 I completed my PhD in the Computer Science & Engineering department at the University of Washington, where I was advised by Professor James Landay and Dr. Desney Tan. In my dissertation work, I created new human-computer interfaces by exploring techniques to harness the untapped bandwidth of the human body for physiological interfaces to computing. The focus of my work in this area was muscle-computer interfaces. This work has led to many publications and media coverage, including recognition as one of Technology Review's 2010 Young Innovators Under 35.
Note: Due to technical issues, the rest of the talk was not recorded. The remaining slides will be posted as a separate video file. Slides can be viewed here: vimeo.com/80535518, with the password "slides".
For the last 25 years, technologists and designers have dreamt of integrating computing technology into the world around them. From Ubiquitous Computing to Tangible Media and Things-that-Think, this vision continues to capture researchers’ and designers’ imaginations; and in many ways, it has finally begun to bear fruit. Today’s world is full of smart parking meters, Google Glass, smart watches, Fitbits, and sensor networks.
But is a world of things infested with computer chips and electronic materials viable in the current environmental crisis? Electronic materials and products present unique environmental challenges. The manufacture of computer chips requires toxic processes and enormous amounts of energy, which is not recoverable through recycling. Smart objects and materials are composites made from many different types of materials (plastics, metals, textiles, carbons) and, as such, are difficult if not impossible to recycle.
Given these challenges, should designers be espousing the indiscriminate migration of technology into the world around us? Or is there a more environmentally ethical and wise approach to technology and design? Equally important, can the needs of the environment dovetail with human centered design? Do people want their lives burdened with uncountable pieces of technology?
Today’s designers also find themselves in the service of the corporation. Industrial design has survived by morphing into product design and becoming a way to create new needs and new products that fill those needs. Given technology’s economic status, many of these products are electronic. Most designers sense that the continued creation of short-lived products is unsustainable, and short-lived electronic products especially so. But how can designers act? Is there a way to behave in an environmentally ethical fashion within the corporate economic framework, or must designers search outside it?
I have spent the last 15 years exploring the vision of smart objects and materials in electronic textiles. My work has included technology research, design, art and entrepreneurship. I have created electronic fashions, interactive and electronic artworks, patents and design products. My talk will present an overview of my work in electronic textiles, including early work at the MIT Media Lab, artworks, and products from my technology design company, International Fashion Machines. I will discuss the creative motivation that drove my work, and the technical and economic lessons learned from it. Finally, I will present my ideas for Technological Minimalism, which grew out of my electronic textile and wearable practice, and the questions that I believe young designers must address in the current environmental crisis.
Bio: Maggie Orth is an artist, writer, and technologist based in Seattle, WA. For the last 15 years her artistic practice has focused on electronic textiles and interactive technology. She has created textiles that change color under computer control, interactive textile sensor and light artworks, and robotic public art. Orth is an interdisciplinary thinker with 15 years of experience in innovation, technology research, design, and entrepreneurship. Her areas of experience include: sustainability, technology, design-thinking, interface design, usability, product development and design, entrepreneurship, brainstorming, standards, intellectual property, wearable computing, and storytelling and verbal communication. Maggie holds patents, has developed her own innovative UL-listed products, conducted research for DARPA, and worked with companies to develop wearable and technology products.
Maggie developed her art and design in the context of her company, International Fashion Machines, Inc. (IFM), which she founded in 2002. At IFM, Maggie focused on the creative, technical, and commercial development of electronic textiles. She wrote patents, conducted research, and developed her own technology and design products, including the PomPom Dimmer. Maggie holds a PhD in Media Arts and Sciences from the Massachusetts Institute of Technology, Media Lab. She also earned a Masters of Science from MIT's Center for Advanced Visual Studies, and a BFA from Rhode Island School of Design. She has completed two certificates in non-fiction and fiction writing at the University of Washington.
As computers become more pervasive, more programs deal with real-world input and output (real-world I/O), such as processing camera images and controlling robots. Developing such programs calls for example-centric programming, which involves retrieving real-world I/O data. However, most existing integrated development environments (IDEs) are equipped only with text-based editors and debuggers; they cannot show real-world I/O data intuitively, and they provide insufficient support for the programmer's workflow. To address this issue, I introduce the use of graphical representations of the real world within text-based IDEs, allowing the programmer to take advantage of both concrete examples and text-based programming.
Bio: Jun Kato (junkato.jp/) is interested in the broad area of Human-Computer Interaction, but has been especially focused on designing tools for programming interactions between humans and the real world. This is his second summer in Seattle. Last year, he worked with the TouchDevelop team in the Research in Software Engineering (RiSE) group at Microsoft Research, where he developed a live programming interface for GUI applications. This year, he is working at Adobe Research, Seattle.
In this talk I'll describe the opportunities and challenges of doing user experience research in an environment that is rich with data. I'll share how Facebook's research team uses a multi-method, multi-discipline approach that brings together qualitative and quantitative methods to design and improve products that 1.1 billion people use every month. I'll describe how the team works, present specific examples of recent projects, and discuss some of the challenges and thorny issues that Facebook faces.
Bio: Judd Antin is a researcher and manager of the Engagement and Core Experiences research group at Facebook. In his research, Judd uses the methods and practices of UX and data science to study mediated interactions and the connections between attitudes and behaviors. Judd's research draws from social psychology, communication, and behavioral economics, and focuses on motivations and incentives for online participation, collective action and social dilemmas, and trust. Judd earned a PhD from the School of Information at UC Berkeley in 2010.