Psychology literature suggests that humans are happier if their basic needs of competence, autonomy, and relatedness are satisfied. Recent studies suggest that this is one of the driving factors behind why video games are so engaging and immersive—they effectively satisfy these basic needs. Our understanding of the benefits of multi-touch technology is at a similar stage of progress as video game literature was before this realization; as researchers, we sometimes find it surprising how satisfied people are with multi-touch technology, and are often at a loss to demonstrate performance benefits.
In this talk, I will discuss the idea that the success of multi-touch technology is due largely to its ability to satisfy human needs. By leveraging physical-like responses from technology that senses body, hand, and finger movement, we improve feelings of competence and autonomy, and the communicative power of these physical actions improves our ability to relate to others when using these devices collaboratively. I will demonstrate these concepts in the context of my own research on 3D interaction using multi-touch tables, and several applications of this research in therapy, education, and gaming.
Mark Hancock is an assistant professor at the University of Waterloo in the Department of Management Sciences and Associate Director of Research Training for the Games Institute, and is currently visiting Microsoft Research in Redmond. Before starting at the University of Waterloo, he completed his PhD at the University of Calgary and his MSc at The University of British Columbia. His research includes the design and development of interfaces and interaction techniques for digital surfaces, with a focus on physical-like 3D interaction.
We know how to design beautiful and novel experiences, but at the end of the day, novelty doesn’t last, nor does it drive sustained value for customers/audiences/participants and organizations alike. Lasting value on a personal level and premium value on a business level come from the same place, and we now have models with which to strategize, design, develop, and deploy products and services that maximize both—if we only begin to use them. Nathan will discuss new models for developers of all types (design, engineering, management, and strategy alike) to use in creating the deepest, most meaningful customer experiences that drive relationships targeted on the best value for everyone concerned.
Nathan Shedroff is the chair of the ground-breaking MBA in Design Strategy at California College of the Arts (CCA) in San Francisco, CA. This program prepares the next generation of innovation leaders for a world that is profitable, sustainable, ethical, and truly meaningful by uniting the perspectives of systems thinking, design thinking, sustainability, and generative leadership into a holistic strategic framework.
He is a pioneer in Experience Design, Interaction Design, and Information Design, is a serial entrepreneur, and researches, speaks, and teaches internationally about meaning, strategic innovation, and science fiction interfaces. His many books include: Experience Design 1.1, Making Meaning, Design is the Problem, Design Strategy in Action, and the new Make It So.
He holds an MBA in Sustainable Management from Presidio Graduate School and a BS in Industrial Design from Art Center College of Design. He worked with Richard Saul Wurman at TheUnderstandingBusiness and, later, co-founded vivid studios, a decade-old pioneering company in interactive media and one of the first Web services firms on the planet. vivid’s hallmark was helping to establish and validate the field of information architecture, by training an entire generation of designers in the newly emerging Web industry.
Nathan is on the board of directors for Teague and the AIGA.
This talk presents co-performance as a potential tool for anonymous pseudonymity. We can define anonymised pseudonyms as identities for communication, untraceably disassociated from a performer, that are persistent in use long enough to establish some measure of social reputation. Digitally authoring or controlling an identity allows for provenance to be hidden through cryptographic systems. However, mass storage and processing can uncover 'signatures' and 'fingerprints' in diverse communication modes. These include approaches as varied as writing style, time-of-day analysis, and camera or other hardware profiling.
Anonymous pseudonyms enable creative experimentation and are a healthy extension of multifaceted identity. Historical norms in writing, performance and innovation demonstrate the broad-reaching benefit of creativity under the security of invented names and characters. On the web, particularly in the early years of the internet, easy forms of anonymous reputation management have proven invaluable for numerous contexts of social interaction and debate. Untraceable pseudonyms also make open critique and whistleblowing possible.
Co-performed identities, in which multiple people control a single character, offer an opportunity for increasing the complexity of attempts at 'fingerprinting'. Examples are presented to illustrate this design space, including historical cases such as Nicolas Bourbaki, expert co-performers such as commercial puppeteers, and trends in internet use in which prototype co-performed identities seem to have spontaneously emerged.
Bio: Ben Dalton is currently investigating the theme of 'design for digital pseudonymity' at the Royal College of Art, London. Ben is a Principal Lecturer in the Faculty of Art, Environment & Technology at Leeds Metropolitan University, and is on sabbatical to undertake PhD research into Digital Public Space as part of the AHRC (Arts & Humanities Research Council UK) funded Creative Exchange project. Ben has recently shown work, given talks, and run workshops on themes of digital identity performance and control at venues including FACT Liverpool, RCA London, FutureEverything Manchester, Today's Art The Hague, Berghs Stockholm, Abandon Normal Devices Liverpool, WWW2013 Rio de Janeiro, Sensuous Knowledge Bergen, and DIS Newcastle.
Ben has a background in ubiquitous computing and mobile sensor networks from the MIT Media Lab, and has conducted research in the Århus University Electron-Molecular Interaction group, the University of Leeds Spintronics and Magnetic Nanostructures lab, and Jim Henson’s Creature Shop, London. Recently he has been a regular guest professor at the Bergen National Academy of Art and Design, teaching workshops on interaction design. Ben was a co-investigator on two EPSRC (Engineering & Physical Sciences Research Council UK) funded research projects into visualising pedestrian usage patterns in interactive urban spaces and wearable computing sensors for ubiquitous computing applications. He is also currently co-directing the Data is Political project into the aesthetic, ethical, and spatial dimensions of information and its relation to power, the production of knowledge, and the construction of urban spaces.
Although measurements are ubiquitous in documents and visualizations, they can be challenging to understand due to unfamiliar units (e.g., grams, decaliters) or unfamiliar magnitudes (e.g., 380m, 3 tons). Professional designers and educators create visual and interactive aids that enact strategies known to help people understand measurements, including (1) re-unitization: re-expressing a measurement using a new unit (e.g., 1,384m is 10 times the length of a ship), (2) scale conversion: re-expressing a measurement along one scale, like count, in another scale, like price or volume (e.g., if 25,929 represented a number of pillows it would equate to $715,121.82 or a volume of 1,136m³), (3) proportional analogy: re-expressing the ratio between a pair of measurements using a pair of more familiar measurements with an equivalent ratio (e.g., the difference between 47g and 1113g is like the difference between the weight of a spatula and a computer monitor), and (4) putting measurements in context: an unfamiliar measurement is compared to a set of more familiar measurements (e.g., 60kg is compared to the weight of a bicycle and the weight of an oven). I will present a set of tools to facilitate these strategies at scale. These tools rely on a database of objects and their measurements, including weight, height, length, volume, and cost. We developed this database using a three-stage pipeline that employs online semantic databases like WordNet and ImageNet, object databases like Amazon and Wikipedia, and crowdsourcing techniques. Interactive applications apply automated versions of the strategies to facilitate understanding of measurements as a user interacts with text articles or data visualizations.
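To make the flavor of these strategies concrete, here is a minimal sketch of strategies (1) and (3) against a tiny in-memory stand-in for the object-measurement database. The object names, measurement values, and scoring heuristic are invented for illustration; they are not taken from the actual system, which builds its database from WordNet, ImageNet, Amazon, Wikipedia, and crowdsourcing.

```python
from itertools import permutations

# Illustrative stand-in for the object-measurement database.
# All values are rough guesses for the sake of the sketch.
LENGTHS_M = {"ship": 138.4, "bus": 12.0, "bicycle": 1.8}
WEIGHTS_KG = {"spatula": 0.2, "monitor": 4.7, "bicycle": 12.0, "oven": 60.0}

def reunitize(length_m):
    """Strategy (1): re-express a length as a round multiple of a
    familiar object, preferring small multiples over large ones."""
    def badness(item):
        ratio = length_m / item[1]
        if ratio < 1:
            return float("inf")  # only express as "N times" a smaller object
        # Penalize distance from a round multiple, plus a small cost
        # per unit of the multiple so "10x ship" beats "769x bicycle".
        return abs(ratio - round(ratio)) + 0.01 * round(ratio)
    name, size = min(LENGTHS_M.items(), key=badness)
    return f"about {round(length_m / size)} times the length of a {name}"

def proportional_analogy(a, b):
    """Strategy (3): find a familiar pair of objects whose weight
    ratio is closest to b/a (ratios are unitless, so a and b may be
    in grams while the database is in kilograms)."""
    target = b / a
    small, large = min(
        permutations(WEIGHTS_KG, 2),
        key=lambda p: abs(WEIGHTS_KG[p[1]] / WEIGHTS_KG[p[0]] - target),
    )
    return f"like the difference between a {small} and a {large}"

print(reunitize(1384))               # about 10 times the length of a ship
print(proportional_analogy(47, 1113))  # like the difference between a spatula and a monitor
```

A production version would, as the abstract describes, draw its candidate objects from a large crowdsourced database and apply additional familiarity ranking, but the core matching step is a search like the ones above.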
Mobile interaction suffers from two fundamental issues: the small form factor of mobile devices – the device constraint – and users' attention being divided by real-world tasks – the user constraint. Meanwhile, the rich sensing capabilities of mobile and wearable devices, as well as their seamless integration into our everyday activity, create new opportunities for mobile interaction beyond the pointing and clicking of traditional GUIs. In this talk, I first describe how we can significantly reduce user effort in mobile interaction, at scale, by leveraging gestural input. I then describe how new tools and frameworks can empower developers to leverage these new input dimensions, such as gestural and contextual input, in their applications. Through these systems, I will discuss how these input dimensions, though natural to the user, deeply challenge traditional interactive computing, and how we can address this challenge by providing high-level tool support.