In a video produced by EMC TV, we present the Living Observatory initiative, in which a dense sensor network documents a wetland restoration process. This effort supports ongoing research in restoration ecology and explores how sensor data might extend human perception to change our relationship with the environment.
We have been developing a dense sensor network to document the ecological processes resulting from a large-scale wetland restoration taking place at Tidmarsh Farms, a decommissioned 577-acre cranberry farm in southern Massachusetts. Every 30 seconds, thousands of data points are streamed to a server, capturing a rich picture of the environment in flux as the restoration proceeds. It has been imperative for us to find ways to represent this information graphically for a variety of users and audiences, ranging from research collaborators studying wetland ecosystems to the visiting public. To that end, we built MarshVis, a system that visualizes data from the sensor network, highlighting spatiotemporal and inter-sensor relationships while also exposing the system's operation. We implemented a number of web-based applications and developed strategies for real-time and historical exploration, as well as dynamic mapping. Our work is motivated by the need for interactive graphical tools that shed light on the delicate, interdependent ecological processes that make a natural environment sustainable. How can we expand the boundaries of public perception of natural phenomena at every scale?
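As one illustration of the real-time/historical split described above, the sketch below shows a minimal time-bucket downsampling routine of the kind a historical view might use; the function name, window size, and data layout are illustrative assumptions, not details of MarshVis itself.

```python
from statistics import mean

def downsample(readings, window_s=300):
    """Average (timestamp, value) pairs into fixed-width time buckets.

    A historical view can request coarse windows (e.g., 5 minutes here),
    while a real-time view can use a window equal to the sensors'
    30-second reporting interval. Names and windows are illustrative.
    """
    buckets = {}
    for t, v in readings:
        buckets.setdefault(int(t // window_s), []).append(v)
    # One (bucket_start_time, mean_value) point per window, in time order
    return [(k * window_s, mean(vs)) for k, vs in sorted(buckets.items())]

# Example: one simulated sensor reporting every 30 s for 10 minutes
readings = [(t, 20.0 + t / 600) for t in range(0, 600, 30)]
points = downsample(readings, window_s=300)
```

Here twenty 30-second readings collapse to two plotted points, one per 5-minute window, which keeps historical charts responsive without hiding the trend.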
MarshVis was created by Qiansheng Li as part of his 2014-2015 visiting research in the Responsive Environments Group at the MIT Media Lab.
The FingerSynth is a wearable musical instrument made up of a bracelet and a set of rings that enable its player to produce sound by touching nearly any surface in their environment. Each ring contains a small, independently controlled exciter transducer of the kind commonly used for bone-conduction audio. The rings sound loudly when they touch a hard object, and are practically silent otherwise. When a wearer touches their own (or someone else's) head, the contacted person hears the sound through bone conduction, inaudible to others.
The bracelet contains a microcontroller, a set of field-effect transistors (FETs), an accelerometer, and a battery. The microcontroller generates a separate audio signal for each ring, switched through the FETs, and can take user input through the accelerometer in the form of taps, flicks, and other gestures. The player controls the envelope and timbre of the sound by varying the physical pressure and the angle of their finger on the surface, or by touching differently resonant surfaces. Because its sound is shaped by direct, physical contact with objects and people, the FingerSynth encourages players to experiment with the materials around them and with one another, making music with everything they touch.
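As a rough illustration of per-ring signal generation, the Python sketch below synthesizes a pressure-scaled sine tone for a single ring. The sample rate, the per-ring frequencies, and the simple pressure-to-amplitude mapping are illustrative assumptions, not details of the actual FingerSynth firmware, where the envelope emerges from physical coupling with the touched surface.

```python
import math

SAMPLE_RATE = 8000  # Hz; illustrative, not the instrument's actual rate

def ring_tone(freq_hz, pressure, n_samples):
    """Generate a sine tone for one ring's exciter transducer.

    `pressure` in [0, 1] stands in for contact pressure; here it simply
    scales amplitude, a crude proxy for how touch shapes the envelope.
    """
    amp = max(0.0, min(1.0, pressure))
    return [amp * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

# Four rings, one frequency each (hypothetical values);
# a firm touch on ring 0 and a light touch on ring 3
ring_freqs = [220.0, 330.0, 440.0, 550.0]
firm = ring_tone(ring_freqs[0], 0.9, 256)
light = ring_tone(ring_freqs[3], 0.2, 256)
```

On the real instrument each such signal would be routed to its ring through one of the FETs, so independent tones can sound simultaneously from different fingers.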
G. Dublon and J. A. Paradiso, "FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere," to appear at the International Conference on New Interfaces for Musical Expression (NIME), London, UK, 2014.
ListenTree is an audio-haptic display embedded in the natural environment. A visitor to the installation notices a faint sound appearing to emerge from a tree (or several), and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to both feel and hear crystal-clear sound through bone conduction. To create this effect, a specialized audio exciter transducer is weatherproofed and attached to the underground base of the tree, transforming it into a living speaker that channels audio through its branches and provides vibrotactile feedback. Any kind of sound can be played through the tree, including live audio or pre-recorded tracks. In this video, filmed at the Centro Nacional de las Artes in Mexico City, visitors to a festival marking the Mexican Day of the Dead listen to poetry and stories about the origins of the tradition.
ListenTree has been presented in site-specific installations at the Centro Nacional de las Artes in Mexico City (2014), the RIDM International Documentary Film Festival in Montreal (2014), the MIT Museum in Cambridge (2014), ICAD 2014 in New York City, and CHI 2015 in Seoul.
Our appliances are becoming more connected, but their control interfaces do not treat them as part of an interconnected world. We are stuck using one-dimensional controllers in a multi-dimensional world. How do we design interfaces that let us control our environment along the dimensions in which we perceive it? In the Responsive Environments Group, we are exploring this question for lighting applications.