Description of the project
Fusiform Polyphony is a series of 6 interactive robotic sculptures that compose their own music from participants' facial images. Micro video cameras mounted on the ends of these robots move toward people's body heat and faces, capturing human snapshots. These images are digitally processed and pixelated to produce a constantly evolving generative soundscape, in which facial features and interaction are turned into melody, tone and rhythm.
Fused together, these elements cast the viewer as participant, actor and conductor, defining new ways of interacting with robots and allowing the robots to interact safely with humans in complex, natural environments. An important element of this installation is seeing oneself through the robots' artificial eyes: as each robot tracks and captures images, it reveals the nature of algorithmic robotic vision.
These works are covered in human hair and explore new morphologies of soft robotics, an emerging field in which natural materials make the works approachable and friendly. The hair points to a human-robotic hybrid moment in our own evolution, in which the intelligence of robots is fusing more fully with our own, allowing new forms of robotic augmentation. Each robot has differently colored hair, giving each an individual character.
The live camera-based video from the robots is processed through Max/MSP and Jitter and projected onto 5 screens at the periphery of the installation. When a robot reaches head height, a sensor at its tip is triggered and a facial snapshot is taken.
This snapshot is held in a small area at the upper right of the projected screen. The snapshot is broken down into a 300-pixel grid, and the red, green and blue values of each pixel are extracted and passed from Max/MSP to Ableton Live, a sound-composition tool, which selects the musical samples that determine rhythm, tempo and dynamics.
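The pixel-to-sound mapping described above can be sketched in a few lines. The installation itself implements this in Max/MSP/Jitter and Ableton Live; the Python below is only an illustration, and the grid dimensions (20 x 15 = 300 cells) and the specific channel-to-parameter assignments are assumptions, not the work's actual mapping.

```python
# Illustrative sketch: reduce a snapshot to a 300-cell grid of averaged RGB
# values, then derive musical parameters from the channel averages.
# (The real system routes this data through Max/MSP into Ableton Live.)

def snapshot_to_grid(pixels, width, height, cols=20, rows=15):
    """Average a flat list of (r, g, b) pixels into a cols x rows grid."""
    grid = []
    cell_w, cell_h = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            total, count = [0, 0, 0], 0
            for y in range(r * cell_h, (r + 1) * cell_h):
                for x in range(c * cell_w, (c + 1) * cell_w):
                    px = pixels[y * width + x]
                    for i in range(3):
                        total[i] += px[i]
                    count += 1
            grid.append(tuple(t // count for t in total))
    return grid

def grid_to_music(grid):
    """Map average red, green, blue to tempo, rhythm density and dynamics
    (a hypothetical assignment for illustration only)."""
    avg = [sum(channel) / len(grid) for channel in zip(*grid)]
    r, g, b = avg
    return {
        "tempo_bpm": 60 + (r / 255) * 80,   # red drives tempo
        "rhythm_density": g / 255,          # green drives rhythm
        "dynamics": int((b / 255) * 127),   # blue drives MIDI velocity
    }
```

A snapshot dominated by warm tones would thus push the tempo up while keeping dynamics low, so each face yields a distinct parameter set.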
The robotic aspects of this work are controlled by 6 Mac Minis with solid-state drives, each wired to its own MIDI-based controller for the sensor and motor-drive units. The Mac Minis are all networked to a Mac Pro tower, which processes the video of the 6 selected images and interfaces them to the Ableton Live sound program.
Changing pixel data constantly changes Ableton's virtual-instrument selection sets, with random seeds derived from the snapshots. The robotic structures were created with 3D-modeled cast urethane plastics, monofilament, carbon-fiber rod and laser-cut aluminum elements supporting the computers, microprocessors and motor-drive systems.
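The idea of seeding instrument selection from a snapshot can be sketched as follows. The instrument-set names and the hashing scheme are hypothetical; the actual work drives Ableton Live's instruments via Max/MSP, and this only illustrates how snapshot data can deterministically steer a random selection.

```python
# Illustrative sketch: derive a random seed from snapshot grid data and use
# it to pick a virtual-instrument set, so the same face maps to the same
# selection while different faces diverge. Names here are placeholders.
import random

INSTRUMENT_SETS = ["strings", "mallets", "voices", "synth pads"]  # hypothetical

def select_instrument_set(grid):
    """Hash the (r, g, b) grid into a 32-bit seed, then choose a set."""
    seed = sum(r * 3 + g * 5 + b * 7 for r, g, b in grid) % (2 ** 32)
    rng = random.Random(seed)
    return rng.choice(INSTRUMENT_SETS)
```

Because the seed is a pure function of the pixel data, a repeat visitor would trigger the same instrument set, while each new face reshuffles the selection.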
These robots structure, inform, enhance and magnify people's behavior and interactions as they generate a unique, constantly evolving soundscape. They take the unique multicultural makeup of each person and create "facial songs"; joined with the 6 robotic/human soundscapes, these songs create an overall polyphonic human sound and video experience.