1. An Instrument for the Sonification of Everyday Things

    01:10

    from Dennis P Paul · 136K plays · 1,559 likes · 45 comments

    This is a serious musical instrument. It rotates everyday things, scans their surfaces, and transforms them into audible frequencies. A variety of everyday objects can be mounted in the instrument; their silhouettes define loops, melodies, and rhythms, so mundane things are reinterpreted as musical notation. Playing the instrument is a mixture of practice, anticipation, and serendipity. The instrument was built from aluminum tubes, white POM, black acrylic glass, a high-precision distance-measuring laser (with the kind support of Micro-Epsilon), a stepper motor, and a few bits and bobs. A custom-programmed translator and controller module, written in Processing, transforms the measured distance values into audible frequencies, notes, and scales. It also precisely controls the stepper motor's speed to sync with other instruments and musicians. More information: http://dennisppaul.de/an-instrument-for-the-sonification-of-everday-things/
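
    The artist's Processing module isn't reproduced here, but the core mapping it describes (distance readings quantized to frequencies, notes, and scales) can be sketched in a few lines. This is a minimal illustration in Python, with the distance range, note range, and pentatonic scale all assumed for the example:

        A4 = 440.0                           # reference pitch in Hz
        MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave

        def distance_to_midi(d_mm, d_min=20.0, d_max=120.0, low=48, high=84):
            """Map a laser distance reading (mm) linearly onto a MIDI note range."""
            t = max(0.0, min(1.0, (d_mm - d_min) / (d_max - d_min)))
            return low + t * (high - low)

        def quantize_to_scale(midi, scale=MINOR_PENTATONIC):
            """Snap a fractional MIDI note to the nearest note of the scale."""
            base = int(midi) // 12 * 12
            candidates = [base + o + octave for octave in (0, 12) for o in scale]
            return min(candidates, key=lambda n: abs(n - midi))

        def midi_to_hz(midi):
            return A4 * 2 ** ((midi - 69) / 12.0)

        # One revolution of readings from an imaginary object silhouette:
        for d in [35.0, 42.5, 88.0, 90.2, 61.7, 40.1]:
            note = quantize_to_scale(distance_to_midi(d))
            print(f"{d:6.1f} mm -> MIDI {note} -> {midi_to_hz(note):7.2f} Hz")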

2. Quotidian Record

    02:04

    from Brian House · 75.1K plays · 367 likes · 10 comments

    Quotidian Record is a limited edition vinyl recording that features a continuous year of my location-tracking data. Each place I visited, from home to work, from a friend's apartment to a foreign city, is mapped to a harmonic relationship. 1 day is 1 rotation ... 365 days is ~11 minutes. http://brianhouse.net/works/quotidian_record
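
    The timing works out as stated: at the standard LP speed of 33 1/3 RPM, 365 rotations take just under 11 minutes. A rough sketch of that arithmetic, plus a hypothetical place-to-ratio table (the actual harmonic mapping is House's own):

        RPM = 100 / 3          # standard LP speed: 33 1/3 revolutions per minute
        DAYS = 365
        print(f"{DAYS} rotations at {RPM:.2f} RPM = {DAYS / RPM:.1f} minutes")  # ~10.9

        # Hypothetical mapping of places to just-intonation frequency ratios:
        PLACE_RATIOS = {"home": 1 / 1, "work": 3 / 2, "friend": 5 / 4, "abroad": 7 / 4}
        BASE_HZ = 220.0
        for place, ratio in PLACE_RATIOS.items():
            print(f"{place:>7}: {BASE_HZ * ratio:6.1f} Hz (ratio {ratio})")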

3. Two Trains - Sonification of Income Inequality on the NYC Subway

    04:46

    from brian foo · 35.3K plays · 111 likes · 6 comments

    This song emulates a ride on the New York City Subway's 2 Train through three boroughs: Brooklyn, Manhattan, and the Bronx. At any given time, the quantity and dynamics of the song's instruments correspond to the median household income of that area. Read more about the composition and process of creating this song here: https://datadrivendj.com/tracks/subway

    Data-Driven DJ (https://datadrivendj.com) by Brian Foo (http://brianfoo.com) is a series of music experiments that combine data, algorithms, and borrowed sounds.
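
    The mapping is easy to picture in miniature. A simplified sketch of the idea (the station incomes below are placeholders, not the real census figures Foo used):

        # Hypothetical median household incomes (USD) at a few 2 Train stops:
        STATIONS = [("Flatbush Av", 46_000), ("Wall St", 205_000),
                    ("Times Sq", 98_000), ("E 180 St", 27_000)]

        INCOME_MIN, INCOME_MAX = 20_000, 210_000
        MAX_INSTRUMENTS = 12

        for name, income in STATIONS:
            t = (income - INCOME_MIN) / (INCOME_MAX - INCOME_MIN)
            t = max(0.0, min(1.0, t))                 # clamp to [0, 1]
            n_instruments = 1 + round(t * (MAX_INSTRUMENTS - 1))
            velocity = 0.3 + 0.7 * t                  # dynamics rise with income
            print(f"{name:12} ${income:>7,} -> {n_instruments:2d} instruments, velocity {velocity:.2f}")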

4. Pure Data read as pure data - 2010

    12:30

    from N1C0L45 M41GR3T · 20K plays · 381 likes · 11 comments

    Pure Data read as pure data is an audiovisual trip through the back of the binary code and its hidden qualities: structure, logic, rhythm, redundancy, composition... In this video version, as a tautological process, the content of the Pure Data application is read as pure data and directly displayed as sounds and pixels: a direct immersion in the heart of data flows. Based on Pd version 0.42.5 extended (Mac OS X Intel release): http://puredata.info/downloads. Extrude function coded by Nicolas Montgermont: http://nim.on.free.fr/, http://artoffailure.org/. Rework of a series started in 2002: http://peripheriques.free.fr/audio/between01_live_ecm-gantner_2003.mp3, http://peripheriques.free.fr/audio/between01_live_le10neuf_2003.mp3, http://vimeo.com/22040561. As an installation, the machine investigates its own hard drive's content. More info: http://peripheriques.free.fr/blog
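
    The underlying technique, audification of raw bytes, is simple to reproduce. A minimal stand-alone sketch (not Maigret's Pd patch; the input path is a placeholder) that writes any file's bytes out verbatim as 8-bit PCM audio:

        import wave

        SRC = "some_binary_file"   # placeholder: e.g. an application's own binary

        with open(SRC, "rb") as f:
            raw = f.read()

        with wave.open("audified.wav", "wb") as w:
            w.setnchannels(1)      # mono
            w.setsampwidth(1)      # one byte per sample: bytes become samples verbatim
            w.setframerate(44100)
            w.writeframes(raw)     # headers, code, and data become rhythm and texture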

5. Rhapsody in Grey - Using Brain Wave Data to Convert a Seizure to Song

    04:00

    from brian foo · 11.7K plays · 72 likes · 8 comments

    This song generates a musical sequence using EEG brain wave data of an anonymous epilepsy patient. It examines the periods before, during, and after a seizure. The goal is to give the listener an empathetic and intuitive understanding of the brain's neural activity during a seizure. Please note: I have no formal education or training in diagnosing or interpreting a seizure using EEG brain scan data. I have done my own research to the best of my abilities, but all in all, this is a purely creative endeavor and should in no way be interpreted as scientific research or be used in any context other than this creative one. For the sake of transparency, I have detailed my process for creating this song in the link below and have made all relevant code publicly accessible. Feel free to reach out to me if you notice any glaring inaccuracies. Learn more about the process of creating this song: https://datadrivendj.com/tracks/brain

    Data-Driven DJ (https://datadrivendj.com) by Brian Foo (http://brianfoo.com) is a series of music experiments that combine data, algorithms, and borrowed sounds.
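
    Foo's actual pipeline is documented at the link above; as a flavor of the general approach, here is an assumed, self-contained sketch in which the amplitude of an EEG window drives pitch (the amplitude ceiling and note range are invented for the example):

        import math

        def eeg_window_to_note(samples, low=40, high=90, ceiling=200.0):
            """Map the mean absolute amplitude of an EEG window to a MIDI pitch."""
            amplitude = sum(abs(s) for s in samples) / len(samples)
            t = min(1.0, amplitude / ceiling)     # ceiling in microvolts, assumed
            return round(low + t * (high - low))

        # Synthetic stand-ins for one channel, before and during a seizure:
        quiet = [20 * math.sin(i / 3) for i in range(256)]
        ictal = [180 * math.sin(i / 1.5) for i in range(256)]
        print("pre-seizure note:", eeg_window_to_note(quiet))   # stays low
        print("seizure note:   ", eeg_window_to_note(ictal))    # jumps sharply higher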

6. vinyl+ • Expanded Timecode Vinyl

    02:19

    from Jonas Bohatsch · 11.3K plays · 216 likes · 7 comments

    vinyl+ • Expanded Timecode Vinyl, 2009/10. Project description: vinyl+ is an interactive installation experimenting with the expansion of timecode vinyl. Virtual objects are projected onto the surface of a white record and come to life when the record is played. Their behaviour changes depending on the rotational speed of the record as well as the position of the turntable's needle. The vinyl acts as the screen, interface, and apparent carrier for generative audiovisual software pieces. The combination of turntable, computer, and projector results in a new device, oscillating between analog and digital, hardware and software. Users are encouraged to spin the record forwards and backwards and to carefully reposition the needle. Exhibitions: 2009: Alias in Wonderland, Vienna, Austria; 2010: EMAF, Osnabrück, Germany; FILE, São Paulo, Brazil; NEWAIR, Vienna, Austria; NODE, Frankfurt, Germany; 2011: Cloud Sounds, NIMK, Amsterdam. Supported by the City of Vienna/Department of Culture (Wien Kultur). Also thanks to Native Instruments for support! For more info visit http://jonasbohatsch.net. Yes, we had some problems with the focus when shooting this video...
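
    For readers unfamiliar with timecode vinyl: the record carries a control tone of known frequency, so software can recover the turntable's speed (and, from the encoded data, the needle position) from the audio input. A small illustration of the speed-recovery half, with the carrier frequency assumed (this is not Bohatsch's code):

        import math

        NOMINAL_HZ = 1000.0        # assumed carrier frequency at normal speed
        SAMPLE_RATE = 44100

        def estimate_speed(samples):
            """Estimate relative playback speed from rising zero crossings."""
            crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
            seconds = len(samples) / SAMPLE_RATE
            return (crossings / seconds) / NOMINAL_HZ   # 1.0 means normal speed

        # Simulate the record being spun at 1.5x speed for a quarter second:
        signal = [math.sin(2 * math.pi * 1.5 * NOMINAL_HZ * n / SAMPLE_RATE)
                  for n in range(SAMPLE_RATE // 4)]
        print(f"estimated speed: {estimate_speed(signal):.2f}x")   # ~1.50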

7. MiND Ensemble World Premiere Performance

    46:25

    from Robert Alexander · 9,082 plays · 45 likes · 0 comments

    Website: http://www.themindensemble.com/

    This world premiere performance dives into an uncharted realm of human expression, as the conscious and subconscious minds of the performers become the very fabric of creation. Imagination becomes tangible, while hopes and fears are brought to life through EEG brainwave scanning technology. This performance was made possible with generous support from the Yahoo! Boost Award.

    The MiND ensemble (Music in Neural Dimensions) is a new-media performance group that uses custom interfaces to explore the mind-machine-music connection. The traditional paradigm of creativity and art has been as follows: there is an artist, a thought process, and a fixed medium that reflects those thoughts, leading to the realization of the artist's expressive vision. Neurofeedback radically shifts this paradigm: now there is an artist, a thought process, and a dynamic medium that actively interfaces with the artist's own thought processes, a form of expression that drastically reshapes the way we conceive of the creative process. This presents a unique design problem: how can we optimize interaction with a completely intangible instrument? The ensemble hopes to make strides toward a deeper understanding of this question and a significant contribution to both the scientific and artistic communities in the form of software tools and reference material. The MiND ensemble promotes a richly creative personal awareness in which the mind is the medium.

    Ensemble members: Robert Alexander, David Biedenbender, Laura Gaines, Annlie Huang, Suby Raman, Sam Richards, Dan Charette. Visual artist: Teresa Dennis. Cinematography: Jacques Mersereau. Meditation led by Master Wasentha Young of the Peaceful Dragon School of T'ai Chi Ch'uan and Chi Kung. Lighting design: Jeff Alder. Lighting support: Charlie Klecha. Front-of-house engineer: Matt Rose. Recording engineers: Peter Raymond, Patrick Wakefield. Video studio director: Jacques Mersereau. Audio studio director: Dave Greenspan. This performance was made possible with generous support from Yahoo! Labs and Design Lab 1 at the University of Michigan.

    MiND Synth software tutorial: http://www.youtube.com/watch?v=OxOwwANvx1E

8. PHENAKISTOMIXER 3.0

    01:24

    from Miss Take · 8,849 plays · 290 likes · 7 comments

    The phenakistoscope was an early animation device that used a spinning disc of sequential images and the persistence-of-vision principle to create an illusion of motion. Original phenakistoscope discs had slits cut into them and had to be viewed using a mirror. Phenakistomixer appropriates this by precisely synchronising disc rotation with the shutter of a video camera to achieve a similar effect, and is used as a live visual performance tool. Phenakistomixer version 3.0 is inspired by the Variophone, an early-1930s visual synthesizer in which an optical sensor linearly scans monochromatic plates and translates reflected light intensity into sound waves. Concept and animation: Vesna Krebs. Programming and sound: Borut Kumperscak. Sound engine: Berkan Eskikaya, Louis Pilford.
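
    The synchronisation arithmetic is compact enough to state exactly: a disc with N frames around its edge reads as fluid motion when the camera shutter fires N times per revolution. A one-function sketch (my illustration, with example numbers, not the artists' code):

        def disc_rpm(frames_on_disc, camera_fps):
            """Rotation speed at which each captured frame advances one image."""
            return (camera_fps / frames_on_disc) * 60

        # A 12-frame disc filmed at 25 fps must spin at 125 RPM:
        print(disc_rpm(12, 25))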

9. VOSIS: Image and Video Audification Synthesizer for iPad

    03:35

    from Ryan McGee · 6,935 plays · 31 likes · 0 comments

    VOSIS is a synthesizer that uses scanned synthesis of greyscale image pixel data from photos or live video input. It is also a tool for image audification/sonification and visual music performance. Visit www.imagesonification.com for more technical details. Available for free on the App Store! (A toy sketch of the scanned-synthesis idea follows the feature list below.)

    Operation:
    - Simply touch anywhere on an image to play sounds with the ADSR envelope
    - Toggle LOOP and double-tap to add a looped region
    - 6-voice polyphony (3 touch regions and 3 looped regions)
    - Press IMG to select a new image from the photo library
    - Press CAM to use live video and toggle between front and rear cameras
    - PAN toggle enables stereo panning depending on horizontal image position
    - LEVEL modifies the brightness of pixels and MASK is a level bit-reduction
    - CUTOFF provides low-pass and high-pass filters
    - THRESH filter is useful to accentuate differences between image regions
    - SCAN RATE changes the amount of time corresponding to an image region
    - BACKGROUND controls the opacity of the source image, with no effect on sound
    - Double-tap the upper-right corner to show/hide the GUI
    - VIBRATO uses the accelerometer's roll to modulate frequency
    - REVERB toggles a Lexicon-emulation reverb mix and decay
    - TUNING creates major or minor chords with multiple touch voices
    - 2-finger left/right and up/down gesture control of filters anywhere on screen
    - Projector output for instant visual music performance (and no fingers in the way of images)

    VOSIS = Voice of Sisyphus Image Sonification. Software by Ryan Michael McGee | www.ryanmcgee.com. "Voice of Sisyphus" is an artwork by George Legrady | www.georgelegrady.com. Created with ofxUI by Reza Ali | www.syedrezaali.com.
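
    The scanned-synthesis idea at the heart of VOSIS can be illustrated in miniature: a scanline of greyscale pixel values, centered around zero, becomes one cycle of a periodic waveform, and the region's width sets the pitch. A toy sketch (my own, not the VOSIS source; the ramp image is synthetic):

        SAMPLE_RATE = 44100

        def scanline_to_wave(pixels, cycles=100):
            """Repeat a row of 0-255 pixel values as a centered periodic waveform."""
            cycle = [(p / 127.5) - 1.0 for p in pixels]   # map 0..255 to -1..1
            return cycle * cycles

        # Synthetic 64-pixel scanline (in VOSIS this would come from IMG or CAM):
        row = [int(255 * i / 63) for i in range(64)]
        signal = scanline_to_wave(row)
        # A 64-sample cycle at 44.1 kHz sounds at roughly 44100 / 64 ~ 689 Hz:
        print(f"{len(signal)} samples; {len(row)}-sample cycle -> ~{SAMPLE_RATE / len(row):.0f} Hz")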

10. Air Play - Smog Music Created With Beijing Air Quality Data

    04:46

    from brian foo · 6,856 plays · 11 likes · 2 comments

    This song was generated using three years of air quality data in Beijing. The daily measurements of air pollutants alter the sounds and visuals over the duration of the song. Read more about the composition and process of creating this song here: http://datadrivendj.com/tracks/smog

    Data-Driven DJ (https://datadrivendj.com) by Brian Foo (http://brianfoo.com) is a series of music experiments that combine data, algorithms, and borrowed sounds.
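
    One detail worth making explicit is the time compression: three years of daily readings squeezed into a 4:46 track. A back-of-the-envelope sketch of that scaling (my reconstruction, not Foo's code):

        DAYS = 3 * 365
        SONG_SECONDS = 4 * 60 + 46
        print(f"{DAYS} days over {SONG_SECONDS}s -> {SONG_SECONDS / DAYS:.3f}s of audio per day")
        # 1095 days over 286s -> 0.261s of audio per day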
