1. This piece is inspired by the YouTube video "Pendulum Waves". The mathematical aesthetics of pendulum motion continue to attract us strongly. Although each pendulum swings individually in its own cycle, we sometimes perceive linkage between the pendulums, synchronized rolls, and the shapes of waves.

    We feel the beauty of motion in every moment. That is the gift of an important human ability, interpretation, which makes it possible for people to enjoy art. In this piece, I create a sound texture completely synchronized to the motion of 15 pendulums. The Pd patch is built on the formula for pendulum motion and triggers 15 sine-wave oscillators. Every 51 oscillations of the longest pendulum, all the pendulums realign and the cycle restarts. ADSR envelopes and delay effects shape the character of the sound textures. This piece is a challenge to the "interpretation" of pendulum motion, described in sound.
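    The classic pendulum-wave construction behind such a patch can be sketched numerically: within one common cycle of length T, pendulum i completes one oscillation more than pendulum i−1, so all pendulums realign exactly once per cycle. A minimal sketch in Python (the cycle length T is an assumption for illustration; the abstract only states 15 pendulums and 51 oscillations for the longest one):

```python
import math

T = 60.0   # common cycle length in seconds (an assumption; not stated in the abstract)
NUM = 15   # number of pendulums / sine oscillators
BASE = 51  # oscillations of the longest (slowest) pendulum per cycle (from the abstract)

# Pendulum i completes (BASE + i) oscillations in T seconds, so all 15
# realign exactly once per cycle; each frequency drives one sine oscillator.
freqs = [(BASE + i) / T for i in range(NUM)]

# Corresponding pendulum lengths from f = (1/2*pi) * sqrt(g/L)  =>  L = g / (2*pi*f)**2
g = 9.81
lengths = [g / (2 * math.pi * f) ** 2 for f in freqs]
```

    Each successive pendulum is slightly shorter and faster, which is what produces the travelling wave patterns seen in the video.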

    uni-weimar.de/medien/wiki/PDCON:Concerts/Seiichiro_Matsumura

    vimeo.com/36473361

  2. The number of viable options for physical control over digital synthesis processes has grown tremendously in recent years. Alongside custom-built hardware controllers, several types of commercially available technologies are being used for this purpose as well. These include multitouch surfaces like the iPad, and an array of hardware originally developed for use with video games, such as Nintendo's Wii remote, the Sony PS3eye camera, and Microsoft's Kinect sensor. In addition to being relatively inexpensive, this technology has the advantage of providing sophisticated sensor data in a standardized format. For a geographically dispersed community of digital artists, standardization and accessibility are often critical. To complement this widely available hardware, there is a need for a standard software library that parses the resulting data streams to further improve accessibility and ease of use. Such tools are very important for remote collaborations in general, but they are particularly needed for digital musical instrument design—a field in which the creator of an instrument is too often its sole performer.

    This paper introduces the Digital Instrument Library (DILib) for Pure Data, a library of externals and abstractions that were developed for a course on digital instrument design in the Audio Technology program at American University. DILib is intended to streamline the process of realizing instruments that make use of built-in laptop hardware, accelerometers, infrared fingertip tracking, full body tracking, multitouch surfaces, and other types of interfaces. Each interface abstraction implements a parsing scheme that routes available data to consistently named send variables to be received and applied by users. In some cases, the data is also interpreted before being transmitted. For instance, multitouch trackpad data provided by the laptop interface abstraction is processed by a new external object that preserves continuity of points between each frame of data as it is reported.
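    The point-continuity problem the trackpad external addresses can be illustrated with a greedy nearest-neighbour matcher that carries touch IDs from one frame to the next. This is a hypothetical sketch in Python, not DILib's actual algorithm; the function name and distance threshold are invented:

```python
import math

def match_points(prev, curr, max_dist=0.1):
    """Carry touch-point IDs across frames by greedy nearest-neighbour matching.

    prev: dict id -> (x, y) from the previous frame
    curr: list of (x, y) points in the new frame
    Returns dict id -> (x, y); unmatched new points receive fresh IDs.
    Hypothetical sketch only -- not DILib's actual matching scheme.
    """
    pairs = sorted(
        (math.dist(p, c), pid, ci)
        for pid, p in prev.items()
        for ci, c in enumerate(curr)
    )
    assigned, used_ids, used_cs = {}, set(), set()
    for d, pid, ci in pairs:
        if d > max_dist or pid in used_ids or ci in used_cs:
            continue
        assigned[pid] = curr[ci]       # closest surviving pairing keeps its ID
        used_ids.add(pid)
        used_cs.add(ci)
    next_id = max(prev, default=-1) + 1
    for ci, c in enumerate(curr):      # new touches get fresh IDs
        if ci not in used_cs:
            assigned[next_id] = c
            next_id += 1
    return assigned
```

    Without such matching, a finger that moves between frames would appear as an unrelated new point, breaking any per-finger mapping to synthesis parameters.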

    DILib will be maintained on a long-term basis with the intent of providing designers of new digital musical instruments with a stable means of accessing data from an ever-increasing number of control sources. It is hoped that DILib will also facilitate the process of recreating instruments shared within the Pd community.

    vimeo.com/36456382

  3. Click Tracker is a tool designed for composers, conductors, and instrumentalists working with modern music. It allows users to prepare an accompanying click track for any musical score, regardless of its complexity. It also offers several multimodal features that take advantage of both visual and aural feedback, making it suitable for musical study and learning contexts.

    The Click Tracker is available for public download and has been used in both practice and concert performance by several professional instrumentalists and composers.
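    At its core, a click track of this kind reduces to scheduling beat onsets from a sequence of score sections. A hypothetical sketch in Python (the section format and function name are invented for illustration, not the Click Tracker's actual score format):

```python
# Hypothetical click-track scheduler: each section is (bars, beats_per_bar, bpm),
# covering tempo and time-signature changes mid-score. Emits (onset_seconds,
# is_downbeat) for every beat, so downbeats can receive an accented click.
def click_times(sections):
    t, clicks = 0.0, []
    for bars, beats_per_bar, bpm in sections:
        beat = 60.0 / bpm                      # seconds per beat in this section
        for _ in range(bars):
            for b in range(beats_per_bar):
                clicks.append((round(t, 6), b == 0))
                t += beat
    return clicks
```

    For example, `click_times([(2, 3, 120), (1, 4, 90)])` schedules two bars of 3/4 at 120 BPM followed by one bar of 4/4 at 90 BPM.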

    vimeo.com/36436609

  4. This paper describes a system of realtime music notation using TrueType fonts (TTFs), running in the Graphics Environment for Multimedia (GEM) in the Pure Data computer music environment (Pd). The system makes use of dynamic object creation in Pd to create subpatches linked to a stave object, so that custom made abstractions for notes, rests, tempo marks, barlines and time signatures are added to the patch on-the-fly to create a visual score.

    The origins of this system lie in the author's attempts in previous decades to contrive an effective system for the automated presentation of musical scores. The first of these was a continuous sheet of acetate, 7 metres long, with a graphic vocal score printed on it, to be scrolled by hand across the screen of an overhead projector [1]. Five further scores were created in a form of proportional polyrhythmic notation devised by the author, and a prototype system for displaying scrolling scores was created in 2001 at the University of East Anglia using Max. However, the use of bitmaps and the refresh rate of the graphics resulted in a jerky display that was hard to read.

    Those scores originated as fixed scores designed on paper. A live notation system based on fonts, by contrast, was considered to offer a degree of flexibility with material often found in electronic music that could be applied to instrumental music, such as generative scores, aleatoric structure, and feedback between the performer (or ensemble) and computer. The preliminary result of this enquiry is the Gemnotes system, which takes a simple score language as its input and renders a score from notation display objects in live performance. A further consideration is that the system may be used with the Pd-extended distribution without modification, so that instrumental musicians need not understand complex computing issues, such as compiling source code, in order to use a score patch. In its initial form the system is inefficient, so methods for improving its performance are discussed, with a view to creating score patches that do not require external software to work.

    [1] Te Deum, 1997
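    The pipeline from a simple score language to objects created on-the-fly can be illustrated with a toy example. This is a hypothetical sketch in Python of generating Pd dynamic-patching `obj` messages; the `pitch/duration` token format and the `note` abstraction name are invented, not Gemnotes' actual input language:

```python
# Hypothetical translation of a toy score language into Pd dynamic-patching
# messages. Each "pitch/duration" token becomes an "obj" message that, sent
# to a subpatch canvas, would instantiate a note abstraction on the stave,
# advancing horizontally by a fixed spacing per event.
def score_to_messages(score, x0=50, spacing=40):
    msgs = []
    for i, token in enumerate(score.split()):
        pitch, dur = token.split("/")
        msgs.append(f"obj {x0 + i * spacing} 100 note {pitch} {dur};")
    return msgs
```

    Sending such messages to a canvas is how Pd patches can build other patches at runtime, which is the mechanism the paper describes for assembling the visual score.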

    uni-weimar.de/medien/wiki/PDCON:Conference/Gemnotes:_A_Realtime_music_notation_system_for_pure_data

    vimeo.com/36419881

  5. Production lines featuring industrial vision are becoming more and more widespread. That kind of automation needs systems able to capture pictures, then analyze and learn from them in order to take appropriate action. These processes are often heavy and applied to high-definition images at high frame rates, so powerful computing hardware is needed to keep up with ever-growing production rates. NVIDIA provides CUDA[1], an interface that allows parallel data computation and can increase the performance of any system using graphics processing units (GPUs). A CUDA program is made up of two parts: one running on the host (CPU) and the other exploiting the device (GPU). The non-parallelizable stages of the program run on the host, while the parallelizable ones run on the device. Pure Data, thanks to its graphical modular development environment, allows fast prototype development. These factors led us to start a research program dedicated to the realization of image processing modules for Pure Data written in CUDA. We will first adapt the most commonly used algorithms, those already existing within the GEM library.

    Our first results are encouraging. For instance, for the conversion of RGB images to greyscale, tests demonstrate that GPU computing grants an average speed-up factor of 109 compared to CPU-only computing. However, a CPU + GPU architecture has a weakness regarding data transfers between local memory and the graphics card: most of the computation time (more than 90%) is spent on those transfers, since each CUDA function block in Pure Data performs a double transfer between CPU and GPU. Considering this, performance is not optimal, and we will devote future work to minimizing those transfers. The idea is to have a single initial transfer from CPU to GPU at the start of the program, and a single backward transfer at the end containing the result of the whole process.

    In conclusion, running image processing algorithms on the graphics card is a highly effective solution for complex processing, and integrating CUDA blocks inside Pure Data facilitates and accelerates the prototyping of applications. This suits every field requiring high frame rates, high resolution, a large number of operations, or computation-heavy processes, including industrial, medical, and artistic applications.
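    The RGB-to-greyscale conversion mentioned above is a good example of a per-pixel, embarrassingly parallel operation: each output pixel depends on only one input pixel, so a GPU can assign one thread per pixel. A CPU reference sketch in Python (the ITU-R BT.601 luminance weights are an assumption; GEM's own conversion may use different coefficients):

```python
# CPU reference for RGB -> greyscale. Each pixel is independent, which is
# exactly what makes the conversion massively parallelizable on a GPU
# (one thread per pixel). Weights are the common ITU-R BT.601 luminance
# coefficients -- an assumption, not necessarily what GEM uses.
def rgb_to_grey(pixels):
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]
```

    Under the transfer-minimization strategy described above, the pixel buffer would be uploaded to the GPU once, all kernels would then be chained on the device, and only the final result would be downloaded.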

    References
    [1] Compute Unified Device Architecture

    uni-weimar.de/medien/wiki/PDCON:Conference/Image_Processing_Algorithm_Optimization_with_CUDA_for_Pure_Data

    vimeo.com/36434429

Pure Data Convention 2011 (PdCon11)