This video was recorded during a Blank pages live improvisation session at a Pure Data conference. In this format, participants start with an empty canvas and create music on the fly with graphical programming.
This video was recorded facing the outside window of LEAP, looking at Alexanderplatz and catching the reflections of the reactive lights of "I am display", the people, and the changing weather. There is no editing.
Authors: Peter Venus, Marian Weger, Cyrille Henry, Winfried Ritsch
The Extended View Toolkit is a set of abstractions developed with Pd/GEM, making use of GEM's OpenGL capabilities.
The motivation to create systems that are able to produce media with a wider projection area lies in our own visual and aural capabilities: nature gave us the ability to gather visual information over an angle of almost 180 degrees and to detect acoustic events from all 360 degrees around us. Although relevant information is filtered so that there is a center of focus we can concentrate on, these abilities form the basis of how we perceive our surroundings. Moreover, we tend to "turn our heads" towards an event that attracts our attention.
Based on this motivation and the idea of universal usability, the toolkit was conceived as an open-source, community-driven project for experimenting with and realising panoramic video in combination with complex projection environments, in order to create immersive media experiences in different setups. Furthermore, the camera system built for the project uses common webcams, which keeps development costs low and makes experimentation accessible.
The toolkit can be divided into two basic parts: the image-merging part and the projection part. The source material for the image-merging (stitching) part is generated with a multiple-camera system in which the cameras are horizontally aligned on a circle to cover a panoramic viewing area. With the help of the toolkit, these sources can then be combined into one continuous image or video stream. The abstractions of the toolkit take care of edge blending, as well as the correction of the lens distortion caused by the optics and the arrangement of the cameras. These features are implemented with OpenGL shaders in order to keep CPU usage at a minimum.
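The toolkit's actual edge blending runs in OpenGL shaders and is not reproduced here, but the underlying idea can be sketched in a few lines of NumPy: where two adjacent camera images overlap, one image is faded out linearly while the other is faded in, so the seam disappears. All names and the linear ramp are illustrative assumptions, not the toolkit's implementation.

```python
import numpy as np

def blend_edges(left, right, overlap):
    """Blend two horizontally adjacent images whose last/first
    `overlap` columns cover the same region of the scene."""
    ramp = np.linspace(1.0, 0.0, overlap)  # alpha falls off across the seam
    mixed = (left[:, -overlap:] * ramp[None, :, None]
             + right[:, :overlap] * (1.0 - ramp)[None, :, None])
    return np.concatenate(
        [left[:, :-overlap], mixed, right[:, overlap:]], axis=1)

# two 4x8 test "images" with constant gray levels
a = np.full((4, 8, 3), 100.0)
b = np.full((4, 8, 3), 200.0)
pano = blend_edges(a, b, overlap=4)
print(pano.shape)  # (4, 12, 3): columns fade from 100 to 200 in the seam
```

A real stitcher would additionally undistort each camera image before blending; here only the cross-fade step is shown.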
The second part of the set of abstractions aims at the creation of multi-screen and multi-projector environments, not only to display a correct representation of the generated panoramic material, but also to enable the easy creation of immersive visual media environments and video mapping. These abstractions can be combined to form large screens out of multiple projectors, with edge blending between the overlapping borders of adjacent projectors. Each abstraction contains a vertex model that can be adjusted at 4 points for planar surfaces, or at 9 points for curved surfaces and the correction of projection distortions. Projection mapping onto challenging geometric objects and forms is thus possible and easy to realise. Since every projection abstraction reads from the same framebuffer, each instance can be adjusted so that it displays only a portion of that framebuffer. In this way, multiple abstractions can be combined to form fragments of a big image, which reassemble into one large continuous screen. These vertex and texture features are also implemented with OpenGL shaders within GEM.
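The 4-point vertex model described above can be understood as bilinear interpolation: the user drags the four corners of the projected quad, and the positions of all inner vertices follow. The sketch below is a conceptual illustration with hypothetical names, not the toolkit's shader code (which would also cover the 9-point curved case).

```python
import numpy as np

def four_point_grid(corners, rows, cols):
    """Bilinearly interpolate a (rows x cols) vertex grid from four
    adjustable corner points given as (x, y) pairs in the order
    top-left, top-right, bottom-left, bottom-right."""
    tl, tr, bl, br = (np.asarray(c, float) for c in corners)
    u = np.linspace(0.0, 1.0, cols)[None, :, None]  # horizontal parameter
    v = np.linspace(0.0, 1.0, rows)[:, None, None]  # vertical parameter
    top = tl + u * (tr - tl)          # interpolate along the top edge
    bottom = bl + u * (br - bl)       # interpolate along the bottom edge
    return top + v * (bottom - top)   # shape (rows, cols, 2)

# skew the bottom edge, as when correcting a keystoned projection
grid = four_point_grid([(0, 0), (1, 0), (0.1, 1), (1.2, 1)], rows=3, cols=3)
print(grid[1, 1])  # center vertex moves with the corners: [0.575 0.5]
```

Each projector instance would render its portion of the shared framebuffer as a texture over such a warped grid.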
The Extended View Toolkit was originally developed at the IEM for the Comedia-Project “Extended View Streamed“.
Multi-instantiation is the process of taking a single declaration of an object, or a patch, and creating several instances of it, replicating the structure, while possibly varying the initial state. An important application, among many others, is supporting the implementation of polyphonic instruments.
There are two distinct design options: the process of replication may be initiated and controlled either from the outside, or from the inside of the replicated patch. The latter possibility is explored in this paper in an attempt to advocate for self-replication as a conceptually simple, yet quite generic and powerful mechanism.
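Although the paper is about Pd patches (where an object such as [clone] performs outside-controlled replication), the two design options can be sketched in Python: one voice definition is instantiated many times, either by an external loop or by each instance spawning the next. The class and method names are hypothetical illustrations, not the paper's mechanism.

```python
class Voice:
    """A single voice: one declaration, many instances."""
    def __init__(self, index, base_freq=110.0):
        self.index = index
        self.freq = base_freq * (index + 1)  # vary initial state per instance

def replicate(n, **kwargs):
    """Outside-controlled replication: an external loop builds n voices."""
    return [Voice(i, **kwargs) for i in range(n)]

class SelfReplicating(Voice):
    def spawn_chain(self, remaining):
        """Inside-controlled (self-)replication: each instance
        creates its own successor until the chain is complete."""
        if remaining == 0:
            return [self]
        return [self] + SelfReplicating(self.index + 1).spawn_chain(remaining - 1)

voices = replicate(4)
print([v.freq for v in voices])  # [110.0, 220.0, 330.0, 440.0]
chain = SelfReplicating(0).spawn_chain(3)
print([v.index for v in chain])  # [0, 1, 2, 3]
```

Both paths yield the same set of instances; the difference is purely where the control over replication lives, which is the design question the paper explores.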
Developed by workshop teacher Marco Donnarumma within a research project at The University of Edinburgh, Xth Sense is a framework for using muscle sounds in the biophysical generation and control of music.
It consists of a low-cost, DIY biosensing wearable device and open-source software, based on Pure Data, for the capture, analysis, and audio processing of the biological sounds of the body.
Muscle sounds are captured in real time and used both as sonic source material and as control values for sound effects, enabling performers to control music simply with their bodies and kinetic energy.
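Xth Sense's own analysis runs in Pure Data; as a hedged illustration of how one signal can serve both roles, the sketch below tracks the amplitude envelope of an audio stream, yielding a slowly varying control value while the raw samples remain available as sonic material. The coefficients and names are assumptions for this example only.

```python
import math

def envelope_follower(samples, attack=0.01, release=0.1, sr=44100):
    """Track the amplitude envelope of a signal: fast rise (attack),
    slow fall (release). The output can drive effect parameters."""
    a = math.exp(-1.0 / (attack * sr))
    r = math.exp(-1.0 / (release * sr))
    env, out = 0.0, []
    for s in samples:
        rect = abs(s)                       # rectify the audio
        coeff = a if rect > env else r      # pick attack or release slope
        env = coeff * env + (1.0 - coeff) * rect
        out.append(env)
    return out

# a short burst of "muscle sound" followed by silence:
# the envelope rises during the burst and decays afterwards
sig = [0.0] * 10 + [1.0] * 100 + [0.0] * 100
env = envelope_follower(sig)
```

Mapping such an envelope to, say, a filter cutoff is what lets the performer shape the sound with muscle tension alone.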
Forget your mice and MIDI controllers; you will not even need to look at your laptop anymore.
The Xth Sense biosensor was designed to be easily built by anyone; no previous experience in electronics is required. The applications of the Xth Sense technology are manifold: from complex gestural control of samples and audio synthesis, through the biophysical generation of music and sounds, to kinetic control of real-time digital processing of traditional musical instruments, and more.