Dynamic Confluence (2020)
Screen-based or interactive installation projection
George Legrady Studio
A variable-dimension, dynamic visualization system with real-time stereo spatialized audio generation that runs on its own or can be activated by motion-capture data from viewers observing the projection. The animation consists of a collection of images (in this video, 36 black-and-white photographs taken in the Yucatán in 1980) positioned within a 3D virtual space according to a Voronoi tessellation, through which an aesthetic configuration is achieved. Once situated in this dimensional space, each image begins to move according to a set of varying behavior parameters. The overall structure aims to remain stable even as changes in gravitational pull allow individual image panels to move beyond their constraints.
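The Voronoi placement step might be sketched as below. The studio's actual code is not public, so the grid-sampling approach, the function name, and the random depth assignment are all assumptions: seed points partition the picture plane into Voronoi cells, and each image panel is placed at its cell's centroid, lifted into 3D.

```python
import random

def voronoi_centroids(seeds, grid=64, depth_range=(0.0, 1.0)):
    """Approximate the 2D Voronoi cell centroid for each seed by sampling
    a grid, then lift each centroid into 3D with a random depth.
    Hypothetical sketch, not the installation's actual layout code."""
    sums = {i: [0.0, 0.0, 0] for i in range(len(seeds))}
    for gx in range(grid):
        for gy in range(grid):
            px, py = gx / (grid - 1), gy / (grid - 1)
            # the nearest seed claims this sample point (Voronoi assignment)
            i = min(range(len(seeds)),
                    key=lambda k: (seeds[k][0] - px) ** 2 + (seeds[k][1] - py) ** 2)
            s = sums[i]
            s[0] += px; s[1] += py; s[2] += 1
    placements = []
    for sx, sy, n in sums.values():
        z = random.uniform(*depth_range)  # assumed: depth chosen freely
        placements.append((sx / n, sy / n, z))
    return placements
```

Scattering the panels at cell centroids rather than at the seeds themselves keeps neighbors from crowding, which is one plausible reading of the "aesthetic configuration" the tessellation provides.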
Individual sounds are triggered when images shift position, and the 63 sound samples combine dynamically to generate an evolving texture of sonic elements of varying density, inspired by the composer Iannis Xenakis's compositions "Orient-Occident" (1960): youtube.com/watch?v=-IIprq9p498 and "Bohor" (1962): youtube.com/watch?v=DODVNHukY0I
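One way to read "sounds triggered when images shift position" is as a displacement threshold per panel, with the sample chosen from the 63-sample bank by the panel's location. The sample count is from the work itself; the threshold, the depth-to-sample mapping, and the function name are illustrative guesses:

```python
def triggered_samples(prev_positions, new_positions, n_samples=63, threshold=0.05):
    """Return (image_index, sample_index) events for every panel whose
    displacement since the last frame exceeds the threshold.
    The mapping from depth to sample index is an assumption."""
    events = []
    for i, (p, q) in enumerate(zip(prev_positions, new_positions)):
        dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        if dist > threshold:
            # assumed mapping: a panel's depth (z in [0, 1]) selects
            # which of the 63 samples it triggers
            idx = min(int(q[2] * n_samples), n_samples - 1)
            events.append((i, idx))
    return events
```

Because many panels can cross the threshold in the same frame, the number of simultaneous events naturally produces the varying density of layered sounds the text describes.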
A short video with sound can be viewed here: vimeo.com/428403506 The video presents a sampling of different states in which the images and sounds come together; otherwise the system runs on its own, creating an ongoing visual and aural experience.
--
The custom software was developed in two phases. The initial studies, which positioned a set of images in virtual three-dimensional environments by implementing the Voronoi algorithm within a multi-focal anamorphic perspective, evolved from the 2013 National Science Foundation-sponsored "Swarm Vision" research project: vimeo.com/85000265 This series, titled "Anamorph-Voronoi," was developed in collaboration with researcher Jieliang (Rodger) Luo beginning around 2016. The software has been used by the Studio to produce a series of artistic works on paper and lenticular panels.
The behavior modeling and interactions of the image planes in a dynamic setting, based on inorganic particle modeling and organic group behavior, were developed in the spring of 2020 in collaboration with Media Arts & Technology Ph.D. student Mert Toka. This phase also introduced a number of unique features, such as variable dynamic audio presence based on the location of the images within the virtual 3D space, and the activation of sounds according to the location of the orbiting images as they define their space while maintaining group coherence.
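The group coherence described above resembles the cohesion rule of classic flocking simulations: each panel is pulled toward the group centroid while retaining its own velocity. The sketch below illustrates that general technique only; the parameter names, values, and integration scheme are assumptions, not the installation's code.

```python
def step(positions, velocities, cohesion=0.01, damping=0.95, dt=1.0):
    """One integration step: pull every panel toward the group centroid
    (coherence) while damping its individual velocity (drift).
    Illustrative flocking-style sketch, not the actual behavior model."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    cz = sum(p[2] for p in positions) / n
    new_p, new_v = [], []
    for (x, y, z), (vx, vy, vz) in zip(positions, velocities):
        # acceleration toward the centroid keeps the group coherent
        ax, ay, az = (cx - x) * cohesion, (cy - y) * cohesion, (cz - z) * cohesion
        vx = (vx + ax * dt) * damping
        vy = (vy + ay * dt) * damping
        vz = (vz + az * dt) * damping
        new_p.append((x + vx * dt, y + vy * dt, z + vz * dt))
        new_v.append((vx, vy, vz))
    return new_p, new_v
```

With a weak cohesion force and damping below one, panels orbit and drift yet slowly contract toward the group, which matches the description of images defining their own space while the overall structure holds together.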
--
Related Works by George Legrady Studio
Anamorphic Fluid: Kyoto Water Lilies: vimeo.com/225051559
Swarm Vision: vimeo.com/85000265
Auto Vision: vimeo.com/111252770
Exquisite Vision: vimeo.com/109427560
Voice of Sisyphus: vimeo.com/239322215
--
Hardware-Software Installation
There are two versions: 1) automated software; 2) motion-activated by spectators using a Kinect-type camera. Equipment requirements include a capable Macintosh with 32 to 64 GB of RAM, a high-quality HD projector of around 5,000 lumens, and two to four active loudspeakers such as the Mackie HR824 powered studio monitors.
--
George Legrady Studio research and projects explore the intersections of optical-machine representations, computer media, and digital data-based interactive installations, investigating how technologies transform visual content by imposing meaning onto the material they process. George Legrady is a distinguished professor in the Media Arts & Technology graduate program at the University of California, Santa Barbara, where he directs the Experimental Visualization Lab. He is a recipient of the Guggenheim Fellowship in Visual Arts.