The latest generative audiovisual experiment by Lanvideosource. The video patch extracts and compares words taken from three news websites (Italian, European and global). The structure it creates interacts in real time with audio frequencies and is partially sequenced. The video patch also sends triggers back to the audio section, generating audio-video feedback loops. The extracted, decontextualized words are constantly remixed in the patch, taking on new meanings or forming a kind of subliminal message. Every time the patch is run, the structures, the camera movement and the news of the day change, making the result partly unpredictable.
The video is made entirely in VVVV. The patch is rendered offline, with no cuts in post; only motion blur is applied in post-processing.
The audio is made in Reaktor5 and sequenced in Cubase.
A little about the process:
The patch is based on Lsystem and Curvesimple. Words are extracted in real time by decoding the RSS feeds of the news websites; the text is then decomposed and separated according to word frequency.
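The word-extraction step can be sketched outside VVVV. This is a minimal stand-alone illustration, not the actual patch logic: it parses an RSS document, pulls words from the titles, and groups them by frequency. The sample feed and function names are my own.

```python
import re
import xml.etree.ElementTree as ET
from collections import Counter

def extract_words(rss_xml):
    """Pull individual words out of an RSS feed's titles."""
    root = ET.fromstring(rss_xml)
    words = []
    for title in root.iter("title"):
        words.extend(re.findall(r"[A-Za-z']+", title.text or ""))
    return words

def by_frequency(words):
    """Group words by how often they occur, most frequent first."""
    return Counter(w.lower() for w in words).most_common()

# A tiny stand-in feed; the real patch decodes three live news feeds.
SAMPLE_RSS = """<rss><channel>
  <title>news</title>
  <item><title>Markets fall as markets react</title></item>
  <item><title>Leaders react to summit</title></item>
</channel></rss>"""

freqs = by_frequency(extract_words(SAMPLE_RSS))
print(freqs)  # most frequent words first, e.g. ('markets', 2), ('react', 2), ...
```

The frequent/rare split is what lets the patch treat common words and one-off words differently when it remixes them.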
The synthesized part of the sound is created in Reaktor5; audio samples are added in Cubase. MIDI tracks are recorded in Cubase, sent to VVVV (for sequencing and global synchronization) and played by Reaktor. Within VVVV, separate tracks containing the individual audio layers are played back. Three frequency bands are analyzed, and the resulting audio triggers control form, camera, light and other parameters.
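The three-band analysis can be illustrated with a short sketch. This is not the VVVV implementation (which would use its built-in FFT/analysis nodes); it is a naive DFT that sums spectral magnitude into low/mid/high bands and thresholds each band into an on/off trigger. Band edges and the threshold are arbitrary example values.

```python
import cmath
import math

def band_energies(samples, sample_rate, edges=(200.0, 2000.0)):
    """Naive DFT, then sum magnitudes into low/mid/high bands."""
    n = len(samples)
    low = mid = high = 0.0
    for k in range(1, n // 2):            # skip DC, keep positive freqs
        freq = k * sample_rate / n
        mag = abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n)))
        if freq < edges[0]:
            low += mag
        elif freq < edges[1]:
            mid += mag
        else:
            high += mag
    return low, mid, high

def triggers(bands, threshold):
    """Turn band energies into on/off triggers (form, camera, light...)."""
    return tuple(b > threshold for b in bands)

# 125 Hz test tone (exactly on a DFT bin): energy lands in the low band.
sr, n = 8000, 256
tone = [math.sin(2 * math.pi * 125 * t / sr) for t in range(n)]
low, mid, high = band_energies(tone, sr)
print(triggers((low, mid, high), threshold=10.0))  # (True, False, False)
```

In the patch, each trigger state would then drive a visual parameter once per analysis frame.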
VVVV in turn modulates some audio parameters back over MIDI, such as the volume or position of sounds.
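That return path amounts to sending MIDI Control Change messages. The sketch below builds the raw 3-byte messages; the mapping of band energy to CC 7 (volume) and CC 10 (pan, standing in for "position of sounds") is my hypothetical example, not the patch's actual routing.

```python
def control_change(channel, controller, value):
    """Build a raw 3-byte MIDI Control Change message.
    channel is 0-15; controller and value are 0-127."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def energy_to_cc(energy, peak, channel=0):
    """Hypothetical mapping: normalized band energy -> volume and pan CCs."""
    value = min(127, int(127 * energy / peak))
    return [control_change(channel, 7, value),    # CC 7: channel volume
            control_change(channel, 10, value)]   # CC 10: pan position

msgs = energy_to_cc(energy=0.5, peak=1.0)
print(msgs)  # two 3-byte Control Change messages
```

Sent back to the synth, messages like these close the loop: audio drives video, and video nudges the audio in return.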
Sequencing was necessary so the patch could be played and recorded offline rather than in real time. The frames, recorded in HD, were then assembled into a video file.
It's simpler than it seems.
Recently added to visualcomplexity.