Audio Reactive Visualiser Test
With some prior experience in audio-visual production and performance, I wanted to explore this area further through practical tests.
I have previous experience as a DJ in live environments, manipulating music and sounds in realtime, either for myself alone or for myself and an audience.
I have also prepared visual ‘VJ’ performances for multiple projections onto 2D surfaces, as well as mapping them onto 3D objects such as spheres. For these performances, the systems I designed used a live audio input to modulate multiple parameters of effects and settings within an instance of Resolume Arena. The basis for this set/patch/system was a collection of 2D video clips I had prepared; these were layered within the software, and the speed and direction of the playback ‘head’ were altered throughout the performance. Visual effects were layered on top and enabled or bypassed at different times to vary the output. I also used video input from cameras (some static, some mobile) as layers. The resulting system was one that could be left running for some time, with variations introduced in response to the audio in the space, but which could also be played manually.
With the above VJ system I observed that the audience often responded enthusiastically when they noticed their own agency in the visuals via the camera feeds.
What interests me are the realtime interactions between audience, visual artist and musical artist: the feedback loops present in a live situation where every person in the space is an active participant.
The following video shows a piece I previously designed and exhibited, which included a visual performance projection mapped onto a sphere:
What I haven’t yet tried, and would like to explore, is a system in which one artist can control both visuals and music at the same time.
My previous explorations of visuals relied on 2D recorded and/or live video as layers. I would like to try generative visuals, as this allows finer control over the source of the imagery and therefore greater control as a visual artist. It also opens up great potential for live parameter modulation by audio and/or control signals such as timecode and BPM.
Audio-reactive visuals are live visualisations of audio data streams. By filtering, processing and manipulating an incoming audio signal, certain visual parameters can be controlled by different aspects of the incoming sound.
Systems that work in this way enable applications and/or artists to create visual interpretations of sounds or music in realtime. The system or people creating the visuals don’t need to know in advance what sounds will be played, so a musician can perform while visuals that respond to that performance are generated on the fly. This has the advantage that the musician can see the visuals, which in turn can inform the performance, creating a feedback loop between visual system and musician.
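To make the basic idea concrete, here is a minimal, software-agnostic sketch in plain Python with NumPy (not tied to TouchDesigner or any VJ application): each incoming audio frame is split into frequency bands, and each band's energy becomes a normalised control value that could drive a visual parameter such as scale, colour or brightness. The frame size, sample rate and band ranges are arbitrary assumptions for illustration.

```python
# Sketch: turn one audio frame into per-band control signals (0..1).
import numpy as np

SAMPLE_RATE = 44100   # assumed audio sample rate
FRAME_SIZE = 1024     # samples per analysis frame

def band_energies(frame, bands=((20, 200), (200, 2000), (2000, 8000))):
    """Return one normalised energy value per frequency band."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / SAMPLE_RATE)
    energies = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(spectrum[mask].mean() if mask.any() else 0.0)
    peak = max(energies) or 1.0
    return [e / peak for e in energies]   # each value usable as a control signal

# Example: a stand-in frame of audio -> three control values
# (bass, mids, highs) that could modulate visual parameters.
frame = np.random.uniform(-1, 1, FRAME_SIZE)
bass, mids, highs = band_energies(frame)
print(f"bass={bass:.2f} mids={mids:.2f} highs={highs:.2f}")
```

In a live system the same analysis would run continuously on the incoming signal, with the control values smoothed over time before being mapped onto visual parameters so the result doesn't flicker.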
I decided to explore this topic, beginning with the program TouchDesigner by Derivative. This application is a visual programming environment with extensive capabilities for digital signal processing, and it can integrate with many other systems and applications.
TouchDesigner is a visual development platform that equips you with the tools you need to create stunning realtime projects and rich user experiences. Whether you’re creating interactive media systems, architectural projections, live music visuals, or simply rapid-prototyping your latest creative impulse, TouchDesigner is the platform that can do it all.
https://derivative.ca/about-derivative
For my first test I loaded an audio file of a piece of music I had created using a combination of software and hardware. I followed a tutorial to learn how to assemble the patch, then observed the results of modulating various parameters. I was pleased with the outcome, which I feel is very usable as an element of a visual performance.
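The patch itself was assembled by wiring operators together, but the same audio-to-parameter mapping can also be expressed in TouchDesigner's Python scripting layer. The sketch below is a rough illustration rather than the tutorial patch: it assumes a CHOP Execute DAT watching an audio analysis channel, plus hypothetical operators named 'noise1' (a Noise TOP) and 'level1' (a Level TOP) whose parameters are being modulated.

```python
# Sketch of a CHOP Execute DAT callback in TouchDesigner (Python).
# Assumes an analysis CHOP (e.g. RMS level, 0..1) is wired into this DAT,
# and that 'noise1' and 'level1' are the operators we want to modulate.
# Operator and parameter mappings are illustrative, not from the tutorial.

def onValueChange(channel, sampleIndex, val, prev):
    # 'val' is the current audio-derived value from the watched channel
    op('noise1').par.period = 1 + val * 4            # louder audio -> larger noise period
    op('level1').par.opacity = min(val * 2, 1)       # louder audio -> brighter output
    return
```

In practice the same mapping is often made without any scripting at all, by exporting or referencing CHOP channels directly on operator parameters, which is closer to how tutorial patches are typically built.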
Here is a screen recording of the resulting patch, along with some tweaking of parameters: