02 Research
![02 Research](/content/images/size/w960/2024/01/brainstorm-immersive-arts.png)
Abstract concept beginnings
I begin by thinking about playing with interactive instruments, how audiences can be given agency within audiovisual performances, and how the typically hierarchical structure of live entertainment (band or DJ at the top, visual artist next, audience at the bottom) can be flattened.
I also want to explore the idea of an audiovisual instrument that can be played by one or more people.
Technical considerations for an audiovisual instrument
A system of this kind will require methods for:
- generating visuals
- displaying visuals
- generating audio
- playing audio
- controllers / user interfaces
- enabling these elements to talk to one another using control data (sketched below)
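As a rough illustration of that last point, here is a minimal sketch of control data being fanned out from a controller to an audio engine and a visual engine over OSC. It uses Python with the python-osc library purely as a stand-in for whatever tools I end up choosing; the hosts, ports and OSC addresses are placeholder assumptions, not a real configuration.

```python
# Minimal sketch: fan one controller value out to the audio and visual engines as OSC.
# Assumes the python-osc package (pip install python-osc); hosts, ports and
# addresses below are placeholders rather than a real setup.
from pythonosc.udp_client import SimpleUDPClient

audio_engine = SimpleUDPClient("127.0.0.1", 9000)   # e.g. Ableton Live via Max for Live
visual_engine = SimpleUDPClient("127.0.0.1", 8000)  # e.g. TouchDesigner or a game engine

def send_control(name: str, value: float) -> None:
    """Send one normalised control value to both engines under the same address."""
    audio_engine.send_message(f"/control/{name}", value)
    visual_engine.send_message(f"/control/{name}", value)

# A controller or UI event (knob, fader, sensor) would call something like:
send_control("intensity", 0.75)
```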
By choosing digital tools, development should be quick, and many possibilities open up that would take a long time to implement using analogue hardware.
While considering which pieces of software should be part of this system, I found I needed to think further into the future of this experience, specifically around the medium and in particular how the experience is distributed to users. There seem to be two differing options in this respect: in-person experiences (for example installations, dome shows, etc.) and virtual experiences (such as a VR app accessed remotely using a VR headset, or with AR).
I realise that the software I choose could limit these options somewhat.
For example, if I choose a system built on tools such as TouchDesigner and/or Ableton Live, it is only usable for live experiences in one physical location (the audio could be broadcast, but the visuals can’t be rendered for multiple live audience members who are remote).
Alternatively, by using a game engine such as Unity or Unreal Engine, there is the possibility of both a live show in a physical location and packaging the experience as an app, which could be more widely distributed. Additionally, game engines come with very good interactive design potential, well suited to non-linear, game-like experiences.
My problem in deciding is that I am much more familiar with Ableton Live, and after watching many TouchDesigner tutorials that software also sings to me personally, as well as offering tight integration with Ableton. A project using both of these would likely see me hitting the ground running, so to speak, which is an important factor given my limited time for this project.
But as much as I am drawn to that combination, I can’t stop thinking about the benefits of developing the project in one of the game engines mentioned above, and how many more people the experience could potentially reach (with VR headset sales booming, as described in my previous blog posts). To elaborate, I ideally want to aim for high-impact projects, and reaching bigger audiences is one way to achieve this.
Further, some of the new features of Unreal Engine 5 are stunning, namely Nanite and Lumen. There is also the addition of MetaSounds which, while consisting of basic audio building blocks and math functions, has the potential to be built up into complex audio systems (making Unreal a possibility as a standalone piece of software for an audiovisual instrument/experience). I have done minimal development in this engine, which leaves a big question mark over how fast I can produce desirable results with it. At the time of my research UE5 is in preview release (not yet officially released), so there may be bugs. I have read that Unreal Engine 4 projects are forward compatible with UE5, meaning I could safely begin development in the stable and supported UE4, then port the project to UE5 on its official release to benefit from the new features. Finally, Unreal Engine is getting a lot of funding and as a result will likely be around for a long time, with constant updates, making it a savvy tool choice for a professional to learn. It would also open up new possibilities for work for me as a creative.
I’m also interested in returning information from the visual engine to the audio engine. After inspecting the Unreal Engine OSC documentation I can see that it is possible to both receive and send OSC, enabling two-way communication between the two pieces of software and opening up the possibility of Unreal ‘players’ affecting the audio.
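To make the two-way idea concrete, below is a minimal sketch of what the audio-side half of such a link could look like, again using Python with python-osc as a stand-in for the eventual tools: it sends a value to Unreal’s OSC server and listens for messages coming back (for example, a ‘player’ interacting with the scene). The ports and OSC addresses are my own assumptions for illustration, not taken from the Unreal documentation or any particular project.

```python
# Sketch of two-way OSC between an audio-side process and Unreal Engine.
# Assumes python-osc; all ports and addresses are illustrative placeholders.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Outgoing: audio events -> Unreal (assuming Unreal's OSC server listens on 8000).
to_unreal = SimpleUDPClient("127.0.0.1", 8000)
to_unreal.send_message("/audio/kick", 1.0)

# Incoming: Unreal -> audio side, e.g. a player touching an object in the scene.
def on_player_event(address, *args):
    # In a real system this would adjust an Ableton Live parameter (e.g. via Max for Live).
    print(f"Unreal sent {address} with arguments {args}")

dispatcher = Dispatcher()
dispatcher.map("/unreal/player", on_player_event)  # hypothetical address sent by Unreal

# Blocks while listening; a threaded server would suit a live rig better.
server = BlockingOSCUDPServer(("127.0.0.1", 9001), dispatcher)
server.serve_forever()
```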
I have also seriously considered Notch for high-quality 3D generation and live rendering. However, its license carries a fee, whereas Unreal Engine is free until a commercial product earns over $1 million gross (after which a 5% royalty applies).
At this point I am considering opting for Unreal Engine 5 for visuals, combined with Ableton Live as the audio engine, to develop my prototype system. This means I can only run it in a physical location (as an interactive experience), but with the potential to replicate the Ableton Live audio section in UE5 down the line if I choose. Of course, it would be possible to livestream a non-interactive 360 or 2D video to any number of streaming platforms such as YouTube Live so that remote audiences can experience it without interactivity (though this is not what I am aiming for in terms of audience experience at this point).
I found an example of someone working in this manner on YouTube here: https://youtu.be/0QdOqG0BBAI. This artist uses Ableton Live, Max for Live (M4L) and Unreal Engine 4. He explains that he uses M4L devices to convert MIDI notes and audio envelopes into OSC data, which is then sent to Unreal, where a UE Blueprint router brings in the data. The artist also offers a Patreon where patches and learning resources are shared; this could be an ideal way for me to get up and running, allowing more time to build in complexity and do testing: https://www.patreon.com/semandtrisavclub. In this video the artist demos the live system with a large 2D projection: https://www.youtube.com/watch?v=tfVBHfrlGY8
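I haven’t seen inside those M4L patches, but conceptually the MIDI-to-OSC conversion they perform might look something like the sketch below, written in Python with the mido and python-osc libraries as stand-ins for Max for Live. The OSC addresses, port and velocity scaling are hypothetical choices for illustration only.

```python
# Conceptual sketch: convert incoming MIDI notes into OSC messages for Unreal,
# roughly the role the Max for Live devices described above appear to play.
# Assumes mido (with python-rtmidi) and python-osc; values below are made up.
import mido
from pythonosc.udp_client import SimpleUDPClient

unreal = SimpleUDPClient("127.0.0.1", 8000)  # assumed Unreal OSC server port

with mido.open_input() as midi_in:  # default MIDI input port
    for msg in midi_in:
        if msg.type == "note_on" and msg.velocity > 0:
            # Pitch plus normalised velocity, for a Blueprint router to pick up.
            unreal.send_message("/midi/note_on", [msg.note, msg.velocity / 127.0])
        elif msg.type == "note_off" or (msg.type == "note_on" and msg.velocity == 0):
            unreal.send_message("/midi/note_off", [msg.note])
```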
Some other useful bookmarks on this topic:
https://www.youtube.com/watch?v=xnOv_Pq-LPQ
https://docs.unrealengine.com/4.27/en-US/WorkingWithAudio/OSC/
Although the above system needs to be performed live in a physical location, it makes me think about how well suited it would be to so-called location-based VR, where the audience wear a VR headset and view the visuals with the affordance of 3D vision enabled through stereoscopy. I also think it would look very good in a fulldome.
While researching this I begin to think about the possibility of designing a VR environment for controlling an Ableton set: a VR instrument, if you like.
I also see a possibility of offering a visualiser, packaged from Unreal as an exe file, to musicians wanting to add reactive/generative visuals to their performances.
My next steps will be to search for examples of other artists using audiovisual performance tools, taking a closer look at how they have implemented them and what the affordances of their systems are.
What are some of the approaches and methods others are using for live audiovisual performances?
I decided to take an exploratory approach to data collection for this question, specifically using a series of case studies as my research method.
To see what others are doing in this space I turned to internet searches and in particular found many excellent examples (often with explanations) on YouTube.
In my following posts I’ll list some of the most relevant of these along with my observations, notes and comments.
Brainstorms
Here I share some of my brainstorms.
![](https://immersivearts.files.wordpress.com/2022/05/brainstorm-immersive-arts.png)