Immersive Arts Final Project Critical Evaluation

Summary of project

This immersive arts project was born out of my research into how to elicit the emotion of positive awe in an audience. It aimed to do this by drawing on themes of number, sacred geometry, nature and vastness to create a transcendent experience. Research by Keltner & Haidt (2003) identified two components of awe: perceived vastness and complexity. This project sought to deliver both through immersive techniques, technologies and mediums, and through its content and subject matter. The content focuses largely on nature, art and music, which have been found to elicit awe (Shiota et al. 2007). The piece also seeks to connect the audience to nature by featuring number, in the form of music and the platonic solids, because mathematics is the basis of nature (Emmer 2005).

Some technical limitations held back several aspects of production. The results are nonetheless promising, but further work is needed to continue testing and developing the prototype, particularly with more capable technology such as a higher performance graphics computer, and with other mediums including fulldome and virtual reality with a head mounted display (HMD).

The final prototype demo of this piece used two projectors to create an ultra-widescreen display, along with a quadraphonic sound system to test the surround capabilities of the system. Listing every technical element of this piece is beyond the scope of this evaluation, but the work can all be seen in the Ableton Live and TouchDesigner program files (also see screenshots later in this article).

Here is a video showing some recorded clips of the demo.

Proposal Aims

The original proposal included three main aims and detailed the platform and context of choice for this project. I will now evaluate the successes and failures of each.

1. Build a robust core system capable of generating evolving visuals and sounds that can be performed with

This aim was achieved: an interactive sound and visual performance tool was created. Additionally, the system has been designed in a modular way that makes it well suited to future adaptation and modification for a variety of applications and mediums.

In brief, the core system uses Ableton Live as the audio engine, TouchDesigner as the visual engine, a webcam as a sensor, and three MIDI controllers for manual control of the system. The MIDI controllers used are: a Roland TR-8 (which also acts as a sound source), a Traktor Kontrol X1, and the LK iOS app on an iPad. In addition, the iOS app MusiKraken is used when the system is in ‘installation mode’ as a touch and hand-gesture controller (converting sensor signals to MIDI before sending them to Ableton Live).

Ableton Live is capable of driving a surround sound system, directing channels or groups of channels around a physical space through the use of the ‘Ableton Surround Panner’ device.

A sound palette was assembled from several virtual and hardware instruments. By applying key and scale filtering to these, all of the notes generated are in the key of G and use the pentatonic minor scale (see the sketch after this list), which was selected for several reasons:

  1. Notes within one key and scale will all sound coherent together
    1. Pentatonic scales do not include semitone intervals, so there is little tension between notes, making them easy for anyone to improvise with regardless of musical skill. The scale is often used to teach children music, by educators such as Zoltán Kodály, Carl Orff and Rudolf Steiner, which is a good indicator that it is a suitable choice for an improvisational interactive piece such as this.
  2. By dividing the octave into just five notes, more of the frequency spectrum is freed up, leaving space for the textural spectral content generated by additional harmonics and effects.
  3. This scale was developed independently by many ancient civilisations (Powell 2010), which resonates with the themes of this piece: sacred geometry and ritual.
  4. Pentatonic scales are made up of five notes, reflecting the number of platonic solids, which are used as visual elements in this piece.
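
To illustrate the key and scale filtering mentioned above, here is a minimal Python sketch that snaps any incoming MIDI note to the nearest note of G minor pentatonic. In the actual system this role is played by Ableton's MIDI scale filtering devices; the function and example values below are my own illustration, not part of the project files.

```python
# Illustrative sketch: snap arbitrary MIDI notes to G minor pentatonic
# (G, Bb, C, D, F). Ableton's scale filtering performs this role in the
# actual system; this standalone function is only a demonstration.

# Pitch classes of G minor pentatonic (C = 0): G=7, Bb=10, C=0, D=2, F=5
G_MINOR_PENTATONIC = {0, 2, 5, 7, 10}

def quantise_to_scale(note: int, scale=G_MINOR_PENTATONIC) -> int:
    """Return the nearest MIDI note whose pitch class is in the scale."""
    for offset in range(12):                      # search outwards in semitones
        for candidate in (note - offset, note + offset):
            if 0 <= candidate <= 127 and candidate % 12 in scale:
                return candidate
    return note                                   # unreachable for a non-empty scale

if __name__ == "__main__":
    # Any improvised input lands on a consonant pentatonic note.
    print([quantise_to_scale(n) for n in [60, 61, 66, 71]])  # -> [60, 60, 65, 70]
```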

By creating several clips per track in Ableton Live and grouping these into scenes, the system allows for flexible playback options. For example, the combination of sounds and sequences being played can be improvised by the performer by triggering any clip they choose. Scenes also offer a way to trigger groupings of sequences across instruments simultaneously.

Ableton Live’s clip ‘follow actions’ offer a way to automate what comes next. For example, during the demo each scene was set to trigger the following scene, creating a linear master sequence. The same mechanism drives installation mode, in which an infinite loop of scenes steps through the sequence after a set number of repeats per scene and finally returns to the beginning (sketched below). Further, the randomisation functions within scene and clip follow actions put a more varied, evolving soundscape only a few clicks away, which I would use for any future showing of the installation mode of this piece.
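
The scene cycling just described lives entirely in Ableton's follow-action settings rather than in code, but a toy Python simulation of the installation-mode loop may make the logic clearer. Scene names are taken from the story outlined later; the repeat counts are placeholders, not the values used in the demo.

```python
# Toy simulation of installation-mode scene cycling. In the real system this is
# configured with Ableton scene follow actions, not code; repeat counts are
# placeholders.
import itertools

scenes = [
    ("Nothingness", 4),   # (scene name, repeats before moving on)
    ("Beginning",   8),
    ("Birth",       8),
    ("Evolution",  16),
]

def installation_mode(scene_list):
    """Yield scene names in an infinite loop, honouring per-scene repeat counts."""
    for name, repeats in itertools.cycle(scene_list):
        for _ in range(repeats):
            yield name

if __name__ == "__main__":
    player = installation_mode(scenes)
    for _ in range(10):
        print(next(player))
```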

2. Present the piece in the form of a show using the fulldome medium

This was not achieved within the scope of the project due to a lack of access to a fulldome during development or production. It is unfortunate that this could not be arranged, because I believe the medium would be more successful at eliciting awe due to its highly immersive affordances, certainly greater than those of a flat screen (Lambert & Phillips 2012). For example, it would be much easier to achieve the illusion of great scale in a fulldome. The system that was developed is certainly capable of outputting a fulldome format, so this is something that can be done in the future.

Instead of a fulldome, the piece was presented using an ultra-widescreen projection. When the audience stands close to this it fills quite a large portion of their vision, but nowhere near the 180 degrees (or more) of a fulldome’s half sphere. I attempted to drive four 1080p projectors, but the computer could not render that many pixels of output from the TouchDesigner network at more than around 14Hz. Using just two projectors achieved 30Hz+, which was satisfactory, so the final display was 3840 x 1080 pixels (a rough pixel count comparison appears below). This process also showed me that the computer I was using would not be able to output fulldome at the minimum desired resolution of 4K.
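
A back-of-envelope pixel count explains why halving the projector count roughly doubled the frame rate, and why a 4K dome master would be out of reach on the same machine. The dome resolution below is a common square 4K master size, assumed here for comparison rather than taken from the project files.

```python
# Back-of-envelope pixel throughput comparison (illustrative numbers only).
def pixels(w, h):
    return w * h

four_projectors = pixels(4 * 1920, 1080)   # 7680 x 1080 canvas (~14 Hz observed)
two_projectors  = pixels(2 * 1920, 1080)   # 3840 x 1080 canvas, the final demo (30 Hz+)
fulldome_4k     = pixels(4096, 4096)       # assumed square 4K dome master

print(four_projectors, two_projectors, fulldome_4k)
# 8294400 4147200 16777216
# If ~8.3 MP already ran at ~14 Hz, a ~16.8 MP dome master would fall far below
# target frame rates on the same machine, consistent with the observation above.
```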

In the absence of a fulldome I explored viewing the piece through a VR headset, as this medium shares some of the fulldome’s affordances that lead to high levels of presence and immersion, with the added affordance of stereoscopic (3D) vision. Although somewhat longwinded to set up at the software level, I successfully achieved the ability to view the TouchDesigner scene using an Oculus Quest headset; however, a few key issues led me to pause VR development during production:

  1. The frame rate dropped to around 12Hz, far too low for VR, which is commonly recommended to run at 72Hz or above to minimise motion sickness.
    1. I tried to address this by optimising and simplifying the TouchDesigner network, but this only improved the rate to around 24Hz.
    2. In future development I will use a faster computer and graphics processor to achieve a higher frame rate, coupled with optimisation techniques that I am continuing to learn as I become more proficient in TouchDesigner.
  2. After initially developing the 3D geometry elements of the piece and finding the results somewhat flat, I built a TouchDesigner TOP (Texture Operator) network to process the 2D rendered video streams, including feedback loops to create generative visuals. This produced a much improved aesthetic that was more to my liking. However, a VR headset comprises two video feeds (one per eye), so applying this treatment meant duplicating the TOP network, which produced two discrete monoscopic renders. Viewed together through the HMD, these had many misalignments in detail, resulting in perceived double vision of the effects. Without a solution to this I had to rethink my approach to VR development in TouchDesigner; separate TOP processing per eye does not appear workable, at least where artefacts are created through feedback. I imagine I would need to stay within 3D when creating generative elements for VR.

Other mediums that I could have used include:

  • 2D Livestream
  • 360° Livestream
  • Rendered 2D video
  • Rendered 360° video

The main reason I didn’t go with any of the above is that I wanted to stick to the original proposal as closely as possible, although these options are interesting and can be explored in the future. In particular, a projection system allows for a communal, temple-like shared experience for the audience.

3. Leave audience with greater wellbeing by eliciting awe in them

Eliciting awe takes many components: visuals, sound, environment, audience onboarding and offboarding, as well as advanced technical systems and skills. This proved an overambitious goal given the timescale, especially as critical technologies were delivered very late in the project, some arriving only the day before the demo.

The prototype was demonstrated to too few audience members to gather sufficient data. However, some of those who experienced it reported positive feelings of being absorbed in the piece, and enjoyed interacting with the music and visuals through movement.

In the future the next steps will be to show the piece to a larger audience and follow the plan outlined in the proposal, which entails replicating the techniques for measuring an individual’s ‘small self’ outlined in the white paper by Allen (2018). This will include surveys of all audience members before and after the experience (immediately and at a later date).

Based on the research outlined in my proposal, I predict that the greater the field of view covered by the images, the easier it will be to elicit awe. This is why I place importance on the medium and would prefer fulldome for this piece. That said, the medium is just one aspect of achieving awe; others are content, arrangement and interactivity. All of these need further rounds of iterative audience testing and system development to improve the intended outcomes in the audience. Ideally this would happen in a venue with projection and surround sound systems already installed, so that the technical set-up is minimised and only computers and some sensors need to be added.

I am still interested to discover which elements of an immersive experience contribute most to feelings of awe. It is possible that nature scenes may not have as high an impact as vastness or complexity, which can only be explored through further audience testing.

Production notes

I began production by working through my planned schedule, and most of this was on track for the first six weeks. It included R&D and planning of various elements of the proposed system, along with building the audio engine and creating musical content: sound design, mixing, effects, and note and part sequencing.

Story

In order to guide my creative decision making I developed a story for the piece, made up of several scenes. This allowed me to create a more dynamic piece with sections of contrast, provided a journey for the audience and a guide for the performer, and resulted in a more coherent piece and innovative artefact.

The story I settled on was based on the cyclic journey of birth, life and death. I detail this below (along with how I might depict this in sound and visuals):

  1. Nothingness
    1. Darkness, noise
  2. Beginning
    1. Feedback patterns, some emerging forms, audience interaction
  3. Birth
    1. Platonic solids, musically more recognisably human
  4. Evolution
    1. Increasing complexity, speeding up tempo
  5. Supernova
    1. Star like imagery, blinding light
  6. Chaos
    1. Distortion, loss of recognisable forms, density
  7. Order
    1. Nature, beauty, musical coherence
  8. Beauty of the universe journey
    1. Return of the platonic solids, journey of visuals and sound
  9. Death
    1. Return to the beginning of the piece
    2. Audience interaction again
    3. Slowing down of tempo
  10. Nothingness
    1. Noise, darkness

I like that the story takes the audience on a journey that finishes where it began; in this way the piece could repeat infinitely, and infinity relates to the vastness of the universe.

Sound & Music

Ableton Live proved an excellent choice for building the audio engine for this piece. Its seemingly limitless possibilities for sound creation and processing let me imagine something and then create it quite quickly, helped by my extensive prior experience with the program. This choice of software allowed this aspect of the project to come together within the allocated time.

I began by defining the voices for the piece: sound bed, rhythm low, rhythm high, chord stabs and effects. I then created several sound generators for each of these, an approach that prevented me from getting carried away with sound design. As I developed each voice I improvised with them and tweaked the overall mix of the elements. Eventually I arrived at a place where I was happy with all the elements, saving my work to return to later when beginning arrangements.

When it came time to create the scenes for the story, I found it most natural to work with audio first, using Ableton’s scenes to group clips across voices. Scene follow actions let me specify the number of repeats before moving to the next scene, and I also set different tempos (beats per minute) for some scenes to create slower or faster sections.

This scene automation was useful in constructing the story’s journey, but it moves the piece from improvised towards pre-arranged structure. There are pros and cons to this: while I favour the spontaneity of an improvised performance, in a system of this complexity, with both audio and visual elements, there is only so much one person with two hands can do at once, so it is necessary to lean on some pre-arrangement. That said, Ableton and this system can still launch other clips at the will of the performer, allowing some breaking out of the pre-arrangement.

An advantage of this pre-arrangement capability is that the system can operate without a performer, in what I am calling ‘installation mode’. This makes the piece easier to install and less expensive to run. It also allows for the piece to run infinitely by moving from the last scene back to the first.

I wanted to infuse the sound with some of the qualities of nature, in line with the themes of the piece, and approached this in several ways. Firstly, I used nature recordings as source samples and used different audio engines to manipulate and replay them. One was a granulator, which creates new sounds from source material and can reproduce either quite recognisable sounds or entirely new textures based on them.

Another notable instrument I used was Tree Tone from the Inspired by Nature Max for Live Ableton pack, which uses tree branching algorithms to generate tones. I set it to the piece’s key and scale, and it produced a dreamy ambient soundscape with a smooth texture, ideal as one of the elements making up the sound bed layer. Using algorithms drawn from nature supports the aim, which came out of my research, of creating content that uses nature as its basis.

I used Ableton tracks as busses to mix other tracks together so that I could add performance controls to them, including a DJ-style filter that sweeps from a low-pass to a high-pass setting. By mapping these filters to physical knobs on my controllers I could manually remove frequencies from each bus during the performance. I also added an LFO to each filter, activated via a button press, which when engaged animates the filter sweeps. This was useful in the ‘chaos’ section of the story, where I set the LFOs to dramatic settings including a noise source for randomness and large, track-synchronised jumps. A rough sketch of this knob-to-filter mapping follows.
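
The sketch below shows how a single bipolar knob can map onto a DJ-style low-pass/high-pass sweep, plus a simple LFO that could animate it. In the piece itself this is all done with Ableton devices mapped to hardware knobs; the curves, ranges and dead zone here are assumptions for illustration only.

```python
# Illustrative mapping of a single "DJ filter" knob (0-127 MIDI CC) onto
# low-pass / high-pass behaviour, plus a simple LFO that can animate it.
# The actual piece uses Ableton devices mapped to hardware knobs; the curve
# and ranges below are placeholder assumptions.
import math

def dj_filter(cc_value: int):
    """Centre = filter open; left half = low-pass sweep, right half = high-pass sweep."""
    x = (cc_value - 64) / 64.0                      # -1.0 .. +1.0
    if abs(x) < 0.05:
        return ("off", None)                        # small dead zone around centre
    if x < 0:
        cutoff = 20000 * (10 ** (2.0 * x))          # LP closes towards 200 Hz
        return ("lowpass", round(cutoff))
    cutoff = 20 * (10 ** (3.0 * x))                 # HP rises towards 20 kHz
    return ("highpass", round(cutoff))

def lfo(t, rate_hz=0.25, depth=1.0):
    """Sine LFO returning a CC-range value; engaged via a button in the real system."""
    return int(64 + 63 * depth * math.sin(2 * math.pi * rate_hz * t))

if __name__ == "__main__":
    for t in range(0, 8):
        print(t, dj_filter(lfo(t)))
```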

I used send/return tracks to send sounds to effects processors: two reverbs and a delay. One of the reverbs was Valhalla Shimmer, which can turn any sound into a very long, beautiful ambience. By feeding sounds into this, the piece gained a sort of body as elements continued beyond their existence, also reflecting the theme of birth, life and death, with remnants of certain sounds lingering after they disappear.

While developing the piece I only had access to a stereo speaker system, so I output all tracks through one master stereo channel. On this channel I added a filter, as well as a multiband compressor to glue and squash all elements together into a more coherent-sounding whole, a form of live mastering of the composition. When it came time to set up a surround system, I realised that every channel now removed from the master would need to be processed separately; none of them was running through this multiband compressor any longer. This resulted in a less coherent-sounding whole, and I learned that a surround piece should really be mixed in advance of a performance, on the performance system, so that a good balance can be arrived at ahead of time. Unfortunately this was not possible due to a lack of access to the technology.

Here is a recording of one run through of the performance:

Visuals

The largest unknown going into the project in terms of development time was TouchDesigner (TD), the area in which I have least experience. My predictions proved accurate, as I encountered a steep learning curve. Without knowing the capabilities or methods for achieving things within the program I kept hitting barriers, which meant taking time to learn more by watching free videos on YouTube, reading and engaging with the Derivative TD user forum, and consulting the TD documentation. I spent as much time as I could absorbing knowledge about TD, which was essential to moving towards desirable outcomes. The process felt a little like feeling my way in the dark: I had to try lots of things, keep those that fit best within the context of the project, and abandon failed experiments. It also produced quite a high number of application crashes, so I developed the habit of saving frequently. TD automatically saves a new file per save to make rolling back simple, and I found this essential on some occasions.

I began in TD by importing 3D models of the platonic solids and trying different ways to manipulate, texture and light them. Using these objects I recreated the layout of the following image showing ‘Pythagorean Morphology’.

Pythagorean Cosmic Morphology

Working with this as a subject fits with the theme of this piece, as Pythagoras is the first person known to have carried out a scientific study into pentatonics, although evidence for the use of the scale has been found many thousands of years prior to this (Minnix 2016).

I found that by animating the locations of what I call the orbiting objects, interesting dynamic movements could be created. I experimented with manual control of these positions using MIDI controllers, as well as using LFOs to modulate their positions.

For the centre object, I merged all the platonic solids into one object and used transform operators to modulate each solid’s size over time, again with LFOs. This resulted in an undulating form at the centre, which I felt was appropriate because it depicts how the universe is built on basic number, and the universal significance of these forms. A sketch of this kind of LFO-driven animation follows.
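
In the real network these animations are driven mostly by LFO CHOPs wired to transform parameters rather than by scripts. As a rough TouchDesigner-flavoured Python sketch of the same idea (it would run inside TD, for example in an Execute DAT), the following assumes operator names like 'orbit1' to 'orbit4' and 'centre_solids', plus radii and speeds, which are all my own placeholders.

```python
# TouchDesigner-style sketch (e.g. in an Execute DAT's onFrameStart callback) of
# LFO-driven orbit and scale animation. Operator names, radii and speeds are
# assumptions; the production network uses LFO CHOPs wired to parameters instead.
import math

RADIUS = 2.5          # orbit radius (arbitrary units)
SPEED  = 0.1          # revolutions per second

def onFrameStart(frame):
    t = absTime.seconds * SPEED * 2 * math.pi
    for i in range(1, 5):
        geo = op('orbit' + str(i))               # one Geometry COMP per orbiter
        phase = t + i * math.pi / 2              # spread the four orbiters evenly
        geo.par.tx = RADIUS * math.cos(phase)
        geo.par.tz = RADIUS * math.sin(phase)
    # undulating centre object: slow sinusoidal uniform scale
    centre = op('centre_solids')
    centre.par.scale = 1.0 + 0.2 * math.sin(absTime.seconds * 0.5)
    return
```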

I was very satisfied with the results of this 3D work, and these animated objects became a central subject within the piece. I found the resulting animations quite mesmerising.

To give the subject a setting, I created a ‘skybox’ sphere with a 360° photo of space textured on its inside and expanded it around the subject. This gave the impression of the subject floating in space, with stars and solar systems visible. I animated a slight spin to make the scene feel more alive. The approach worked well because parallax diminishes with distance, so the 2D photo remains plausible.

I tried several textures on the platonic solid models, but arrived in the end at a white wireframe as most suitable for the piece because it was best at highlighting the geometric forms and had the advantage of creating interesting patterns and textures when the geometries crossed paths within one’s line of sight.

After texturing and lighting, the scene still appeared quite flat and cold. To breathe some life into it I began developing a TOP network to process the rendered images from cameras placed in the scene. By duplicating virtual cameras with exactly the same settings except for the geometries they were rendering, I could separate the geometries (the skybox and the platonic solids) for individual processing before compositing them back together. This allowed much more flexibility over the end results, such as being able to manually mix between elements, apply strong effects that make the scene completely unrecognisable, and then add the platonic solids back in at a later stage through compositing. I found this process very powerful, and only arrived at it after unsatisfactory results from working with a single camera render of the whole scene.

I took time to explore the available operators and components for image manipulation, with a lot of trial and error. This led to my discovery of the feedback edge component, a pre-made feedback network capable of a wide range of visual results. Tweaking its parameters produced everything from mild to wild, including generative reaction-diffusion feedback patterns.

I gravitated towards these reaction-diffusion patterns because they have an organic, evolving nature that fits well with the themes of this piece. Found in chemistry, biology, physics and ecology, these patterns occur throughout nature and can be described with mathematical equations. For these reasons they fit both the theme of number and the mathematics of the universe, and the sections of the story relating to the birth of life and evolution.

Scene creation for the visuals was done by saving presets for the feedback network and triggering them when each Ableton scene launched. I set a blend time of five seconds for each preset change to make the transitions more gradual; in future I would spend more time customising each transition with unique blend times for more subtle or dramatic effects (a sketch of this blending follows).
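
The blend just described is handled by TouchDesigner's preset interpolation; the sketch below only illustrates the underlying idea of crossfading between two saved parameter sets over a fixed time. The parameter names and values are invented for the example.

```python
# Illustrative sketch of blending between two saved feedback-network presets
# over a configurable time (five seconds in the demo). The real piece uses
# TouchDesigner preset/interpolation components; parameter names are invented.
import time

preset_a = {"feedback_amount": 0.85, "blur": 2.0, "hue_shift": 0.10}
preset_b = {"feedback_amount": 0.60, "blur": 6.5, "hue_shift": 0.45}

def blend(a, b, t):
    """Linear interpolation between two presets, t in [0, 1]."""
    return {k: a[k] + (b[k] - a[k]) * t for k in a}

def transition(a, b, seconds=5.0, steps=10):
    """Yield intermediate presets, stepping from a to b over the given duration."""
    for i in range(steps + 1):
        yield blend(a, b, i / steps)
        time.sleep(seconds / steps)

if __name__ == "__main__":
    for values in transition(preset_a, preset_b, seconds=1.0, steps=4):
        print(values)
```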

Given the scope of the project, its ambitious timeline and the resulting learning curve, I am extremely pleased with the learning undertaken and the outcomes of the visual aspects of the piece.

Audience interaction

By making the piece interactive, the audience are given some agency, with the aim of immersing them more deeply in the experience. Initially I had planned to limit the audience’s influence over the piece to quite a small amount, to prevent a game-like environment in which the audience spends more time playing than taking in the piece, and thereby to maximise feelings of ‘small self’. However, when the use of a fulldome was abandoned within the timescale, my thinking shifted: without the great levels of immersion possible in a fulldome, a greater level of audience interaction seemed necessary as a mechanism for increasing immersion, so that the audience would become more absorbed in the visuals within the frame boundaries of the projectors. An issue I grappled with was that this was being developed quite late in the project, and I felt uncomfortable that it had not been designed in from the start; I wanted to avoid it feeling bolted on at the end just for the sake of it. I could have omitted interactivity entirely, but I decided against that to allow the audience testing described in my proposal.

My aim was to allow the audience to interact with both sound and visuals. I experimented with an Intel RealSense depth camera. While the sensor worked with TD, its skeleton and gesture tracking was unavailable because it required a legacy premium licence that could no longer be obtained (RealSense having been discontinued by the time of this production). I wanted to try a Kinect sensor instead, which exposes all of its skeleton tracking to TD, but I did not have access to one. That would have enabled the exploration of more complex user interactions.

Through research combined with trial and error, I built a network in TD that used frame caching and comparison to isolate movement in the area captured by the sensor. I then applied blob tracking to this, so that when a blob (movement) was detected a MIDI note-on message was sent via OSC to Ableton, and when movement stopped a corresponding note-off message was sent. At this point I realised I could do all of this with just a webcam, so I switched to one as it was less computationally heavy to process than the depth camera. A standalone sketch of this motion-to-note logic follows.
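
The production version of this is a TouchDesigner network (frame cache, comparison and a blob-track TOP), not code. As a hedged standalone illustration of the same logic, the sketch below uses OpenCV frame differencing and python-osc; the OSC address and port are assumptions, not the values used in the piece.

```python
# Hedged sketch of the webcam motion -> note-over-OSC idea described above.
# Requires opencv-python and python-osc; OpenCV 4.x contour API assumed.
import cv2
from pythonosc.udp_client import SimpleUDPClient

cam = cv2.VideoCapture(0)                  # webcam
osc = SimpleUDPClient("127.0.0.1", 8000)   # Ableton-side OSC receiver (assumed port)

prev_gray = None
note_on = False
THRESHOLD = 25        # per-pixel difference threshold
MIN_AREA = 2000       # minimum blob size in pixels to count as movement

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev_gray is None:
        prev_gray = gray
        continue

    diff = cv2.absdiff(prev_gray, gray)                    # frame comparison
    _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = any(cv2.contourArea(c) > MIN_AREA for c in contours)

    if moving and not note_on:
        osc.send_message("/movement/note", [60, 100])      # note on (pitch, velocity)
        note_on = True
    elif not moving and note_on:
        osc.send_message("/movement/note", [60, 0])        # note off
        note_on = False

    prev_gray = gray
```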

Now that I had a MIDI note in Ableton, I used a MIDI randomiser to spread the value across a wide range of notes so that it would be different every time, then fed this through a key and scale filter and into an arpeggiator. The result is that when someone in front of the camera moves, an in-key, in-scale flurry of notes is played. I chose a synthesiser as the sound source for this voice and set it to short percussive notes with lots of reverb to add decay, resulting in a bright but dreamy sound. After some parameter tweaking I arrived at what I felt was a satisfactory level of musicality and playability.

To allow the audience to interact with the visuals I used the TD network I had already developed, this time routing the movement image into the main TOP network. I tried merging it with the geometry render before the feedback network, but found that I didn’t want the shapes of the audience present at all times; I did, however, want them to be a source for the feedback network. I achieved this by creating a clone of the feedback network just for the audience video feed, with all of its parameters tied to the first network so that both always had the same settings. I then composited this with the post-feedback output of the main network. I tried all the layer blending options and found several with different but desirable results, so I created several composites with different blending types and used a switch to blend between them, adjustable either through automation or during performance. In this way it became possible to focus on audience movement, on the platonic solids, or on a blend of both.

I found that the resulting imagery could be quite ghostly, with a slightly delayed, simplified shadow of the audience moving around the screen. I was pleased with this result as it gives direct, obvious feedback to the audience, demonstrating clearly that they have agency.

For the ‘installation mode’ of the system I configured an iPhone with the MusiKraken app to sense hand position, pose and distance using the TrueDepth camera. The app offers a modular environment for processing sensor signals and converting them into MIDI before sending them over a network to a computer. It was simple to set up, and although iPhones are relatively expensive devices they are widely available, making this a good technique for this and other projects.

In this mode the hand’s distance from the sensor controls the distance of the 3D objects, opening the hand increases the distance of the orbiting objects, movement in the x direction changes the objects’ glow effect, and movement in the y direction changes their trail effect. Additionally, the app was configured with an XY touchpad enabling the feedback presets to be selected at will. The result was a very effective controller capable of intricate changes in the visual composition of the piece, yet simple enough for someone to learn without being told what to do, perhaps encouraged by a large visible hand shape placed around the sensor inviting them to hover a hand over it. More audience testing, with feedback gathered, is required for future iterations. I personally found it an excellent tool for creating striking imagery, opening up the system as a generative art composition tool. A sketch of this mapping follows.
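
MusiKraken delivers these hand-tracking signals as MIDI, which arrive in TouchDesigner via a MIDI In CHOP. The sketch below only illustrates the shape of the mapping; the CC numbers and the normalisation are assumptions, not the actual configuration.

```python
# Sketch of the installation-mode hand-control mapping. MusiKraken converts
# TrueDepth hand tracking to MIDI; in the real system this arrives in
# TouchDesigner via a MIDI In CHOP. CC numbers and ranges here are assumptions.

CC_MAP = {
    20: "object_distance",   # hand distance from the phone
    21: "orbit_distance",    # how open the hand is
    22: "glow_amount",       # hand movement in x
    23: "trail_length",      # hand movement in y
}

state = {name: 0.0 for name in CC_MAP.values()}

def on_cc(cc_number: int, cc_value: int):
    """Normalise an incoming CC (0-127) and store it against a visual parameter."""
    if cc_number in CC_MAP:
        state[CC_MAP[cc_number]] = cc_value / 127.0

if __name__ == "__main__":
    for cc, val in [(20, 64), (21, 127), (22, 10), (23, 90)]:
        on_cc(cc, val)
    print(state)
```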

Linking the audio and visual systems

This was achieved using the TDAbleton package created by Derivative. This collection of TD components allows two-way communication between Ableton and TD via Open Sound Control (OSC), either on a local machine or over a local network, and does so through Ableton’s Live Object Model.

For this piece I used a router to create a local network, which enabled communication between a PC running TD and a Mac running Ableton. It also enabled MIDI controllers (in this case iPads and an iPhone) to be connected to the system over wi-fi.

TD doesn’t have a native timeline sequencer, so I used Ableton for this purpose. Using TDAbleton, the system monitored the current Ableton scene and used this value to trigger the presets I had created and saved for the TD feedback network (a generic sketch of this link follows).
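
TDAbleton handles the OSC plumbing inside TouchDesigner, so no hand-written listener is needed in the real system. As a generic illustration of the "scene index triggers a visual preset" link, the sketch below uses python-osc; the OSC address, port and preset-loading function are assumptions.

```python
# Generic sketch of "current Ableton scene triggers a visual preset" over OSC.
# TDAbleton provides this link natively inside TouchDesigner; the address,
# port and preset function here are assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def load_feedback_preset(index: int):
    print(f"Loading feedback preset {index}")   # placeholder for the real preset recall

def on_scene_change(address, scene_index):
    load_feedback_preset(int(scene_index))

dispatcher = Dispatcher()
dispatcher.map("/live/scene/current", on_scene_change)   # assumed OSC address

if __name__ == "__main__":
    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
    server.serve_forever()
```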

By creating empty Ableton racks with macro knobs, I could assign these to any parameter I wanted in TD, and I eventually set up around 22 such controls. This let me use Ableton’s automation system and MIDI controllers to change TD settings (I used an iPad running the LK app to manipulate these controls manually during performance), and it was essential for pre-arranging parameter changes over time within scenes, such as object sizes and noise levels. One issue with this approach was that the parameter changes arrived in TD jittery. This may have been partly due to the low resolution of MIDI (just 128 steps), and perhaps also to OSC over the network. I needed to interpolate the values to smooth them, and found the solution in the TD Lag CHOP (channel operator), which let me smooth the start and end of changes by independent amounts per control (illustrated below). Given the slowly evolving nature of this piece this approach was sufficient, but to achieve faster, more intricate movements in future I would test using MIDI controllers directly within TD and see what difference this makes; I predict it would be smoother.
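
The Lag CHOP does this smoothing natively; the sketch below is only a stand-in to show why the 128-step values needed it, using a simple one-pole (exponential) smoother over a stepped input.

```python
# Illustration of smoothing jittery 128-step MIDI values, standing in for the
# TD Lag CHOP (which performs this natively, with independent rise/fall times).

def one_pole_lag(values, smoothing=0.9):
    """Exponential smoothing: higher 'smoothing' = slower, smoother response."""
    out, y = [], values[0]
    for x in values:
        y = smoothing * y + (1.0 - smoothing) * x
        out.append(round(y, 2))
    return out

if __name__ == "__main__":
    # A stepped MIDI ramp (only 128 discrete values) arriving over OSC...
    stepped = [0, 0, 16, 16, 16, 32, 32, 48, 48, 64]
    # ...becomes a gentler curve after lagging, removing visible parameter jumps.
    print(one_pole_lag(stepped))
```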

Using TDAbleton I sent the master and bass track levels to TD and used them to modulate the sizes of the orbiting objects and the centre object respectively. This added a dynamic quality to the visuals, linked to the current sound output. The results, even after some Lag CHOP smoothing, were quite jittery, which I thought acceptable although I had wanted a more organic outcome. In future I would explore using the level signals to adjust visual effect settings instead of object sizes or positions, as I think this might produce a more organic aesthetic. Admittedly I have high expectations of any art I produce, but within the timeframe the quality of the piece (outside of the live show, which had equipment set-up issues) was consumer-ready and comparable to pieces I have seen in public institutions in the past year.

Demo set-up

The set-up for the performance and installation demo happened far too late in production, for a few reasons: some of the tech took a long time to arrive, and the venue was too far from my place of residence for frequent visits. As a result I had only around a day and a half to set up, which involved working with tech such as the projectors and the surround system for the first time.

This resulted in imperfect blending of the projectors. It also meant the processing limitations of the PC powering TD were only explored very late on.

With the surround system, I had to start using and testing the Surround Panner device within this timeframe and encountered some issues: the device seemed to have some bugs, perhaps because it was an old version. I ended up using only one sound with an automatic circular pan during the performance, which was quite unnoticeable within the piece.

More work needs to go into setting up this system to work bug free with a surround system, and also to balance all the elements and test the results on the system to create a more immersive soundscape for the audience.

I also ran into an issue I hadn’t anticipated: the space was darker than the room I had been prototyping in. This was good in that the low light enhanced the contrast of the projections, but the webcam I was using as a sensor relied on visible light to capture the scene. To tackle this I made the TD network and blob tracking more sensitive, but then found that when bright elements appeared on the projectors, other objects in the room triggered blob detection. The result was a kind of feedback loop in which the projected light itself triggered sound and visuals. This is an interesting concept in itself, and with more time I would like to explore it creatively, tweaking parameters to create new forms of light-led generative visuals and sounds. In the context of the piece as I intended to present it, however, it meant the audience interaction was not working as intended. I had two choices:

  1. Show the piece with ambient light on in the room, giving lower-contrast projections but responsive audience interactivity
  2. Show the piece with no ambient light, giving the highest-contrast projections but less responsive audience interactivity

In the end I showed it with the lights off, but also demonstrated the interactivity to those viewing it after the performance by turning the lights on.

I could solve this issue in the future by using an infrared camera and lighting the space with infrared light, so that visible light does not interfere with this aspect of the system.

Despite the issues the demonstration went ahead and was largely a success. The performance lasted approximately 12 minutes (determined by the pre-arranged scenes sequenced in advance). For further showings I would consider a similar or longer piece, with longer ambient sections to help ground the audience in the experience.

Conclusion

To sum up, although some key elements of the original proposal were not achieved within the timeline, the prototype is largely a success. The system is ready for public showing and performance, but requires further testing to measure the level of awe it elicits in an audience. Producing an immersive experience of awe requires many components; this piece has gone a long way towards developing them, and useful learning has taken place with clear next steps in each area.

One significant limitation of this piece is distribution: because of its reliance on Ableton and TD, it cannot be packaged as a single app, delivered digitally and run on other computers. Any computer running the system needs an Ableton Live 11 Suite licence and at least a TouchDesigner TouchPlayer licence, plus some manual setting up of the network, OSC and sensors.

The system is essentially a large piece of creative code with many variables. In its current form it is capable of a wide variety of audio and visual outputs and configurations, and it carries the potential for further modification and for exposing parameters to arrangement, performers, the audience, or a mixture of these.

This was a huge technical and artistic undertaking in the timeframe, the learning and work for which I completed independently. A suitable next step then might be to seek collaborators in other individuals and/or organisations to distribute work across teams in future projects.

I can see the potential for a wide range of applications of this system, some examples being:

  • AV instrument installation
  • Dance performance
  • Science and mathematics exploration focusing on geometry
  • Live AV performance for entertainment
  • Live-streamed AV performances
  • Music video creation
  • Visuals creation and rendering for VJs
  • A tool for generative art

I plan to continue working with this system, evolving it for different use cases. It is at a stage where public shows can be delivered with it, which I plan to do. By making video recordings of these shows, and sharing clips socially, a portfolio can be assembled to show people and institutions what I am capable of as a creative coder and artist.

Overall I feel that the artefact produced is innovative: it was designed around in-depth research and uses a range of technologies to create an audiovisual experience, built on a storyline and themes, that is recognisable but never identical for the audience, whether the system runs in performance mode or installation mode.


References

Shiota, M., Keltner, D. & Steiner, A. (2007). The nature of awe: Elicitors, appraisals, and effects on self-concept. Cognition and Emotion, 21. https://doi.org/10.1080/02699930600923668

Keltner, D. & Haidt, J. (2003). Approaching awe, a moral, spiritual, and aesthetic emotion. Cognition and Emotion. https://doi.org/10.1080/02699930302297

Powell, J. (2010). How Music Works: The Science and Psychology of Beautiful Sounds, from Beethoven to the Beatles and Beyond. New York: Little, Brown and Company, p. 121. ISBN 978-0-316-09830-4.

Allen, S. (2018). The Science of Awe. A white paper prepared for the John Templeton Foundation by the Greater Good Science Center at UC Berkeley.

Minnix, W. (2016). The Mystical Pentatonic Scale and Ancient Instruments, Part I: Bone Flutes. https://www.ancient-origins.net/artifacts-ancient-technology/mystical-pentatonic-scale-and-ancient-instruments-part-i-bone-flutes-020826 (accessed 01/09/2022).

Emmer, M. (2005). "Without mathematical structures it is not possible to understand nature: mathematics is the language of nature."

Lambert, N. & Phillips, M. (2012). Introduction: Fulldome. Digital Creativity (Exeter), 23(1), 1-4.

Bibliography

Five Notes To Rule Them All: The Power of the Pentatonic Scale. https://www.percussionplay.dk/five-notes-to-rule-them-all/ (accessed 01/09/2022).

Breedlove, J. L., St-Yves, G., Olman, C. A. & Naselaris, T. (2020). Generative Feedback Explains Distinct Brain Activity Codes for Seen and Mental Images. Current Biology, 30(12).

Alexander-Adams, S. Creating Generative Visuals with Complex Systems. https://www.youtube.com/watch?v=VBzIPLh-ECg

Reaction–diffusion system. https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system (accessed 01/09/2022).

Patterns in nature. https://en.wikipedia.org/wiki/Patterns_in_nature (accessed 01/09/2022).

Oprean, D. (2015). Understanding the immersive experience: Examining the influence of visual immersiveness and interactivity on spatial experiences and understanding. University of Missouri – Columbia.

Nakevska, M., van der Sanden, A., Funk, M., Hu, J. & Rauterberg, M. (2017). Interactive storytelling in a mixed reality environment: The effects of interactivity on user experiences. Entertainment Computing, 21.