04 Research


Algorithms, AI and the human-machine

I began thinking in more depth about the algorithms behind generative audiovisual systems. The fact that these are programmed by humans makes them a product of humanity… a kind of machine-augmentation allowing new possibilities for the architect, designer or user.

This human-machine collaboration interests me very much, in part because of the huge impact humans' relationships with technology have had on culture and now the planet, and also because of the unknown changes they will bring. This line of thought leads to science fiction, dystopias, protopias and Afrofuturism.

In thinking about what the algorithm for my system might be, I began to consider which aspects will be controllable by an audience member or a performer. What controls will there be, and what effect will they have on the environment?

I began thinking of the system more in terms of algorithm, performer and audience… which will dominate, and how will the balance play out over time? Are they working together, or is there some kind of battle?

This led me to ponder the concept of agency in the system. Notably, my reading led me to intelligent agents (IAs), a simple and limited form of artificial intelligence. The most basic example often used for an IA is a thermostat: it senses the environment (temperature), has an internal need (a desired temperature), and then takes action to turn the heating on or off (in this example the heaters are the IA's 'actuators') in order to change the environment.
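
To make that sense-decide-act loop concrete, here is a minimal Python sketch (the class, environment dictionary and numbers are invented for illustration, not taken from any particular agent framework):

```python
# A thermostat-style intelligent agent: it senses the environment,
# compares the reading to an internal need, and uses an "actuator"
# (the heating) to push the environment toward that need.

class ThermostatAgent:
    def __init__(self, desired_temp):
        self.desired_temp = desired_temp  # internal need

    def sense(self, environment):
        return environment["temperature"]  # sensor

    def act(self, environment):
        # actuator: switch the heating on if it is too cold, off otherwise
        environment["heating_on"] = self.sense(environment) < self.desired_temp


env = {"temperature": 17.5, "heating_on": False}
agent = ThermostatAgent(desired_temp=20.0)

for _ in range(5):
    agent.act(env)
    # crude stand-in for the real world: heating warms the room, otherwise it cools
    env["temperature"] += 0.5 if env["heating_on"] else -0.5
    print(env)
```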

Going back to my audiovisual instrument concept, I imagined the possibility of the algorithm being made up of several IAs, each with their own desires, sensors and actuators, all contributing to the environment.

Generative music

I began researching generative music composition. This is something I'd studied around 2005, but the internet and AI weren't then what they are now. At that time I created a generative interactive music piece which was largely based on many simple probability choices, allowing the looping piece to deviate and change continually.
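
As a rough illustration of that approach, here is a small Python sketch of a probability-driven loop (the note pool and probabilities are invented for this example, not a reconstruction of the original piece): each repetition mostly keeps the previous bar, but small weighted random choices let it drift continually.

```python
import random

# A probability-driven generative loop: each pass mostly repeats the
# previous bar, but weighted random choices let it change over time.

scale = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

def next_bar(previous_bar):
    bar = []
    for note in previous_bar:
        roll = random.random()
        if roll < 0.6:
            bar.append(note)                  # usually keep the note
        elif roll < 0.9:
            bar.append(random.choice(scale))  # sometimes pick a new one
        else:
            bar.append(None)                  # occasionally rest
    return bar

bar = [60, 64, 67, 72]
for _ in range(8):
    bar = next_bar(bar)
    print(bar)
```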

I looked into AI composition: deep learning algorithms trained on datasets of music. I found the results quite impressive. For example Aiva (Artificial Intelligence Virtual Artist) https://www.aiva.ai/, which specialises in classical composition, was trained on Bach, Beethoven and Mozart. While I find this interesting, I am more drawn to a live, interactive collaboration between human and machine, specifically where the experience enhances, encourages or leads to new levels of human creativity and novelty. Nevertheless, Aiva serves as a good working example of how AI is disrupting creative industries (in this case library music and film score creation).

I discovered several interesting generative music apps for iOS and macOS, in particular the generative music DAW Wotja https://intermorphic.com/wotja/, which does a very good job of creating pleasing, evolving ambient music. I found that listening to its automatic compositions mixed with nature sounds is an excellent way for me to focus in my office. The app features MIDI output and is quite configurable, so it could serve as part of, or the whole of, a generative music system (and is thus a contender for inclusion in my project).

More on AI

This thread of research reminded me of a podcast about the work of artist Kevin Mack, who has created several pieces of VR art, such as Anandala, which feature abstract IA agents that exhibit their own behaviours and language, interacting with and responding to the viewer, who floats around the same virtual environment as them. The playfulness of this appeals to me as an interesting way to experience a literal, embodied relationship with AIs. This is especially relevant in recent times, with AI and machine learning driving so much of our experience.

In terms of an audiovisual system, I can imagine an IA that has a desire for harmony, for example, whereby it always wants to affect the musical and/or visual output of the system to make it more in tune, more symmetrical, and so on. There could be another IA that wants chaos, and this raises interesting possibilities in terms of creating a story over time, perhaps with sections where some IAs get more agency than others, or where the users can steer or feed them to change their agency levels.
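
A minimal sketch of how such competing agents could work, assuming a single shared 'harmony level' stands in for the whole audiovisual state (the names, targets and agency weights here are purely illustrative, not a design decision):

```python
# Two agents with opposing desires nudging one shared value
# (0.0 = chaos, 1.0 = harmony). Each agent's "agency" weight sets how
# strongly its desire is felt, and could be raised or lowered by the
# audience over time.

class Agent:
    def __init__(self, name, target, agency):
        self.name = name
        self.target = target  # the state this agent wants
        self.agency = agency  # how much influence it currently has

    def act(self, state):
        # pull the shared state toward this agent's target, scaled by agency
        return state + self.agency * (self.target - state)


agents = [
    Agent("harmony", target=1.0, agency=0.2),
    Agent("chaos", target=0.0, agency=0.1),
]

state = 0.5
for step in range(10):
    for agent in agents:
        state = agent.act(state)
    # an audience interaction could adjust agent.agency here
    print(f"step {step}: harmony level = {state:.2f}")
```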

I don’t know if I will expose the algorithm to the users of this system as part of the narrative, but this is definitely an option I am interested in exploring, and one which could emerge during production of the project. As mentioned in my introductory blog post, I plan to take an agile approach during production. It really depends on the final algorithm and how it is structured/programmed.

https://voicesofvr.com/798-vr-artist-kevin-mack-architecting-awe-with-vr-native-cooperative-ai-agents/

Neuroscientist Craig Chapman (https://www.cifar.ca/bio/craig-chapman) likes to say that "movements provide a window into deciding and thinking." I learned from going through Mack's experience that what we typically think of as "intelligence" may be based upon how an entity reacts to us based upon our movements within an environment. Moving in an unpredictable way that is reactive to our own movements seems to be a critical threshold in our judgement of the intelligence of artificial agents within virtual environments.

http://www.kevinmackart.com/anandala.html

I also thought about automata: self-operating machines. In researching these I found a long history of human fascination with such mechanical creations, which were often used for entertainment and also for religious ceremonies (in ancient Egypt, for example) (Maspero, 2009). Mechanical systems can be thought of as a sort of physical algorithm, and this makes me think about the Antikythera mechanism: "The Oldest Known Computer is a Mechanism Designed to Calculate the Location of the Sun, Moon, and Planets" (Efstathiou and Efstathiou, 2018).

Like many great discoveries, the Antikythera Mechanism was found by accident. In time, however, analysis using X-ray and other advanced imaging revealed its true nature, and the Antikythera Mechanism is now considered as important for technology and the sciences as the Acropolis is for architecture and the arts. The object is the remains of the earliest known analog computer. We now know that it was an extremely advanced mechanism that could be used to calculate and predict astronomical events. The article shows how researchers from the Aristotle University of Thessaloniki, Greece, used sophisticated imaging tools to gather data and create a working model to test their theories against the recreated mechanism itself.

The story of how this seemingly insignificant 'lump of corroded bronze and wood' was discovered in a shipwreck somewhat reflects my experience of learning about the fundamental building blocks of sound and light… like discovering treasure under one's own nose.

Thoughts

I think that programming more advanced AIs will likely take more time, so I need to keep in mind to what extent the AIs enhance the audience experience. Should they be key to the story, and if so how intelligent do they need to be to get the desired results?

Bibliography

Efstathiou, K. and Efstathiou, M. (2018). "Celestial Gearbox." Mechanical Engineering 140(09): 31-35.

Maspero, G. (2009). Manual of Egyptian Archaeology: A Guide to the Studies of Antiquities in Egypt. BoD. p. 108. ISBN 9783861950967.