08 Research


Controlling the experience (for agency)

Sensors

Q: What are the best physical controllers for changing settings?

I experimented with the depth camera on an iPhone 11 Pro via an app called MusiKraken, which converts phone sensor data into MIDI that can be transmitted over a local network, and which can detect gestures such as the hand opening and closing, tilt, and distance. I found the interface simple but lacking in tactile feedback, and slightly temperamental: the sensor works within an invisible bounding box, and outside of it the controls would often jump to one extreme or the other. This issue was also highlighted by artist Ginger Leigh (Synthestruct) in her excellent workshop on interactive controls for live performance (Leigh 2020).
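One way I might mitigate those jumps is to clamp and smooth incoming control values before they reach any synth parameters. A minimal Python sketch, assuming 7-bit CC values arriving from the phone (the smoothing factor and glitch threshold are my own guesses, not anything built into MusiKraken):

```python
class SmoothedControl:
    """Clamp and low-pass incoming 0-127 CC values to tame sudden jumps."""

    def __init__(self, smoothing=0.2, max_jump=40):
        self.smoothing = smoothing   # 0 = frozen, 1 = no smoothing
        self.max_jump = max_jump     # treat larger single-step jumps as glitches
        self.value = None

    def update(self, cc_value):
        cc_value = max(0, min(127, cc_value))            # clamp to MIDI range
        if self.value is None:
            self.value = float(cc_value)                 # first reading
        elif abs(cc_value - self.value) > self.max_jump:
            return self.value                            # implausible jump: hold
        else:
            # exponential moving average towards the new reading
            self.value += self.smoothing * (cc_value - self.value)
        return self.value


control = SmoothedControl()
for raw in [60, 62, 127, 64, 65]:    # 127 simulates an out-of-range spike
    print(round(control.update(raw), 1))
```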

One limitation of MIDI is its 7-bit (0–127) resolution, which I find good enough for note data and some other controls, but not so good for things like audio-rate operators: the steps become audible, giving a more obviously stepped, digital feel to the control.
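To illustrate the point, mapping 0–127 across even a modest frequency range gives steps of several hertz, which is easily audible on an audio-rate parameter. A quick sketch (the frequency range is just an example):

```python
# How coarse is a 7-bit MIDI CC when mapped to an audio-rate parameter?
low_hz, high_hz = 100.0, 2000.0    # example operator frequency range
steps = 128                        # MIDI CC resolution (0-127)

linear_step = (high_hz - low_hz) / (steps - 1)
print(f"7-bit mapping:  ~{linear_step:.1f} Hz per CC step")   # ~15 Hz jumps

# 14-bit MIDI (two CCs combined) or OSC floats give far finer control
fine_steps = 2 ** 14
print(f"14-bit mapping: ~{(high_hz - low_hz) / (fine_steps - 1):.3f} Hz per step")
```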

I also explored the phone's accelerometer and positional sensors, which can be quite effective once the phone is in someone's hand – their hand becomes the controller and can be positioned wherever they are comfortable while maintaining control.

I also used MIDI controllers with knobs, faders and touch strips, and I like the tactile feedback and precise control these offer.

Other sensor ideas I have had include head-mounted microphones to sense audience breath. This data could be used creatively to affect the immersive experience.
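If I did capture breath with a head-mounted microphone, the most usable signal would probably be a slow amplitude envelope rather than raw audio. A rough sketch of an RMS envelope follower over incoming audio blocks (the block size, smoothing amount and fake input are all assumptions):

```python
import numpy as np

def breath_envelope(blocks, smoothing=0.1):
    """Convert blocks of mic samples into a slow 0-1 'breath intensity' signal."""
    level = 0.0
    envelope = []
    for block in blocks:
        rms = float(np.sqrt(np.mean(block ** 2)))   # energy of this block
        level += smoothing * (rms - level)          # smooth out short-term detail
        envelope.append(level)
    return envelope

# Fake input: a swell of 'breath' noise, 20 blocks of 512 samples each
rng = np.random.default_rng(0)
blocks = [rng.normal(0, amp, 512) for amp in np.linspace(0.01, 0.3, 20)]
print([round(v, 3) for v in breath_envelope(blocks)])
```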

I looked at NFC chips, which can be bought as tags or credit-card-sized PVC cards. They can be written to and then read in order to trigger certain actions on a device. These could be a good way to have an audience move around a physical space and interact with an experience: they could use their phones to scan the tags, or take the tags to fixed scanners. This could offer the audience a treasure-hunt-like exploratory experience.
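A sketch of how scanned tags might map to actions; the tag IDs and action names here are hypothetical, and the scan itself would come from whatever reader hardware or app is actually used:

```python
# Map scanned NFC tag IDs to actions in the experience.
TAG_ACTIONS = {
    "04:a2:19:5f": "trigger_scene_forest",
    "04:7b:c3:11": "trigger_scene_ocean",
    "04:ee:02:9d": "reveal_hidden_audio_layer",
}

def handle_scan(uid: str) -> str:
    action = TAG_ACTIONS.get(uid.lower())
    if action is None:
        return "unknown_tag"    # e.g. play a gentle 'try again' cue
    return action               # forward to the show-control system

print(handle_scan("04:A2:19:5F"))   # -> trigger_scene_forest
```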

Cameras offer a simple way to collect live data from an environment. With basic filtering, the shapes of an audience can be extracted and used to interact with virtual elements such as particles.
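A minimal sketch of this kind of filtering using OpenCV background subtraction, assuming a webcam at index 0 and the opencv-python package (not something built into the piece yet):

```python
import cv2

# Extract rough audience silhouettes from a live camera feed.
capture = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # moving bodies become white pixels
    mask = cv2.medianBlur(mask, 5)      # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # contour points could be sent on to drive particles in the visual system
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
    cv2.imshow("silhouettes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```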

Kinect depth sensors are useful for sensing objects and people within specified ranges, and can gather audience silhouettes and positions for tracking and processing.

“The Kinect v2 can physically sense depth at 8 meters. So Yes you can sense objects at 5M. However 4.5M is where you can reliably track body joints. Anything beyond 4.5 meters your body tracking yields inconsistent results. Objects are still sensed, but you have to write custom code to do anything useful, there’s nothing in the framework sdk that automatically detects people or objects other than the raw depth values.” – Andy Link 2014

“The field of view for the color camera is 84.1 degrees horizontally and 53.8 degrees vertically. For the depth camera it’s 70.6 degrees horizontally and 60 degrees vertically.” – Nikolaos Patsiouras 2018

https://social.msdn.microsoft.com/Forums/en-US/c95d3e40-6ed6-47a1-a206-5ff26c889c29/kinect-v2-maximum-range?forum=kinectv2sdk
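In practice this means thresholding each depth frame to the reliable range before doing anything with it. A small sketch, assuming the depth frame arrives as a NumPy array of millimetre values at the Kinect v2's 512×424 resolution:

```python
import numpy as np

def silhouette_mask(depth_mm: np.ndarray, near_mm=500, far_mm=4500) -> np.ndarray:
    """Keep only pixels within the reliable body-tracking range (~0.5-4.5 m)."""
    return (depth_mm > near_mm) & (depth_mm < far_mm)

# Fake 424x512 depth frame with a 'person' at roughly 2 m
depth = np.full((424, 512), 8000, dtype=np.uint16)   # background at the 8 m limit
depth[150:350, 200:300] = 2000                       # blob at 2 m
mask = silhouette_mask(depth)
print(mask.sum(), "pixels inside the tracking range")
```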

Generally, LiDAR can capture depth over greater distances than infrared depth sensors.

Touchscreen devices offer an excellent option for custom control sets. Using apps such as Lemur or TouchOSC (which I have used previously), virtual sliders, buttons, faders and knobs can be assembled in any configuration and mapped to control anything via OSC.
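Receiving those OSC messages in software is straightforward. A minimal sketch using the python-osc package, assuming TouchOSC is sending to port 8000 and that the layout has a fader at the address /1/fader1 (the address depends on the layout used):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_fader(address, value):
    # value arrives as a float 0.0-1.0, far finer than 7-bit MIDI
    print(f"{address} -> {value:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", on_fader)     # address from the TouchOSC layout
dispatcher.set_default_handler(lambda addr, *args: print(addr, args))

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()                    # listen for the phone on the local network
```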

Which sensors I ultimately choose will depend on whether I am designing for the performer or the audience, and will be informed by the aims and story of the piece.

Bibliography

Leigh, Ginger (2020) Creating Interactive Controls for Live Performance. 10 November 2020. https://youtu.be/AM60OPmrUG0