Prototype Reflective Commentary
Further Evaluation
In my previous post detailing the immersive prototype, I included some evaluation along the way. Here I evaluate the process and the piece in more depth.
Through my meandering learning and experiments, some common threads emerged into what became an immersive prototype. I am fairly satisfied with the process as a whole, but by nature I enjoy having more time for further experimentation, idea refinement and product iteration. I also like to move beyond prototyping into user testing and product development.
Through the process, I was learning some new applications and techniques, most notably Unity, which is the first 3D game engine I have used. As a result, the learning curve was predictably steep at the start, with lots of questions arising, each sending me to a combination of the official documentation, web searches and online videos. Within this project I feel I got over the initial hump of that learning curve. I now feel a lot more confident and empowered to work within Unity, as well as with comparable tools such as Unreal… my mind is firing with new ideas and possibilities for this category of software. Some of the obvious affordances are the ability to create non-linear experiences, complex interactions between objects and users, and multiplayer for shared experiences.
As a result of this exploratory approach, I didn’t do much technical planning ahead, because I didn’t know how fast I would progress or what the potential of the software was. Any future work with the Unity engine will benefit from these experiences, so I will be in a position to plan more, defining clearer artistic and technical goals to achieve in a project.
Photogrammetry outcomes
My opinion on this aspect of the project is mixed. I am disappointed with the resulting model of the room, but it can easily be improved on in a future iteration. I feel I didn’t make the best use of the full-frame camera; given another session with it, I could vastly improve the results simply by taking many more photos, especially focussing on individual objects and making sure to capture them from all angles. Further to this, more time spent aligning the images in Metashape would give a better starting point for model creation. I would also send the images to the PhotoCatch app/service in parallel to compare the results, and consider paying a small fee for the model download.
Audio outcomes
Audio and sound design are very important to me. Through this process, I explored some of Unity’s audio features, and there is a lot of potential within the engine for working with audio. Very complex audio scenes can be built up by mixing sources, altering sound settings, and using spatialisation effects and reverb zones. Further to this, I came across FMOD, a proprietary audio middleware that can replace Unity’s built-in audio system and appears to offer quite a few advantages, enabling faster workflows and more flexible sound handling. For future audio-heavy projects, I will be sure to consider using FMOD.
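To give a flavour of what this looks like in practice, here is a minimal sketch (not the actual scripts from this prototype) of a Unity component that configures a looping, spatialised audio source and a reverb zone; the clip and the parameter values are illustrative placeholders.

```csharp
using UnityEngine;

// Minimal sketch: configures a looping, fully spatialised audio source
// and a surrounding reverb zone. Values are illustrative placeholders,
// not the settings used in the actual prototype.
[RequireComponent(typeof(AudioSource))]
public class SpatialSoundSetup : MonoBehaviour
{
    public AudioClip clip;           // assign a recording in the Inspector
    public float maxDistance = 8f;   // how far the sound carries

    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.spatialBlend = 1f;                          // 1 = fully 3D / directional
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural distance falloff
        source.maxDistance = maxDistance;
        source.Play();

        // A reverb zone on the same object colours nearby sounds
        // to roughly match the acoustics of the scanned room.
        var reverb = gameObject.AddComponent<AudioReverbZone>();
        reverb.reverbPreset = AudioReverbPreset.Livingroom;
        reverb.minDistance = 2f;
        reverb.maxDistance = maxDistance;
    }
}
```

In the actual piece these settings were tuned per recording in the editor; the point of the sketch is just how little code sits between an imported clip and a positioned, reverberant sound in the space.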
Concept outcomes
Telepresence with archiving
Of the two main themes I explored, this prototype best succeeds as a kind of archival time capsule of my living room, as well as presenting a recording of me that shows some of my character and thoughts at the time. It also succeeds in presenting some of the physical objects that were part of my life then. I can imagine this experience being of value to my family in the future, especially my children. I would also like to share it with my relatives who live in different parts of the world, so they can get insights into some aspects of me and my life.
Telepresence for artist and audience
This prototype succeeds as a way for me as an artist to present and share things I’ve created, such as audio compositions. However, what I am still really interested in is live telepresence between artist and audience, and this project didn’t achieve that. It would not take too much effort to add this functionality. My ideas here are to bring live streams into the space; these could be 2D, such as those from Mixcloud, or 360°, which is possible with YouTube. Further to this, multiplayer functionality could be added so that more than one person can be in the space together, greatly enhancing the possibilities of the experience.
I believe this will become very relevant as social media evolves towards interactivity and self-expression within your own ‘home-world’. This line of thought builds on my previous research into social VR.
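As a rough sketch of the 2D live-stream idea above, the component below plays video from a URL onto whatever surface it is attached to (for example a quad acting as a screen), using Unity’s built-in VideoPlayer. The URL is a placeholder; real services such as Mixcloud or YouTube would need a direct stream URL or a dedicated plugin, which this sketch glosses over.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical sketch: plays a 2D video stream onto the renderer this
// component is attached to (e.g. a quad acting as a screen in the room).
// The URL is a placeholder; real streaming services generally require
// a direct stream URL or a platform-specific plugin.
public class StreamScreen : MonoBehaviour
{
    public string streamUrl = "https://example.com/live/stream.mp4"; // placeholder

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.source = VideoSource.Url;
        player.url = streamUrl;
        player.renderMode = VideoRenderMode.MaterialOverride;   // draw onto this object's material
        player.targetMaterialRenderer = GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
        player.isLooping = false;                                // a live stream has no loop point
        player.Play();
    }
}
```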
Immersion outcomes
This piece leans on the inherent affordances of VR to achieve immersion and presence: stereoscopic vision, a wide field of view, six degrees of freedom and so on. Combined with the photogrammetry room scan, this is, I think, quite effective and largely a success. When my partner entered the experience she was taken by the feeling of being in the room, and when she took off the headset she was surprised to find she was in a different place. This was a useful observation exercise in this respect, one I expect to repeat with further audience testing.
Using different audio sources and settings creates a varied aural experience in the space… directional, ambisonic and stereo sources are each used where they best suit a given recording, which I think helps with immersion. The ambient background sound that loops throughout gives a subtle realism to the space and eliminates the cold silence of an aurally bare digital environment.
I think the experience would benefit from being able to navigate to a range of different spaces where different stories can be told. These could easily be created as new Unity scenes.
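As a sketch of how navigating between such spaces might work, assuming each space is its own scene added to the project’s Build Settings (the scene name below is hypothetical):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: loads another space (scene) when triggered, e.g. by a UI
// button or an interaction in the room. The scene name is hypothetical
// and would need to be added to the project's Build Settings.
public class SpaceNavigator : MonoBehaviour
{
    public string targetScene = "StudioSpace"; // hypothetical scene name

    public void GoToSpace()
    {
        // Async load avoids a long frame hitch, which is especially
        // noticeable (and nausea-inducing) in VR.
        SceneManager.LoadSceneAsync(targetScene, LoadSceneMode.Single);
    }
}
```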
Prototype next steps
Some of my ideas for progressing with this piece are listed here in brief:
- Improved user onboarding and offboarding
- Multiplayer, to:
- make it a social experience
- allow for personal tours of the space
- allow for artistic collaboration
- Model and texture optimisations for an improved realism aesthetic and better performance
- More dramatic transition effects
- for example, I’d like to explode or fade out the room when the user selects an experience, revealing an entirely different space where that part of the experience plays out, before returning to the main room (a rough fade sketch follows after this list)
- Extra rooms and spaces
- A narrated guided tour around these
- a combination of photogrammetry captured spaces and synthetic creations
- Higher quality photogrammetry
- Playback of live performance recordings (audiovisual)
- This could also be 360° video recordings of performances that the user can experience with 3 degrees of freedom, or 2D streams on canvases in and around the space.
- Better sound design
- Ambient background music for each item experience that fades in and out
- Distribution
- Unity can be adapted to build for many devices
- This experience was built to run on Oculus Quest 1, which is essentially an Android smartphone, so it should be very performant on a wide range of devices
- WebXR deployment
- using the Mozilla Unity WebXR exporter
- More rapid development through pre-made assets
- There are many assets available on the Unity Asset Store, often at low cost. These can greatly speed up production, and I will seek to take this approach where possible in the future.
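On the transition idea above, one rough approach is to animate the alpha of the room’s materials before revealing the next space. The sketch below assumes the scanned room’s materials use a transparent shader exposing a standard “_Color” property, which may not hold for the actual prototype; a per-shader or post-processing approach might be needed instead.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the "fade out the room" transition idea above. It lerps the
// alpha of every material under a root object, then hides it. Assumes
// the materials use a transparent shader exposing a "_Color" property.
public class RoomFade : MonoBehaviour
{
    public GameObject roomRoot;  // the scanned room's parent object
    public float duration = 2f;  // fade time in seconds

    public void FadeOut()
    {
        StartCoroutine(Fade(1f, 0f));
    }

    IEnumerator Fade(float from, float to)
    {
        var renderers = roomRoot.GetComponentsInChildren<Renderer>();
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float alpha = Mathf.Lerp(from, to, t / duration);
            foreach (var r in renderers)
            {
                foreach (var m in r.materials)
                {
                    if (m.HasProperty("_Color"))
                    {
                        var c = m.color;
                        c.a = alpha;
                        m.color = c;
                    }
                }
            }
            yield return null;
        }
        roomRoot.SetActive(to > 0f); // hide the room once fully faded out
    }
}
```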
Meta-what?
During this project, the word metaverse has moved from being a phrase used by those researching and developing immersive media into the mainstream, via the famous (or infamous?) Facebook/Meta rebranding launch. With this, people are beginning to think about the new affordances of immersive media, which is an important stage in the evolution of new technologies…
This is confirmed by headlines such as ‘How the metaverse won Christmas’ from news site CNBC, reporting that the Oculus app was number 1 on the Apple App Store over Christmas 2021, which suggests that sales of VR headsets (mostly the more affordable Oculus Quest range) are at a high.
Trying to judge just when users may shift to creating their own ‘user generated’ 3D worlds is tricky, but I find it helpful to consider the following:
- Everett M. “Ev” Rogers’s innovation adoption lifecycle bell curve
- The IT advisory firm Gartner’s hype cycle phases
- Tom Graves’s bringing together of these, with the addition of Simon Wardley’s concept of “Pioneers, settlers and town planners”, which I find particularly insightful and useful
Based on the above, I would place VR somewhere in a transition between the ‘early adopters’ / ‘settlers’ phase and the ‘early majority’ / ‘town planners’ phase. As Graves writes:
For the central sections of the lifecycle – Early-Majority / Town-Planners, and Late-Majority / Exploiters – the returns should generally exceed the direct costs: in other words, those activities are likely to be ‘profitable’, in the mainstream sense of the term.
Tom Graves
This is useful to me as an immersive artist for a few key reasons, most significantly return on investment, marketing and distribution. In other words, it is now much safer to design and build products and experiences in this space.
It is also important for artists wanting to reach a wider audience in the short term, which is a key factor for my current work.
In my previous article on the metaverse, I detailed some of the protocols that I thought would be important in building the future of the tech-connected world. Since that article, OpenXR has been adopted much more widely, and there are now plenty of practical examples and tutorials on how to use it on various platforms… Unity included. In that article I also talked about WebXR, and I have some clear next steps for converting this prototype into a fully functioning WebXR version, which would make it much more accessible, as most smartphones can run WebXR content in compatible browsers.
Another important area to consider as development moves forward is the ethics of any immersive piece. I will always aim to develop experiences in line with the thinking outlined in my prior article on Ethics in immersive storytelling.
Summary
I could go into much finer detail evaluating this piece and planning possible next steps, but for now I will leave it as is. On the whole this was a great learning experience and has grounded me in a position where I am confident designing and building immersive projects with 3D elements, interactivity and mixed media.