Edinburgh Napier University
School of Computing
Mixed reality allows listeners to choose whether they wish to augment or replace their auditory environment in order to create a truly unique soundscape. They can use this to hear more than has ever been possible by making use of the microphones built into the ever-expanding number of Internet of Things devices. Alternatively, end users can choose to transform their acoustic experience through masking, so that they hear much less. What distinguishes this from merely isolating oneself with passive headphones is context: listeners' own actions, as well as the events taking place around them, can alter the audio. Any form of real-time data captured by sensors can be used to affect auditory content. Applications range from simple entertainment and efficiency improvements through to life-saving actions, whilst still allowing the pre-existing auditory environment to be perceived in conjunction with all of the other available senses. The sonic aspects of mixed reality can facilitate not only superhuman abilities but also respite and safety, and, most importantly, it is a principle that we are all already familiar with.
A video game approach can be adopted for the design of mixed reality audio, ensuring that elements are either figure or ground. A figure represents something that can be interacted with, whereas ground denotes an acoustic backdrop that provides context. Foreground (figure) sounds are actively attended to, whereas background (ground) sounds are typically ignored. Midground sound events are often omitted because they are easily confused with figure, and gamers become frustrated when they cannot interact with them fully: there is an expectation that everything clearly audible can be engaged with. A similar assumption is made by those experiencing mixed reality. A well-designed system leaves sonic space for the pre-existing physical auditory world to occupy the midground, so that users can interpret its elements as foreground or background according to need.
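As a minimal sketch of this layering principle (a hypothetical illustration, not part of any existing system; the layer names and gain values are assumptions), sound events might be tagged as figure or ground and mixed so that designed audio leaves the midground free for the physical world:

```python
from enum import Enum

class Layer(Enum):
    FIGURE = "figure"        # interactive foreground events
    GROUND = "ground"        # ambient backdrop providing context
    MIDGROUND = "midground"  # deliberately avoided: easily mistaken for figure

def mix_gain(layer: Layer, attending: bool) -> float:
    """Return a playback gain for a designed sound event.

    Figure sounds stay prominent so they invite interaction; ground
    sounds are attenuated so they read as backdrop. Midground events
    are silenced, leaving sonic space for the real environment.
    Gain values are illustrative only.
    """
    if layer is Layer.FIGURE:
        return 1.0 if attending else 0.8
    if layer is Layer.GROUND:
        return 0.3
    return 0.0
```

The design choice mirrored here is that the system never competes with the physical soundscape in the midground: anything it renders is either clearly interactive or clearly ambient.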
This is a self-funded project.
A first degree (at least a 2.1), ideally in Interactive Media or a similar subject, with good foundational knowledge of Sound Design and Programming.
English language requirement:
IELTS score must be at least 6.5 (with not less than 6.0 in each of the four components). Other equivalent qualifications will be accepted.
- Experience of fundamental Sound Design
- Competent in Programming
- Knowledge of Psychoacoustics
- Good written and oral communication skills
- Strong motivation, with evidence of independent research skills relevant to the project
- Good time management
- Familiarity with C++ and C#.
Applications from potential part-time students are welcomed.
- Cooke, H., Pike, C., & Healey, P. G. (2019, November). Designing an Interactive and Collaborative Experience in Audio Augmented Reality. In Virtual Reality and Augmented Reality: 16th EuroVR International Conference, EuroVR 2019, Tallinn, Estonia, October 23–25, 2019, Proceedings (Vol. 11883, p. 305). Springer Nature.
- Ekman, I. (2013). On the desire to not kill your players: Rethinking sound in pervasive and mixed reality games. In FDG (pp. 142-149).
- Kern, A. C., & Ellermeier, W. (2020). Audio in VR: Effects of a soundscape and movement-triggered step sounds on presence. Frontiers in Robotics and AI.
- Rogers, K., Ribeiro, G., Wehbe, R. R., Weber, M., & Nacke, L. E. (2018, April). Vanishing importance: studying immersive effects of game audio perception on player experiences in virtual reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
- Rovithis, E., Floros, A., Moustakas, N., Vogklis, K., & Kotsira, L. (2019). Bridging Audio and Augmented Reality towards a New Generation of Serious Audio-Only Games. Electronic Journal of e-Learning, 17(2), 144-156.
- Salselas, I., Penha, R., & Bernardes, G. (2020). Sound design inducing attention in the context of audiovisual immersive environments. Personal and Ubiquitous Computing, 1-12.