Augmented-Mixed Reality Audio for Hearables: Sensing, Control and Rendering

Guadalajara, Jalisco, Mexico

Abstract: Augmented and Mixed Reality (AR/MR) is considered one of the most promising technologies for the future of computing, and audio cues play a crucial role in enhancing realism, social connection, and spatial awareness in AR/MR applications ranging from education and training to gaming, remote work, and virtual social gatherings. In this talk, we will explore the integration of fundamental and advanced signal processing techniques for AR/MR audio. The speaker will draw from a recent feature article in the IEEE Signal Processing Magazine to give researchers and engineers in the signal processing community a clear understanding of the requirements for the next wave of AR/MR. The talk will focus on the major signal processing and machine learning techniques for intelligent sensing, real sound control, and virtual sound rendering for Spatial Augmented Reality Audio (SARA). These techniques are critical to ensuring that audio cues in AR/MR applications are accurate and realistic and that they enhance the overall user experience. The presentation will provide a comprehensive overview of the current state of the art in AR/MR audio and aims to inspire the development of new and innovative applications in this exciting field. By highlighting the challenges and opportunities in AR/MR audio, we hope to encourage further research and development in this area, helping to drive innovation and push the boundaries of what is possible in AR/MR technology.

Reference: R. Gupta, J. J. He, R. Ranjan, W.-S. Gan, F. Klein, C. Schneiderwind, A. Neidhardt, K. Brandenburg, and V. Välimäki, "Augmented/Mixed Reality Audio for Hearables: Sensing, Control and Rendering," feature article in IEEE Signal Processing Magazine, May 2022.

Speaker(s): Prof. Woon-Seng Gan