Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Compensating for distance compression in audiovisual virtual environments using incongruence

Finnegan, Daniel J. ORCID: https://orcid.org/0000-0003-1169-2842, O'Neill, Eamonn and Proulx, Michael J. 2016. Compensating for distance compression in audiovisual virtual environments using incongruence. Presented at: 2016 CHI Conference on Human Factors in Computing Systems, San Jose, California, USA, 7-12 May 2016. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York: ACM, pp. 200-212. 10.1145/2858036.2858065

PDF (chi-2016.pdf) - Accepted Post-Print Version, 2MB

Abstract

A key requirement for a sense of presence in Virtual Environments (VEs) is for a user to perceive space as naturally as possible. One critical aspect is distance perception. When judging distances, compression is a phenomenon where humans tend to underestimate the distance between themselves and target objects (termed egocentric or absolute compression), and between other objects (exocentric or relative compression). Results of studies in virtual worlds rendered through head mounted displays are striking, demonstrating significant distance compression error. Distance compression is a multisensory phenomenon, where both audio and visual stimuli are often compressed with respect to their distances from the observer. In this paper, we propose and test a method for reducing crossmodal distance compression in VEs. We report an empirical evaluation of our method via a study of 3D spatial perception within a virtual reality (VR) head mounted display. Applying our method resulted in more accurate distance perception in a VE at longer range, and suggests a modification that could adaptively compensate for distance compression at both shorter and longer ranges. Our results have a significant and intriguing implication for designers of VEs: an incongruent audiovisual display, i.e. where the audio and visual information is intentionally misaligned, may lead to better spatial perception of a virtual scene.

Item Type: Conference or Workshop Item (Paper)
Status: Published
Schools: Computer Science & Informatics
Publisher: ACM
ISBN: 978-1-4503-3362-7
Funders: EPSRC
Date of First Compliant Deposit: 15 July 2019
Date of Acceptance: 7 May 2016
Last Modified: 26 Oct 2022 07:11
URI: https://orca.cardiff.ac.uk/id/eprint/124223

Citation Data

Cited 25 times in Scopus.
