When we listen to sounds presented over headphones, we normally perceive the source of the sound as coming from a location inside our head. However, the illusion of a sound source located outside the head can be generated if the sound is first filtered in a manner that mimics the normal filtering of the outer ear. This illusory sound field is referred to as 'virtual auditory space' (VAS).
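The filtering described above can be sketched as a simple convolution: each ear's outer-ear filtering for a given direction is captured by a head-related impulse response (HRIR), and convolving a mono signal with the left- and right-ear HRIRs yields the headphone signals. This is an illustrative sketch only; the impulse responses below are random stand-ins, not measured data.

```python
import numpy as np

def render_vas(mono, hrir_left, hrir_right):
    """Place a mono sound at a virtual location by convolving it with
    the left- and right-ear head-related impulse responses (HRIRs),
    the time-domain form of the outer-ear filtering."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # 2 x N stereo signal for headphones

# Toy stand-ins: a 100 ms noise burst and illustrative 64-tap "HRIRs"
rng = np.random.default_rng(0)
burst = rng.standard_normal(4410)              # 100 ms at 44.1 kHz
hrir_l = rng.standard_normal(64) * np.hanning(64)
hrir_r = rng.standard_normal(64) * np.hanning(64)
stereo = render_vas(burst, hrir_l, hrir_r)
```

In practice the HRIRs would come from in-ear recordings for the listener and direction being simulated; everything else about the rendering is just this convolution.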
One of the objectives of our psychophysical work is to determine the acoustical basis of the illusion of VAS. Current simulations of auditory space vary in their fidelity between listeners. This research aims to uncover the acoustical basis of these individual differences and allow the realization of a more generalized simulation.
In determining the efficacy of a three-dimensional audio display, an objective measure of performance is required. The term fidelity is used here with specific reference to the localizability of an auditory target: i.e., the accuracy of target localization afforded by the three-dimensional audio display.
An important area of work in this laboratory is determining the fidelity of the VAS generated for different listeners. Each listener's localization accuracy for short noise bursts is first tested at over 400 locations in free space. This is then compared with their localization accuracy for virtual auditory space stimuli generated using their own head-related transfer functions (HRTFs). In general, we have found that localization performance under these two conditions is almost identical.
We have also been examining how the spatial separation of competing sounds affects the ability of one sound to mask or obscure the other. Presenting stimuli in VAS using a headphone delivery system provides precise control over all of the monaural and binaural cues utilized by the auditory system. We have presented broadband sounds in VAS and shown that the unmasking produced by placing the sources of the signal and the target in different locations in space is due to a number of different cues.
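Two of the binaural cues that such experiments manipulate are the interaural time difference (ITD) and the interaural level difference (ILD). A minimal sketch of how these can be estimated from a stereo signal, assuming a hypothetical delayed-tone stimulus rather than any of the lab's actual stimuli:

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate two classical binaural cues from a stereo signal:
    ITD from the lag of the cross-correlation peak (in seconds),
    ILD as the broadband interaural level ratio in dB."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples; sign = which ear leads
    itd = lag / fs
    ild = 10.0 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd, ild

# Toy stimulus: a 500 Hz tone reaching the right ear 20 samples late
fs = 44100
t = np.arange(1024)
sig = np.sin(2 * np.pi * 500 * t / fs)
delay = 20                                     # ~0.45 ms at 44.1 kHz
left = np.concatenate([sig, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), sig])
itd, ild = binaural_cues(left, right, fs)
```

Because the two channels here carry identical energy, the ILD comes out near 0 dB while the ITD reflects the imposed 20-sample delay.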
We are also interested in using VAS to study the monaural and binaural contributions to our perception of auditory motion. A functional description of the HRTFs is obtained using the Karhunen-Loève transform (KLT). The weights of the most significant eigenfunctions are then interpolated using spherical spline techniques. VAS can then be constructed for any arbitrary location in virtual space, not just those locations for which the HRTF recordings have been made.
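The KLT step can be illustrated with a low-rank decomposition: each measured HRTF is expressed as a weighted sum of a small number of eigenfunctions, so only a short weight vector per direction needs to be interpolated. The sketch below uses random stand-in spectra and an SVD-based KLT; the spherical-spline interpolation of the weights across direction is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_locations, n_freqs = 400, 128
hrtfs = rng.standard_normal((n_locations, n_freqs))  # stand-in magnitude spectra

# Karhunen-Loeve transform via SVD of the mean-centred ensemble
mean = hrtfs.mean(axis=0)
centered = hrtfs - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 16                         # number of retained eigenfunctions
basis = Vt[:k]                 # k x n_freqs eigenfunctions
weights = centered @ basis.T   # one k-element weight vector per location

# Any HRTF is then approximated from its weights alone
recon = weights @ basis + mean
err = np.linalg.norm(recon - hrtfs) / np.linalg.norm(hrtfs)
```

Interpolating the 16 weights (rather than 128 spectral bins) across the sphere is what makes synthesis at arbitrary, unmeasured directions tractable.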
This work is currently examining
We have also been using neural networks to examine the efficacy of localization information in the transformed inputs to the auditory system. These transformations are designed to model the effects of neural encoding of the transfer functions of the outer ear.
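As a schematic of this approach (not the lab's actual model), a small feed-forward network can be trained to map transformed spectral inputs to discrete source-direction classes; the data, layer sizes, and training regime below are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, c = 200, 32, 8                       # samples, input features, direction classes
X = rng.standard_normal((n, d))            # stand-in "neurally encoded" spectra
y = rng.integers(0, c, n)                  # stand-in direction labels

W1 = rng.standard_normal((d, 64)) * 0.1    # hidden-layer weights
W2 = rng.standard_normal((64, c)) * 0.1    # output-layer weights

def forward(X):
    h = np.maximum(X @ W1, 0.0)            # ReLU hidden units
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

def loss():
    _, p = forward(X)
    return -np.log(p[np.arange(n), y]).mean()

initial = loss()
for _ in range(200):                       # plain batch gradient descent
    h, p = forward(X)
    g = p.copy()
    g[np.arange(n), y] -= 1.0              # softmax cross-entropy gradient
    g /= n
    dh = (g @ W2.T) * (h > 0)              # backprop through ReLU
    W2 -= 0.1 * (h.T @ g)
    W1 -= 0.1 * (X.T @ dh)
final = loss()
```

The interest in such networks here is less the classifier itself than how localization accuracy degrades as the input transformations discard or distort outer-ear spectral detail.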
The models reflect a number of physiologically and psychophysically informed processes which include
The medium-term aim of this project is to convert the network to an analyzer residing on a VLSI chip, allowing the construction of an anthropomorphic robot head capable of localizing a sound to the same level of accuracy as a human.
Much of the work described above is ongoing, with some data having already been published in preliminary form (see reference list).