Dr Amy Beeston
Postdoctoral Research Fellow
0113 343 4560
School of Music, Room 1.13
BMus (Hons), MMus, PhD, MASA, MCMI
My work is highly interdisciplinary and combines acoustics, psychophysics and computer science to better understand human sound perception, and to develop robust machine listeners for real-world tasks.
My work has always combined music and science. I completed my undergraduate studies in Music Technology (University of Edinburgh) and a master's in Sonology (Royal Conservatory, The Hague). My PhD was based in Computer Science (University of Sheffield) and examined compensation for reverberation in human listeners and machines.
While based in the Speech and Hearing Research Group at Sheffield, I worked on a number of audio projects in the fields of computer-assisted language learning and medical computing. I joined the University of Leeds in June 2017 to work on a project with Alinka Greasley and Harriet Crook which explores the music listening behaviour of people with hearing impairments.
- human listening
- machine listening
- room acoustics
- live settings
(2017). Perception of isolated chords: Examining frequency of occurrence, instrumental timbre, acoustic descriptors and musical training. Psychology of Music.
DOI: 10.1177/0305735617720834, Repository URL: http://eprints.whiterose.ac.uk/118163/
(2014). Perceptual compensation for the effects of reverberation on consonant identification: Evidence from studies with monaural stimuli. Journal of the Acoustical Society of America. 136(6), 3072-3084.
‘Automatic assessment of English learner pronunciation using discriminative classifiers’. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5351-5355.
‘Recognition of reverberant speech by missing data imputation and NMF feature enhancement’. In REVERB Challenge Workshop (in conjunction with ICASSP 2014 and HSCMA 2014), 10/05/2014.
‘Consonant confusions provide further evidence that time-reversed rooms disturb compensation for reverberation’. In Proceedings of Forum Acusticum, January 2014.
‘Groundwork for a resource in computational hearing for extended string techniques’. In Proceedings of the 10th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseilles, France, 15/10/2013 - 18/10/2013, 662-669.
‘Perceptual compensation for the effects of reverberation on consonant identification: A comparison of human and machine performance’. In Proceedings of the 13th Annual Conference of the International Speech Communication Association (INTERSPEECH 2012), 2, 1714-1717.
‘Perceptual compensation for effects of reverberation in speech identification: A computer model based on auditory efferent processing’. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010), 2462-2465.
Research Projects & Grants
I am currently working on the AHRC-funded Hearing Aids for Music project.
Research Centres & Groups
- Visiting Academic, Department of Computer Science, University of Sheffield
- Member of the Acoustical Society of America (MASA)
- Member of the Chartered Management Institute (MCMI)
- Working group member of the Yorkshire Sound Women Network
Title: Perceptual compensation for reverberation in human listeners and machines
This thesis explores compensation for reverberation in human listeners and machines. Late reverberation is typically understood as a distortion which degrades intelligibility. Recent research, however, shows that late reverberation is not always detrimental to human speech perception. At times, prolonged exposure to reverberation can provide a helpful acoustic context which improves identification of reverberant speech sounds. The physiology underpinning our robustness to reverberation has not yet been elucidated, but is speculated in this thesis to include efferent processes which have previously been shown to improve discrimination of noisy speech. These efferent pathways descend from higher auditory centres, effectively recalibrating the encoding of sound in the cochlea. Moreover, this thesis proposes that efferent-inspired computational models based on psychoacoustic principles may also improve performance for machine listening systems in reverberant environments.
A candidate model for perceptual compensation for reverberation is proposed in which efferent suppression derives from the level of reverberation detected in the simulated auditory nerve response. The model simulates human performance in a phoneme-continuum identification task under a range of reverberant conditions, where a synthetically controlled test-word and its surrounding context phrase are independently reverberated. Addressing questions which arose from the model, a series of perceptual experiments used naturally spoken speech materials to investigate aspects of the psychoacoustic mechanism underpinning compensation. These experiments demonstrate a monaural compensation mechanism that is influenced by both the preceding context (which need not be intelligible speech) and by the test-word itself, and which depends on the time-direction of reverberation. Compensation was shown to act rapidly (within a second or so), indicating a monaural mechanism that is likely to be effective in everyday listening. Finally, the implications of these findings for the future development of computational models of auditory perception are considered.
Available to download at http://etheses.whiterose.ac.uk/8351/