Fifteen percent of American adults report experiencing hearing difficulty, and up to 50% of young adults in developed countries are exposed to sound levels that put them at risk for noise-induced hearing loss. These statistics reflect a pressing need for tests that can detect hearing loss at its earliest stages, when intervention is most likely to succeed. However, speech tests have had only limited success in practice. Moreover, many listeners who appear to have normal hearing still report hearing difficulty, particularly when understanding speech in background noise.
Figure: Listeners' error functions for individual speech sounds, centered at their 50% point. Most tokens cluster together to form similar "z"-shaped functions once token-level error thresholds are accounted for, suggesting that adjusting for token-level differences can explain the majority of errors that listeners make.
Several projects in our lab are investigating how we can develop better hearing tests to address these issues. Typically, speech-based tests average across different talkers and consonants. As a result, they are less sensitive to subtle differences in fine-grained acoustic cues in speech. In contrast, normal-hearing listeners are highly accurate at identifying speech sounds above a critical signal-to-noise ratio (SNR) defined for each individual token. A test that averages across speech sounds loses this critical token-level information.
Indeed, differences in the acoustic properties of specific sounds (e.g., differences between consonants [/b/ vs. /p/], talkers [female vs. male], and coarticulatory contexts [/bI/ vs. /ba/]) account for the majority of errors made by normal-hearing listeners. Thus, in order to measure a listener's ability to understand speech, we must investigate their perception of specific tokens. In collaboration with our colleagues at Illinois (Dr. Jont Allen's group), we are working toward speech tests based on these principles that we hope will be better able to identify the effects of hearing loss on speech recognition.
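The centering idea behind the figure above can be sketched in a few lines of code. This is a minimal illustration, not our actual analysis pipeline: it assumes a logistic ("z"-shaped) error function for each token and uses made-up per-token 50% thresholds purely for demonstration. Shifting each token's curve by its own threshold aligns all curves at a relative SNR of zero.

```python
import math

def logistic_error(snr_db, threshold_db, slope=1.0):
    # Probability of misidentifying a token at a given SNR,
    # modeled as a logistic function of (SNR - threshold).
    return 1.0 / (1.0 + math.exp(slope * (snr_db - threshold_db)))

def centered_curves(token_thresholds, snrs):
    # Shift each token's error curve so that SNR = 0 corresponds
    # to that token's own 50% error point.
    curves = {}
    for token, thr in token_thresholds.items():
        curves[token] = [(snr - thr, logistic_error(snr, thr)) for snr in snrs]
    return curves

# Hypothetical per-token 50% thresholds (dB SNR), for illustration only.
tokens = {"/ba/": -12.0, "/pa/": -6.0, "/ga/": -9.0}
snrs = range(-24, 1, 3)
curves = centered_curves(tokens, snrs)
# After centering, every token's error rate is 0.5 at relative SNR 0,
# so the curves collapse onto a single "z"-shaped function.
```

In a real analysis the per-token thresholds would be estimated from listener data rather than assumed, but the alignment step is the same: once each token is expressed relative to its own threshold, token-level differences no longer dominate the error pattern.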
We are also using neurophysiological measures to develop more accurate tests, in collaboration with our colleagues at Nemours A.I. duPont Hospital for Children (Dr. Thierry Morlet's group). This work follows from our studies using the event-related brain potential (ERP) technique to measure cortical responses to specific speech sounds and experiments investigating perceptual coding in the auditory brainstem response. By measuring how listeners process certain acoustic cues and phonetic distinctions, we hope to develop tests that can detect early stages of hearing loss, as well as cases of auditory neuropathy in infants and children, which are difficult to detect using current measurement techniques.