Research

Grouping Mechanisms and Linguistic Processing

Many different objects create sounds in our environment, as in a crowded coffee shop. Although the sounds come from multiple objects at different locations, they combine into a single signal by the time they reach our ears. The listener must mentally decode that signal into representations of the objects producing the sounds. How does this happen? For non-speech sounds, grouping relies on Gestalt heuristics (similarity, continuity, location, etc.). For example, a listener can easily follow the hiss of a train whistle because its sound stays similar and continues smoothly from one location to the next. Speech sounds, however, are often grouped together even though they are acoustically dissimilar: a hissing /s/, a harmonic vowel /a/, and a popping /p/ are perceived as the word ‘sap’. How do listeners string together sounds that violate acoustic grouping principles with such ease and regularity? This project explores that question by examining how grouping mechanisms accommodate and interact with speech sounds.

Project contact: Marjorie Freggens


Mechanisms Underlying Auditory Selective Attention

Definitions of selective attention typically reference two components: a facilitatory mechanism that enhances the signal of interest, and an inhibitory mechanism that suppresses irrelevant and potentially distracting signals. Because both mechanisms contribute to the same observable outcome, improved selective attention, it has remained challenging to dissociate them behaviorally. Although a number of studies provide behavioral and neural evidence that these mechanisms operate independently in visual selective attention, the distinction is less clear in the auditory domain. The primary goal of this project is to obtain clear behavioral evidence for each mechanism and to use that evidence to better understand how auditory selective attention operates.

Project contact: Heather Daly


Behavioral and Neural Mechanisms of Audiovisual Speech Integration

How do listeners use both visual and auditory speech cues to understand what is being said? How does the use of each modality change if a speech signal is degraded or otherwise difficult to understand? We investigate the neural mechanisms underlying audiovisual integration in speech perception using electroencephalography (EEG) and behavioral paradigms. In particular, we study which cues from each modality are most informative for robust integration, and how the relative weighting of auditory and visual speech cues is affected by noise in either speech stream. This work is in collaboration with Tony Shahin, Ph.D., at the University of California, Merced, under NIH/NIDCD grant #DC013543.

Project contact: Hannah Shatzer


Neuroplasticity and Speech Perception in Cochlear Implant Recipients

Postlingually deaf adults who receive cochlear implants show widely varying speech perception outcomes after implantation. This variability may be explained by underlying neural and cognitive factors that differ across individuals, such as the degree of plasticity in auditory cortex during and after deafness. This project longitudinally tracks the relationship between markers of neuroplasticity and speech perception performance in new cochlear implant recipients over the first few months post-implantation, with the aim of determining whether increased plasticity during deafness facilitates or impedes rehabilitation of auditory speech perception after implantation. This project is a collaboration with Aaron Moberly, M.D., and Kara Vasil, Au.D., at the Ohio State Eye and Ear Institute.

Project contact: Hannah Shatzer