Grouping Mechanisms and Linguistic Processing
Many different objects create sounds in our environment, as in a crowded coffee shop. Though the sounds come from multiple objects at different locations, they all combine into a single signal that reaches our ears. The listener must mentally decode that signal into representations of the objects producing the sounds. How does this happen? For non-speech sounds, grouping is based on Gestalt heuristics such as similarity, continuity, and location. For example, a listener can easily follow the hiss of a train whistle because its sound remains similar and continues smoothly from one location to the next. Speech sounds, however, are acoustically dissimilar (a hissing /s/, a harmonic vowel /a/, a popping /p/), yet they are routinely grouped together (perceived as the word ‘sap’). How do listeners so easily and reliably string together sounds that violate these acoustic grouping principles? This project explores this question by examining how grouping mechanisms accommodate and interact with speech sounds.
Project contact: Marjorie Freggens
Mechanisms Underlying Auditory Selective Attention
Definitions of selective attention typically reference two components: a facilitatory mechanism that enhances the signal of interest, and an inhibitory mechanism that suppresses irrelevant, potentially distracting signals. Because both mechanisms contribute to the same observable outcome (improved selective attention), it has remained challenging to dissociate them behaviorally. A number of studies provide behavioral and neural evidence that these mechanisms operate independently in visual selective attention, but the distinction is far less clear in the auditory domain. The primary goal of this project is to obtain clear behavioral evidence for each mechanism and to use that evidence to better understand how auditory selective attention operates.
Project contact: Heather Daly