Research


**For a video introduction to our lab’s research, click here!**

Our brains construct rich perceptual experiences from the rawest of visual inputs: spots of light hitting different places on our eyes. In a fraction of a second, we integrate this information to recognize objects, deduce their locations, and plan complex actions and behaviors. But although visual perception feels smooth and seamless, this process is far more complex than it appears. Our lab studies human behavior and brain function to investigate how visual properties such as color, shape, and spatial location are perceived and coded in the brain, and how these representations are influenced by eye movements, shifts of attention, and other dynamic cognitive processes.

We use a variety of tools in our lab, including neuroimaging (fMRI & EEG), gaze-contingent eye-tracking, human psychophysics, and computational analyses. Below are some links to papers from the lab on the topics of: Visual perception and attention across eye movements, Linking object identity and location, Neural representations for visual processing, and Interactions between attention and working memory. Links to all papers from the lab can be found in Publications.


Visual perception and attention across eye movements

When we move our eyes to explore the world – as we do multiple times each second – the images sent to our brain are erratic snapshots, much like watching a movie filmed by a shaky cameraman. Yet the world does not appear to “jump” with each eye movement; instead, we perceive a stable, seamless experience. How do our brains achieve this feat? And what can we learn when it fails? A central theme of our research is that retinotopic (eye-centered) representations are the native language of the visual system, and that successful perception requires constant updating of visual information with each eye movement. Much of this research stems from a paradoxical phenomenon we discovered called the “retinotopic attentional trace”: even when the task requires remembering a spatiotopic (world-centered) location, subjects’ attention briefly persists at the (wrong) retinotopic location after an eye movement. This updating process can have fundamental implications for spatial attention and memory as well as feature/object perception, as described in our “dual-spotlight theory” of remapping.
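To make the coordinate bookkeeping concrete, here is a toy sketch (a simplified 1-D illustration of our own devising, not a model from any specific paper): a spatiotopic location is gaze position plus retinotopic position, so each saccade changes the retinotopic coordinates of every remembered location, and anything stored retinotopically must be remapped by the saccade vector or it will point to the wrong place in the world.

```python
# Toy illustration of retinotopic vs. spatiotopic coordinates
# (assumed, simplified 1-D world; not a model from the lab's papers).

gaze = 10          # where the eyes point, in world coordinates
target_world = 25  # spatiotopic (world-centered) target location

# Retinotopic (eye-centered) location = world location - gaze position.
target_retina = target_world - gaze           # 15

# A saccade shifts gaze; the world location is unchanged, but the
# retinotopic coordinate must be updated (remapped) by the saccade vector.
saccade = 8
gaze += saccade
target_retina_updated = target_world - gaze   # 7 = 15 - 8

# Attention left at the old retinotopic coordinate now points to the wrong
# world location -- the "retinotopic attentional trace".
stale_world_location = gaze + target_retina   # 33, not 25
print(target_retina_updated, stale_world_location)
```

Check out these papers: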


Linking object identity and location

A second line of research focuses on another fundamental challenge of visual stability: How does our visual system combine information about objects’ features and identities with their locations? And how is this what-where binding affected when the “where” information needs to be updated? Work in the lab has uncovered two novel phenomena along these lines. First, when spatial attention is split across two different locations (as in divided attention, or during remapping across eye movements), features from objects at those two locations can blend together, resulting in feature-mixing errors. Second, an object’s location plays such a special role during object recognition that it is automatically bound to feature/identity representations, resulting in a “Spatial Congruency Bias”: people are more likely to judge two sequentially presented objects as having the same shape, color, orientation, or even facial identity when the objects appear in the same location. Ongoing work in the lab uses both the feature-mixing and Spatial Congruency Bias paradigms as tools to explore a variety of theoretical questions about object-location interactions. Check out:


Neural representations for visual processing

More broadly, we are interested in how spatial locations and relationships are coded in the brain, and how these representations are influenced by attention and other top-down factors. The brain is known to contain several “maps” of visual space, but an open question is whether these representations simply reflect the location of a stimulus on the retina, or whether some brain regions represent more ecologically relevant coordinate systems. Recent work in the lab has used fMRI multivariate pattern analysis to decode whether representations in different parts of the brain are retinotopic or spatiotopic, as well as combined fMRI and EEG to explore how these representations are dynamically updated across eye movements and other shifts of internal and external attention. Another line of work in the lab has focused on the emergence of 3D spatial representations in the brain. We also collaborate with multiple other labs to explore the neural mechanisms of face perception, scene processing, and working memory.
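For readers who want a concrete sense of what “decoding” means here, below is a minimal simulated sketch of the cross-decoding logic in Python with scikit-learn. Everything in it is an illustrative assumption (toy design, simulated data, made-up variable names), not our actual analysis pipeline: a classifier is trained on voxel patterns from trials at one fixation position and tested at another, asking whether a region’s spatial code follows the retina or the screen.

```python
# Illustrative simulation only -- toy data and design, not the lab's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_per_cell, n_voxels = 50, 60

# Toy design: 2 screen positions x 2 fixation (eye) positions. The same
# screen position lands on a different retinal position when the eyes move;
# here retinal position is the combination (XOR) of the two factors.
screen = np.repeat([0, 0, 1, 1], n_per_cell)
fixation = np.tile(np.repeat([0, 1], n_per_cell), 2)
retinal = screen ^ fixation

# Simulate a "retinotopic" region: each retinal position evokes its own
# characteristic voxel pattern, plus trial-by-trial noise.
prototypes = rng.normal(size=(2, n_voxels))
X = prototypes[retinal] + rng.normal(size=(4 * n_per_cell, n_voxels))

# Cross-decoding: train at one eye position, test at the other.
train, test = fixation == 0, fixation == 1
ret_clf = LogisticRegression(max_iter=1000).fit(X[train], retinal[train])
scr_clf = LogisticRegression(max_iter=1000).fit(X[train], screen[train])

print("retinal-position decoding across eye positions:",
      ret_clf.score(X[test], retinal[test]))
print("screen-position decoding across eye positions: ",
      scr_clf.score(X[test], screen[test]))
# In this retinotopic simulation, retinal decoding generalizes across eye
# positions while screen decoding fails; a spatiotopic region would reverse.
```

Real analyses involve far noisier signals, more conditions, and careful cross-validation; the toy version only conveys the logic of asking which reference frame a pattern follows. Check out these: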


Interactions between attention and working memory

Finally, our lab explores the mechanisms and interactions of attention and working memory more generally. Much of our work described above on attentional remapping and dynamic attention and feature binding involves manipulations of attention sustained via working memory, and/or uses behavioral paradigms common to the working memory literature. Our research also aims to understand the intricate interactions between attention, perception, and working memory. What is the relationship between external and internal attention? Do visual representations held in working memory interact with each other? Are the processing filters regulating perception, attention, and working memory similarly disrupted when we get distracted? Check out: