Research

Our research addresses several topics in visual cognition and cognitive neuroscience. We use a variety of tools in our lab, including human psychophysics, gaze-contingent eye-tracking, fMRI, ERP, and TMS.

Attention and memory across eye movements

When we move our eyes to explore the world – as we do multiple times each second – the images sent to our brain are a series of erratic snapshots, like frames from a movie filmed by a shaky cameraman. Yet the world does not appear to “jump” with each eye movement; instead we perceive a stable, seamless experience. How do our brains achieve this feat? And what can we learn when it fails? A central theme of our research is that retinotopic (eye-centered) representations are the native language of the visual system, and successful perception requires constant updating of visual information with each eye movement. Much of this research stems from a paradoxical phenomenon we discovered called the “retinotopic attentional trace”: even when the task requires subjects to remember a spatiotopic (world-centered) location, their attention briefly persists at the (wrong) retinotopic location after an eye movement. This updating process has fundamental implications for spatial attention and memory, as well as for feature and object perception.

  • Golomb, J.D., Chun, M.M., and Mazer, J.A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience. 28(42): 10654-10662.
  • Golomb, J.D., Nguyen-Phuc, A.Y., Mazer, J.A., McCarthy, G., and Chun, M.M. (2010). Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. Journal of Neuroscience. 30(31): 10493-10506.
  • Golomb, J.D., Marino, A.C., Chun, M.M., and Mazer, J.A. (2011). Attention doesn’t slide: Spatiotopic updating after eye movements instantiates a new, discrete attentional locus. Attention, Perception, & Psychophysics. 73(1): 7-14.
  • Golomb, J.D. and Kanwisher, N. (2012). Retinotopic memory is more precise than spatiotopic memory. Proceedings of the National Academy of Sciences USA. 109(5): 1796-1801.
  • Tower-Richardi, S.M., Leber, A.B., and Golomb, J.D. (2016). Spatial priming in ecologically relevant reference frames. Attention, Perception, & Psychophysics. 78: 114-132.

Linking object identity and location

A second line of research focuses on another fundamental challenge of visual stability: How does our visual system combine information about objects’ features and identities with their locations? And how is this what-where binding affected when the “where” information needs to be updated? Work in the lab has uncovered two novel phenomena along these lines. First, when spatial attention is split across two different locations (as in divided attention or during remapping across eye movements), features from objects at these two locations can blend together, resulting in feature-mixing errors. Second, an object’s location plays such a special role during object recognition that it is automatically bound to feature and identity representations, producing a “Spatial Congruency Bias”: people are more likely to judge two sequentially presented objects as having the same shape, color, orientation, or even facial identity when the objects appear in the same location. Ongoing work in the lab is using both the feature-mixing and Spatial Congruency Bias paradigms as tools to explore a variety of theoretical questions about object-location interactions.

  • Golomb, J.D., L’Heureux, Z.E., and Kanwisher, N. (2014). Feature-binding errors after eye movements and shifts of attention. Psychological Science. 25(5): 1067-1078.
  • Golomb, J.D. (2015). Divided spatial attention and feature-mixing errors. Attention, Perception, & Psychophysics. 77: 2562-2569.
  • Golomb, J.D., Kupitz, C.N., and Thiemann, C.T. (2014). The influence of object location on identity: A “spatial congruency bias”. Journal of Experimental Psychology: General. 143(6): 2262-2278.
  • Finlayson, N.J. and Golomb, J.D. (2016). Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Research. 127: 49-56.
  • Shafer-Skelton, A., Kupitz, C.N., and Golomb, J.D. (2017). Object-location binding across a saccade: A retinotopic Spatial Congruency Bias.

Neural mechanisms of attention and spatial representation

More broadly, we are interested in how spatial locations and relationships are coded in the brain, and how these representations are influenced by attention and other top-down factors. The brain is known to contain several “maps” of visual space, but an open question is whether these representations simply reflect location on the retina, or whether some brain regions represent more ecologically relevant coordinate systems. Recent work in the lab has used fMRI multivariate pattern analysis to decode whether representations in different parts of the brain are retinotopic or spatiotopic, as well as combined fMRI and EEG to explore how these representations are dynamically updated across eye movements and other shifts of internal and external attention. Another line of work in the lab focuses on the emergence of 3D spatial representations in the brain. We also collaborate with several other labs to explore the neural mechanisms of face perception, scene processing, and working memory.

  • Golomb, J.D. and Kanwisher, N. (2012). Higher-level visual cortex represents retinotopic, not spatiotopic, object location. Cerebral Cortex. 22: 2794-2810.
  • Golomb, J.D., Nguyen-Phuc, A.Y., Mazer, J.A., McCarthy, G., and Chun, M.M. (2010). Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. Journal of Neuroscience. 30(31): 10493-10506.
  • Golomb, J.D., Albrecht, A.R., Park, S., and Chun, M.M. (2011). Eye movements help link different views in scene-selective cortex. Cerebral Cortex. 21: 2094–2102.
  • Lescroart, M.D., Kanwisher, N., and Golomb, J.D. (2016). No evidence for automatic remapping of stimulus features or location found with fMRI. Frontiers in Systems Neuroscience. 10: 53.
  • Finlayson, N.J., Zhang, X., and Golomb, J.D. (2017). Differential patterns of 2D location versus depth decoding along the visual hierarchy.