What I do
I study how we use our tacit knowledge of grammar to mentally assemble, access, and manipulate linguistic structures during moment-by-moment language understanding. What fascinates me is that language understanding is such a complex cognitive task, yet it seems so fast, easy, and effortless. This implies that there are some extremely sophisticated mechanisms “under the hood,” and I want to understand what those mechanisms are and how they work (… “how the gears clank and how the pistons go and all the rest of that detail” ~Allen Newell). But those mechanisms are not infallible. Sometimes we fail to accurately implement even the simplest of tasks, like linking a verb to its subject. Ultimately, we have to explain both the successes and the failures to develop a comprehensive theory of human language processing.
From this perspective, my work seeks to address two broad questions in (psycho)linguistics:
- Processing mechanisms: How do we mentally encode, access, and update structured linguistic representations during moment-by-moment language comprehension? What are the linking hypotheses that relate mental linguistic processes to observed experimental measures (e.g., reading times, judgments)?
- Cognitive architecture: What is the relationship between the grammar and the parser? What are the relative contributions of shallow “good-enough” processing and deep grammatical analysis in comprehension?
Linguistic illusions. I’ve been chiseling away at these questions for a while now by studying linguistic illusions, cases where we systematically misinterpret the meaning of a sentence, causing us to misperceive ill-formed sentences as if they were well-formed (an illusion of grammaticality) or well-formed sentences as if they were ill-formed (an illusion of ungrammaticality). By looking at where we do and don’t fall for illusions, we learn how linguistic representations are dynamically constructed and how they change as the sentence unfolds over time. We gain new insights into the micro-structure of the underlying representations (e.g., how fine-grained grammatical information is coordinated in working memory) and into how we access specific pieces of information in those representations (via retrieval) for interpretation.
Methods. This work is informed by a combination of theoretical, experimental, and computational methods, drawing on a range of linguistic phenomena such as agreement, anaphora, ellipsis, polarity items, and thematic binding. I have also investigated syntactic prediction, and most recently I’ve begun to examine the effect of syntactic complexity on processing dynamics. I am particularly interested in testing and developing explicit computational models of the proposed representations and of the cognitive processes that operate over those representations. These models make our theories more explicit, and they give us a flexible workspace for exploring new hypotheses that we can then test in the lab.
Why I do it
What makes this work both exciting and challenging is figuring out how to relate the fine-grained details of linguistic computation (i.e., the “nuts and bolts”) to the big-picture issues concerning the relationship between the mental grammar and the cognitive architecture (i.e., the “bird’s-eye view”). Importantly, this research doesn’t exist in a bubble. What gives this work meaning beyond the pursuit of knowledge is engaging the public in language science. I take the commitment to community outreach very seriously. I often work with community schools, provide opportunities for public engagement (e.g., via the North American Computational Linguistics Open Competition), and enjoy educating students of all ages about the wonders of language, which connects us all.