Title: Status report: Dissociating syntactic and semantic processing with disentangled deep contextualized representations
Presenter: Cory Shain
Abstract: Psycholinguists and cognitive scientists have long hypothesized that building syntactic structures on the one hand and building meaning representations on the other may be supported by functionally distinct components of the human sentence processing system. This idea is typically studied in controlled settings, using stimuli designed to independently manipulate syntactic and semantic processing demands (e.g., “syntactic” vs. “semantic” violations), a paradigm that suffers from poor ecological validity and an inability to quantify the degree to which an experimental manipulation truly disentangles syntax and semantics. In this study, we follow recent work in natural language processing in attempting to learn deep contextualized word representations that automatically disentangle syntactic and semantic dimensions, using multi-task adversarial learning to encourage or discourage syntactic and semantic content in each part of the representation space. In contrast to prior work in this domain, our system produces strictly incremental word-level representations in addition to utterance-level representations, enabling us to use it to study online incremental processing patterns. Early pilot results suggest that our model effectively disentangles syntax and semantics, paving the way for using its contextualized encodings to study behavioral and neural measures of human sentence processing in more naturalistic settings.
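As a rough illustration of the kind of multi-task adversarial setup described in the abstract, the following is a minimal, hypothetical PyTorch sketch, not the presenter's implementation: an incremental (left-to-right) encoder whose hidden state is split into a "syntactic" half and a "semantic" half, with a standard auxiliary head on the syntactic half and a gradient-reversed head on the semantic half, so that syntactic content is encouraged in one subspace and discouraged in the other. All names, the POS-tagging auxiliary task, and the layer sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the presenter's code): adversarial multi-task learning
# that encourages syntactic content in one half of a word representation and
# discourages it in the other half via gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DisentangledEncoder(nn.Module):
    """Strictly incremental encoder whose hidden state is split into a
    'syntactic' half and a 'semantic' half."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_pos_tags=45):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # left-to-right only
        half = hid_dim // 2
        # Syntax probe on the syntactic half (task loss encourages syntax there) ...
        self.syn_head = nn.Linear(half, n_pos_tags)
        # ... and an adversarial syntax probe on the semantic half (gradient
        # reversal discourages syntactic content there).
        self.adv_syn_head = nn.Linear(half, n_pos_tags)

    def forward(self, word_ids):
        h, _ = self.rnn(self.embed(word_ids))          # (batch, time, hid_dim)
        syn_half, sem_half = h.chunk(2, dim=-1)        # split the representation space
        syn_logits = self.syn_head(syn_half)           # standard multi-task head
        adv_logits = self.adv_syn_head(grad_reverse(sem_half))  # adversarial head
        return syn_logits, adv_logits

# Toy usage: one batch of 5-word "sentences" with random POS targets.
model = DisentangledEncoder(vocab_size=1000)
words = torch.randint(0, 1000, (2, 5))
pos = torch.randint(0, 45, (2, 5))
syn_logits, adv_logits = model(words)
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(syn_logits.flatten(0, 1), pos.flatten()) \
     + loss_fn(adv_logits.flatten(0, 1), pos.flatten())
loss.backward()  # gradient reversal pushes syntactic information out of the semantic half
```

A symmetric pair of heads for a semantic auxiliary task would complete the encourage/discourage scheme; only the syntactic side is shown here for brevity.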
Title: Status report: Coreference Resolution Improves Incremental Surprisal Estimation
Presenter: Evan Jaffe
Abstract: Coreference is an attractive phenomenon for examining memory-based processing effects, since it by definition links current material to past material in the discourse to form useful representations of meaning. Memory decay is a neat explanation for distance-based processing effects, and there is evidence that individuals with amnesia or Alzheimer’s disease show impaired use of pronouns and referring expressions. However, prediction-based effects are also a popular topic in sentence processing, and numerous studies have used incremental surprisal to model human behavior. Previous work (Jaffe et al., 2018) found a potential memory effect for a coreference-based predictor called MentionCount when regressed against human reading time data, but did not control for the possibility that coreference drives prediction effects. Two experiments are presented that show 1) the value of adding coreference resolution to an existing parser-based incremental surprisal estimate, and 2) a significant effect of MentionCount even when the baseline surprisal estimate includes coreference.
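To make the regression logic concrete, here is a minimal, hypothetical Python sketch, not the presenters' pipeline (which uses a parser-based surprisal estimate, richer baselines, and mixed-effects models): per-word surprisal in bits, a MentionCount predictor, and a comparison of a surprisal-only baseline against a model that adds MentionCount. The toy data and all variable names are illustrative assumptions.

```python
# Hypothetical sketch (not the presenters' analysis): combining per-word surprisal
# with a MentionCount predictor in a reading-time regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # toy number of word tokens

# Surprisal in bits: -log2 P(word | preceding context). Here the probabilities are
# simulated; a real estimate would come from an incremental parser or language model.
p_word = rng.uniform(0.001, 0.5, size=n)
surprisal = -np.log2(p_word)

# MentionCount: number of previous mentions coreferent with the current word's
# referent (0 for non-mentions), as produced by a coreference resolver.
mention_count = rng.poisson(0.3, size=n)

# Simulated reading times with effects of both predictors plus noise.
reading_time = 250 + 20 * surprisal + 15 * mention_count + rng.normal(0, 30, size=n)

df = pd.DataFrame({"rt": reading_time,
                   "surprisal": surprisal,
                   "mention_count": mention_count})

# Baseline model (surprisal only) vs. full model (surprisal + MentionCount):
# if MentionCount improves fit over the surprisal baseline, its effect is not
# reducible to prediction as captured by surprisal.
baseline = smf.ols("rt ~ surprisal", data=df).fit()
full = smf.ols("rt ~ surprisal + mention_count", data=df).fit()
print(full.summary().tables[1])
print(full.compare_f_test(baseline))  # (F statistic, p-value, df difference)
```

In this toy comparison the F-test plays the role of asking whether MentionCount explains reading-time variance beyond the surprisal baseline, which is the logic of the second experiment described above.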