Clippers 11/19: Byung-Doh Oh on Incremental Sentence Processing

Modeling incremental sentence processing with relational graph convolutional networks

We present an incremental sentence processing model in which syntactic and semantic information influence each other interactively. To this end, a PCFG-based left-corner parser (van Schijndel et al., 2013) has previously been extended to incorporate the semantic dependency predicate context (i.e. the ⟨predicate, role⟩ pair; Levy & Goldberg, 2014) associated with each node in the tree. To further improve the performance and generalizability of this model, dense representations of semantic predicate contexts and syntactic categories are learned and used as features for making left-corner parsing decisions. More specifically, a relational graph convolutional network (RGCN; Schlichtkrull et al., 2018) is trained to learn representations for predicates, as well as role functions for cueing the representation associated with each of a predicate's arguments. In addition, syntactic category embeddings are learned jointly with the left-corner parsing sub-models to minimize cross-entropy loss. Ultimately, the goal of the model is to provide a measure of predictability that is sensitive to semantic context, which in turn will serve as a baseline for testing claims about the nature of human sentence processing.
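
To give a concrete sense of the relational composition step, here is a minimal sketch of one RGCN layer in the style of Schlichtkrull et al. (2018), where each node's representation is updated by summing role-specific (i.e. relation-specific) linear transformations of its neighbors. This is an illustration of the general technique, not the talk's actual implementation; the class name, dimensions, and edge encoding are assumptions.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """One relational graph convolution (Schlichtkrull et al., 2018):
    h'_i = ReLU( W_0 h_i + sum_r sum_{j in N_r(i)} (1 / c_{i,r}) W_r h_j ),
    where N_r(i) are i's neighbors under relation (here: semantic role) r."""

    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)
        # One weight matrix per relation (role) type.
        self.rel = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)]
        )

    def forward(self, h, edge_index, edge_type):
        # h: (N, in_dim); edge_index: (2, E) src/dst indices; edge_type: (E,)
        out = self.self_loop(h)
        src, dst = edge_index
        for r, w_r in enumerate(self.rel):
            mask = edge_type == r
            if not mask.any():
                continue
            s, d = src[mask], dst[mask]
            # Normalize each message by the receiving node's r-degree (c_{i,r}).
            deg = torch.zeros(h.size(0)).index_add_(0, d, torch.ones(d.size(0)))
            msg = w_r(h[s]) / deg[d].unsqueeze(-1)
            out = out.index_add(0, d, msg)
        return torch.relu(out)

# Toy example: a predicate node (0) receives messages from two argument
# nodes (1, 2) via two hypothetical role types (0 = agent, 1 = patient).
h = torch.randn(3, 16)
edge_index = torch.tensor([[1, 2], [0, 0]])  # arguments -> predicate
edge_type = torch.tensor([0, 1])
layer = RGCNLayer(16, 16, num_relations=2)
print(layer(h, edge_index, edge_type).shape)  # torch.Size([3, 16])
```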
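
The joint training of syntactic category embeddings with the parsing sub-models can likewise be sketched as a classifier over parsing decisions trained with cross-entropy loss, conditioned on the current category embedding and a semantic context vector (e.g. an RGCN output). All names, sizes, and data below are hypothetical placeholders, assuming one such sub-model for simplicity.

```python
import torch
import torch.nn as nn

NUM_CATEGORIES, NUM_DECISIONS, DIM = 100, 4, 16  # hypothetical sizes

# Syntactic category embeddings, learned jointly with the decision model.
cat_embed = nn.Embedding(NUM_CATEGORIES, DIM)
# One parsing sub-model: scores decisions from category + semantic features.
decision_model = nn.Linear(2 * DIM, NUM_DECISIONS)
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(
    list(cat_embed.parameters()) + list(decision_model.parameters())
)

# One training step on hypothetical batched data.
cats = torch.randint(0, NUM_CATEGORIES, (32,))  # current syntactic categories
sem = torch.randn(32, DIM)                      # semantic context vectors
gold = torch.randint(0, NUM_DECISIONS, (32,))   # oracle parsing decisions
logits = decision_model(torch.cat([cat_embed(cats), sem], dim=-1))
loss = loss_fn(logits, gold)
optim.zero_grad()
loss.backward()
optim.step()
```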
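
Finally, the "measure of predictability" in this line of work is standardly surprisal, which a left-corner parser can compute from the ratio of successive prefix probabilities. A minimal sketch, assuming that standard formulation (the function and the example values are illustrative, not from the talk):

```python
import math

def surprisal(prefix_probs):
    """Per-word surprisal (in bits) from a parser's prefix probabilities:
    surprisal(w_t) = -log2 P(w_t | w_1..t-1)
                   = -log2( P(w_1..w_t) / P(w_1..w_{t-1}) ).
    prefix_probs[t] is assumed to be P(w_1..w_t), with prefix_probs[0] = 1."""
    return [
        -math.log2(prefix_probs[t] / prefix_probs[t - 1])
        for t in range(1, len(prefix_probs))
    ]

# Hypothetical prefix probabilities for a three-word sentence:
print(surprisal([1.0, 0.2, 0.05, 0.002]))  # [2.32..., 2.0, 4.64...]
```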