Clippers 2/25: Byung-Doh Oh on Incremental Sentence Processing with Relational Graph Convolutional Networks

Modeling incremental sentence processing with relational graph convolutional networks

We present an incremental model of sentence processing in which syntactic and semantic information influence each other interactively. To this end, a PCFG-based left-corner parser (van Schijndel et al., 2013) has previously been extended to incorporate the semantic dependency predicate context (i.e. a ⟨predicate, role⟩ pair; Levy & Goldberg, 2014) associated with each node in the parse tree. To further improve the accuracy and generalizability of this model, dense representations of semantic predicate contexts and syntactic categories are learned and used as features for making parsing decisions. More specifically, a relational graph convolutional network (RGCN; Schlichtkrull et al., 2018) is trained to learn representations for predicates, as well as role functions for cueing the representation associated with each of a predicate's arguments. In addition, syntactic category embeddings are learned jointly with the parsing sub-models to minimize cross-entropy loss. Ultimately, the goal of the model is to provide a measure of predictability that is sensitive to semantic context, which in turn will serve as a baseline for testing claims about the nature of human sentence processing.
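For readers unfamiliar with RGCNs, the sketch below illustrates the kind of relation-specific message passing involved. It is a minimal PyTorch rendering of a single RGCN layer in the style of Schlichtkrull et al. (2018), not the model presented in the talk; the class name, tensor shapes, and the reading of each relation as a semantic role are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class RGCNLayer(nn.Module):
    """A single relational graph convolution layer (Schlichtkrull et al., 2018).

    Each relation type r has its own weight matrix W_r; a node's updated
    representation sums relation-specific messages from its neighbors plus
    a transformed copy of its own current representation.
    """

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.empty(num_relations, in_dim, out_dim))
        self.self_loop = nn.Linear(in_dim, out_dim)
        nn.init.xavier_uniform_(self.rel_weights)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, in_dim) current node representations
        # adj: (num_relations, num_nodes, num_nodes) row-normalized adjacency
        #      matrices, one per relation (here, hypothetically, one per
        #      semantic role)
        messages = torch.einsum("rij,jk,rkl->il", adj, h, self.rel_weights)
        return torch.relu(self.self_loop(h) + messages)


# Toy usage: 4 nodes, 2 relation types, 8-dimensional features.
layer = RGCNLayer(in_dim=8, out_dim=8, num_relations=2)
h = torch.randn(4, 8)
adj = torch.zeros(2, 4, 4)
adj[0, 1, 0] = 1.0  # e.g. node 0 fills the first role of predicate node 1
h_next = layer(h, adj)  # shape: (4, 8)
```

On this hypothetical reading, the per-relation weight matrices play the part of the role functions mentioned above: an argument's representation is updated by a role-specific transformation of its predicate's representation.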