Clippers 11/30: Willy Cheung on Neural Networks and Cataphora

With the recent explosion of interest and hype around deep learning, linguists within the NLP community have used carefully constructed linguistic examples to perform targeted assessments of models' linguistic capabilities, to see what models really know and where they fall short. In the spirit of these studies, my project aims to investigate neural network behavior on a linguistic phenomenon that has not received much attention: cataphora (i.e., when a referring expression such as a pronoun precedes its antecedent). I investigate the behavior of two models on cataphora: the WebNLG model (a model trained for NLG, as described in Li et al. 2020, based on the pretrained T5 model of Raffel et al. 2019), and the Joshi model (a model fine-tuned for coreference resolution, described in Joshi et al. 2019, based on the pretrained BERT model of Devlin et al. 2019). The general idea is to test whether these models can distinguish acceptable and unacceptable examples involving cataphora. Some factors I will be investigating include: 1) preposed (i.e., fronted) vs. postposed clauses; 2) cataphora across subordination vs. coordination of clauses; and 3) a special case of pragmatic subordination with contrastive "but".
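
To give a concrete sense of what "distinguishing acceptable from unacceptable cataphora" could look like, here is a minimal sketch, not the project's actual experimental code, that scores an illustrative minimal pair with an off-the-shelf masked language model using pseudo-log-likelihood (Salazar et al. 2020) via the Hugging Face transformers library. The sentences, the choice of bert-base-cased, and the scoring method are all assumptions for illustration, standing in for the WebNLG and Joshi models described above.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

def pseudo_log_likelihood(sentence, model, tokenizer):
    """Sum the log-probability of each token when it is masked in turn."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[input_ids[i]].item()
    return total

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

# Hypothetical minimal pair (illustrative items only): cataphora into a
# preposed subordinate clause ("she" can refer to Mary) vs. cataphora
# across coordinated clauses (the coreferent reading is degraded).
pair = [
    "Before she left the office, Mary locked the door.",
    "She left the office and Mary locked the door.",
]
for sentence in pair:
    print(f"{pseudo_log_likelihood(sentence, model, tokenizer):8.2f}  {sentence}")
```

A comparison like this only checks whether a model assigns a higher score to the acceptable member of the pair; the actual study targets model-specific behaviors (generation for the T5-based model, coreference links for the BERT-based model) rather than raw sentence scores.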