I will present the work I have been doing on my QP2, in which I attempt to use RSA (Rational Speech Act) approaches to improve a question generation model. This work extends previous work on Transparent Interactive Semantic Parsing, in which we develop a dialogue agent that helps a human user query a knowledge base in natural language. The dialogue agent parses an NL utterance into a SPARQL query, decomposes the query into pieces, retrieves answers, and then translates the entire process into a series of natural language sub-questions so that the user can validate the results or make corrections as necessary. The current work focuses on the sub-question generation sub-task, where it is crucial that each question accurately and coherently represent the meaning of its SPARQL query. To this end, I experiment with RSA-style approaches that explicitly model a listener in order to improve the generator. Specifically, I focus on a "reconstructor"-based method, in which a listener model is trained to recover the original meaning representation (the SPARQL query) from the output of a base speaker model. I will show my experiments with self-training using the reconstructor-based model and detail my in-progress work on a "distractor"-based approach, in which the model attempts to generate an utterance that distinguishes an input from possible distractor inputs.
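The reconstructor idea can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the actual models from the talk: a base speaker proposes candidate questions for a meaning representation (a SPARQL query), a trained listener scores how recoverable the original query is from each candidate, and candidates are reranked by a weighted mix of the two log-probabilities.

```python
import math

def rerank(candidates, speaker_lp, listener_lp, lam=0.5):
    """Pick the candidate q maximizing
    lam * log P_speaker(q | query) + (1 - lam) * log P_listener(query | q)."""
    return max(candidates,
               key=lambda q: lam * speaker_lp[q] + (1 - lam) * listener_lp[q])

# Hypothetical candidates for a query asking where a person was born,
# with made-up speaker and listener log-probabilities.
candidates = ["Where was X born?", "Where does X live?"]
speaker_lp = {"Where was X born?": math.log(0.4), "Where does X live?": math.log(0.5)}
listener_lp = {"Where was X born?": math.log(0.9), "Where does X live?": math.log(0.1)}

# The more fluent but semantically wrong candidate loses once the
# listener's reconstruction probability is weighed in.
print(rerank(candidates, speaker_lp, listener_lp))
```

In a self-training setup, the reranked (listener-approved) outputs can then be fed back as training data for the speaker.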
Nanjiang will workshop plans for her dissertation, which will be on explaining disagreement in natural language inference (NLI).
In the last few years, deep learning models following the pretraining/finetuning paradigm have become state-of-the-art on a number of language tasks. The success of pretrained neural language models raises the following question: to what extent can good general linguistic representations be learned from language modeling alone? One line of research that aims to test this treats pretrained neural language models as linguistic experiment subjects, using the probabilities they assign to linguistic data in minimal pairs as a proxy for acceptability judgments. Following this approach, I will present tests of GPT-2 on data from one particular cataphora study, and will also discuss ongoing work in this vein.
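The minimal-pair logic can be sketched in a self-contained way. The actual experiments use GPT-2's probabilities; here a tiny add-alpha-smoothed bigram model trained on a made-up corpus stands in, so the scoring mechanics (compare log-probabilities of the two members of a pair) run without downloading a pretrained model. All sentences below are hypothetical examples, not items from the cataphora study.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Collect unigram and bigram counts from whitespace-tokenized sentences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        uni.update(toks[:-1])          # history counts for each bigram
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def logprob(sentence, uni, bi, vocab_size, alpha=1.0):
    """Add-alpha-smoothed bigram log-probability of a sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    return sum(
        math.log((bi[(p, c)] + alpha) / (uni[p] + alpha * vocab_size))
        for p, c in zip(toks, toks[1:])
    )

corpus = ["the cat sleeps", "the dog sleeps", "a cat runs", "the cat runs"]
uni, bi = train_bigram(corpus)
V = len(uni) + 1  # vocabulary size including </s>

# Minimal pair: grammatical vs. scrambled word order. The model-as-subject
# methodology asks whether the model assigns higher probability to the
# acceptable member of the pair.
good, bad = "the cat sleeps", "cat the sleeps"
print(logprob(good, uni, bi, V) > logprob(bad, uni, bi, V))
```

With GPT-2 the scoring function would instead sum the model's token log-probabilities, but the comparison step is the same.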