Large language models (LLMs) can be powerful conversational agents but are prone to hallucinations (made-up responses) and toxic content. In our upcoming paper at the Taming LLMs workshop at INLG-23, we show that ChatGPT’s responses can be curated and distilled down to a T5 model that is much better behaved!
In our upcoming SIGDIAL-23 paper, we observe that language models perform poorly at answering questions that require precise temporal reasoning, motivating a neuro-symbolic approach to a conversational assistant for patient prep.
Our new SIGDIAL-22 paper reports on broad-coverage experiments with the Penn Discourse Treebank, where we quantify and analyze the extent to which including discourse relations in the input to a pretrained neural language model helps it accurately generate discourse connectives that convey the intended meaning. Notably, we find that cognitive discourse processing heuristics help explain the error patterns that arise when predicting discourse connectives without telling the model the intended relation!
I was recently interviewed for a Voices of Excellence podcast about my research on natural language generation. With masterful editing, I may have even made some sense!
Delighted to receive this year’s Lumley Interdisciplinary Research Award — together with Eric Fosler-Lussier, Doug Danforth, William Schuler, Kellen Maicher, Alan Price, Laura Zimmerman, and Laura Wagner — for our work over the past few years on the virtual patient project.
We’re releasing a new dataset, INSPIRED, along with our ACL-22 Findings paper, Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction. The paper documents the many steps we took to obtain a high-quality dataset of crowdsourced paraphrases, intended to spur progress in research on interactive semantic parsing, with the ultimate aim of enabling users to obtain answers to complex natural language questions from knowledge bases with high confidence. Analyses of baseline models show the benefit of taking context into account and the potential for user interaction to enable much higher task success.
I’m delighted to announce that we’ve received one of the President’s Research Excellence Accelerator awards for our proposal entitled “Towards a Conversational Assistant for Patient Prep”! Our virtual patient team, including Doug Danforth (College of Medicine), Eric Fosler-Lussier (CSE), and William Schuler (Linguistics), will be expanded to include Subhankar Chakraborty (Wexner Medical Center) for this effort. The aim of the project is to take initial steps towards developing an automated conversational assistant that can help patients properly prepare for medical procedures. This assistant will need to go beyond the capabilities of our virtual patient system in proactively engaging users and integrating information over extended interactions.
Our INLG-21 paper revisits our experiments with reimplementing a classic rule-based NLG system with a neural model, finding that representing discourse relations remains essential for best performance in low-data settings even when using pre-trained models.
Our INLG-21 paper shows that combining constrained decoding with self-training and pre-trained models makes it possible to reduce data needs for a challenging compositional neural NLG dataset down to the hundreds — a level where crowdsourcing is no longer necessary!
I’m looking to hire another postdoctoral scholar to work on a Facebook-sponsored project investigating ways of adding linguistically informed structure to neural natural language generation (NNLG) models in order to enhance both the quality and controllability of NNLG in dialogue systems, with an eye towards improving discourse coherence and semantic correctness in particular. The scholar will have significant freedom to create a research agenda in this area, as well as opportunities to advise students and collaborate with other linguistics and AI faculty in the department and across the university. Please see the official job posting for further details.