Does AGI = LLM + search?

In practical applications such as text-to-SQL parsing, our upcoming ACL-24 paper shows that the success of combining LLMs with search depends heavily on discriminator quality, pointing to the need for more research on improving discriminators before meaningful performance gains can be achieved.
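To make the setup concrete, here is a minimal sketch of the generate-then-discriminate pattern at issue; the `generate_candidates` and `discriminate` functions are hypothetical stand-ins for illustration, not the actual models or methods from the paper.

```python
# Sketch: rerank LLM-generated SQL candidates with a discriminator.
# All components here are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class Candidate:
    sql: str
    generator_score: float        # e.g., LLM log-probability of the parse
    discriminator_score: float = 0.0


def generate_candidates(question: str, n: int = 5) -> list[Candidate]:
    """Stand-in for sampling n candidate SQL parses from an LLM."""
    return [Candidate(sql=f"SELECT ... /* draft {i} for: {question} */",
                      generator_score=-float(i))
            for i in range(n)]


def discriminate(question: str, candidate: Candidate) -> float:
    """Stand-in for a discriminator that scores whether the SQL answers the
    question. This is where quality matters: a weak discriminator cannot
    separate correct parses from plausible-looking wrong ones."""
    return 0.0  # placeholder score


def rerank(question: str, n: int = 5) -> Candidate:
    candidates = generate_candidates(question, n)
    for c in candidates:
        c.discriminator_score = discriminate(question, c)
    # The search step reduces to picking the discriminator's preferred
    # candidate, falling back to the generator score to break ties.
    return max(candidates,
               key=lambda c: (c.discriminator_score, c.generator_score))


if __name__ == "__main__":
    best = rerank("How many papers were accepted in 2024?")
    print(best.sql)
```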

Does your neural NLG system need discourse relations? Our PDTB experiments say so

Our new SIGDIAL-22 paper reports on broad-coverage experiments with the Penn Discourse Treebank in which we quantify and analyze the extent to which including discourse relations in the input to a pretrained neural language model helps it accurately generate discourse connectives conveying the intended meaning. Notably, we find that cognitive discourse processing heuristics help explain the error patterns that arise when predicting discourse connectives without telling the model the intended relation!
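For a concrete picture of the contrast we study, here is a minimal sketch of generating a connective with and without the intended relation in the model input; the input templates and the `generate_connective` stub are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: the two input conditions, with vs. without the discourse relation.

ARG1 = "The company reported record profits"
ARG2 = "it plans to cut a tenth of its workforce"


def format_without_relation(arg1: str, arg2: str) -> str:
    # The model must infer the relation on its own before choosing a connective.
    return f"{arg1} <conn> {arg2}"


def format_with_relation(arg1: str, arg2: str, relation: str) -> str:
    # A PDTB-style relation label (e.g., Comparison.Concession) is included
    # in the input, telling the model which meaning the connective should convey.
    return f"relation: {relation} | {arg1} <conn> {arg2}"


def generate_connective(model_input: str) -> str:
    """Stand-in for decoding a connective from a pretrained seq2seq LM."""
    return "however"  # placeholder output


print(generate_connective(format_without_relation(ARG1, ARG2)))
print(generate_connective(format_with_relation(ARG1, ARG2, "Comparison.Concession")))
```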

Lumley Interdisciplinary Research Award

Delighted to receive this year’s Lumley Interdisciplinary Research Award — together with Eric Fosler-Lussier, Doug Danforth, William Schuler, Kellen Maicher, Alan Price, Laura Zimmerman, and Laura Wagner — for our work over the past few years on the virtual patient project.

New INSPIRED dataset for transparent interactive semantic parsing

We’re releasing a new dataset, INSPIRED, along with our ACL-22 Findings paper, Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction. The paper documents the many steps we took to construct a high-quality dataset of crowdsourced paraphrases intended to spur progress in research on interactive semantic parsing, with the ultimate aim of enabling users to obtain answers to complex natural language questions from knowledge bases with high confidence. Analyses of baseline models show the benefit of taking context into account and the potential for user interaction to enable much higher task success.
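As a rough illustration of the interaction pattern the dataset is meant to support, here is a minimal sketch of a step-by-step correction loop; all function names and example steps are hypothetical stand-ins, not the INSPIRED data format or baselines.

```python
# Sketch: a complex question is decomposed into steps, each step is
# paraphrased for the user, and the user may correct a step before the
# final query is executed against the knowledge base.

from typing import Optional


def decompose(question: str) -> list[str]:
    """Stand-in parser that splits a question into sub-query steps."""
    return ["find films directed by Nolan",
            "of those, find ones released after 2010"]


def paraphrase(step: str) -> str:
    """Stand-in for the natural-language paraphrase shown to the user."""
    return f"I will {step}."


def ask_user_to_confirm(paraphrase_text: str) -> Optional[str]:
    """Stand-in for user feedback; returns a corrected step, or None to accept."""
    return None


def interactive_parse(question: str) -> list[str]:
    confirmed = []
    for step in decompose(question):
        correction = ask_user_to_confirm(paraphrase(step))
        confirmed.append(correction if correction is not None else step)
    return confirmed


print(interactive_parse("Which Nolan films came out after 2010?"))
```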

PRE Accelerator award to jump-start work on a conversational assistant for patient prep

I’m delighted to announce that we’ve received one of the President’s Research Excellence Accelerator awards for our proposal entitled “Towards a Conversational Assistant for Patient Prep”! Our virtual patient team, including Doug Danforth (College of Medicine), Eric Fosler-Lussier (CSE), and William Schuler (Linguistics), will be expanded to include Subhankar Chakraborty (Wexner Medical Center) for this effort. The aim of the project is to take initial steps towards developing an automated conversational assistant that can help patients properly prepare for medical procedures. This assistant will need to go beyond the capabilities of our virtual patient system by proactively engaging users and integrating information over extended interactions.