Looking for postdocs to work on structure in neural NLG!

I’m looking to hire multiple postdoctoral scholars for a new project investigating ways of adding linguistically informed structure to neural natural language generation (NNLG) models in order to enhance both the quality and controllability of NNLG in dialogue systems. The scholars will have significant freedom to shape a research agenda in this area, as well as opportunities to advise students and to collaborate with other linguistics and AI faculty in the department and across the university. The official job posting is not yet available, but informal enquiries may be sent by email in the meantime!

ACL-19 paper on constrained decoding (with discourse structure!) in neural NLG: Real progress in the battle to rein in neural generators?

Thrilled to note that our ACL-19 paper (also here) on using constrained decoding together with hierarchical discourse structure in neural NLG is now out! The paper shows that constrained decoding can help achieve more controllable and semantically correct output in task-oriented dialogue with neural sequence-to-sequence models. While the battle to rein in neural generators continues, we’d like to think this work at least represents quite a successful skirmish!

INLG-18 paper on LSTM hypertagging: Grammar-based realizers rock on?

Our paper on LSTM Hypertagging at INLG-18 shows that (partially) neuralizing a traditional grammar-based surface realizer can yield substantial performance gains. One might have expected that by now end-to-end neural methods would have been shown to work best on the surface realization task. However, our (unpublished) attempts to train an attentional sequence-to-sequence model on the exact same OpenCCG inputs fared poorly, consistent with the poor performance that Marcheggiani and Perez-Beltrachini observed in their INLG-18 paper, where they experimented with both sequence-to-sequence and graph-to-sequence models on the related task of generating from the deep representations of the 2011 surface realization shared task. Together with recent parsing results showing that grammar-based methods can outperform neural ones, this suggests that the highest quality outputs may still be obtained with grammar-based realizers. See the slides from our talk for further discussion and next steps.
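For readers unfamiliar with hypertagging: the tagger predicts, for each node of the input logical form, a small set of plausible supertags (lexical categories), and the realizer then searches only over those. A hypothetical sketch of the underlying beta-best multitagging idea (the tag distribution below is invented; a real hypertagger, LSTM or otherwise, would predict it from the node and its graph context):

```python
# Beta-best multitagging: keep every supertag whose probability is
# within a factor `beta` of the most probable one. Small beta keeps
# more tags (safer but slower realization); large beta prunes harder.

def beta_best(tag_probs, beta=0.1):
    """All supertags within a factor `beta` of the most probable one."""
    best = max(tag_probs.values())
    return {tag for tag, p in tag_probs.items() if p >= beta * best}


# Invented distribution over CCG categories for one logical-form node:
tags = beta_best({"np": 0.70, "s\\np": 0.20, "n/n": 0.05})
# 0.05 < 0.1 * 0.70, so "n/n" is pruned and only two tags survive.
```

Pruning the category space this way is what lets a grammar-based realizer stay tractable while the grammar guarantees well-formed output.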

Your new Squibs and Discussions editor for Computational Linguistics

After a transitional period, I have taken up the position of Squibs & Discussions editor for the Computational Linguistics journal, and the first squib for which I’ve served as editor is now available early access online. It’s Ehud Reiter’s meta-review of the effectiveness of BLEU, in which he concludes that there is insufficient evidence for using BLEU beyond diagnostic evaluation of MT systems, a conclusion drastically at odds with much current usage.

Keep submitting your interesting squibs!

BEA-13 paper on using paraphrasing and neural memory-based classification in a virtual patient dialogue system

Lifeng Jin, David King, Amad Hussein, Doug Danforth and I have found that, to tackle the long tail of relatively infrequently asked questions in a virtual patient dialogue system, it pays to combine paraphrasing for data augmentation with neural memory-based classification: together, the two methods yield a nearly 10% absolute improvement in accuracy on the least frequently asked questions. The paper will appear next week at the 13th Workshop on Innovative Use of NLP for Building Educational Applications (BEA-13) at NAACL HLT 2018 in New Orleans.
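A toy stand-in for the two ideas (the questions, labels, and paraphrases are invented, and a 1-nearest-neighbor bag-of-words classifier plays the role of the neural memory network; this is an illustration of the combination, not the paper’s actual system):

```python
# Paraphrasing augments the training data with extra surface forms for
# each question label; a memory-based classifier then labels a new query
# by comparing it against all stored examples.
from collections import Counter
import math


def bow(text):
    """Bag-of-words vector as a token-count dictionary."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(count * b[tok] for tok, count in a.items())
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0


def augment(data, paraphrases):
    """Add (paraphrase, label) pairs for every training question."""
    return data + [(p, label) for q, label in data for p in paraphrases.get(q, [])]


def classify(query, memory):
    """Label of the most similar stored example (1-NN over the memory)."""
    query_vec = bow(query)
    return max(memory, key=lambda ex: cosine(query_vec, bow(ex[0])))[1]


# Invented example: one rare question gains a paraphrase, so a matching
# query can now find a close neighbor in memory.
data = [("do you smoke", "smoking"), ("where does it hurt", "pain")]
memory = augment(data, {"do you smoke": ["are you a smoker"]})
label = classify("are you a smoker", memory)
```

The point of the combination is that augmentation puts more neighbors into memory exactly where the data is thinnest, which is why the gains concentrate on the least frequent questions.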