Our INLG-21 paper revisits our experiments with reimplementing a classic rule-based NLG system with a neural model, finding that representing discourse relations remains essential for best performance in low-data settings even when using pre-trained models.
Amazing progress on data efficiency
Our INLG-21 paper shows that combining constrained decoding with self-training and pre-trained models makes it possible to reduce data needs for a challenging compositional neural NLG dataset down to the hundreds — a level where crowdsourcing is no longer necessary!
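For readers curious about the overall recipe, here is a minimal sketch of a self-training loop with a constraint-based filter. It is not the paper's actual pipeline: the Example format and the generate, satisfies_constraints and fine_tune functions are toy placeholders standing in for a fine-tuned pre-trained model with constrained decoding.

```python
# Minimal sketch of self-training with a constraint-based filter (toy stand-ins,
# not the paper's pipeline): fine-tune on a small seed set, pseudo-label
# unlabeled MRs, and keep only outputs that pass a semantic fidelity check.

from dataclasses import dataclass


@dataclass
class Example:
    mr: dict   # meaning representation, e.g. {"name": "Aromi", "food": "Chinese"}
    text: str  # reference or pseudo-labeled utterance


def generate(mr: dict) -> str:
    """Toy 'model': a template realization standing in for neural generation."""
    return ", ".join(f"{slot} is {value}" for slot, value in mr.items()) + "."


def satisfies_constraints(mr: dict, text: str) -> bool:
    """Semantic check: every slot value from the MR must appear in the output."""
    return all(str(value).lower() in text.lower() for value in mr.values())


def fine_tune(train_set: list) -> None:
    """Stand-in for fine-tuning the model on seed plus pseudo-labeled data."""
    print(f"fine-tuning on {len(train_set)} examples")


def self_train(seed: list, unlabeled_mrs: list, rounds: int = 2) -> list:
    train_set = list(seed)
    remaining = list(unlabeled_mrs)
    for _ in range(rounds):
        fine_tune(train_set)
        still_remaining = []
        for mr in remaining:
            text = generate(mr)
            # Only add pseudo-labeled pairs whose output passes the constraint check.
            if satisfies_constraints(mr, text):
                train_set.append(Example(mr, text))
            else:
                still_remaining.append(mr)
        remaining = still_remaining
    return train_set


if __name__ == "__main__":
    seed = [Example({"name": "Aromi", "food": "Chinese"}, "Aromi serves Chinese food.")]
    unlabeled = [{"name": "Bibimbap House", "area": "riverside"}]
    self_train(seed, unlabeled)
```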
Looking for another postdoc to work on coherence and correctness in neural NLG!
I’m looking to hire another postdoctoral scholar to work on a Facebook-sponsored project. The project’s aim is to investigate ways of adding linguistically informed structure to neural natural language generation (NNLG) models in order to enhance both the quality and controllability of NNLG in dialogue systems, with an eye towards improving discourse coherence and semantic correctness in particular. The scholar will have significant freedom to create a research agenda in this area, as well as opportunities to advise students and collaborate with other linguistics and AI faculty in the department and across the university. Please see the official job posting for further details.
Discourse relations matter for neural NLG
Our INLG 2020 paper experiments with reimplementing a classic rule-based NLG system with a neural model, finding that representing discourse relations is crucial for making accurate comparisons.
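To give a rough idea of what representing discourse relations can mean on the input side, here is a purely illustrative snippet contrasting a flat linearization of some facts with one that wraps them in an explicit RST-style relation. The bracketing scheme, relation names, and facts are hypothetical, not the paper's actual input format.

```python
# Illustrative only: two ways of linearizing the same content for a seq2seq
# generator, one without and one with explicit discourse relation tokens.
# The bracketing scheme and relation names here are invented.

facts = [
    ("statue", "depicts", "Athena"),
    ("statue", "made-of", "marble"),
]

# Flat linearization: just the facts, no discourse structure.
flat = " ; ".join(f"{s} {p} {o}" for s, p, o in facts)

# Relation-annotated linearization: an RST-style ELABORATION relation wraps
# the nucleus (first fact) and satellite (second fact).
structured = (
    "[ELABORATION "
    f"[NUCLEUS {facts[0][0]} {facts[0][1]} {facts[0][2]}] "
    f"[SATELLITE {facts[1][0]} {facts[1][1]} {facts[1][2]}]"
    "]"
)

print(flat)        # statue depicts Athena ; statue made-of marble
print(structured)  # [ELABORATION [NUCLEUS ...] [SATELLITE ...]]
```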
Outstanding Paper at COLING 2020!
Our paper on best practices for data efficient neural NLG won one of the outstanding paper awards for the COLING 2020 industry track!
DSNNLG 2019 a success!
Our INLG-19 workshop on Discourse Structure in Neural NLG was a success, with fantastic invited talks, thought-provoking discussion and interesting posters. Thanks to all who participated! The proceedings are now in the ACL Anthology.
Postdoc applications for Facebook-sponsored project on neural NLG open till August 18
I’m delighted to announce that my new project on structure in neural NLG is now sponsored by Facebook! Postdoctoral scholar applications are open until August 18.
Looking for postdocs to work on structure in neural NLG!
I’m looking to hire multiple postdoctoral scholars to work on a new project whose aim is to investigate ways of adding linguistically informed structure to neural natural language generation (NNLG) models in order to enhance both the quality and controllability of NNLG in dialogue systems. The scholars will have significant freedom to create research agendas in this area, as well as opportunities to advise students and collaborate with other linguistics and AI faculty in the department and across the university. The official job posting is not yet available, but informal enquiries may be sent by email in the meantime!
ACL-19 paper on constrained decoding (with discourse structure!) in neural NLG: Real progress in the battle to rein in neural generators?
Thrilled to note that our ACL-19 paper (also here) on using constrained decoding together with hierarchical discourse structure in neural NLG is now out! The paper shows that constrained decoding can help achieve more controllable and semantically correct output in task-oriented dialog with neural sequence-to-sequence models. While the battle to rein in neural generators continues, we’d like to think this work at least represents a quite successful skirmish!
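For the curious, here is a toy sketch of the general idea behind constrained decoding. It is not the algorithm from the paper: the vocabulary, scoring function, and specific constraints are invented stand-ins, but the pattern of pruning hypotheses that violate the input meaning representation and only accepting outputs that cover it is the key point.

```python
# Toy sketch of constrained decoding (not the paper's algorithm): during beam
# search, prune hypotheses that mention values absent from the input MR, and
# only accept finished hypotheses that cover all required values. The scorer
# and vocabulary are stand-ins for a neural seq2seq decoder.

VOCAB = ["Aromi", "serves", "Chinese", "Italian", "food", "."]
REQUIRED = {"Aromi", "Chinese"}   # values present in the input MR
FORBIDDEN = {"Italian"}           # values not licensed by the input MR


def score(prefix: list, word: str) -> float:
    """Toy log-probability standing in for the decoder; prefers shorter outputs."""
    return -0.1 * (len(prefix) + 1) - (0.5 if word == "." else 0.0)


def violates(prefix: list) -> bool:
    """Hard constraint checked during search: never mention unlicensed values."""
    return any(w in FORBIDDEN for w in prefix)


def complete_and_valid(prefix: list) -> bool:
    """A finished hypothesis must end with '.' and mention every required value."""
    return bool(prefix) and prefix[-1] == "." and REQUIRED <= set(prefix)


def constrained_beam_search(beam_size: int = 3, max_len: int = 6):
    beams = [(0.0, [])]
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, prefix in beams:
            for word in VOCAB:
                new = prefix + [word]
                if violates(new):
                    continue  # prune constraint violations as soon as they appear
                candidates.append((logp + score(prefix, word), new))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = []
        for logp, prefix in candidates:
            if complete_and_valid(prefix):
                finished.append((logp, prefix))
            elif len(beams) < beam_size:
                beams.append((logp, prefix))
        if not beams:
            break
    return max(finished, key=lambda c: c[0])[1] if finished else None


if __name__ == "__main__":
    # Prints a hypothesis covering all required values, e.g. ['Aromi', 'Chinese', '.']
    print(constrained_beam_search())
```

In a real system the score function would query the decoder's next-token distribution, and the constraints would be derived automatically from the input meaning representation.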
INLG-18 paper on LSTM hypertagging: Grammar-based realizers rock on?
Our paper on LSTM Hypertagging at INLG-18 shows that (partially) neuralizing a traditional grammar-based surface realizer can achieve substantial performance gains. One might have thought that by now end-to-end neural methods would’ve been shown to work best on the surface realization task. However, our (unpublished) attempts to train an attentional sequence-to-sequence model on the exact same OpenCCG inputs worked poorly. This is consistent with the poor performance observed by Marcheggiani and Perez-Beltrachini in their INLG-18 paper, where they experimented with both sequence-to-sequence and graph-to-sequence models on the related task of generating from the deep representations of the 2011 shared task on surface realization. Along with recent parsing results showing that grammar-based methods can outperform neural ones, this suggests that the highest quality outputs might still be obtained using grammar-based realizers. See the slides from our talk for further discussion and next steps.
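As a rough illustration of the “partially neuralized” idea, here is a toy PyTorch sketch of an LSTM hypertagger: a BiLSTM over linearized logical-form nodes that predicts a hypertag per node, which a grammar-based realizer could then use to prune its search space. The node vocabulary, tag inventory, and training data below are invented and much simpler than the OpenCCG features used in the paper.

```python
import torch
import torch.nn as nn

# Toy sketch of an LSTM hypertagger (not our actual OpenCCG setup): a BiLSTM
# over linearized logical-form nodes predicts one hypertag (lexical category
# class) per node. The node vocabulary, tag set, and data are invented.

NODE_VOCAB = {"<pad>": 0, "want": 1, "he": 2, "buy": 3, "book": 4}
TAG_VOCAB = {"<pad>": 0, "s\\np/(s\\np)": 1, "np": 2, "s\\np/np": 3}


class HyperTagger(nn.Module):
    def __init__(self, n_nodes, n_tags, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, node_ids):
        hidden_states, _ = self.lstm(self.emb(node_ids))
        return self.out(hidden_states)  # (batch, seq_len, n_tags) scores per node


# One invented training example: logical-form nodes with their hypertags.
nodes = torch.tensor([[1, 2, 3, 4]])  # want, he, buy, book
tags = torch.tensor([[1, 2, 3, 2]])

model = HyperTagger(len(NODE_VOCAB), len(TAG_VOCAB))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=0)

for step in range(50):  # overfit the single toy example
    scores = model(nodes)
    loss = loss_fn(scores.view(-1, len(TAG_VOCAB)), tags.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Predicted hypertag ids per node; a realizer would typically take n-best tags.
print(model(nodes).argmax(-1))
```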