Clippers 9/10: Michael White on Constrained Decoding in Neural NLG

Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue

(joint work with Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani and Rajen Subba)

Neural natural language generation (NNLG) arrived with much fanfare a few years ago and became the dominant approach in the recent E2E NLG Challenge. While neural methods promise flexible, end-to-end trainable models, recent studies have shown that they struggle to produce satisfactory output for longer or more complex texts, and that their black-box nature makes them difficult to control. In this talk, I will propose using tree-structured semantic representations, like those used in traditional rule-based NLG systems, for better discourse-level structuring and sentence-level planning. I will then introduce a constrained decoding approach for sequence-to-sequence models that leverages this representation to improve semantic correctness. Finally, I will demonstrate promising results on a new conversational weather dataset as well as on the E2E dataset and discuss remaining challenges.
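For the curious, here is a minimal sketch of what such a constraint might look like. It assumes the input meaning representation is linearized as a bracketed tree (e.g. "[INFORM [TEMP 72 ] ]"), and it masks the decoder's next-token choices so that the output stays well-formed and only opens non-terminals that appear in the input. The token names and the allowed_next_tokens helper are illustrative, not the talk's actual implementation.

```python
from typing import List, Set

def allowed_next_tokens(prefix: List[str],
                        input_nonterminals: Set[str],
                        vocab: Set[str]) -> Set[str]:
    """Restrict the decoder's next-token choices so that the output
    remains a well-formed tree whose non-terminals all come from the
    input meaning representation."""
    # Number of currently unclosed brackets in the hypothesis so far.
    open_brackets = sum(1 for t in prefix if t.startswith("[")) \
                    - prefix.count("]")
    allowed = set()
    for tok in vocab:
        if tok.startswith("["):
            # Only open non-terminals that appear in the input tree.
            if tok in input_nonterminals:
                allowed.add(tok)
        elif tok == "]":
            # Only close a bracket if one is open.
            if open_brackets > 0:
                allowed.add(tok)
        else:
            # Ordinary words must appear inside some constituent.
            if open_brackets > 0:
                allowed.add(tok)
    return allowed

if __name__ == "__main__":
    vocab = {"[INFORM", "[TEMP", "]", "it", "is", "72", "degrees"}
    input_nts = {"[INFORM", "[TEMP"}
    # Mid-hypothesis, the decoder may continue the sentence, open [TEMP,
    # or close the current constituent -- but not open an unseen tag.
    print(allowed_next_tokens(["[INFORM", "it", "is"], input_nts, vocab))
```

In an actual sequence-to-sequence decoder, the same check would be applied at each beam-search step by setting the logits of disallowed tokens to negative infinity before sampling or ranking hypotheses.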