Our paper on LSTM Hypertagging at INLG-18 shows that (partially) neuralizing a traditional grammar-based surface realizer can achieve substantial performance gains. One might have expected end-to-end neural methods to dominate the surface realization task by now. However, our (unpublished) attempts to train an attentional sequence-to-sequence model on the exact same OpenCCG inputs performed poorly. This is consistent with the poor performance reported by Marcheggiani and Perez-Beltrachini in their INLG-18 paper, where they experimented with both sequence-to-sequence and graph-to-sequence models on the related task of generating from the deep representations of the 2011 shared task on surface realization. Together with recent parsing results showing that grammar-based methods can outperform neural ones, this suggests that the highest-quality outputs may still be obtained with grammar-based realizers. See the slides from our talk for further discussion and next steps.
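For readers unfamiliar with hypertagging: it can be viewed as a sequence labeling step that predicts a CCG category for each elementary predication in the input logical form before the grammar-based realizer searches for a derivation. The toy sketch below illustrates the task shape only; it uses a most-frequent-tag baseline in place of an LSTM, and all predicate names, categories, and data are hypothetical.

```python
from collections import Counter, defaultdict

def train_hypertagger(corpus):
    """Learn, for each logical-form predicate, its most frequent
    CCG category in a (hypothetical) training corpus. A real
    hypertagger would score categories with an LSTM over the
    input; this frequency baseline just illustrates the task."""
    counts = defaultdict(Counter)
    for predicates, categories in corpus:
        for pred, cat in zip(predicates, categories):
            counts[pred][cat] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

def hypertag(model, predicates, fallback="np"):
    """Assign a CCG category to each predicate; unseen predicates
    get a fallback category (beam/multi-tagging omitted for brevity)."""
    return [model.get(p, fallback) for p in predicates]

# Hypothetical toy corpus: (predicates, gold CCG categories).
corpus = [
    (["he", "buy", "book"], ["np", r"(s\np)/np", "n"]),
    (["she", "buy", "car"], ["np", r"(s\np)/np", "n"]),
]
model = train_hypertagger(corpus)
print(hypertag(model, ["he", "buy", "guitar"]))
# → ['np', '(s\\np)/np', 'np']  ("guitar" is unseen, so it falls back)
```

In a full system the predicted categories prune the realizer's search space, which is where the speed and quality gains come from; multi-tagging (keeping several categories per predicate above a probability threshold) guards against tagger errors.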