Clippers 2/1: Moniba Keymanesh on Fairness-aware Summarization for Justified Decision-Making

Abstract: In consequential domains such as recidivism prediction, facility inspection, and benefit assignment, it's important for individuals to know the decision-relevant information behind a model's prediction. In addition, predictions should be fair both in the outcome and in the justification of that outcome. In this work, we focus on the problem of (un)fairness in the justifications of text-based neural models. We tie the explanatory power of the model to fairness in the outcome by using a multi-task neural model and an attribution mechanism based on integrated gradients to extract high-utility, low-bias justifications in the form of a summary.

In this talk, I will first introduce the notion of fairness in justification, then present a data-preprocessing approach based on summarization to detect and remove bias from textual data. Finally, I will share experimental results on food inspections and teaching evaluations.
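To make the attribution step concrete, here is a minimal sketch of integrated-gradients attribution (Sundararajan et al., 2017), the general technique the abstract refers to. The toy linear model and zero baseline are illustrative assumptions, not the speaker's actual system:

```python
# A minimal sketch of integrated gradients: interpolate from a baseline to the
# input, average the gradients along that path, and scale by the input difference.
import torch

def integrated_gradients(model, inputs, baseline, steps=50):
    """Approximate the IG path integral with a Riemann sum over `steps` points."""
    # Interpolation points between baseline x' and input x: x' + alpha * (x - x')
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * inputs.dim()))
    interpolated = (baseline + alphas * (inputs - baseline)).requires_grad_(True)
    model(interpolated).sum().backward()       # per-point gradients in one backward pass
    avg_grads = interpolated.grad.mean(dim=0)  # average gradient along the path
    return (inputs - baseline) * avg_grads     # scale by the input difference

# Toy example: attribute a random linear model's score to each input dimension.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
x = torch.randn(4)
print(integrated_gradients(model, x, baseline=torch.zeros(4)))
```

In the text setting described above, the analogous computation would run over token embeddings, with the most strongly attributed spans retained as the justification summary.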

Speaker Bio: Moniba Keymanesh received her B.Sc. degree in Software Engineering from Amirkabir University of Technology and her M.Sc. degree in Computer Science and Engineering from The Ohio State University. She is currently a Ph.D. candidate at the Data Mining Research Lab at The Ohio State University. Her work focuses on building controllable and explainable natural language processing models for low-resource domains. Her research has been published in venues such as COLING, NLLP, and Complex Networks and is funded by the National Institutes of Health and the National Science Foundation.

Clippers 1/25: Sandro Maskharashvili and Symon Stevens-Guille on generating discourse connectives with pre-trained models

Neural Methodius Revisited: Do Discourse Relations Help with Pre-Trained Models Too?

Aleksandre Maskharashvili, Symon Stevens-Guille, Xintong Li, Michael White

Recent developments in natural language generation (NLG) have bolstered arguments in favor of re-introducing explicit coding of discourse relations in the input to neural models. In the Methodius corpus, a meaning representation (MR) is hierarchically structured and includes discourse relations. Meanwhile, pre-trained language models have been shown to implicitly encode rich linguistic knowledge, which makes them an excellent resource for NLG. Synthesizing these lines of research, we conduct extensive experiments on the benefits of using pre-trained models and discourse relation information in MRs, focusing on improvements in discourse coherence and correctness. We redesign the Methodius corpus and also construct a variant in which MRs are flat rather than hierarchically structured. We report experiments on different versions of the corpora, which probe when, where, and how pre-trained models benefit from MRs that include discourse relation information. We conclude that discourse relations significantly improve NLG when data is limited.
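To illustrate the input contrast at stake, here is a minimal sketch of linearizing a hierarchically structured MR with discourse relations versus a flat MR without them. The relation names and fact notation are hypothetical stand-ins, not the actual Methodius corpus format:

```python
# A minimal sketch: flatten a nested (relation, left, right) MR into a bracketed
# token string that a pre-trained seq2seq model can consume as input.
from typing import Union

MR = Union[str, tuple]  # a fact string, or (relation, left_subtree, right_subtree)

def linearize(mr: MR) -> str:
    """Recursively flatten a hierarchical MR into a bracketed string."""
    if isinstance(mr, str):
        return mr
    relation, left, right = mr
    return f"[{relation} {linearize(left)} {linearize(right)}]"

# Hierarchical MR with explicit discourse relations...
hierarchical = ("CONTRAST",
                "amphora(made-of=clay)",
                ("ELABORATION", "exhibit(period=archaic)", "exhibit(origin=athens)"))
print(linearize(hierarchical))
# -> [CONTRAST amphora(made-of=clay) [ELABORATION exhibit(period=archaic) exhibit(origin=athens)]]

# ...versus a flat variant that drops the hierarchy, mirroring the flat-corpus condition.
flat = "amphora(made-of=clay) exhibit(period=archaic) exhibit(origin=athens)"
```

Either string can then be fed to a pre-trained seq2seq model; the experiments probe how much the bracketing and relation tokens help the model produce coherent, correct discourse.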

Clippers 1/18: Micha Elsner on neural inflection and rating with analogical candidates

Neural inflection and rating with analogical candidates (joint work with Andrea Sims)

Abstract: Recent research on computational inflection prediction leads to a frustrating quandary. On the one hand, neural sequence-to-sequence models (Kann and Schuetze, 2016) provide steadily improving state-of-the-art performance in predicting the inflectional forms of real words, outperforming a variety of non-neural models proposed in previous work (Nicolai et al., 2016). On the other, a series of experiments reveal their inadequacy in predicting the acceptability ratings of “wug” nonce words (Corkery et al., 2019). Like other neural models, these systems sometimes learn brittle generalizations that differ from human cognition and fail badly on out-of-sample data (Dankers et al., 2021). We present a neural system that aims to obtain the best of both worlds: state-of-the-art inflection prediction performance and the ability to rate a wide variety of plausible forms for a given input in a human-like way. We show that, unlike many pre-neural models, the system is capable of generalizing across classes of related inflectional changes, leading to new testable hypotheses about the mental representation of inflectional paradigms.
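To give a flavor of the analogical-candidate idea, here is a minimal sketch: every suffix change attested in known (lemma, past-tense) pairs is applied to a novel “wug” word to produce candidate forms, which a neural rater would then score. The tiny lexicon and the longest-common-prefix rule extraction are illustrative assumptions, not the authors' actual model:

```python
# A minimal sketch of generating analogical candidate inflections for a nonce word.
def extract_rule(lemma: str, form: str) -> tuple[str, str]:
    """Return the (lemma_suffix -> form_suffix) change after the longest common prefix."""
    i = 0
    while i < min(len(lemma), len(form)) and lemma[i] == form[i]:
        i += 1
    return lemma[i:], form[i:]

def analogical_candidates(word: str, paradigms: list[tuple[str, str]]) -> set[str]:
    """Apply every attested suffix change whose left side matches the nonce word."""
    candidates = set()
    for lemma, form in paradigms:
        old, new = extract_rule(lemma, form)
        if word.endswith(old):
            candidates.add(word[: len(word) - len(old)] + new)
    return candidates

english_pasts = [("walk", "walked"), ("sing", "sang"), ("bring", "brought")]
print(analogical_candidates("spling", english_pasts))
# e.g. {'splinged', 'splang', 'splought'}; a rating model then scores each candidate
```

Scoring the full candidate set, rather than emitting a single output form, is what allows a system of this general shape to produce graded, human-like acceptability ratings over plausible alternatives.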