Clippers 2/1: Moniba Keymanesh on Fairness-aware Summarization for Justified Decision-Making

Abstract: In consequential domains such as recidivism prediction, facility inspection, and benefit assignment, it is important for individuals to know which information a model relied on to reach its prediction. In addition, predictions should be fair both in their outcomes and in the justifications of those outcomes. In this work, we focus on the problem of (un)fairness in the justifications of text-based neural models. We tie the explanatory power of the model to fairness in the outcome by using a multi-task neural model and an attribution mechanism based on integrated gradients to extract high-utility, low-bias justifications in the form of a summary.
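
To give a feel for the attribution step mentioned in the abstract, here is a minimal sketch of integrated gradients for a text classifier, not the authors' implementation: it assumes a hypothetical model that takes a batch of token embeddings of shape (steps, seq_len, dim) and returns class logits, and the step count and zero baseline are illustrative choices.

```python
import torch

def integrated_gradients(model, embeddings, baseline, target_class, steps=50):
    """Approximate integrated-gradients attributions for a classifier that
    maps token embeddings of shape (seq_len, dim) to class logits."""
    # Interpolation coefficients between the baseline (e.g., all-zero
    # embeddings) and the actual input, one per integration step.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1)
    path = (baseline + alphas * (embeddings - baseline)).detach()
    path.requires_grad_(True)                      # (steps, seq_len, dim)

    # Gradient of the target-class logit at every point on the path.
    logits = model(path)                           # (steps, num_classes)
    logits[:, target_class].sum().backward()

    # Riemann approximation of the path integral: average gradient along
    # the path, scaled by the difference between input and baseline.
    avg_grads = path.grad.mean(dim=0)              # (seq_len, dim)
    attributions = (embeddings - baseline) * avg_grads

    # Collapse the embedding dimension to get one score per token.
    return attributions.sum(dim=-1)                # (seq_len,)
```

Per-token scores of this kind can then be aggregated over sentences to rank candidate justifications for inclusion in a summary.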

In this talk, I will first introduce the notion of fairness in justification and present a summarization-based data-preprocessing approach for detecting and removing bias from textual data. I will then share experimental results on food inspection and teaching evaluation datasets.
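
As a rough, hypothetical illustration of how such preprocessing could work, and not the method presented in the talk, the sketch below assumes a multi-task model has already produced per-sentence attribution scores toward the decision label and toward a protected attribute, and keeps high-utility, low-bias sentences as the summary; the threshold and budget are made-up parameters.

```python
def select_justification(sentences, task_scores, bias_scores,
                         bias_threshold=0.1, budget=5):
    """Pick sentences that strongly support the decision label while
    contributing little to predicting the protected attribute.

    sentences   : list of sentence strings
    task_scores : attribution toward the decision label, one per sentence
    bias_scores : attribution toward the protected attribute, one per sentence
    """
    # Drop sentences whose attribution toward the protected attribute
    # exceeds the threshold, then rank the remainder by task attribution.
    candidates = [
        (score, sent)
        for sent, score, bias in zip(sentences, task_scores, bias_scores)
        if abs(bias) <= bias_threshold
    ]
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in candidates[:budget]]
```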

Speaker Bio: Moniba Keymanesh received her B.Sc. in Software Engineering from Amirkabir University of Technology and her M.Sc. in Computer Science and Engineering from The Ohio State University, where she is currently a Ph.D. candidate in the Data Mining Research Lab. Her work focuses on building controllable and explainable natural language processing models for low-resource domains. Her research has been published in venues such as COLING, NLLP, and Complex Networks and is funded by the National Institutes of Health and the National Science Foundation.