Clippers 10/25: Lingbo Mo on Controllable Decontextualization

Yes/No or polar questions represent one of the main linguistic question categories. They consist of a main interrogative clause, for which the answer is binary (assertion or negation). Polar questions and answers (PQA) represent a valuable knowledge resource, present in many community and other curated QA sources such as forums and e-commerce applications. Using answers to polar questions alone in other contexts is not trivial: answers are contextualized, and presume that the interrogative question clause and any shared knowledge between the asker and answerer are provided. We address the problem of controllable rewriting of answers to polar questions into decontextualized and succinct factual statements. We propose a Transformer sequence-to-sequence model that utilizes soft constraints to ensure controllable rewriting, such that the output statement is semantically equivalent to its PQA input. We evaluate on three separate PQA datasets, using both automated and human evaluation metrics, and show the effectiveness of our proposed approach compared with existing baselines.
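
A minimal sketch of this kind of controllable rewriting, assuming a T5-style seq2seq model fine-tuned for the task; the `<polarity=...>` control token is an illustrative stand-in for the paper's soft constraints, not the authors' actual mechanism.

```python
# Hypothetical sketch of controllable PQA rewriting with a seq2seq model.
# Assumption: a T5-style checkpoint fine-tuned for this task; the control
# token below is illustrative, not the paper's actual constraint scheme.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-base"  # placeholder; a task-fine-tuned checkpoint is assumed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Does this camera come with a battery charger?"
answer = "Yes, it's included in the box."

# The soft constraint is expressed here as a control token in the input.
source = f"rewrite: <polarity=assert> question: {question} answer: {answer}"
inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Intended output after fine-tuning, e.g.:
# "This camera comes with a battery charger included in the box."
```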

Clippers 10/18: Amad Hussain on Data Augmentation using Paraphrase Generation and Mix-Up

Amad Hussain and Henry Leonardi

Abstract: Low-resource dialogue systems often contain a high proportion of few-shot class labels, leading to challenges in utterance classification performance. A potential solution is data augmentation through paraphrase generation, but this method has the potential to introduce harmful data points in the form of low-quality paraphrases. We explore this challenge as a case study using a virtual patient dialogue system, which contains a long-tail distribution of few-shot labels. We investigate the efficacy of paraphrase augmentation through Neural Example Extrapolation (Ex2) using both in-domain and out-of-domain data, as well as the effects of paraphrase validation techniques using Natural Language Inference (NLI) and reconstruction methods. These data augmentation techniques are validated by training and evaluating a downstream self-attentive RNN model with and without MixUp. Initial results indicate that paraphrase augmentation improves downstream model performance, though with less benefit than augmenting with MixUp. Furthermore, we show mixed results for paraphrase augmentation in combination with MixUp, as well as for the efficacy of paraphrase validation. These results indicate a trade-off between the reduction of misleading paraphrases and paraphrase diversity. In accordance with these initial findings, we identify promising areas of future work that have the potential to address this trade-off and better leverage paraphrase augmentation, especially in coordination with MixUp. As this is a work in progress, we hope to have a productive conversation regarding the feasibility of our future directions as well as any larger limitations or directions we should consider.
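
For readers unfamiliar with the MixUp component, here is a minimal sketch (after Zhang et al., 2018) that takes convex combinations of inputs and one-hot labels; the shapes and class counts are invented for illustration, and the virtual patient model itself is not reproduced.

```python
# Minimal MixUp sketch: mix a batch of inputs and one-hot labels with a
# Beta-sampled interpolation weight. Shapes are invented for illustration.
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Mix a batch (x, y) with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

# Usage: x holds utterance representations, y one-hot class labels.
x = torch.randn(8, 128)                        # 8 utterances, 128-dim features
y = torch.eye(10)[torch.randint(0, 10, (8,))]  # 10 classes, one-hot
x_mix, y_mix = mixup(x, y)
# Train the classifier on (x_mix, y_mix) with a soft-label cross-entropy loss.
```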

Clippers 10/11: Willy Cheung on Targeted Linguistic Evaluation of Cataphora

Due to their state-of-the-art performance on natural language processing tasks, large neural language models have garnered significant interest of late. To better understand their linguistic abilities, linguistics researchers have used the targeted linguistic evaluation paradigm to test neural models in a more linguistically controlled manner. Following this line of work, I am interested in investigating how neural models handle cataphora, i.e., when a pronoun precedes what it refers to (e.g., when [he] gets to work, [John] likes to drink a cup of coffee). I will present work that applies stimuli from existing cataphora studies, running GPT-2 on them and comparing its results to the experimental data. A number of issues arise in comparing to existing studies, motivating a new study to collect data better suited to testing neural models. I show the setup for my pilot experiment and some preliminary results, and I end with some ideas for future directions of this work.
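
Targeted evaluation of this kind typically compares model surprisal across stimulus conditions. Below is a generic recipe for per-token GPT-2 surprisal using HuggingFace transformers, with an invented stimulus sentence; this is a standard approach, not necessarily the exact pipeline from the talk.

```python
# Generic per-token surprisal from GPT-2: the usual dependent measure in
# targeted linguistic evaluation. The stimulus sentence is invented.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "When he gets to work, John likes to drink a cup of coffee."
ids = tokenizer(sentence, return_tensors="pt").input_ids
with torch.no_grad():
    log_probs = torch.log_softmax(model(ids).logits, dim=-1)

# Surprisal of token t given its left context: -log2 P(token_t | context).
for t in range(1, ids.size(1)):
    surprisal = -log_probs[0, t - 1, ids[0, t]].item() / math.log(2)
    print(f"{tokenizer.decode(ids[0, t])!r}: {surprisal:.2f} bits")
```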

Clippers 10/4: Andy Goodhart on Sentiment Analysis for Multi-Label Classification

Title: Perils of Legitimacy: How Legitimation Strategies Sow the Seeds of Failure in International Order

Abstract: Autocratic states are challenging U.S. power and the terms of the post-WWII security order. U.S. policy debates have focused on specific military and economic responses that might preserve the United States’ favorable position, while largely taking for granted that the effort should be organized around a core of like-minded liberal states. I treat this U.S. emphasis on promoting a liberal narrative of international order as an effort to make U.S. hegemony acceptable to domestic and foreign audiences; it is a strategy to legitimate a U.S.-led international hierarchy and mobilize political cooperation. Framing legitimacy in liberal terms is only one option, however. Dominant states have used a range of legitimation strategies, each presenting unique advantages and disadvantages. The main choice these hierarchs face is whether to emphasize the order’s ability to solve problems or to advocate for a governing ideology like liberalism. This project aims to explain why leading states in the international system choose performance- or ideologically-based legitimation strategies, and the advantages and disadvantages of each.

This research applies sentiment analysis techniques (designed to characterize text based on positive or negative language) to the multi-label classification of foreign policy texts. The goal is to take a corpus of foreign policy speeches and documents that include rhetoric intended to justify an empire’s or hegemon’s international behavior and build a dataset that shows variation in this rhetoric over time. Custom dictionaries reflect the vocabulary used by each hierarch to articulate its value proposition to subordinate political actors. The output of the model is the percentage of each text committed to performance- and ideologically-based legitimation strategies. Using sentiment analysis for document classification offers an advantage over supervised machine learning techniques because it does not require the time-consuming step of creating training sets. It is also better suited to multi-label classification, in which each document belongs to multiple categories; supervised machine learning techniques are better suited to texts that are either homogeneous in their category (e.g., a press release is either about health care or about foreign policy) or easily divided into sections that belong to homogeneous categories.
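
A hypothetical sketch of the dictionary-based scoring described above: the share of a text’s tokens that match each legitimation dictionary. The toy dictionaries here stand in for the author’s custom ones.

```python
# Hypothetical sketch of dictionary-based multi-label scoring. The toy
# dictionaries below stand in for the author's custom vocabularies.
import re

DICTIONARIES = {
    "performance": {"prosperity", "stability", "security", "growth"},
    "ideological": {"liberty", "democracy", "rights", "freedom"},
}

def legitimation_shares(text: str) -> dict:
    """Return, per strategy, the fraction of tokens found in its dictionary."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1  # avoid division by zero on empty texts
    return {label: sum(tok in vocab for tok in tokens) / total
            for label, vocab in DICTIONARIES.items()}

speech = "Our order delivers security and prosperity, and defends liberty."
print(legitimation_shares(speech))
# {'performance': 0.222..., 'ideological': 0.111...}  (2 and 1 of 9 tokens)
```

Because each text receives a score for every dictionary rather than a single label, overlapping categories within one document fall out of the method naturally.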