Clippers 12/3: David King on BERT for Detecting Paraphrase Context Comparability

Existing paraphrasing resources such as WordNet and the PPDB provide patterns for easily producing paraphrases, but they cannot fully account for the contexts in which those patterns are applied. However, words and phrases that are substitutable in one context may not be in another. In this work, we investigate whether BERT’s contextualized word embeddings can be used to predict whether a candidate paraphrase is acceptable, by comparing the context in which the paraphrase would be applied against the context from which the paraphrase rule was extracted. The setting for our investigation is automatic paraphrase generation for data augmentation in a question-answering dialogue system. To combat data sparsity, we generate paraphrases by aligning known paraphrase pairs, extracting substitution patterns, and applying those patterns to new sentences. We show that BERT can be used to better identify paraphrases judged acceptable by humans. We then use those paraphrases in our downstream dialogue system and hope to show improved accuracy in identifying sparse labels.
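
To make the generation step concrete, here is a minimal sketch of extracting a substitution pattern from one aligned paraphrase pair and applying it to a new sentence. This is an illustrative assumption, not the talk's actual pipeline: difflib stands in for the alignment step, and the helper names (extract_patterns, apply_patterns) are our own.

```python
# Sketch: derive a substitution pattern from an aligned paraphrase pair,
# then apply it to a new sentence to generate a candidate paraphrase.
# difflib stands in for a real alignment model (an assumption).

from difflib import SequenceMatcher

def extract_patterns(src: str, tgt: str) -> list[tuple[str, str]]:
    """Return (lhs, rhs) substitution patterns from one aligned pair."""
    a, b = src.split(), tgt.split()
    ops = SequenceMatcher(a=a, b=b).get_opcodes()
    return [(" ".join(a[i1:i2]), " ".join(b[j1:j2]))
            for tag, i1, i2, j1, j2 in ops if tag == "replace"]

def apply_patterns(sentence: str, patterns: list[tuple[str, str]]) -> list[str]:
    """Generate candidates by applying each matching pattern.
    Naive substring replacement; token-level matching would be safer."""
    return [sentence.replace(lhs, rhs)
            for lhs, rhs in patterns if lhs in sentence]

patterns = extract_patterns("how do I reset my password",
                            "how can I reset my password")
print(patterns)                                   # [('do', 'can')]
print(apply_patterns("how do I change my email", patterns))
# ['how can I change my email']
```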
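And here is a hedged sketch of the acceptability check itself: embed the shared phrase with BERT in both contexts (the sentence the rule was extracted from and the sentence it would be applied to) and compare the contextualized vectors by cosine similarity. The HuggingFace transformers library, the bert-base-uncased model, mean-pooling over subword tokens, and the phrase_embedding / context_comparability helpers are all assumptions for illustration, not details confirmed by the talk.

```python
# Sketch: score a candidate substitution by comparing BERT's contextualized
# representation of the shared phrase in two different contexts.

import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def phrase_embedding(sentence: str, phrase: str) -> torch.Tensor:
    """Mean-pool BERT's final-layer vectors over the phrase's subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    # Keep subword tokens whose character span falls inside the phrase;
    # special tokens have empty (s == e) offsets and are skipped.
    start = sentence.index(phrase)
    end = start + len(phrase)
    mask = [(s >= start and e <= end and s != e) for s, e in offsets.tolist()]
    return hidden[torch.tensor(mask)].mean(dim=0)

def context_comparability(rule_sentence: str, new_sentence: str,
                          phrase: str) -> float:
    """Cosine similarity between the phrase's embeddings in the two contexts.
    A higher score suggests the contexts license the same substitution."""
    a = phrase_embedding(rule_sentence, phrase)
    b = phrase_embedding(new_sentence, phrase)
    return torch.cosine_similarity(a, b, dim=0).item()

# Example: is "run" used comparably in both contexts?
score = context_comparability(
    "The program will run on any machine.",
    "She went for a run this morning.",
    "run",
)
print(f"context comparability: {score:.3f}")  # low score -> risky substitution
```

A candidate paraphrase would then be kept or discarded by thresholding this score, with the threshold tuned against human acceptability judgments.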