[AI Seminar] 2/27 Ziyu Yao on Learning a Semantic Parser from User Interaction

Speaker: Ziyu Yao
Time: Thurs 02/27/2020, 4pm-5pm
Location: Dreese Lab 480

Title: Learning a Semantic Parser from User Interaction

Training a machine learning model usually requires extensive supervision. Particularly for semantic parsers, which aim to convert a natural language utterance into a domain-specific meaning representation (e.g., a SQL query), large-scale annotations from domain experts can be very costly. In our ongoing work, we study continually training a deployed semantic parser from end-user feedback, allowing the system to better harness the vast store of potential training signals over its lifetime and to adapt itself to practical user needs. To this end, we present the first interactive system that proactively requests intermediate, fine-grained feedback from user interaction and improves itself via an annotation-efficient imitation learning algorithm. On two text-to-SQL benchmark datasets, we first demonstrate that our system can continually improve a semantic parser simply by leveraging interaction feedback from non-expert users. Compared with existing feedback-based online learning approaches, our system enables more efficient learning, i.e., it improves a parser's performance with fewer user annotations. Finally, we present a theoretical analysis of the annotation-efficiency advantage of our algorithm.
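The interaction loop described above can be sketched in a few lines. This is a toy illustration, not the authors' system: the "parser" is a keyword lexicon, and `interactive_update`, `ask_user`, and the lexicon entries are all hypothetical names standing in for a neural parser, a feedback interface, and learned parameters.

```python
# Hypothetical sketch of the interactive learning loop: parse, request
# fine-grained feedback only on uncertain fragments, fold corrections
# back into the model as new training signal.

def parse(utterance, lexicon):
    """Toy text-to-SQL 'parser': map known words to SQL fragments,
    flagging unknown words as uncertain."""
    fragments, uncertain = [], []
    for word in utterance.lower().split():
        if word in lexicon:
            fragments.append(lexicon[word])
        else:
            uncertain.append(word)
    return " ".join(fragments), uncertain

def interactive_update(utterance, lexicon, ask_user):
    """One interaction round: ask the user only about uncertain words
    (annotation-efficient), then re-parse with the updated model."""
    _, uncertain = parse(utterance, lexicon)
    for word in uncertain:
        correction = ask_user(word)  # e.g., a non-expert user's answer
        if correction is not None:
            lexicon[word] = correction
    return parse(utterance, lexicon)[0]

# Usage: a stub 'user' supplies the one missing mapping.
lexicon = {"show": "SELECT", "employees": "* FROM employees"}
answers = {"salary": "WHERE salary > 50000"}
sql = interactive_update("show employees salary", lexicon, answers.get)
print(sql)  # SELECT * FROM employees WHERE salary > 50000
```

The point of the sketch is the economics: feedback is solicited only where the parser is uncertain, so each annotation the user provides directly repairs a gap in the model.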

Bio: Ziyu Yao is a fifth-year Ph.D. student in the CSE department, advised by Prof. Huan Sun. Her current research interests include building interactive and interpretable natural language interfaces, as well as general applications of deep learning and reinforcement learning to interdisciplinary domains. She has published papers at ACL, EMNLP, WWW, and AAAI, and was a research intern at Microsoft Research, Redmond.

[AI Seminar] 2/4 Yang Zhong on Discourse Level Factors for Sentence Deletion in Text Simplification

Speaker: Yang Zhong
Time: TUESDAY 02/04/2020, 4pm-5pm
Location: Dreese Lab 480

Discourse Level Factors for Sentence Deletion in Text Simplification
Yang Zhong, Chao Jiang, Wei Xu, Junyi Jessy Li
AAAI 2020

Title: Discourse Level Factors for Sentence Deletion in Text Simplification

Abstract: In this talk, I will present our paper accepted at AAAI 2020. We conduct a data-driven study that analyzes and predicts sentence deletion, a prevalent but understudied phenomenon in document simplification, on a large English text simplification corpus. Using a new manually annotated sentence alignment corpus we collected, we inspect various discourse-level factors associated with sentence deletion. We reveal that professional editors utilize different strategies to meet the readability standards of elementary and middle schools. To predict whether a sentence will be deleted during simplification to a certain level, we harness automatically aligned data to train a classification model. We find that discourse-level factors contribute to the challenging task of predicting sentence deletion for simplification.
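The prediction task above can be framed as binary classification over discourse-level features. The sketch below is a minimal illustration under stated assumptions: the feature names (`position`, `is_elaboration`, `length`) and the plain perceptron are stand-ins chosen for brevity, not the paper's actual feature set or model.

```python
# Hedged sketch: predict whether a sentence is deleted during
# simplification from a few toy discourse-level features.

def features(info):
    """Illustrative discourse-level features for one sentence."""
    return [
        info["position"],                         # relative position in document
        1.0 if info["is_elaboration"] else 0.0,   # elaborates a prior sentence?
        info["length"] / 50.0,                    # length in words, scaled
    ]

def train_perceptron(data, epochs=20, lr=0.1):
    """Plain perceptron over the feature vectors; stands in for the
    classifier trained on automatically aligned data."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for info, deleted in data:
            x = features(info)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = deleted - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, info):
    x = features(info)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Tiny synthetic training set (labels: 1 = deleted during simplification).
data = [
    ({"position": 0.1, "is_elaboration": False, "length": 12}, 0),
    ({"position": 0.9, "is_elaboration": True,  "length": 40}, 1),
    ({"position": 0.2, "is_elaboration": False, "length": 15}, 0),
    ({"position": 0.8, "is_elaboration": True,  "length": 35}, 1),
]
w, b = train_perceptron(data)
```

On this synthetic data the classifier separates the two classes; the paper's finding is that real discourse-level signals of this kind carry predictive value for the deletion decision.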

Bio: Yang Zhong is a first-year Ph.D. student in the Department of Computer Science and Engineering, advised by Prof. Wei Xu. His research mainly focuses on the stylistic variation of language, as well as document-level text simplification.