[AI Seminar] 2/27 Ziyu Yao on Learning a Semantic Parser from User Interaction

Speaker: Ziyu Yao
Time: Thurs 02/27/2020, 4pm-5pm
Location: Dreese Lab 480

Title: Learning a Semantic Parser from User Interaction

Abstract:
Training a machine learning model usually requires extensive supervision. In particular, for semantic parsers that convert a natural language utterance into a domain-specific meaning representation (e.g., a SQL query), large-scale annotations from domain experts can be very costly. In our ongoing work, we study continually training a deployed semantic parser from end-user feedback, allowing the system to better harness the vast store of potential training signals over its lifetime and adapt itself to practical user needs. To this end, we present the first interactive system that proactively requests intermediate, fine-grained feedback during user interaction and improves itself via an annotation-efficient imitation learning algorithm. On two text-to-SQL benchmark datasets, we first demonstrate that our system can continually improve a semantic parser simply by leveraging interaction feedback from non-expert users. Compared with existing feedback-based online learning approaches, our system enables more efficient learning, i.e., it enhances the parser's performance with fewer user annotations. Finally, we present a theoretical analysis of the annotation efficiency advantage of our algorithm.
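
For intuition, here is a minimal, self-contained sketch of an interaction-driven learning loop of the kind described above (in the spirit of DAgger-style imitation learning). Everything in it — ToyParser, simulate_user_feedback, the step-level candidate list — is an illustrative stub, not the authors' system; a real text-to-SQL parser would predict SQL clauses step by step and solicit feedback from actual users.

```python
# Hypothetical sketch: a parser predicts a parse one decision at a time,
# a (simulated) user confirms or corrects each intermediate decision, and
# the approved decisions become new training signal. Illustrative only.

class ToyParser:
    def __init__(self):
        self.memory = {}  # (utterance, step) -> preferred decision

    def predict_step(self, utterance, step, candidates):
        # Fall back to the first candidate if nothing has been learned yet.
        return self.memory.get((utterance, step), candidates[0])

    def update(self, examples):
        # "Training" here is just memorizing user-approved decisions.
        for utterance, step, decision in examples:
            self.memory[(utterance, step)] = decision


def simulate_user_feedback(decision, gold):
    """Stand-in for a real user validating one intermediate parse step."""
    return decision == gold


def interactive_episode(parser, utterance, gold_steps, candidates):
    """Parse step by step, asking the user to confirm each decision.
    Collect the approved/corrected decisions as training examples."""
    collected = []
    for step, gold in enumerate(gold_steps):
        pred = parser.predict_step(utterance, step, candidates)
        if not simulate_user_feedback(pred, gold):
            pred = gold  # the user supplies the correction
        collected.append((utterance, step, pred))
    return collected


if __name__ == "__main__":
    parser = ToyParser()
    candidates = ["SELECT", "name", "FROM", "users", "WHERE", "age"]
    data = [("list user names", ["SELECT", "name", "FROM", "users"])]
    for _ in range(2):  # a few online-learning rounds
        for utterance, gold_steps in data:
            parser.update(interactive_episode(parser, utterance,
                                              gold_steps, candidates))
    print(parser.memory)
```

The key property the toy loop illustrates is that supervision arrives at the level of individual parse decisions rather than whole annotated queries, which is what makes the feedback cheap enough for non-expert users.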

Bio: Ziyu Yao is a fifth-year Ph.D. student in the CSE department, advised by Prof. Huan Sun. Her current research interests include building interactive and interpretable natural language interfaces, as well as general applications of deep learning and reinforcement learning to interdisciplinary domains. She has published papers at ACL, EMNLP, WWW, and AAAI, and was a research intern at Microsoft Research, Redmond.

[AI Seminar] 1/16 Ekim Yurtsever on Holistic Risk Perception for Automated Driving using Spatiotemporal Networks

Speaker: Ekim Yurtsever, Dept. of ECE
Time: Thurs 01/16/2020, 4pm-5pm
Location: Dreese Lab 480

Title: Holistic Risk Perception for Automated Driving using Spatiotemporal Networks

Abstract: Recently, increased public interest and market potential have precipitated the emergence of self-driving platforms with varying degrees of automation. However, robust, fully automated driving in urban scenes has not yet been achieved.

In this talk, I will introduce a new concept called ‘Holistic Risk Perception’ to alleviate the shortcomings of conventional risk assessment approaches. Holistic risk perception can be summarized as quantifying uncertainties and inferring the risk level of the driving scene.

I will then present a supervised deep learning framework to realize this goal. The proposed method first applies semantic segmentation to individual video frames with a pre-trained model. Frames overlaid with the resulting masks are then fed into a time-distributed CNN-LSTM network with a final softmax classification layer. This network was trained on a semi-naturalistic driving dataset with annotated risk labels. A comprehensive comparison of state-of-the-art pre-trained feature extractors was carried out to find the best network layout and training strategy. The best result, an AUC score of 0.937, was obtained with the proposed framework. The code and trained models are available as open source.
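
To make the pipeline concrete, below is a minimal sketch of a time-distributed CNN-LSTM classifier of this general shape in tf.keras. The clip length, frame resolution, MobileNetV2 backbone, and two-class output are illustrative assumptions, not the architecture from the talk; the semantic segmentation and mask-overlay steps are assumed to happen upstream of this model.

```python
# Minimal sketch of a time-distributed CNN-LSTM risk classifier.
# Shapes and the backbone choice are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed clip geometry: 16 frames per snippet at 224x224 RGB.
SEQ_LEN, H, W, C = 16, 224, 224, 3

# Per-frame feature extractor: a pre-trained CNN backbone, kept frozen.
# MobileNetV2 is an arbitrary stand-in; the talk compares several
# state-of-the-art extractors to find the best layout.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(H, W, C))
backbone.trainable = False

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, C)),  # frames overlaid with masks
    layers.TimeDistributed(backbone),         # same CNN applied per frame
    layers.LSTM(128),                         # temporal aggregation
    layers.Dense(2, activation="softmax"),    # risky vs. safe (softmax head)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The TimeDistributed wrapper applies the same frozen CNN to every frame in the clip, so the LSTM only has to model how per-frame features evolve over time; the reported 0.937 AUC comes from the talk's own evaluation and is not reproduced by this sketch.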

Bio: Ekim Yurtsever received his Ph.D. in Information Science from Nagoya University in 2019 and is currently a postdoctoral researcher at the Department of Electrical and Computer Engineering, Ohio State University. His research interests include artificial intelligence, machine learning, and computer vision, with a current focus on machine learning and computer vision problems in the intelligent vehicle domain.