Clippers 3/31: Nanjiang Jiang on BERT for Event Factuality

BERT is state-of-the-art for event factuality, but still fails on pragmatics

Nanjiang Jiang

Event factuality prediction is the task of predicting whether an event described in a text is factual or not. It is a complex semantic phenomenon that is important for various downstream NLP tasks, e.g., information extraction. For example, in "Trump thinks he knows better than the doctors about coronavirus," it is crucial that an information extraction system identify that "Trump knows better than the doctors about coronavirus" is nonfactual. Although BERT has boosted performance on various natural language understanding tasks, its application to event factuality has been limited to the setup of natural language inference. In this paper, we investigate how well BERT performs on seven event factuality datasets. We find that although BERT obtains new state-of-the-art performance on four existing datasets, it does so by exploiting common surface patterns that correlate with certain factuality labels, and it fails on instances where pragmatic reasoning overrides those patterns. Contrary to what the high performance suggests, we are still far from having a robust system for event factuality prediction.
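The kind of surface-pattern shortcut the abstract describes can be illustrated with a toy lexical-cue baseline. This is a hypothetical sketch, not the paper's model: the cue lists, the scoring rule, and the [-3, 3] scale (a common convention in event factuality datasets) are all illustrative assumptions.

```python
# Toy lexical-cue baseline for event factuality (illustrative only;
# the cue lists and scoring rule are hypothetical, not from the paper).
NONFACTUAL_CUES = {"thinks", "believes", "might", "may", "hopes", "wants"}
NEGATION_CUES = {"not", "never", "no"}

def factuality_score(sentence: str) -> float:
    """Return a crude factuality score in [-3, 3], where 3 means
    fully factual and -3 fully nonfactual."""
    tokens = sentence.lower().split()
    if any(tok in NONFACTUAL_CUES for tok in tokens):
        return -3.0  # event embedded under a belief/modal predicate
    if any(tok in NEGATION_CUES for tok in tokens):
        return -3.0  # negated event
    return 3.0       # default: asserted, hence factual
```

A baseline like this gets the "thinks" example right purely from the surface cue, but it has no mechanism for the pragmatic reasoning cases where such cues are overridden, which is exactly the failure mode the abstract attributes to BERT.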

Clippers 3/3: Evan Jaffe on Joint Coreference and Parsing

Models of human sentence processing effort tend to focus on costs
associated with retrieving structures and discourse referents from
memory (memory-based) and/or on costs associated with anticipating
upcoming words and structures based on contextual cues
(expectation-based) (Levy 2008).
Although evidence suggests that expectation and memory may play
separable roles in language comprehension (Levy et al. 2013), theories of
coreference processing have largely focused on memory: how comprehenders
identify likely referents of linguistic expressions.
In this study, we hypothesize that coreference tracking also informs
human expectations about upcoming words, and we test this hypothesis by
evaluating the degree to which incremental surprisal measures generated
by a novel coreference-aware semantic parser explain human response
times in a naturalistic self-paced reading experiment.
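Surprisal here is the standard information-theoretic predictor of processing cost: a word's predicted reading-time cost scales with -log P(word | preceding context). A minimal sketch with a toy bigram model (the probabilities are invented for illustration and stand in for the coreference-aware parser's incremental predictions):

```python
import math

# Toy conditional probabilities P(word | previous word); the values
# are made up for illustration, not estimates from any real model.
BIGRAM_P = {
    ("the", "senator"): 0.02,
    ("senator", "resigned"): 0.10,
    ("senator", "who"): 0.30,
}

def surprisal(prev: str, word: str) -> float:
    """Incremental surprisal in bits: -log2 P(word | prev)."""
    p = BIGRAM_P.get((prev, word), 1e-6)  # probability floor for unseen bigrams
    return -math.log2(p)

# A more predictable continuation yields lower surprisal, i.e. less
# predicted reading-time cost in a self-paced reading experiment.
print(surprisal("senator", "who") < surprisal("senator", "resigned"))
```

In the study's setup, per-word surprisals from the coreference-aware parser play the role of these toy values and are regressed against human self-paced reading times.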
Results indicate (1) that coreference information indeed guides human
expectations and (2) that coreference effects on memory retrieval exist
independently of coreference effects on expectations.
Together, these findings suggest that the language processing system
exploits coreference information both to retrieve referents from memory
and to anticipate upcoming material.