Clippers Tuesday: Mike White on Implementing Dynamic Continuized CCG

At Clippers Tuesday, I’ll motivate a new approach to scope taking in combinatory categorial grammar and discuss progress and plans for implementing the approach (in collaboration with Jordan Needle, Carl Pollard, Simon Charlow and Dylan Bumford):

A long-standing puzzle in natural language semantics has been how to explain the exceptional scope behavior of indefinites. Charlow (2014) has recently shown that their exceptional scope behavior can be derived from a dynamic semantics treatment of indefinites, i.e. one where the function of indefinites is to introduce discourse referents into the evolving discourse context. To do so, he showed that (1) a monadic approach to dynamic semantics can be seamlessly integrated with Barker and Shan’s (2015) approach to scope taking in continuized grammars, and (2) once one does so, the exceptional scope of indefinites follows from the way the side effect of introducing a discourse referent survives the process of delimiting the scope of true quantifiers such as those expressed with ‘each’ and ‘every’.
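The monadic setup can be given a rough flavor in code. The following is my own toy sketch (not Charlow’s actual formalism; all names and domains are hypothetical): a dynamic meaning is a function from an incoming discourse context to a set of (value, updated context) pairs, and indefinites introduce a new discourse referent as a side effect.

```python
# Toy "state + nondeterminism" monad for dynamic semantics. This is my own
# illustrative sketch, not Charlow's actual formalism: a meaning is a function
# from an incoming discourse context (a tuple of referents) to a set of
# (value, updated context) pairs.

def unit(x):
    """Trivial meaning: return x and leave the context untouched."""
    return lambda ctx: {(x, ctx)}

def bind(m, f):
    """Sequence m, then feed its value to f, threading the context through."""
    return lambda ctx: {pair
                        for (x, ctx1) in m(ctx)
                        for pair in f(x)(ctx1)}

def indefinite(domain):
    """'a N': nondeterministically pick an individual and push it onto the
    context as a new discourse referent (the side effect of interest)."""
    return lambda ctx: {(x, ctx + (x,)) for x in domain}

# Hypothetical toy domains.
farmers = {"Pedro", "Maria"}
donkeys = {"Chiquita"}

# "A farmer owns a donkey": two indefinites, each introducing a referent.
sentence = bind(indefinite(farmers),
                lambda f: bind(indefinite(donkeys),
                               lambda d: unit((f, "owns", d))))

results = sentence(())  # evaluate against an empty incoming context
```

Each output pair retains the introduced referents in its context; this is the sort of side effect that, on Charlow’s account, survives the delimiting of a true quantifier’s scope.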

To date, computationally implemented approaches to scope taking have not distinguished indefinites from true quantifiers in a way that accounts for indefinites’ exceptional scope behavior. Although Steedman (2011) has developed an account of this behavior by treating indefinites as underspecified Skolem terms in a non-standard static semantics for Combinatory Categorial Grammar (CCG), the treatment has not been implemented in its full complexity. Moreover, as Barker and Shan point out, Steedman’s theory appears to undergenerate, since it does not allow true quantifiers to take scope from medial positions.

Barker and Shan offer a brief sketch of how their approach might be implemented, including how lifting can be invoked lazily to ensure parsing terminates. In this talk, I will show how their approach can be seamlessly combined with Steedman’s CCG and extended to include the first prototype implementation of Charlow’s semantics of indefinites, thereby yielding an approach that improves upon scope taking in CCG while retaining many of its attractive computational properties.

Clippers Tuesday: Micha Elsner on Neural Word Segmentation

This Tuesday, Micha Elsner will be presenting preliminary work on neural network word segmentation:

Given a corpus of phonemically transcribed utterances with unknown word boundaries, how can a cognitive model extract the vocabulary? I propose a new model based on working memory: the model must balance phonological memory (remembering how to pronounce words) with syntactic memory (remembering the utterance it just heard). Simulating the memory with encoder-decoder RNNs, I use reinforcement learning to optimize the segmentations.
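The memory trade-off can be made concrete with a deliberately crude stand-in (my own toy, not the encoder-decoder RNN model, and with no reinforcement learning involved): score a segmentation by a phonological memory cost, the characters across the distinct word types it posits, plus a syntactic memory cost, the word tokens in the utterance, so that reusing a word type pays for itself.

```python
# Deliberately crude stand-in for the memory trade-off (my own toy, not the
# encoder-decoder RNN model; no reinforcement learning here). A segmentation is
# scored by a phonological memory cost (characters across the distinct word
# types it posits) plus a syntactic memory cost (word tokens in the utterance).

def segmentations(s):
    """Enumerate all segmentations of s via a bitmask over boundary positions."""
    n = len(s)
    for mask in range(1 << (n - 1)):
        words, start = [], 0
        for i in range(n - 1):
            if mask >> i & 1:
                words.append(s[start:i + 1])
                start = i + 1
        words.append(s[start:])
        yield words

def cost(words, weight=1.0):
    phon = sum(len(w) for w in set(words))  # remember each pronunciation once
    syn = weight * len(words)               # remember the token sequence
    return phon + syn

best = min(segmentations("thedogthedog"), key=cost)  # -> ['thedog', 'thedog']
```

On `thedogthedog`, the minimum-cost segmentation posits the type `thedog` once and reuses it, beating both memorizing the whole utterance and oversplitting into smaller pieces.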

Why build yet another model of word segmentation? (Is this simply a buzzword-compatibility issue? A little bit, but…) I hope to show that this model provides a deeper cognitive account of the prior biases used in previous work, and that its noisy, error-prone reconstruction process makes it inherently robust to variation in its input.

This is work in progress, so don’t expect great things from me yet. However, I will demonstrate model performance slightly worse than that of Goldwater et al. (2009) on a standard dataset and discuss some directions for future work. Criticism, suggestions, and thrown paper airplanes are welcome.

Clippers Tuesday: Denis Newman-Griffis on Jointly Embedding Concepts, Phrases, and Words

This Tuesday, Denis Newman-Griffis will be presenting on learning embeddings for ontology concepts:

Recent work on embedding ontology concepts has relied on either expensive manual annotation or automated concept tagging methods that ignore the textual contexts around concepts. We propose a novel method for jointly learning concept, phrase, and word embeddings from an unlabeled text corpus, by using the representative phrases for ontology concepts as distant supervision. We learn embeddings for medical concepts in the Unified Medical Language System and general-domain concepts in YAGO, using a variety of corpora. Our embeddings show performance competitive with existing methods on concept similarity and relatedness tasks, while requiring no human corpus annotation and achieving more than 3x the vocabulary coverage.
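A minimal sketch of the distant supervision step as I understand it (my own simplification; the concept identifier and phrases below are hypothetical): occurrences of a concept’s representative phrases in the corpus are rewritten as a concept token plus a joined phrase token, so that a standard embedding model trained on the output learns concept, phrase, and word vectors in one shared space.

```python
# Toy sketch of the distant supervision step (my own simplification; the
# CUI and phrases are hypothetical). Occurrences of a concept's representative
# phrases are rewritten as a concept token plus a joined phrase token.

concept_phrases = {
    "C0018681": ["headache", "head pain"],  # hypothetical UMLS-style concept
}

def tag_concepts(tokens, concept_phrases):
    """Greedily replace longest-matching concept phrases with concept tokens."""
    phrase_to_cui = {tuple(p.split()): cui
                     for cui, phrases in concept_phrases.items()
                     for p in phrases}
    max_len = max(len(p) for p in phrase_to_cui)
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = tuple(tokens[i:i + n])
            if span in phrase_to_cui:
                out.append(phrase_to_cui[span])  # concept token
                out.append("_".join(span))       # phrase token
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

tagged = tag_concepts("she reports head pain today".split(), concept_phrases)
# -> ['she', 'reports', 'C0018681', 'head_pain', 'today']
```

Running a word2vec-style trainer over such tagged text then yields vectors for `C0018681`, `head_pain`, and the surrounding words in the same space, with no manual annotation required.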

I’ll also be talking a bit about trying to build an analogy completion dataset for the biomedical domain.

Clippers Last Tuesday: Evan Jaffe on Feature Engineering in the Virtual Patient Project

This past Tuesday, 2/7, Evan Jaffe presented on his progress on the Virtual Patient project:

I’ll be discussing results on a baseline log-linear model and the improvement gained from using a simple embedding similarity feature. I’ll also discuss motivation and related work, along with the current status of implementing a simple CNN with padding and max pooling to do multiclass classification on this dataset.

Clippers Tuesday: Lifeng Jin on Two Approaches to Virtual Patient Data

At Clippers tomorrow, Lifeng will present on Two Approaches to Virtual Patient Data:

The main focus of the virtual patient project is question matching. I am going to approach this problem from two different angles: the first is to treat it as a sentence-similarity problem, using Siamese CNN models; the second is to treat it as a classification problem, using feedforward neural nets. I am going to present some preliminary results on virtual patient data and the Microsoft Research Paraphrase Corpus, and discuss the pros and cons of the two approaches.
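The contrast between the two framings can be sketched with a toy stand-in (bag-of-words cosine in place of a Siamese CNN; the questions and labels below are hypothetical). The similarity framing scores a (new question, known question) pair and ranks; the classification framing would instead map the new question directly to a label, e.g. with a softmax over classes.

```python
# Toy stand-in for the similarity framing (bag-of-words cosine in place of a
# Siamese CNN; questions and labels are hypothetical).

from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity of two questions under a bag-of-words encoding."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical bank of known questions with their class labels.
known = {
    "how long has the pain lasted": "duration",
    "where does it hurt": "location",
}

def match_by_similarity(query):
    """Similarity framing: return the known question most similar to the query."""
    return max(known, key=lambda k: cosine(query, k))
```

For example, `match_by_similarity("how long has it hurt")` prefers the “duration” question despite only partial word overlap; a Siamese network plays the role of `cosine` here, learning the pairwise scoring function instead of stipulating it.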

Clippers Tuesday: Cory Shain and Marty van Schijndel on Reading Time Modeling

At Clippers on Tuesday, Cory and Marty will be presenting two related talks:

Memory access during incremental sentence processing causes reading time latency
Cory Shain, Marten van Schijndel, Richard Futrell, Edward Gibson and William Schuler

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli. Our study addresses these concerns by comparing several implementations of prominent sentence processing theories on an exploratory corpus and evaluating the most successful of these on a confirmatory corpus, using a new self-paced reading corpus of seemingly natural narratives constructed to contain an unusually high proportion of memory-intensive constructions. We show highly significant and complementary broad-coverage latency effects both for predictors based on the Dependency Locality Theory and for predictors based on a left-corner parsing model of sentence processing. Our results indicate that memory access during sentence processing does take time, but suggest that stimuli requiring many memory access events may be necessary in order to observe the effect.

Addressing surprisal deficiencies in reading time models
Marten van Schijndel and William Schuler

This study demonstrates a weakness in how n-gram and PCFG surprisal are used to predict reading times in eye-tracking data. In particular, the information conveyed by words skipped during saccades is not usually included in the surprisal measures. This study shows that correcting the surprisal calculation improves n-gram surprisal and that upcoming n-grams affect reading times, replicating previous findings of how lexical frequencies affect reading times. In contrast, the predictivity of PCFG surprisal does not benefit from the surprisal correction despite the fact that lexical sequences skipped by saccades are processed by readers, as demonstrated by the corrected n-gram measure. These results raise questions about the formulation of information-theoretic measures of syntactic processing such as PCFG surprisal and entropy reduction when applied to reading times.
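A minimal sketch of the kind of correction at issue (my own toy, using a unigram model with hypothetical probabilities, not the authors’ actual calculation): surprisal accumulated over words skipped during a saccade is attributed to the next fixated word rather than dropped.

```python
# Toy illustration of the skipped-word surprisal correction (my own sketch:
# a unigram language model with hypothetical probabilities, not the authors'
# actual calculation).

import math

def surprisal(p):
    return -math.log2(p)  # information content in bits

def corrected_surprisals(words, fixated, prob):
    """Map each fixated word index to its own surprisal plus the surprisal
    of any immediately preceding skipped words."""
    out, carry = {}, 0.0
    for i, w in enumerate(words):
        s = surprisal(prob[w])
        if i in fixated:
            out[i] = carry + s  # corrected measure for this fixation
            carry = 0.0
        else:
            carry += s          # skipped word: carry its surprisal forward
    return out

prob = {"the": 0.5, "old": 0.25, "dog": 0.125}  # hypothetical probabilities
out = corrected_surprisals(["the", "old", "dog"], {0, 2}, prob)
# "old" is skipped, so its 2 bits are added to the fixation on "dog"
```

Under the uncorrected measure, the 2 bits conveyed by the skipped word would simply vanish from the predictor; the correction preserves them at the following fixation.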