Clippers 10/17: Sam Stevens on mixture-of-experts (MoE) language models

In Clippers next week, I will present some early-stage planning for a mixture-of-experts (MoE) language model project I hope to pursue. The presentation will consist of:

  1. A literature review of neural MoE models in NLP
  2. How MoE models changed my thinking around model parallelism, FLOPs and compute efficiency
  3. What this implies about GPT-4 (which is rumored to be an MoE model)
  4. Soft MoE: a recent paper that aims to solve many of the problems with MoE models, but applies the method only to vision
  5. Ideas I have on how to apply Soft MoE to language modeling

I hope that #1 and #2 will be valuable to everyone, because I think MoE models are very under-utilized in research, despite supposedly powering the best language model in the world (GPT-4).
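
For readers who have not worked with MoE layers, the sketch below shows the basic idea behind the sparsely-gated variant that #1 and #2 build on: a router sends each token to a small number of expert feed-forward networks, so parameter count grows with the number of experts while per-token compute stays roughly flat. It is a minimal, generic PyTorch illustration, not code from the project; all names and sizes are placeholders.

    # A minimal sketch of a sparsely-gated mixture-of-experts feed-forward layer.
    # Generic illustration only; names and sizes are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMoE(nn.Module):
        def __init__(self, d_model: int, d_hidden: int, n_experts: int, k: int = 2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)  # gating network
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                              nn.Linear(d_hidden, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            tokens = x.reshape(-1, x.size(-1))                   # (n_tokens, d_model)
            weights, chosen = self.router(tokens).topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)                 # mixing weights over the k chosen experts
            out = torch.zeros_like(tokens)
            for e, expert in enumerate(self.experts):
                for slot in range(self.k):
                    mask = chosen[:, slot] == e                  # tokens whose slot-th choice is expert e
                    if mask.any():
                        out[mask] += weights[mask, slot].unsqueeze(1) * expert(tokens[mask])
            return out.reshape_as(x)

    # Each token passes through only k of n_experts feed-forward networks, so total
    # parameters scale with n_experts while per-token FLOPs stay roughly constant.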

Clippers 9/29: Nanjiang Jiang leads discussion of “What BERT is not”

What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models

Allyson Ettinger

Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction, and in particular it shows clear insensitivity to the contextual impacts of negation.
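
The negation finding can be illustrated with a small cloze-style probe of the sort the diagnostics use. The sketch below is a minimal illustration using the Hugging Face transformers fill-mask pipeline, with made-up sentences rather than the paper's actual stimuli: it compares BERT's top completions for an affirmative sentence and its negated counterpart, and a negation-insensitive model will rank the same completions highly in both.

    # Cloze-style probe of BERT's sensitivity to negation, in the spirit of the
    # diagnostics described above. Example sentences are illustrative, not the
    # paper's stimuli.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    sentences = [
        "A robin is a [MASK].",      # affirmative: "bird" should score highly
        "A robin is not a [MASK].",  # negated: a negation-sensitive model should shift its predictions
    ]

    for sentence in sentences:
        print(sentence)
        for cand in fill(sentence, top_k=5):
            print(f"  {cand['token_str']:>12}  p={cand['score']:.3f}")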

Clippers 9/22: Shontael Elward leads discussion on “Use interpretable models”

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin

Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.

Clippers 04/02: Cory Shain on measuring the perceptual availability of phonological features

Measuring the perceptual availability of phonological features during language acquisition using unsupervised binary stochastic autoencoders

This study deploys binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English). Results show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories. Evaluation of the degree to which theory-driven phonological features are encoded in the latent bit patterns shows that some (e.g. [±approximant]) are well represented by the network in both languages, while others (e.g. [±spread glottis]) are less so. Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues must eventually be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination. These results also suggest differences in degree of perceptual availability between features, yielding testable predictions as to which features might depend more or less heavily on top-down cues during child language acquisition.
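
As a rough illustration of the model class, the sketch below implements a binary stochastic autoencoder in PyTorch: input frames are encoded into hard 0/1 latent units sampled from Bernoulli distributions and trained with a straight-through estimator. The layer sizes, input dimensionality, and training details here are placeholder assumptions, not the paper's configuration.

    # A binary stochastic autoencoder: acoustic frames are encoded into hard 0/1
    # latent units sampled from Bernoulli distributions, then decoded back.
    # Layer sizes and input dimensionality are placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BinaryStochasticAutoencoder(nn.Module):
        def __init__(self, n_features: int = 40, n_bits: int = 8, n_hidden: int = 256):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh(),
                                         nn.Linear(n_hidden, n_bits))
            self.decoder = nn.Sequential(nn.Linear(n_bits, n_hidden), nn.Tanh(),
                                         nn.Linear(n_hidden, n_features))

        def forward(self, frames: torch.Tensor):
            probs = torch.sigmoid(self.encoder(frames))    # P(bit = 1) for each latent unit
            bits = torch.bernoulli(probs)                   # sample hard binary codes
            # Straight-through estimator: the forward pass uses the hard bits,
            # gradients flow through the probabilities as if sampling were identity.
            bits = probs + (bits - probs).detach()
            return self.decoder(bits), bits

    model = BinaryStochasticAutoencoder()
    frames = torch.randn(16, 40)                            # a toy batch of spectral frames
    recon, codes = model(frames)
    F.mse_loss(recon, frames).backward()                    # reconstruction objective
    # The learned bit patterns in `codes` can then be compared against
    # theory-driven phonological feature labels.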

Clippers 02/19: Evan Jaffe on Coreference Resolution

Towards a Coreference-aware Measure of Surprisal

This talk will describe ongoing work to model coreference as an incremental process, discussing model design, current results, and remaining challenges. Coreference is the relationship between linguistic expressions that refer to the same entity. Humans are able to effortlessly produce and comprehend language that describes coreference relations. While much work has explored coreference from a psycholinguistic angle, extensive modeling efforts have come from a more task-oriented NLP domain that does not seek to model cognitively plausible mechanisms. The current work attempts to bridge the two approaches by modeling coreference as part of an incremental semantic parsing process. Ultimately the model will be evaluated on parsing performance, coreference performance, and how well its predictions correlate with human processing data.
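
For reference, the surprisal of a word is its negative log probability given the preceding context, surprisal(w_t) = -log P(w_t | w_1..t-1). The sketch below computes token-by-token surprisal from an off-the-shelf autoregressive language model (GPT-2 via the transformers library); it illustrates only the surprisal measure itself, not the incremental, coreference-aware parser the talk describes, and the example sentence is mine.

    # Token-by-token surprisal, -log2 P(w_t | w_<t), from an off-the-shelf
    # autoregressive LM. Illustrates the measure only, not the talk's parser.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    sentence = "The lawyer questioned the witness, and then she left the courtroom."
    ids = tokenizer(sentence, return_tensors="pt").input_ids

    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)   # (1, seq_len, vocab)

    # The surprisal of token t is read off the distribution predicted at position t-1.
    for t in range(1, ids.size(1)):
        lp = log_probs[0, t - 1, ids[0, t]].item()
        print(f"{tokenizer.decode(ids[0, t].item()):>15}  {-lp / math.log(2):6.2f} bits")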