Clippers 8/27: Amad Hussain on Synthetic Data for Social Needs Chatbot / Building KGQA for Social Determinants of Health and Sleep Behaviors

Title 1: Synthetic Data for Social Needs Chatbot

Abstract: Social needs resources (e.g., food pantries, financial assistance) are often underutilized due to a lack of accessibility. While websites such as Findhelp.org improve accessibility by aggregating and filtering resources, a barrier remains due to disparities in technical literacy and mismatches between how patients describe their experiences and the formal terminology these sites use. We seek to create a conversational agent that can bridge this accessibility barrier.

Due to patient data privacy concerns and server-side resource limitations, the patient-facing conversational system must be lightweight and must not rely on external API calls. We therefore use knowledge transfer, generating synthetic conversations with LLMs to train a downstream model. To reflect a range of user experiences, we make use of patient profile schemas and categorical expansion.
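The profile-driven generation step above can be sketched as follows. This is a minimal illustration, not the talk's actual pipeline: the schema fields, attribute values, and prompt template are all invented placeholders, and "categorical expansion" is rendered here as a simple Cartesian product over schema values.

```python
# Hypothetical sketch: expand a categorical patient-profile schema into
# one LLM prompt per attribute combination. All fields and values below
# are illustrative assumptions, not from the actual project.
from itertools import product

SCHEMA = {
    "need": ["food assistance", "rent assistance", "utility help"],
    "literacy": ["plain language", "some formal terms"],
    "household": ["single adult", "family with children"],
}

PROMPT_TEMPLATE = (
    "Simulate a chat where a {household} patient seeking {need}, "
    "speaking in {literacy}, talks with a resource-navigation assistant."
)

def expand_profiles(schema):
    """Return every combination of schema values as a profile dict."""
    keys = list(schema)
    return [dict(zip(keys, combo)) for combo in product(*schema.values())]

def build_prompts(schema, template):
    """Render one synthetic-conversation prompt per expanded profile."""
    return [template.format(**profile) for profile in expand_profiles(schema)]

prompts = build_prompts(SCHEMA, PROMPT_TEMPLATE)
print(len(prompts))  # 3 * 2 * 2 = 12 distinct prompts
```

Each rendered prompt would then be sent to an LLM to produce one synthetic conversation, so coverage of user experiences is controlled entirely by the schema rather than by ad hoc prompting.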

Title 2: Building KGQA for Social Determinants of Health and Sleep Behaviors

Abstract: Social determinants of health (SDOH) are primarily encoded in free-text clinical notes rather than structured data fields, making cohort identification difficult. Likewise, sleep complaints, while occasionally leading to formal diagnoses, can be missed and remain embedded solely in free-text descriptions. We intend to extract sleep characteristics and SDOH mentions from clinical notes to assist in cohort identification and correlation studies. The goal is to examine how certain SDOH factors relate to sleep concerns, especially in cases where underlying biases leave patients undiagnosed despite the presence of relevant mentions in their notes.

While models exist for SDOH extraction, they are largely built on public datasets and do not necessarily transfer to individual hospital systems. Likewise, sleep mentions are understudied and lack a large-scale annotated dataset. To minimize the need for annotation, we leverage LLMs to extract these mentions using prompt-based or lightly fine-tuned methods. To then understand deeper relationships between the two factors, we seek to create a knowledge graph relating SDOH and sleep characteristics for a given cohort, allowing a physician to ask questions about these relations in a downstream KGQA system.
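The knowledge-graph idea can be illustrated with a toy triple store. This is a hedged sketch only: the relation names (`has_sdoh`, `has_sleep_mention`), patient IDs, and mention labels are all hypothetical, and a real KGQA system would sit on top of a proper graph database and a question-parsing layer.

```python
# Minimal triple-store sketch of an SDOH/sleep knowledge graph.
# All entities, relations, and mentions below are illustrative assumptions.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.by_relation = defaultdict(set)

    def add(self, subj, rel, obj):
        """Store a (subject, relation, object) edge."""
        self.by_relation[rel].add((subj, obj))

    def subjects(self, rel, obj=None):
        """Subjects linked by `rel`, optionally filtered by object."""
        return {s for s, o in self.by_relation[rel] if obj is None or o == obj}

kg = TripleStore()
# Extracted mentions become patient -> factor edges.
kg.add("patient_01", "has_sdoh", "housing_insecurity")
kg.add("patient_01", "has_sleep_mention", "insomnia_complaint")
kg.add("patient_02", "has_sdoh", "food_insecurity")
kg.add("patient_02", "has_sleep_mention", "insomnia_complaint")

# KGQA-style question: which patients with an insomnia complaint also
# have housing insecurity documented in their notes?
insomnia = kg.subjects("has_sleep_mention", "insomnia_complaint")
housing = kg.subjects("has_sdoh", "housing_insecurity")
answer = sorted(insomnia & housing)
print(answer)  # ['patient_01']
```

The point of the graph representation is exactly this kind of join: once mentions are edges, cohort questions reduce to set operations over relations.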

Clippers 10/17: Sam Stevens on mixture-of-experts (MoE) language models

In Clippers next week I will present some early-stage planning for a mixture-of-experts (MoE) language model project I hope to pursue. It will consist of:

  1. A literature review of neural MoE models in NLP
  2. How MoE models changed my thinking around model parallelism, FLOPs and compute efficiency
  3. What this implies about GPT-4 (which is rumored to be a MoE model)
  4. Soft MoE: a recent paper that aims to solve many problems with MoE models, but only applies it to vision
  5. Ideas I have on how to apply soft MoE to language modeling

I hope that #1 and #2 will be valuable to everyone, because I think MoE models are very under-utilized in research, despite supposedly powering the best language model in the world (GPT-4).
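Since item #4 above may be unfamiliar, here is a hedged NumPy sketch of the Soft MoE routing step as described by Puigcerver et al.: each slot is a convex combination of all tokens (dispatch), experts process disjoint shares of the slots, and each output token is a convex combination of all slot outputs (combine). Shapes, the random inputs, and the toy scaling "experts" are illustrative placeholders, not a faithful re-implementation of the paper.

```python
# Hedged sketch of Soft MoE routing (after Puigcerver et al., 2023).
# Shapes and the toy "experts" below are illustrative assumptions.
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe_layer(X, phi, experts):
    """X: (n_tokens, d); phi: (d, n_slots); experts: list of callables."""
    logits = X @ phi                      # (n_tokens, n_slots)
    D = softmax(logits, axis=0)           # dispatch: normalize over tokens
    slots = D.T @ X                       # each slot mixes all tokens

    per_expert = slots.shape[0] // len(experts)
    slot_out = np.concatenate(
        [f(slots[i * per_expert:(i + 1) * per_expert])
         for i, f in enumerate(experts)], axis=0)

    C = softmax(logits, axis=1)           # combine: normalize over slots
    return C @ slot_out                   # (n_tokens, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))               # 6 tokens, model dim 4
phi = rng.normal(size=(4, 4))             # 4 slots
experts = [lambda s: s * 2.0, lambda s: s * 0.5]  # toy stand-in experts
Y = soft_moe_layer(X, phi, experts)
print(Y.shape)  # (6, 4)
```

Because dispatch and combine are dense softmaxes rather than hard top-k routing, every token receives gradient through every expert, which is the property that avoids the load-balancing and token-dropping problems of classical sparse MoE layers.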

Clippers 9/29: Nanjiang Jiang leads discussion of “What BERT is not”

What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models

Allyson Ettinger

Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inference and role-based event prediction, and, in particular, it shows clear insensitivity to the contextual impacts of negation.

Clippers 9/22: Shontael Elward leads discussion on “Use interpretable models”

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin

Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.

Clippers 04/02: Cory Shain on measuring the perceptual availability of phonological features

Measuring the perceptual availability of phonological features during language acquisition using unsupervised binary stochastic autoencoders

This study deploys binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English). Results show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories. Evaluation of the degree to which theory-driven phonological features are encoded in the latent bit patterns shows that some (e.g. [+-approximant]), are well represented by the network in both languages, while others (e.g. [+-spread glottis]) are less so. Together, these findings suggest that many reliable cues to phonemic structure are immediately available to infants from bottom-up perceptual characteristics alone, but that these cues must eventually be supplemented by top-down lexical and phonotactic information to achieve adult-like phone discrimination. These results also suggest differences in degree of perceptual availability between features, yielding testable predictions as to which features might depend more or less heavily on top-down cues during child language acquisition.

Clippers 02/19: Evan Jaffe on Coreference Resolution

Towards a Coreference-aware Measure of Surprisal

This talk will describe ongoing work to model coreference as an incremental process, discussing current results, model design, and current challenges. Coreference is the semantic identity relationship between entities. Humans are able to effortlessly produce and comprehend language that describes coreference relations. While much work has explored coreference from a psycholinguistic angle, extensive modeling efforts have come from a more task-oriented NLP domain that does not seek to model cognitively plausible mechanisms. The current work attempts to bridge the two approaches by modeling coreference as part of an incremental semantic parsing process. Ultimately the model will be evaluated on parsing performance, coreference performance, and how well its predictions correlate with human processing data.
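The quantity in the title is standard and worth making concrete: surprisal is the negative log probability of a word given its context. The sketch below uses made-up placeholder probabilities; it is not output from the talk's model, only an illustration of the measure a coreference-aware incremental parser would emit.

```python
# Illustrative surprisal computation; the probabilities are hypothetical
# placeholders, not predictions from the model described in the talk.
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

# A coreference-aware model should assign higher probability (lower
# surprisal) to a pronoun whose antecedent is salient in context.
p_with_antecedent = 0.20    # hypothetical P("she" | salient antecedent)
p_without_antecedent = 0.05 # hypothetical P("she" | no antecedent)

s_with = surprisal(p_with_antecedent)
s_without = surprisal(p_without_antecedent)
print(round(s_with, 2), round(s_without, 2))
```

Correlating per-word surprisal like this with reading-time data is the standard way such models are evaluated against human processing measures.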