Clippers 2/20: Byung-Doh Oh, Frequency Explains the Inverse Correlation of Large Language Models’ Size, Training Data Amount, and Surprisal’s Fit to Reading Times

Frequency Explains the Inverse Correlation of Large Language Models’ Size, Training Data Amount, and Surprisal’s Fit to Reading Times

https://arxiv.org/abs/2402.02255

Recent studies have shown that as Transformer-based language models become larger and are trained on very large amounts of data, the fit of their surprisal estimates to naturalistic human reading times degrades. The current work presents a series of analyses showing that word frequency is a key explanatory factor underlying these two trends. First, residual errors from four language model families on four corpora show that the inverse correlation between model size and fit to reading times is the strongest on the subset of least frequent words, which is driven by excessively accurate predictions of larger model variants. Additionally, training dynamics reveal that during later training steps, all model variants learn to predict rare words and that larger model variants do so more accurately, which explains the detrimental effect of both training data amount and model size on fit to reading times. Finally, a feature attribution analysis demonstrates that larger model variants are able to accurately predict rare words based on both an effectively longer context window size as well as stronger local associations compared to smaller model variants. Taken together, these results indicate that Transformer-based language models’ surprisal estimates diverge from human-like expectations due to the superhumanly complex associations they learn for predicting rare words.
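For readers less familiar with the setup, here is a minimal sketch of how per-token surprisal is typically computed from an autoregressive language model, assuming the Hugging Face transformers library and GPT-2 as an illustrative model (not the paper's actual models or pipeline). In reading-time studies, subword surprisals are generally summed to the word level and entered as predictors in regression models of reading times.

```python
# Minimal illustrative sketch: per-token surprisal from an autoregressive LM.
# Model choice and sentence are illustrative, not the paper's actual setup.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The old man the boats."
input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"]

with torch.no_grad():
    logits = model(input_ids).logits  # shape: [1, seq_len, vocab_size]

log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of token t is -log2 P(token_t | tokens_<t); the first token has
# no left context under this simple scheme, so we start from position 1.
for t in range(1, input_ids.shape[1]):
    tok_id = int(input_ids[0, t])
    surprisal_bits = -log_probs[0, t - 1, tok_id].item() / math.log(2)
    print(f"{tokenizer.decode(tok_id):>12s}  {surprisal_bits:6.2f} bits")
```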

Clippers 2/6: Ash Lewis on a user study of interactive KB querying

In Clippers on Tuesday, February 6th, I will be presenting the results of a user study we (Lingbo Mo, Huan Sun, Mike White, and myself) conducted to test the viability of an interactive semantic parsing system we built. The system was designed to help users query a knowledge base in natural language, obviating the need to know the query language that the knowledge base uses and thus making the information more accessible to novice users. Our system decomposes the query into pieces and translates them into understandable natural language, so that users can see exactly how the system reached an answer and therefore be confident in it. Alternatively, if the parse is incorrect, the user can correct it through a natural language interface.
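To make the setup concrete, here is a toy sketch of the kind of intermediate representation such a system might expose to users: each step of a decomposed KB query paired with a natural language gloss that the user can confirm or correct. The class names, fields, and example query are illustrative assumptions, not the actual system described in the talk.

```python
# Toy sketch of a user-facing decomposed query; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QueryStep:
    formal: str           # fragment of the KB query language (e.g., a triple pattern)
    gloss: str            # natural language rendering shown to the user
    confirmed: bool = False

@dataclass
class DecomposedQuery:
    question: str
    steps: list[QueryStep] = field(default_factory=list)

    def correct_step(self, index: int, new_gloss: str) -> None:
        """Record a user's natural-language correction; a parser would then
        re-derive the formal fragment from the corrected gloss."""
        self.steps[index].gloss = new_gloss
        self.steps[index].confirmed = False

# Example question: "Which universities did the author of 'Beloved' attend?"
q = DecomposedQuery(
    question="Which universities did the author of 'Beloved' attend?",
    steps=[
        QueryStep("?x wrote 'Beloved'", "Find the person who wrote 'Beloved'."),
        QueryStep("?x attended ?y", "Find the universities that person attended."),
    ],
)
```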

This work was conducted in the “pre-LLM era” and thus much of the technical contribution is a bit outdated. However, the user study, in which we had crowdworkers test several versions of the system, has broad application to human evaluation of dialogue systems. As dialogue systems become increasingly ubiquitous, we believe our experience conducting this user study has important lessons to contribute to evaluation methodologies.

My goal for Clippers is to clarify the “story” for a paper about evaluation; this project has spanned many years, and there is a great deal of content to sift through. I hope to get fresh eyes on that content and feedback on which pieces are most salient.

Clippers 1/30: Chris Brew on building a summarizer module for Lexis+AI

Building a summarizer module for Lexis+AI

With minimal prompting, commercial large language models can produce useful indicative summaries of many documents. Given informed and tolerant readers, the bar for usefulness is low, and current models easily clear it. But these summaries do not meet the standards required of a professional information product. We show that, for legal documents, a “faceted” approach to summarization can smooth the path to acceptable professional quality. The Lexis+AI product currently covers about three and a half use cases, which I will explain and demonstrate.
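To give a rough sense of what “faceted” summarization can mean in practice, here is an illustrative sketch in which the model is asked for specific facets of a legal document rather than one free-form summary. This is not the Lexis+AI implementation; the facet names, prompt, model choice, and use of the OpenAI client are all assumptions for illustration.

```python
# Illustrative sketch of faceted summarization; NOT the Lexis+AI design.
from openai import OpenAI

FACETS = ["parties", "procedural posture", "core legal issues", "outcome"]

def faceted_summary(document_text: str) -> str:
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "Summarize the following legal document under these headings, "
        f"one short paragraph each: {', '.join(FACETS)}.\n\n{document_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```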

In an applied AI setting, and especially for LLMs, evaluation is a key issue, one that plays out differently for each use case and differently from what is normal in academic NLP. If time permits, I will give my impressions of how this really works in practice and point to opportunities for high-impact work on evaluation.

In other words, we’ll finish up by talking a little about what “acceptable professional quality” might mean. I am definitely speaking for myself on this, not representing a company position.