Wednesday, January 11, 2023, 3:45-5:00 pm
Building 200, Room 020 (The History Corner) followed by a reception in Building 420, the Psychology Lounge

Tal Linzen, Assistant Professor of Linguistics and Data Science, New York University

Title: Large-scale investigation of syntactic processing reveals misalignments between humans and neural language models

Abstract: Prediction of upcoming events has emerged as a unifying framework for understanding human cognition. Concurrently, deep learning models trained to predict upcoming words have proved to be a remarkably effective foundation for artificial intelligence. Together, these two developments raise the prospect of a unified framework for human sentence comprehension, one that may obviate the need for more specific cognitive models. We present an evaluation of this hypothesis using reading times from 2000 participants, who read a diverse set of tightly controlled complex English sentences. The predictions of standard LSTM and transformer language models sharply diverged from our empirical data: the models underpredicted processing difficulty and left much of the item-wise variability unaccounted for. The discrepancy was reduced, but not eliminated, when we considered a model that assigns a higher weight to syntax than is necessary for the word prediction objective. We conclude that humans’ next-word predictions differ from those of standard language models, and that prediction error (surprisal) at the word level is unlikely to be a sufficient account of human sentence reading patterns.
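For readers unfamiliar with the surprisal measure the abstract refers to, the sketch below illustrates it with a toy bigram model. The probabilities and the garden-path sentence are invented for illustration only; the talk's experiments use LSTM and transformer language models, not bigrams.

```python
import math

# Word-level surprisal is the negative log probability a language model
# assigns to a word given its preceding context:
#   surprisal(w_i) = -log2 P(w_i | w_1 ... w_{i-1})
# Higher surprisal is hypothesized to predict longer reading times.

# Hypothetical bigram probabilities (illustrative values, not real data).
bigram_probs = {
    ("the", "horse"): 0.10,
    ("horse", "raced"): 0.02,
    ("raced", "past"): 0.30,
    ("past", "the"): 0.40,
    ("the", "barn"): 0.05,
    ("barn", "fell"): 0.001,  # garden-path disambiguation: highly unexpected
}

def surprisal(prev_word: str, word: str) -> float:
    """Surprisal of `word` in bits, given the previous word."""
    return -math.log2(bigram_probs[(prev_word, word)])

sentence = ["the", "horse", "raced", "past", "the", "barn", "fell"]
for prev, word in zip(sentence, sentence[1:]):
    print(f"{word:>6}: {surprisal(prev, word):5.2f} bits")
```

Under a surprisal account, the spike at the disambiguating word ("fell") is what should drive the reading-time slowdown; the talk's finding is that such word-level prediction error alone underpredicts the difficulty humans actually show.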