Showing 1–20 of 45 results
Mar 3, 2019 · Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Aug 31, 2018 · What do RNN Language Models Learn about Filler-Gap Dependencies?
Oct 12, 2020 · Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
Sep 10, 2019 · Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study
Jan 27, 2023 · Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus
Jun 10, 2019 · Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations
May 24, 2019 · What Syntactic Structures block Dependencies in RNN Language Models?
Jul 7, 2023 · Testing the Predictions of Surprisal Theory in 11 Languages
Jun 6, 2021 · A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
Jun 2, 2020 · On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior
May 12, 2025 · Using Information Theory to Characterize Prosodic Typology: The Case of Tone, Pitch-Accent and Stress-Accent
May 7, 2020 · A Systematic Assessment of Syntactic Generalization in Neural Language Models
Nov 28, 2023 · Quantifying the redundancy between prosody and text
Apr 9, 2024 · [Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus
May 15, 2024 · Elements of World Knowledge (EWoK): A Cognition-Inspired Framework for Evaluating Basic World Knowledge in Language Models
Feb 26, 2025 · Language Models Grow Less Humanlike beyond Phase Transition
Jun 4, 2025 · The Harmonic Structure of Information Contours
Jun 4, 2025 · Unpacking Let Alone: Human-Scale Models Generalize to a Rare Construction in Form but not Meaning
Feb 26, 2025 · Anything Goes? A Crosslinguistic Study of (Im)possible Language Learning in LMs
Apr 10, 2025 · Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora