Showing 21–40 of 45 results
Date | Name
Mar 8, 2019 | Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
Dec 18, 2025 | What Do Prosody and Text Convey? Characterizing How Meaningful Information is Distributed Across Multiple Channels
Nov 4, 2020 | Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization
Feb 15, 2025 | BabyLM Turns 3: Call for papers for the 2025 BabyLM workshop
Dec 5, 2023 | WhisBERT: Multimodal Text-Audio Language Modeling on 100M Words
Apr 27, 2023 | Controlled Text Generation with Natural Language Instructions
Sep 5, 2018 | RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency
Oct 21, 2024 | Surprise! Uniform Information Density Isn't the Whole Story: Predicting Surprisal Contours in Long-form Discourse
Nov 25, 2022 | On the Effect of Anticipation on Reading Times
Feb 14, 2022 | Exhaustivity and anti-exhaustivity in the RSA framework: Testing the effect of prior beliefs
Oct 17, 2025 | What Can String Probability Tell Us About Grammaticality?
Mar 14, 2025 | The time scale of redundancy between prosody and linguistic context
Sep 12, 2024 | On the Role of Context in Reading Time Prediction
Feb 10, 2026 | A Unified Assessment of the Poverty of the Stimulus Argument for Neural Language Models
Dec 6, 2023 | Revisiting the Optimality of Word Lengths
Oct 16, 2024 | Reverse-Engineering the Reader
Feb 25, 2025 | Looking forward: Linguistic theory and methods
Sep 21, 2025 | Modeling Bottom-up Information Quality during Language Processing
Dec 6, 2024 | Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Jul 7, 2023 | On the Efficacy of Sampling Adapters