Date / Name

Feb 13, 2023 / The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment
Nov 20, 2024 / Hardware Scaling Trends and Diminishing Returns in Large-Scale Distributed Training
Apr 24, 2025 / Energy Considerations of Large Language Model Inference and Efficiency Optimizations
Apr 8, 2019 / CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense
Nov 7, 2024 / Gradient Localization Improves Lifelong Pretraining of Language Models
Aug 20, 2021 / CIGLI: Conditional Image Generation from Language & Image
Apr 24, 2020 / Generative Data Augmentation for Commonsense Reasoning
Mar 3, 2025 / Holistically Evaluating the Environmental Impact of Creating Language Models
Apr 6, 2026 / The Energy Cost of Execution-Idle in GPU Clusters
Jul 19, 2023 / Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation
Jun 17, 2025 / Empirically-Calibrated H100 Node Power Models for Reducing Uncertainty in AI Training Energy Estimation