- Jul 11, 2024: Generating Contextually-Relevant Navigation Instructions for Blind and Low Vision People
- Jul 7, 2022: Human-Robot Commensality: Bite Timing Prediction for Robot-Assisted Feeding in Groups
- May 16, 2025: ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations
- Oct 12, 2025: RobotFleet: An Open-Source Framework for Centralized Multi-Robot Task Planning
- Jun 19, 2024: Contrast Sets for Evaluating Language-Guided Robot Policies
- Feb 14, 2025: Efficient Evaluation of Multi-Task Robot Policies With Active Experiment Selection
- Jan 23, 2025: M3PT: A Transformer for Multimodal, Multi-Party Social Signal Prediction with Person-aware Blockwise Attention
- Mar 6, 2024: Feel the Bite: Robot-Assisted Inside-Mouth Bite Transfer using Robust Mouth Perception and Physical Interaction-Aware Control
- Nov 6, 2025: Isaac Lab: A GPU-Accelerated Simulation Framework for Multi-Modal Robot Learning
- Mar 2, 2026: Robometer: Scaling General-Purpose Robotic Reward Models via Trajectory Comparisons
- Sep 20, 2024: ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation
- Nov 12, 2023: Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding
- Nov 27, 2025: Mechanistic Finetuning of Vision-Language-Action Models via Few-Shot Demonstrations