Showing 1–20 of 31 results
Date          Name
Dec 10, 2024  MAGE: A Multi-Agent Engine for Automated RTL Code Generation
Oct 13, 2025  Stronger-MAS: Multi-Agent Reinforcement Learning for Collaborative LLMs
May 31, 2024  Towards LLM-Powered Verilog RTL Assistant: Self-Verification and Self-Correction
Apr 4, 2024   Multi-modal Learning for WebAssembly Reverse Engineering
Jan 8, 2024   Sibyl: Forecasting Time-Evolving Query Workloads
Jun 13, 2025  PRO-V-R1: Reasoning Enhanced Programming Agent for RTL Verification
Oct 3, 2024   Grounding Large Language Models In Embodied Environment With Imperfect World Models
May 11, 2023  COLA: Characterizing and Optimizing the Tail Latency for Safe Level-4 Autonomous Vehicle Systems
Nov 5, 2024   The Hitchhiker's Guide to Programming and Optimizing Cache Coherent Heterogeneous Systems: CXL, NVLink-C2C, and AMD Infinity Fabric
Apr 7, 2024   Fork is All You Need in Heterogeneous Systems
Sep 23, 2023  Interpretable and Flexible Target-Conditioned Neural Planners For Autonomous Vehicles
Jun 25, 2023  Safety-Critical Scenario Generation Via Reinforcement Learning Based Editing
Dec 8, 2023   HybridTier: an Adaptive and Lightweight CXL-Memory Tiering System
Apr 1, 2020   Efficient Implementation of Multi-Channel Convolution in Monolithic 3D ReRAM Crossbar
Oct 4, 2018   Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA
Sep 3, 2024   You Only Use Reactive Attention Slice For Long Context Retrieval
Jan 29, 2026  ChipBench: A Next-Step Benchmark for Evaluating LLM Performance in AI-Aided Chip Design
May 21, 2019  Towards Safety-Aware Computing System Design in Autonomous Vehicles
Feb 11, 2025  SHARP: Accelerating Language Model Inference by SHaring Adjacent layers with Recovery Parameters
Feb 5, 2026   Double-P: Hierarchical Top-P Sparse Attention for Long-Context LLMs