Pengcheng Wang, Zihao Wang, Zhilong Ji, Xiao Liu, Songfan Yang, Zhongqin Wu
This paper introduces our approach to the EmotioNet Challenge 2020. We pose the AU recognition problem as a multi-task learning problem, in which the non-rigid facial muscle motion (mainly the first 17 AUs) and the rigid head motion (the last 6 AUs) are modeled separately. The co-occurrence of the expression features and the head pose features is explored. We observe that different AUs converge at various speeds; by choosing the optimal checkpoint for each AU, the recognition results are improved. We obtain a final score of 0.746 on the validation set and 0.7306 on the test set of the challenge.
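The per-AU checkpoint selection can be sketched as follows: for each AU independently, keep the checkpoint with the best validation score. The scores, epochs, and metric below are hypothetical, not the challenge's actual numbers.

```python
import numpy as np

# Hypothetical validation F1 scores: rows are saved checkpoints, columns are AUs.
# In practice these come from evaluating every checkpoint on the validation set.
val_f1 = np.array([
    [0.60, 0.72, 0.55],  # epoch 10
    [0.65, 0.70, 0.58],  # epoch 20
    [0.63, 0.68, 0.61],  # epoch 30
])
epochs = [10, 20, 30]

# Pick, independently for each AU, the checkpoint with the highest score.
best_idx = val_f1.argmax(axis=0)
best_per_au = {f"AU{j}": epochs[i] for j, i in enumerate(best_idx)}
print(best_per_au)  # {'AU0': 20, 'AU1': 10, 'AU2': 30}
```

At inference time, each AU's prediction would then be read from its own best checkpoint rather than a single shared one.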
Changjian Chen, Pengcheng Wang, Fei Lyu, Zhuo Tang, Li Yang, Long Wang, Yong Cai, Feng Yu, Kenli Li
Hybrid rice breeding crossbreeds different rice lines and cultivates the resulting hybrids in fields to select those with desirable agronomic traits, such as higher yields. Recently, genomic selection has emerged as an efficient way for hybrid rice breeding. It predicts the traits of hybrids based on their genes, which helps exclude many undesired hybrids, substantially reducing the workload of field cultivation. However, due to the limited accuracy of genomic prediction models, breeders still need to combine their experience with the models to identify regulatory genes that control traits and select hybrids, which remains a time-consuming process. To ease this process, in this paper, we propose a visual analysis method to facilitate interactive hybrid rice breeding. Regulatory gene identification and hybrid selection naturally form a dual-analysis task. Therefore, we developed a parametric dual projection method with theoretical guarantees to facilitate interactive dual analysis. Based on this dual projection method, we further developed a gene visualization and a hybrid visualization to verify the identified regulatory genes and hybrids. The effectiveness of our method is demonstrated through the quantitative evaluation of the parametric dual projection method, the regulatory genes and desired hybrids identified in the case study, and positive feedback from breeders.
Pengcheng Wang, Jerry Huang, Jiarui Yao, Rui Pan, Peizhi Niu, Yaowenqi Liu, Ruida Wang, Renhao Lu, Yuwei Guo, Tong Zhang
Language-model agent systems commonly rely on reactive prompting, in which a single instruction guides the model through an open-ended sequence of reasoning and tool-use steps, leaving control flow and intermediate state implicit and making agent behavior potentially difficult to control. Orchestration frameworks such as LangGraph, DSPy, and CrewAI impose greater structure through explicit workflow definitions, but tightly couple workflow logic with Python, making agents difficult to maintain and modify. In this paper, we introduce AgentSPEX, an Agent SPecification and EXecution Language for specifying LLM-agent workflows with explicit control flow and modular structure, along with a customizable agent harness. AgentSPEX supports typed steps, branching and loops, parallel execution, reusable submodules, and explicit state management, and these workflows execute within an agent harness that provides tool access, a sandboxed virtual environment, and support for checkpointing, verification, and logging. Furthermore, we provide a visual editor with synchronized graph and workflow views for authoring and inspection. We include ready-to-use agents for deep research and scientific research, and we evaluate AgentSPEX on 7 benchmarks. Finally, we show through a user study that AgentSPEX provides a more interpretable and accessible workflow-authoring paradigm than a popular existing agent framework.
Pengcheng Wang, Qinghang Liu, Haotian Lin, Yiheng Li, Guojian Zhan, Masayoshi Tomizuka, Yixiao Wang
Learning domain-adaptive policies that generalize to unseen transition dynamics remains a fundamental challenge in learning-based control. Substantial progress has been made through domain representation learning, which captures domain-specific information and thus enables domain-aware decision making. We analyze the process of learning domain representations through dynamical prediction and find that selecting contexts adjacent to the current step causes the learned representations to entangle static domain information with varying dynamical properties. Such a mixture can confuse the conditioned policy, thereby constraining zero-shot adaptation. To tackle this challenge, we propose DADP (Domain Adaptive Diffusion Policy), which achieves robust adaptation through unsupervised disentanglement and domain-aware diffusion injection. First, we introduce Lagged Context Dynamical Prediction, a strategy that conditions future state estimation on a historically offset context; by increasing this temporal gap, we disentangle static domain representations in an unsupervised manner, filtering out transient properties. Second, we integrate the learned domain representations directly into the generative process by biasing the prior distribution and reformulating the diffusion target. Extensive experiments on challenging locomotion and manipulation benchmarks demonstrate the superior performance and generalizability of DADP over prior methods. More visualization results are available at https://outsider86.github.io/DomainAdaptiveDiffusionPolicy/.
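A minimal sketch of the lagged-context idea: build training pairs whose context window ends `lag` steps before the transition being predicted, so the context reflects static domain information rather than the transient dynamics around the current step. The function name and toy trajectory are ours, not the paper's.

```python
import numpy as np

def lagged_context_pairs(states, actions, ctx_len, lag):
    """Build (context, current transition) training pairs.

    Instead of the context immediately preceding step t, use a window that
    ends `lag` steps earlier; increasing `lag` widens the temporal gap
    between the context and the predicted transition.
    """
    pairs = []
    T = len(states) - 1
    for t in range(ctx_len + lag, T):
        ctx = states[t - lag - ctx_len : t - lag]   # historical, offset window
        pairs.append((ctx, (states[t], actions[t], states[t + 1])))
    return pairs

traj = np.arange(10.0)          # toy 1-D state trajectory s_0..s_9
acts = np.zeros(10)
pairs = lagged_context_pairs(traj, acts, ctx_len=3, lag=2)
ctx0, (s, a, s_next) = pairs[0]
print(ctx0.tolist(), s, s_next)   # [0.0, 1.0, 2.0] 5.0 6.0
```

With `lag=0` this reduces to the conventional adjacent-context setup the paper argues against.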
Charilaos Mousoulis, Pengcheng Wang, Nguyen Luu Do, Jose F Waimin, Nithin Raghunathan, Rahim Rahimi, Ali Shakouri, Saurabh Bagchi
Weather and soil conditions are particularly important when it comes to farming activities. Study of these factors and their role in nutrient and nitrate absorption rates can lead to useful insights, benefiting both crop yield and the protection of the environment through more controlled use of fertilizers and chemicals. There is a paucity of public data from rural, agricultural sensor networks, partly due to the unique challenges faced during the deployment and maintenance of IoT networks in rural agricultural areas. As part of a 5-year project called WHIN, we have been deploying and collecting sensor data from production and experimental agricultural farms in and around Purdue University in Indiana. Here we release a dataset comprising soil sensor data from a representative sample of 3 nodes across 3 production farms, each covering 5 months. We correlate this data with weather data and draw some insights about the absorption of rain into the soil. We provide the dataset at: https://purduewhin.ecn.purdue.edu/dataset2021.
Pengcheng Wang, Lingqiao Ji, Zhilong Ji, Yuan Gao, Xiao Liu
In this technical report, we briefly introduce the solution of our team "TAL-ai" for (Semi-)supervised Face Detection in the Low Light Condition in the UG2+ Challenge at CVPR 2021. Through experiments with popular image enhancement and image transfer methods, we brought the low-light images and the normal-light images into a closer domain, and observed that training on these data achieves better performance. We also adopted several popular object detection frameworks, e.g., DetectoRS and Cascade R-CNN, and large backbones such as Swin Transformer. Finally, we ensembled several models, achieving an mAP of 74.89 on the test set and ranking 1st on the final leaderboard.
Pengcheng Wang, Xinghao Zhu, Yuxin Chen, Chenfeng Xu, Masayoshi Tomizuka, Chenran Li
Reinforcement Learning and Imitation Learning have achieved widespread success in many domains but remain constrained during real-world deployment, a main issue being additional requirements that were not considered during training. To address this challenge, policy customization has been introduced, aiming to adapt a prior policy while preserving its inherent properties and meeting new task-specific requirements. A principled approach to policy customization is Residual Q-Learning (RQL), which formulates the problem as a Markov Decision Process (MDP) and derives a family of value-based learning algorithms. However, RQL has not yet been applied to policy gradient methods, which restricts its applicability, especially in tasks where policy gradient methods have already proven more effective. In this work, we first derive a concise form of Soft Policy Gradient as a preliminary. Building on this, we introduce Residual Policy Gradient (RPG), which extends RQL to policy gradient methods, allowing policy customization in gradient-based RL settings. Through the lens of RPG, we rethink the KL-regularized objective widely used in RL fine-tuning and show that, under certain assumptions, it leads to a maximum-entropy policy that balances the inherent properties and task-specific requirements at the reward level. Our experiments in MuJoCo demonstrate the effectiveness of Soft Policy Gradient and Residual Policy Gradient.
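The reward-level balance can be illustrated in closed form for a discrete action set: maximizing E_π[r] − α·KL(π ‖ π_prior) gives π*(a) ∝ π_prior(a)·exp(r(a)/α), i.e., the prior's log-probability acts like an extra reward term. A small numerical sketch (toy prior and rewards are ours, not the paper's):

```python
import numpy as np

def kl_regularized_optimum(prior, reward, alpha):
    """Closed-form maximizer of E_pi[r] - alpha * KL(pi || prior)."""
    logits = np.log(prior) + reward / alpha
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()

prior = np.array([0.7, 0.2, 0.1])     # inherent preferences of the prior policy
reward = np.array([0.0, 1.0, 2.0])    # new task-specific reward
for alpha in (10.0, 0.1):
    pi = kl_regularized_optimum(prior, reward, alpha)
    print(alpha, pi.round(3))
# Large alpha keeps pi close to the prior; small alpha concentrates on high reward.
```

The temperature α controls the trade-off between the prior's inherent behavior and the new requirement, at the reward level rather than the parameter level.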
Pengcheng Wang, Chenran Li, Catherine Weaver, Kenta Kawamoto, Masayoshi Tomizuka, Chen Tang, Wei Zhan
Policies developed through Reinforcement Learning (RL) and Imitation Learning (IL) have shown great potential in continuous control tasks, but real-world applications often require adapting trained policies to unforeseen requirements. While fine-tuning can address such needs, it typically requires additional data and access to the original training metrics and parameters. In contrast, an online planning algorithm, if capable of meeting the additional requirements, can eliminate the need for extensive retraining and customize the policy without knowledge of the original training scheme or task. In this work, we propose a generic online planning algorithm for customizing continuous-control policies at execution time, which we call Residual-MPPI. It can customize a given prior policy on new performance metrics in few-shot and even zero-shot online settings, given access to the prior action distribution alone. Through our experiments, we demonstrate that the proposed Residual-MPPI algorithm can accomplish the few-shot/zero-shot online policy customization task effectively, including customizing the champion-level racing agent Gran Turismo Sophy (GT Sophy) 1.0 in the challenging Gran Turismo Sport (GTS) car racing environment. Code for the MuJoCo experiments is included in the supplementary material and will be open-sourced upon acceptance. Demo videos and code are available on our website: https://sites.google.com/view/residual-mppi.
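A one-dimensional, single-step sketch of the core idea: draw candidate actions from the prior policy's action distribution and re-weight them MPPI-style by the *additional* objective only, so the prior's behavior is preserved while the new metric pulls the action. The distributions and reward here are toy choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_mppi_step(prior_sample, addon_reward, n=1000, lam=0.5):
    """One planning step: candidates come from the prior policy; weights
    depend only on the residual (add-on) objective, with temperature lam."""
    acts = prior_sample(n)                     # candidates from the prior
    w = np.exp(addon_reward(acts) / lam)       # exponential weighting
    w /= w.sum()
    return (w * acts).sum()                    # weighted-average action

# Toy example: the prior prefers actions near 0; the add-on reward prefers 1.
prior_sample = lambda n: rng.normal(0.0, 1.0, size=n)
addon_reward = lambda a: -(a - 1.0) ** 2
a = residual_mppi_step(prior_sample, addon_reward)
print(round(a, 2))  # pulled away from the prior mean 0 toward 1
```

Because only samples from the prior are needed, no access to the original training scheme, parameters, or reward is required.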
Haoyu Jiang, Wei Jiang, Huaiyong Bai, Zengqi Cui, Guohui Zhang, Ruirui Fan, Han Yi, Changjun Ning, Liang Zhou, Jingyu Tang, Qi An, Jie Bao, Yu Bao, Ping Cao, Haolei Chen, Qiping Chen, Yonghao Chen, Yukai Chen, Zhen Chen, Changqing Feng, Keqing Gao, Minhao Gu, Changcai Han, Zijie Han, Guozhu He, Yongcheng He, Yang Hong, Hanxiong Huang, Weiling Huang, Xiru Huang, Xiaolu Ji, Xuyang Ji, Zhijie Jiang, Hantao Jing, Ling Kang, Mingtao Kang, Bo Li, Chao Li, Jiawen Li, Lun Li, Qiang Li, Xiao Li, Yang Li, Rong Liu, Shubin Liu, Xingyan Liu, Guangyuan Luan, Qili Mu, Binbin Qi, Jie Ren, Zhizhou Ren, Xichao Ruan, Zhaohui Song, Yingpeng Song, Hong Sun, Kang Sun, Xiaoyang Sun, Zhijia Sun, Zhixin Tan, Hongqing Tang, Xinyi Tang, Binbin Tian, Lijiao Wang, Pengcheng Wang, Qi Wang, Taofeng Wang, Zhaohui Wang, Jie Wen, Zhongwei Wen, Qingbiao Wu, Xiaoguang Wu, Xuan Wu, Likun Xie, Yiwei Yang, Li Yu, Tao Yu, Yongji Yu, Linhao Zhang, Qiwei Zhang, Xianpeng Zhang, Yuliang Zhang, Zhiyong Zhang, Yubin Zhao, Luping Zhou, Zuying Zhou, Danyang Zhu, Kejun Zhu, Peng Zhu
Differential and angle-integrated cross sections for the $^{10}$B($n, α$)$^{7}$Li, $^{10}$B($n, α$$_{0}$)$^{7}$Li and $^{10}$B($n, α$$_{1}$)$^{7}$Li$^{*}$ reactions have been measured at the CSNS Back-n white neutron source. Two enriched (90%) $^{10}$B samples, each 5.0 cm in diameter and ~85.0 $μ$g/cm$^{2}$ in thickness with an aluminum backing, were prepared and mounted back to back on the sample holder. The charged particles were detected using the silicon-detector array of the Light-charged Particle Detector Array (LPDA) system. The neutron energy $E_{n}$ was determined by the time-of-flight (TOF) method, and the valid $α$ events were extracted from the $E_{n}$-amplitude two-dimensional spectrum. With 15 silicon detectors, the differential cross sections of the $α$-particles were measured from 19.2° to 160.8°. After fitting with a Legendre polynomial series, the ($n, α$) cross sections were obtained through integration. The absolute cross sections were normalized using the standard cross sections of the $^{10}$B($n, α$)$^{7}$Li reaction in the 0.3 - 0.5 MeV neutron energy region. The measured neutron energy range is 1.0 eV $\le$ $E_{n}$ < 2.5 MeV (67 energy points) for the $^{10}$B($n, α$)$^{7}$Li reaction, and 1.0 eV $\le$ $E_{n}$ < 1.0 MeV (59 energy points) for the $^{10}$B($n, α$$_{0}$)$^{7}$Li and $^{10}$B($n, α$$_{1}$)$^{7}$Li$^{*}$ reactions. The present results have been analyzed in terms of the resonance reaction mechanism and the level structure of the $^{11}$B compound system, and compared with existing measurements and evaluations.
Binbin Qi, Yang Li, Danyang Zhu, Zhiyong Zhang, Ruirui Fan, Jiang Pan, Jianxin Feng, Chengming Liu, Changqing Feng, Jianbei Liu, Ming Shao, Yi Zhou, Yanfeng Wang, Han Yi, Qi An, Huaiyong Bai, Jie Bao, Ping Cao, Qiping Chen, Yonghao Chen, Pinjing Cheng, Zengqi Cui, Minhao Gu, Fengqin Guo, Changcai Han, Zijie Han, Guozhu He, Yongcheng He, Yuefeng He, Hanxiong Huang, Weiling Huang, Xiru Huang, Xiaolu Ji, Xuyang Ji, Haoyu Jiang, Wei Jiang, Hantao Jing, Ling Kang, Mingtao Kang, Bo Li, Lun Li, Qiang Li, Xiao Li, Yang Li, Rong Liu, Shubin Liu, Xingyan Liu, Guangyuan Luan, Yinglin Ma, Changjun Ning, Jie Ren, Xichao Ruan, Zhaohui Song, Hong Sun, Xiaoyang Sun, Zhijia Sun, Zhixin Tan, Hongqing Tang, Jingyu Tang, Pengcheng Wang, Qi Wang, Taofeng Wang, Zhaohui Wang, Zheng Wang, Jie Wen, Zhongwei Wen, Qingbiao Wu, Xiaoguang Wu, Xuan Wu, Likun Xie, Yiwei Yang, Li Yu, Tao Yu, Yongji Yu, Guohui Zhang, Jing Zhang, Linhao Zhang, Liying Zhang, Qingming Zhang, Qiwei Zhang, Xianpeng Zhang, Yuliang Zhang, Yingtan Zhao, Liang Zhou, Zuying Zhou, Kejun Zhu, Peng Zhu
The Back-n white neutron beam line, which uses back-streaming white neutrons from the spallation target of the China Spallation Neutron Source, is used for nuclear data measurements. A Micromegas-based neutron detector with two variants was specially developed to measure the beam spot distribution for this beam line. In this article, the design, fabrication, and characterization of the detector are described. The results of the detector performance tests are presented, which include the relative electron transparency, the gain and the gain uniformity, and the neutron beam profile reconstruction capability. The result of the first measurement of the Back-n neutron beam spot distribution is also presented.
Haotian Lin, Pengcheng Wang, Jeff Schneider, Guanya Shi
Model-based reinforcement learning algorithms that combine model-based planning with a learned value/policy prior have gained significant recognition for their high data efficiency and superior performance in continuous control. However, we discover that existing methods relying on standard SAC-style policy iteration for value learning, directly using data generated by the planner, often suffer from \emph{persistent value overestimation}. Through theoretical analysis and experiments, we argue that this issue is deeply rooted in the structural policy mismatch between the data-generation policy, which is always bootstrapped by the planner, and the learned policy prior. To mitigate this mismatch in a minimalist way, we propose a policy regularization term that reduces out-of-distribution (OOD) queries, thereby improving value learning. Our method involves minimal changes on top of existing frameworks and requires no additional computation. Extensive experiments demonstrate that the proposed approach improves performance over baselines such as TD-MPC2 by large margins, particularly in 61-DoF humanoid tasks. View qualitative results at https://darthutopian.github.io/tdmpc_square/.
James Wen, Sahil Nalawade, Zhiwei Liang, Catherine Bielick, Marisa Ferrara Boston, Alexander Chowdhury, Adele Collin, Luigi De Angelis, Jacob Ellen, Heather Frase, Rodrigo R. Gameiro, Juan Manuel Gutierrez, Pooja Kadam, Murat Keceli, Srikanth Krishnamurthy, Anne Kwok, Yanan Lance Lu, Heather Mattie, Liam G. McCoy, Katherine Miller, Allison C. Morgan, Marlene Louisa Moerig, Trang Nguyen, Alexander Owen-Post, Alex D. Ruiz, Sreekar Reddy Puchala, Soujanya Samineni, Takeshi Tohyama, Varun Ullanat, Carmine Valenza, Camilo Velez, Pengcheng Wang, Anna Wuest, Yuxiang Zhou, Yingde Zhu, Jason M. Johnson, Naomi Lenane, Jennifer Willcox, Francis J. Vitiello, Leo Anthony G. Celi, Renato Umeton
Background: Generative artificial intelligence (AI) deployment in academic medical settings raises copyright compliance concerns. Dana-Farber Cancer Institute implemented GPT4DFCI, an internal generative AI tool utilizing OpenAI models, that is approved for enterprise use in research and operations. Given (1) the exceptionally broad adoption of the tool in our organization, (2) our research mission, and (3) the shared responsibility model required to benefit from Customer Copyright Commitment in Azure OpenAI Service products, we deemed rigorous copyright compliance testing necessary. Case Description: We conducted a structured red teaming exercise in Nov. 2024, with 42 participants from academic, industry, and government institutions. Four teams attempted to extract copyrighted content from GPT4DFCI across four domains: literary works, news articles, scientific publications, and access-restricted clinical notes. Teams successfully extracted verbatim book dedications and near-exact passages through various strategies. News article extraction failed despite jailbreak attempts. Scientific article reproduction yielded only high-level summaries. Clinical note testing revealed appropriate privacy safeguards. Discussion: The successful extraction of literary content indicates potential copyrighted material presence in training data, necessitating inference-time filtering. Differential success rates across content types suggest varying protective mechanisms. The event led to implementation of a copyright-specific meta-prompt in GPT4DFCI; this mitigation has been in production since Jan. 2025. Conclusion: Systematic red teaming revealed specific vulnerabilities in generative AI copyright compliance, leading to concrete mitigation strategies. Academic medical institutions deploying generative AI should implement continuous testing protocols to ensure legal and ethical compliance.
Mingshuang Hu, Yuzhong Wang, Zhe Jiang, Cheng Pang, Ying Li, Zhenyu Shao, Ziang Yue, Yiding Liu, Zeming Kong, Pengcheng Wang, Yifei Wang, Axiang Yu, Yinghan Wang, Wenzhi Li, Yongkang Dong, Yayun Cheng, Jiaran Qi
Within the feline eye, a distinctive tapetum lucidum resides posterior to the retina and acts as a mirror, reflecting incident rays to simulate light-source emission. This secondary-emission property makes felines highly sensitive to light, granting them remarkable visual capabilities even in dark settings. Drawing inspiration from this natural phenomenon, we propose an active-passive-composite sub-terahertz meta-imager integrating a bifocus metasurface, a high-sensitivity radiometer, and a low-power signal-hidden radiation source. Benefiting from its aperture-shared design, this fusion imaging system, deployable on a simplified portable hardware platform, allows for the concurrent acquisition of active and passive electromagnetic properties, extending the target detection categories and realizing multi-mode fusion perception. Notably, it also enables the extraction of radiation and reflection characteristics without additional calibration modules. Experiments demonstrate multi-target fusion imaging and localized information decoupling with a tailored field of view and emission energy. This compact, multi-mode fusion imaging system holds considerable potential for airplane navigation and positioning, abnormality monitoring, and non-interactive concealed security checks.
Xiang Ma, Taihua Chen, Pengcheng Wang, Xuemei Li, Caiming Zhang
Time series forecasting is crucial for applications in various domains. Conventional methods often rely on global decomposition into trend, seasonal, and residual components, an approach that becomes ineffective for real-world series dominated by local, complex, and highly dynamic patterns. Moreover, the high model complexity of such approaches limits their applicability in real-time or resource-constrained environments. In this work, we propose a novel \textbf{RE}liability-aware \textbf{C}odebook-\textbf{AS}sisted \textbf{T}ime series forecasting framework (\textbf{ReCast}) that enables lightweight and robust prediction by exploiting recurring local shapes. ReCast encodes local patterns into discrete embeddings through patch-wise quantization using a learnable codebook, thereby compactly capturing stable regular structures. To compensate for residual variations not preserved by quantization, ReCast employs a dual-path architecture comprising a quantization path for efficient modeling of regular structures and a residual path for reconstructing irregular fluctuations. A central contribution of ReCast is a reliability-aware codebook update strategy, which incrementally refines the codebook via weighted corrections. These correction weights are derived by fusing multiple reliability factors from complementary perspectives via a distributionally robust optimization (DRO) scheme, ensuring adaptability to non-stationarity and robustness to distribution shifts. Extensive experiments demonstrate that ReCast outperforms state-of-the-art (SOTA) models in accuracy, efficiency, and adaptability to distribution shifts.
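A minimal sketch of the patch-wise quantization and the dual-path split (toy codebook and series; the learned codebook update and DRO-based reliability weighting are omitted):

```python
import numpy as np

def quantize_patches(series, patch_len, codebook):
    """Split a series into patches and replace each with its nearest codeword.
    Returns the quantized reconstruction (regular structure) and the residual
    (irregular fluctuations the codebook cannot represent)."""
    patches = series.reshape(-1, patch_len)
    # nearest codeword per patch under squared Euclidean distance
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    quantized = codebook[idx].reshape(-1)
    residual = series - quantized
    return quantized, residual, idx

codebook = np.array([[0.0, 1.0], [1.0, 0.0]])   # two toy "local shapes"
series = np.array([0.1, 0.9, 1.0, 0.1])
q, r, idx = quantize_patches(series, 2, codebook)
print(idx.tolist())               # [0, 1]
print(np.round(r, 2).tolist())    # residual fed to the second path
```

The quantization path would then model the discrete index sequence cheaply, while a separate path reconstructs the residual.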
Ran Xu, Chen-lin Zhang, Pengcheng Wang, Jayoung Lee, Subrata Mitra, Somali Chaterji, Yin Li, Saurabh Bagchi
Advanced video analytic systems, including scene classification and object detection, have seen widespread success in domains such as smart cities and autonomous transportation. With an ever-growing number of powerful client devices, there is an incentive to move these heavy video analytics workloads from the cloud to mobile devices to achieve low-latency, real-time processing and to preserve user privacy. However, most video analytic systems are heavyweight and are trained offline with pre-defined latency or accuracy requirements. This makes them unable to adapt at runtime in the face of three types of dynamism -- the input video characteristics change, the amount of compute resources available on the node changes due to co-located applications, and the user's latency-accuracy requirements change. In this paper, we introduce ApproxDet, an adaptive video object detection framework for mobile devices that meets accuracy-latency requirements in the face of changing content and resource contention. To achieve this, we introduce a multi-branch object detection kernel (layered on Faster R-CNN) that incorporates a data-driven model of the performance metrics and a latency-SLA-driven scheduler to pick the best execution branch at runtime. We couple this kernel with approximable video object tracking algorithms to create an end-to-end video object detection system. We evaluate ApproxDet on a large benchmark video dataset and compare it quantitatively to AdaScale and YOLOv3. We find that ApproxDet adapts to a wide variety of contention and content characteristics and outperforms all baselines, e.g., achieving 52% lower latency and 11.1% higher accuracy than YOLOv3.
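The latency-SLA-driven branch selection can be sketched as follows; the branch names, accuracies, and latency predictions below are hypothetical stand-ins for the data-driven performance model's outputs.

```python
def pick_branch(branches, predicted_latency, latency_sla):
    """Pick the most accurate execution branch whose *predicted* latency
    (under current content and contention) meets the latency SLA."""
    feasible = [b for b in branches if predicted_latency[b] <= latency_sla]
    if not feasible:                      # SLA infeasible: fall back to fastest
        return min(branches, key=lambda b: predicted_latency[b])
    return max(feasible, key=lambda b: accuracy[b])

# Hypothetical branches of the multi-branch detection kernel.
branches = ["small", "medium", "large"]
accuracy = {"small": 0.60, "medium": 0.72, "large": 0.80}
predicted_latency = {"small": 30, "medium": 70, "large": 140}  # ms, under contention
print(pick_branch(branches, predicted_latency, latency_sla=100))  # medium
```

As contention changes, the predicted latencies change, and the scheduler re-runs this selection at runtime.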
Changjian Chen, Fei Lv, Yalong Guan, Pengcheng Wang, Shengjie Yu, Yifan Zhang, Zhuo Tang
The performance of computer vision models in certain real-world applications (e.g., rare wildlife observation) is limited by the small number of available images. Expanding datasets using pre-trained generative models is an effective way to address this limitation. However, since the automatic generation process is uncontrollable, the generated images are usually limited in diversity, and some of them are undesired. In this paper, we propose a human-guided image generation method for more controllable dataset expansion. We develop a multi-modal projection method with theoretical guarantees to facilitate the exploration of both the original and generated images. Based on the exploration, users refine the prompts and re-generate images for better performance. Since directly refining the prompts is challenging for novice users, we develop a sample-level prompt refinement method to make it easier. With this method, users only need to provide sample-level feedback (e.g., which samples are undesired) to obtain better prompts. The effectiveness of our method is demonstrated through the quantitative evaluation of the multi-modal projection method, improved model performance in the case study for both classification and object detection tasks, and positive feedback from the experts.
Jinke Yang, Yong Xie, Yidi Fan, Pengcheng Wang, Xindong Liang, Haojie Li, Xue Wang, Zhao Cui, Jianjun Jia, Yucheng Tang, Yun Kau Lau
An alternative laser link acquisition scheme for the triangular constellation of spacecraft (SCs) in deep space for the detection of gravitational waves is considered. In place of the wide-field CCD camera used in the initial stage of laser link acquisition in the conventional scheme, an adaptive extended Kalman filter (AEKF) based on precision orbit determination is incorporated into the point-ahead angle mechanism (PAAM) to steer the laser beam so as to narrow the uncertainty cone, while avoiding the heating problem generated by the CCD camera. A quadrant photodetector (QPD) based on the differential power sensing (DPS) technique, which offers a higher dynamic range than differential wavefront sensing (DWS), is employed as the readout of the laser beam spot. The conventional two stages (coarse acquisition and fine acquisition) are integrated into a single control loop, simplifying the payload structure of the ATP control loop. Numerical simulations, based on a colored measurement noise model that closely mimics the prospective on-orbit conditions, demonstrate that the AEKF significantly reduces the initial uncertainty region by predicting the point-ahead angle (PAA), even when the worst-case scenario in SC position (navigation) error is considered.
Fei Wang, Tingting Zhang, Xilei Wu, Pengcheng Wang, Xin Wang, Han Ding, Jingang Shi, Jinsong Han, Dong Huang
Hand hygiene is among the most effective daily practices for preventing infectious diseases such as influenza, malaria, and skin infections. While professional guidelines emphasize proper handwashing to reduce the risk of viral infections, surveys reveal that adherence to these recommendations remains low. To address this gap, we propose UWash, a wearable solution leveraging smartwatches to evaluate handwashing procedures, aiming to raise awareness and cultivate high-quality handwashing habits. We frame the task of handwashing assessment as an action segmentation problem, similar to those in computer vision, and introduce a simple yet efficient two-stream UNet-like network to achieve this goal. Experiments involving 51 subjects demonstrate that UWash achieves 92.27% accuracy in handwashing gesture recognition, an error of <0.5 seconds in onset/offset detection, and an error of <5 points in gesture scoring under user-dependent settings. The system also performs robustly in user-independent and user-independent-location-independent evaluations. Remarkably, UWash maintains high performance in real-world tests, including evaluations with 10 random passersby at a hospital 9 months later and 10 passersby in an in-the-wild test conducted 2 years later. UWash is the first system to score handwashing quality based on gesture sequences, offering actionable guidance for improving daily hand hygiene. The code and dataset are publicly available at https://github.com/aiotgroup/UWash
Guojian Zhan, Likun Wang, Pengcheng Wang, Feihong Zhang, Jingliang Duan, Masayoshi Tomizuka, Shengbo Eben Li
Maximum entropy has become a mainstream off-policy reinforcement learning (RL) framework for balancing exploitation and exploration. However, two bottlenecks still limit further performance improvement: (1) non-stationary Q-value estimation caused by jointly injecting entropy and updating its weighting parameter, i.e., temperature; and (2) short-sighted local entropy tuning that adjusts temperature only according to the current single-step entropy, without considering the effect of cumulative entropy over time. In this paper, we extend the maximum entropy framework by proposing a trajectory entropy-constrained reinforcement learning (TECRL) framework to address these two challenges. Within this framework, we first learn two separate Q-functions, one associated with reward and the other with entropy, ensuring clean and stable value targets unaffected by temperature updates. The dedicated entropy Q-function, which explicitly quantifies the expected cumulative entropy, then enables us to enforce a trajectory entropy constraint and consequently control the policy's long-term stochasticity. Building on this TECRL framework, we develop a practical off-policy algorithm, DSAC-E, by extending the state-of-the-art distributional soft actor-critic with three refinements (DSAC-T). Empirical results on the OpenAI Gym benchmark demonstrate that DSAC-E achieves higher returns and better stability.
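A sketch of the separated value targets, with sign conventions and discounting simplified and toy numbers of our choosing: the reward critic's target carries no entropy bonus (so it is unaffected by temperature updates), while a second critic accumulates the trajectory entropy on its own.

```python
def td_targets(r, logp_next, q_r_next, q_h_next, gamma=0.99):
    """Separate TD targets for the reward Q-function and the entropy Q-function.
    The reward target contains no entropy term; cumulative entropy is tracked
    by its own critic, whose target adds the next step's entropy -log pi."""
    y_r = r + gamma * q_r_next                 # reward-only value target
    y_h = -logp_next + gamma * q_h_next        # cumulative entropy target
    return y_r, y_h

y_r, y_h = td_targets(r=1.0, logp_next=-0.5, q_r_next=2.0, q_h_next=1.0)
print(round(y_r, 2), round(y_h, 2))  # 2.98 1.49
```

A trajectory entropy constraint can then be enforced directly on the entropy critic's estimate, instead of tuning the temperature against the single-step entropy.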
Hanning Zhang, Pengcheng Wang, Shizhe Diao, Yong Lin, Rui Pan, Hanze Dong, Dylan Zhang, Pavlo Molchanov, Tong Zhang
Large language models (LLMs) have shown promise in performing complex multi-step reasoning, yet they continue to struggle with mathematical reasoning, often making systematic errors. A promising solution is reinforcement learning (RL) guided by reward models, particularly those focusing on process rewards, which score each intermediate step rather than solely evaluating the final outcome. This approach is more effective at guiding policy models towards correct reasoning trajectories. In this work, we propose an entropy-regularized process reward model (ER-PRM) that integrates KL-regularized Markov Decision Processes (MDPs) to balance policy optimization with the need to prevent the policy from shifting too far from its initial distribution. We derive a novel reward construction method based on this theoretical result; our analysis shows that the optimal reward model can be derived from initial policy sampling. Our empirical experiments on the MATH and GSM8K benchmarks demonstrate that ER-PRM consistently outperforms existing process reward models, achieving a 1% improvement on GSM8K and a 2-3% improvement on MATH under best-of-N evaluation, and more than a 1% improvement under RLHF. These results highlight the efficacy of entropy regularization in enhancing LLMs' reasoning capabilities.
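One way to realize entropy-regularized aggregation, consistent with KL-regularized soft values, is a log-mean-exp over the outcome rewards of completions sampled from the initial policy at a given step; this is our illustrative sketch, not necessarily the paper's exact construction, and the outcome values are hypothetical.

```python
import numpy as np

def soft_aggregate(outcomes, alpha):
    """Entropy-regularized aggregation of sampled outcome rewards:
    alpha * log mean(exp(r / alpha)). Interpolates between the plain mean
    of the outcomes (large alpha) and their max (small alpha)."""
    r = np.asarray(outcomes, dtype=float)
    m = r.max() / alpha                        # shift for numerical stability
    return alpha * (np.log(np.mean(np.exp(r / alpha - m))) + m)

outcomes = [0.0, 0.0, 1.0, 1.0]   # hypothetical correctness of rollouts from one step
print(round(soft_aggregate(outcomes, alpha=100.0), 3))  # near the mean, 0.5
print(round(soft_aggregate(outcomes, alpha=0.01), 3))   # near the max, 1.0
```

The temperature thus controls how optimistically the sampled continuations are summarized into a single process reward for the step.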