Chao Song, Jing Cui, H. Wang, J. Hao, H. Feng, Ying Li
Dec 28, 2018 · quant-ph
Medium-scale quantum devices that integrate hundreds of physical qubits are likely to be developed in the near future. However, such devices will lack the resources for realizing quantum fault tolerance. The main challenge in exploiting the advantage of quantum computation is therefore to minimize the impact of device and control imperfections without encoding. Quantum error mitigation is a solution that satisfies this requirement. Here, we demonstrate an error mitigation protocol based on gate set tomography and quasiprobability decomposition. One- and two-qubit circuits are tested on a superconducting device, and computation errors are successfully suppressed. Because this protocol is universal for digital quantum computers and for algorithms computing expected values, our results suggest that error mitigation can be an essential component of near-future quantum computation.
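The quasiprobability idea behind such protocols can be sketched in toy form. This is a generic illustration, not the paper's calibrated protocol: the quasiprobabilities and per-operation expectation values below are invented, and in practice each sample would correspond to one shot of a randomly chosen circuit variant on hardware.

```python
import random

# Toy sketch of quasiprobability error mitigation (illustrative numbers only).
# Gate set tomography characterizes the noisy operations; the ideal operation
# is then decomposed as a linear combination of implementable noisy operations
# O_i with quasiprobabilities q_i (sum q_i = 1, but some q_i < 0).  The ideal
# expectation value is recovered by sampling O_i with probability |q_i|/C and
# weighting each outcome by C * sign(q_i), where C = sum |q_i| >= 1 is the
# sampling overhead.

def mitigated_expectation(quasi_probs, noisy_expectations, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    C = sum(abs(q) for q in quasi_probs)               # sampling overhead
    weights = [abs(q) / C for q in quasi_probs]
    total = 0.0
    for _ in range(n_samples):
        i = rng.choices(range(len(quasi_probs)), weights=weights)[0]
        sign = 1.0 if quasi_probs[i] >= 0 else -1.0
        # In an experiment this would be one measured shot of circuit variant i.
        total += C * sign * noisy_expectations[i]
    return total / n_samples

# Exact decomposed value: 0.9*1.0 - 0.2*0.5 + 0.3*0.2 = 0.86
est = mitigated_expectation([0.9, -0.2, 0.3], [1.0, 0.5, 0.2])
```

The estimator is unbiased, but its variance grows with C², which is why quasiprobability mitigation is practical only when the noise (and hence C) is small.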
Cui Jing, Yi-Qing Zhang, Xiang Li
Network science has demonstrated its power in social network analysis based on static topologies. In reality, social contacts are dynamic and evolve concurrently in time. Nowadays they can be recorded by ubiquitous information technologies and assembled into temporal social networks, providing new insights for social reality mining. Here, we define the \emph{circle link} to measure contextual relationships in three empirical temporal social networks, and find that friends who have frequent, continuous interactions with a common friend tend to be close, which can be considered an extension of Granovetter's hypothesis to temporal social networks. Finally, we present a heuristic method based on the circle link to predict relationships and obtain acceptable results.
Miao Ye, Suxiao Wang, Jiaguang Han, Yong Wang, Xiaoli Wang, Jingxuan Wei, Peng Wen, Jing Cui
Detecting anomalies in the data collected by wireless sensor networks (WSNs) can provide crucial evidence for assessing the reliability and stability of WSNs. Existing methods for WSN anomaly detection often face challenges such as limited extraction of spatiotemporal correlation features, the absence of sample labels, few anomaly samples, and an imbalanced sample distribution. To address these issues, a spatiotemporal correlation detection model (MTAD-RD), designed from both a model-architecture and a two-stage training-strategy perspective, is proposed. In terms of model structure, the MTAD-RD backbone network includes a retentive network (RetNet) enhanced by a cross-retention (CR) module, a multigranular feature fusion module, and a graph attention network module that extracts internode correlation information. The model can integrate the intermodal correlation features and spatial features of WSN neighbor nodes while extracting global information from time series data. Moreover, its serialized inference characteristic substantially reduces inference overhead. For model training, a two-stage approach was designed. First, a contrastive learning proxy task was constructed for time series data with graph structure information in WSNs, enabling the backbone network to learn transferable features from unlabeled data via unsupervised contrastive learning, thereby addressing the absence of sample labels. Then, a caching-based sample sampler was designed to divide samples into few-shot and contrastive learning data, and a joint loss function was developed to train the dual-graph discriminator network, effectively addressing sample imbalance. In experiments on real public datasets, MTAD-RD achieved an F1 score of 90.97%, outperforming existing supervised WSN anomaly detection methods.
Miao Ye, Jing Cui, Yuan huang, Qian He, Yong Wang, Jiwen Zhang
Anomaly detection for multi-temporal modal data in wireless sensor networks (WSNs) provides an important guarantee for reliable network operation. Existing anomaly detection methods for multi-temporal modal data suffer from insufficient extraction of spatio-temporal correlation features, the high cost of annotating anomaly sample categories, and an imbalance of anomaly samples. In this paper, a graph neural network anomaly detection backbone incorporating spatio-temporal correlation features, together with a multi-task self-supervised "pre-training - graph prompting - fine-tuning" strategy, is designed for the characteristics of WSN graph-structured data. First, the backbone network is built by improving the Mamba model with a multi-scale strategy and an inter-modal fusion method, and combining it with a variational graph convolution module, so that it can fully extract spatio-temporal correlation features in the multi-node, multi-temporal modal scenarios of WSNs. Second, we design a "pre-training" method with three learning subtasks (negative-free contrastive learning, prediction, and reconstruction) to learn generic features of WSN data samples from unlabeled data, and a "graph prompting - fine-tuning" mechanism that guides the pre-trained self-supervised model through parameter fine-tuning, thereby reducing training cost and enhancing detection generalization. The F1 scores obtained on a public dataset and an actually collected dataset reach 91.30% and 92.31%, respectively, outperforming existing methods in both detection performance and generalization ability.
Jing Cui, Yufei Han, Yuzhe Ma, Jianbin Jiao, Junge Zhang
Backdoor attacks in reinforcement learning (RL) have previously relied on intense attack strategies to ensure success. However, these methods suffer from high attack costs and increased detectability. In this work, we propose BadRL, a novel approach that conducts highly sparse backdoor poisoning during training and testing while maintaining successful attacks. BadRL strategically chooses state observations with high attack values to inject triggers during training and testing, thereby reducing the chances of detection. In contrast to previous methods that use sample-agnostic trigger patterns, BadRL dynamically generates distinct trigger patterns based on the targeted state observations, enhancing its effectiveness. Theoretical analysis shows that the targeted backdoor attack is always viable and remains stealthy under specific assumptions. Empirical results on various classic RL tasks show that BadRL can substantially degrade the performance of a victim agent with minimal poisoning effort (0.003% of total training steps) during training and infrequent attacks during testing.
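A minimal sketch of the sparse-poisoning idea: score candidate states and poison only a tiny top fraction. The attack-value proxy below (the gap between the victim's best action value and the attacker's target action value) and all names are hypothetical illustrations, not the BadRL definition.

```python
# Illustrative sketch only (not the BadRL algorithm): rank states by an
# assumed "attack value" and poison just the top fraction.  The proxy here
# is the gap between the victim's best action value and the value of the
# attacker's target action -- larger gap, larger behavioral damage when the
# trigger forces the target action.

def select_poison_states(q_values, target_action, budget=0.003):
    """q_values: {state: [q(a0), q(a1), ...]}.  Returns states to poison.

    budget is the fraction of states to poison (the paper reports poisoning
    only 0.003% of total training steps)."""
    scores = {s: max(q) - q[target_action] for s, q in q_values.items()}
    k = max(1, int(len(scores) * budget))
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy Q-table: "s2" has the largest gap (3.0) and is picked first.
q = {"s0": [1.0, 0.2], "s1": [0.5, 0.5], "s2": [2.0, -1.0]}
picked = select_poison_states(q, target_action=1, budget=0.34)
```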
Jing Cui, Yufei Han, Jianbin Jiao, Junge Zhang
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety controls. However, the persistence of implanted backdoors under user-driven post-deployment continual fine-tuning has rarely been examined. Most prior works evaluate the effectiveness and generalization of implanted backdoors only at release time, and empirical evidence shows that the persistence of naively injected backdoors degrades after updates. In this work, we study whether and how implanted backdoors persist through multi-stage post-deployment fine-tuning. We propose P-Trojan, a trigger-based attack algorithm that explicitly optimizes for backdoor persistence across repeated updates. By aligning poisoned gradients with those of clean tasks on token embeddings, the implanted backdoor mapping is less likely to be suppressed or forgotten during subsequent updates. Theoretical analysis shows the feasibility of such persistent backdoor attacks under continual fine-tuning, and experiments on the Qwen2.5 and LLaMA3 families of LLMs, across diverse task sequences, demonstrate that P-Trojan achieves over 99% persistence while preserving clean-task accuracy. Our findings highlight the need for persistence-aware evaluation and stronger defenses in realistic model adaptation pipelines.
Dizhan Xue, Jing Cui, Shengsheng Qian, Chuanrui Hu, Changsheng Xu
Intelligent agents powered by large language models (LLMs) have recently demonstrated impressive capabilities and gained increasing popularity on social media platforms. While LLM agents are reshaping the ecology of social media, there is as yet no comprehensive evaluation of their ability to comprehend media content, understand user behaviors, and make intricate decisions. To address this gap, we introduce SoMe, a pioneering benchmark designed to evaluate social media agents equipped with various agent tools for accessing and analyzing social media data. SoMe comprises a diverse collection of 8 social media agent tasks, 9,164,284 posts, 6,591 user profiles, and 25,686 reports from various social media platforms and external websites, with 17,869 meticulously annotated task queries. Compared with existing datasets and benchmarks for social media tasks, SoMe is the first to provide a versatile and realistic platform for LLM-based social media agents to handle diverse social media tasks. Through extensive quantitative and qualitative analysis, we provide a first overview of the performance of mainstream agentic LLMs in realistic social media environments and identify several limitations. Our evaluation reveals that neither current closed-source nor open-source LLMs handle social media agent tasks satisfactorily. SoMe provides a challenging yet meaningful testbed for future social media agents. Our code and data are available at https://github.com/LivXue/SoMe
Yufei Xie, Jing Cui, Mengdie Wang
With the development of China's economy and society, the importance of the "craftsman's spirit" has become increasingly prominent. As the main institutions for training technical talents, higher vocational colleges vigorously explore paths for cultivating the craftsman's spirit; this provides new ideas and directions for the reform and development of higher vocational education and answers a fundamental need of the national innovation-driven development strategy. Based on a questionnaire survey of vocational students within a certain range, this paper analyzes, at multiple levels, the problems in the cultivation path of the craftsman's spirit in higher vocational education and proposes countermeasures.
Team Seedance, Heyi Chen, Siyan Chen, Xin Chen, Yanfei Chen, Ying Chen, Zhuo Chen, Feng Cheng, Tianheng Cheng, Xinqi Cheng, Xuyan Chi, Jian Cong, Jing Cui, Qinpeng Cui, Qide Dong, Junliang Fan, Jing Fang, Zetao Fang, Chengjian Feng, Han Feng, Mingyuan Gao, Yu Gao, Dong Guo, Qiushan Guo, Boyang Hao, Qingkai Hao, Bibo He, Qian He, Tuyen Hoang, Ruoqing Hu, Xi Hu, Weilin Huang, Zhaoyang Huang, Zhongyi Huang, Donglei Ji, Siqi Jiang, Wei Jiang, Yunpu Jiang, Zhuo Jiang, Ashley Kim, Jianan Kong, Zhichao Lai, Shanshan Lao, Yichong Leng, Ai Li, Feiya Li, Gen Li, Huixia Li, JiaShi Li, Liang Li, Ming Li, Shanshan Li, Tao Li, Xian Li, Xiaojie Li, Xiaoyang Li, Xingxing Li, Yameng Li, Yifu Li, Yiying Li, Chao Liang, Han Liang, Jianzhong Liang, Ying Liang, Zhiqiang Liang, Wang Liao, Yalin Liao, Heng Lin, Kengyu Lin, Shanchuan Lin, Xi Lin, Zhijie Lin, Feng Ling, Fangfang Liu, Gaohong Liu, Jiawei Liu, Jie Liu, Jihao Liu, Shouda Liu, Shu Liu, Sichao Liu, Songwei Liu, Xin Liu, Xue Liu, Yibo Liu, Zikun Liu, Zuxi Liu, Junlin Lyu, Lecheng Lyu, Qian Lyu, Han Mu, Xiaonan Nie, Jingzhe Ning, Xitong Pan, Yanghua Peng, Lianke Qin, Xueqiong Qu, Yuxi Ren, Kai Shen, Guang Shi, Lei Shi, Yan Song, Yinglong Song, Fan Sun, Li Sun, Renfei Sun, Yan Sun, Zeyu Sun, Wenjing Tang, Yaxue Tang, Zirui Tao, Feng Wang, Furui Wang, Jinran Wang, Junkai Wang, Ke Wang, Kexin Wang, Qingyi Wang, Rui Wang, Sen Wang, Shuai Wang, Tingru Wang, Weichen Wang, Xin Wang, Yanhui Wang, Yue Wang, Yuping Wang, Yuxuan Wang, Ziyu Wang, Guoqiang Wei, Wanru Wei, Di Wu, Guohong Wu, Hanjie Wu, Jian Wu, Jie Wu, Ruolan Wu, Xinglong Wu, Yonghui Wu, Ruiqi Xia, Liang Xiang, Fei Xiao, XueFeng Xiao, Pan Xie, Shuangyi Xie, Shuang Xu, Jinlan Xue, Shen Yan, Bangbang Yang, Ceyuan Yang, Jiaqi Yang, Runkai Yang, Tao Yang, Yang Yang, Yihang Yang, ZhiXian Yang, Ziyan Yang, Songting Yao, Yifan Yao, Zilyu Ye, Bowen Yu, Jian Yu, Chujie Yuan, Linxiao Yuan, Sichun Zeng, Weihong Zeng, Xuejiao Zeng, Yan Zeng, Chuntao Zhang, Heng Zhang, Jingjie Zhang, Kuo Zhang, Liang Zhang, Liying Zhang, Manlin Zhang, Ting Zhang, Weida Zhang, Xiaohe Zhang, Xinyan Zhang, Yan Zhang, Yuan Zhang, Zixiang Zhang, Fengxuan Zhao, Huating Zhao, Yang Zhao, Hao Zheng, Jianbin Zheng, Xiaozheng Zheng, Yangyang Zheng, Yijie Zheng, Jiexin Zhou, Jiahui Zhu, Kuan Zhu, Shenhan Zhu, Wenjia Zhu, Benhui Zou, Feilong Zuo
Yu Cai, Cheng Jin, Jiabo Ma, Fengtao Zhou, Yingxue Xu, Zhengrui Guo, Yihui Wang, Zhengyu Zhang, Ling Liang, Yonghao Tan, Pingcheng Dong, Du Cai, On Ki Tang, Chenglong Zhao, Xi Wang, Can Yang, Yali Xu, Jing Cui, Zhenhui Li, Ronald Cheong Kin Chan, Yueping Liu, Feng Gao, Xiuming Zhang, Li Liang, Hao Chen, Kwang-Ting Cheng
Pathology foundation models (PFMs) have enabled robust generalization in computational pathology through large-scale datasets and expansive architectures, but their substantial computational cost, particularly for gigapixel whole slide images, limits clinical accessibility and scalability. Here, we present LitePath, a deployment-friendly foundation framework designed to mitigate model over-parameterization and patch-level redundancy. LitePath integrates LiteFM, a compact model distilled from three large PFMs (Virchow2, H-Optimus-1 and UNI2) using 190 million patches, and the Adaptive Patch Selector (APS), a lightweight component for task-specific patch selection. The framework reduces model parameters by 28x and lowers FLOPs by 403.5x relative to Virchow2, enabling deployment on low-power edge hardware such as the NVIDIA Jetson Orin Nano Super. On this device, LitePath processes 208 slides per hour, 104.5x faster than Virchow2, and consumes 0.36 kWh per 3,000 slides, 171x lower than Virchow2 on an RTX3090 GPU. We validated accuracy using 37 cohorts (26 internal, 9 external, and 2 prospective) across four organs and 26 tasks, comprising 15,672 slides from 9,808 patients disjoint from the pretraining data. LitePath ranks second among 19 evaluated models and outperforms larger models including H-Optimus-1, mSTAR, UNI2 and GPFM, while retaining 99.71% of the AUC of Virchow2 on average. To quantify the balance between accuracy and efficiency, we propose the Deployability Score (D-Score), defined as the weighted geometric mean of normalized AUC and normalized FLOPs, on which LitePath achieves the highest value, surpassing Virchow2 by 10.64%. These results demonstrate that LitePath enables rapid, cost-effective and energy-efficient pathology image analysis on accessible hardware while maintaining accuracy comparable to state-of-the-art PFMs and reducing the carbon footprint of AI deployment.
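A D-Score-style metric can be sketched as follows. The abstract specifies only "weighted geometric mean of normalized AUC and normalized FLOPs"; the min-max normalization, the inversion of the FLOPs term (lower cost is better), the weight w = 0.5, and all numbers below are illustrative assumptions rather than the paper's exact definition.

```python
# Hedged sketch of a Deployability-Score-style metric (assumed details:
# min-max normalization, inverted FLOPs term, equal weight w = 0.5).

def d_score(auc, flops, auc_range, flops_range, w=0.5):
    auc_n = (auc - auc_range[0]) / (auc_range[1] - auc_range[0])
    # Lower FLOPs are better, so flip the normalized cost into a benefit.
    flops_n = 1.0 - (flops - flops_range[0]) / (flops_range[1] - flops_range[0])
    return auc_n ** w * flops_n ** (1.0 - w)

# A compact model with slightly lower AUC but far fewer FLOPs outscores a
# large model -- the accuracy/efficiency trade-off the D-Score rewards.
lite = d_score(auc=0.90, flops=1.0,   auc_range=(0.5, 1.0), flops_range=(0.0, 400.0))
big  = d_score(auc=0.91, flops=380.0, auc_range=(0.5, 1.0), flops_range=(0.0, 400.0))
```

The geometric mean drives the score toward zero whenever either term collapses, so a model cannot compensate for extreme compute cost with marginally higher accuracy.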
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
Large Language Models (LLMs) have revolutionized artificial intelligence and machine learning through their advanced text processing and generation capabilities. However, their widespread deployment has raised significant safety and reliability concerns. Established vulnerabilities in deep neural networks, coupled with emerging threat models, may compromise security evaluations and create a false sense of security. Given the extensive research in the field of LLM security, we believe that summarizing the current state of affairs will help the research community better understand the present landscape and inform future developments. This paper reviews current research on LLM vulnerabilities and threats, and evaluates the effectiveness of contemporary defense mechanisms. We analyze recent studies on attack vectors and model weaknesses, providing insights into attack mechanisms and the evolving threat landscape. We also examine current defense strategies, highlighting their strengths and limitations. By contrasting advancements in attack and defense methodologies, we identify research gaps and propose future directions to enhance LLM security. Our goal is to advance the understanding of LLM safety challenges and guide the development of more robust security measures.
Jiabo Ma, Yingxue Xu, Fengtao Zhou, Yihui Wang, Cheng Jin, Zhengrui Guo, Jianfeng Wu, On Ki Tang, Huajun Zhou, Xi Wang, Luyang Luo, Zhengyu Zhang, Du Cai, Zizhao Gao, Wei Wang, Yueping Liu, Jiankun He, Jing Cui, Zhenhui Li, Jing Zhang, Feng Gao, Xiuming Zhang, Li Liang, Ronald Cheong Kin Chan, Zhe Wang, Hao Chen
The emergence of pathology foundation models (PFMs) has revolutionized computational histopathology, enabling highly accurate, generalized whole-slide image (WSI) analysis for improved cancer diagnosis and prognosis assessment. While these models show remarkable potential across cancer diagnostics and prognostics, their clinical translation faces critical challenges, including variability in the optimal model across cancer types, potential data leakage in evaluation, and the lack of standardized benchmarks. Without rigorous, unbiased evaluation, even the most advanced PFMs risk remaining confined to research settings, delaying their life-saving applications. Existing benchmarking efforts remain limited by a narrow cancer-type focus, potential pretraining data overlaps, or incomplete task coverage. We present PathBench, the first comprehensive benchmark addressing these gaps through: multi-center in-house datasets spanning common cancers with rigorous leakage prevention, evaluation across the full clinical spectrum from diagnosis to prognosis, and an automated leaderboard system for continuous model assessment. Our framework incorporates large-scale data, enabling objective comparison of PFMs while reflecting real-world clinical complexity. All evaluation data come from private medical providers, with strict exclusion of any pretraining usage to avoid data leakage risks. We have collected 15,888 WSIs from 8,549 patients across 10 hospitals, encompassing over 64 diagnosis and prognosis tasks. Currently, our evaluation of 19 PFMs shows that Virchow2 and H-Optimus-1 are the most effective models overall. This work provides researchers with a robust platform for model development and offers clinicians actionable insights into PFM performance across diverse clinical scenarios, ultimately accelerating the translation of these transformative technologies into routine pathology practice.
Jingshi Cui, Peibiao Zhao
In the present paper, we first establish and verify a new sharp hyperbolic version of the Michael-Simon inequality for mean curvatures in hyperbolic space $\mathbb{H}^{n+1}$, based on the locally constrained inverse curvature flow introduced by Brendle, Guan and Li, provided that $M$ is $h$-convex and $f$ is a positive smooth function, where $\lambda'(r)=\cosh r$. In particular, when $f$ is constant, (0.1) coincides with the Minkowski type inequality stated by Brendle, Hung, and Wang. Further, we also establish and confirm a new sharp Michael-Simon inequality for the $k$-th mean curvatures in $\mathbb{H}^{n+1}$ by virtue of the Brendle-Guan-Li flow, provided that $M$ is $h$-convex and $\Omega$ is the domain enclosed by $M$. In particular, when $f$ is constant and $k$ is odd, (0.2) is exactly the weighted Alexandrov-Fenchel inequality proven by Hu, Li, and Wei.
Jingshi Cui, Peibiao Zhao
In this paper, we introduce a kind of inverse mean curvature flow (1.2) for Legendrian curves in a Sasakian sub-Riemannian 3-manifold $M$, which differs slightly from the classical one, and confirm that this flow preserves the Legendrian condition and increases the length of curves. We establish the long-time existence of the flow (1.2) when the Webster scalar curvature $W$ of $M$ satisfies $W \in (-\infty, \bar{W}_{0}) \cup \{0\} \cup (W_{0}, +\infty)$, where $\bar{W}_{0} < 0$ and $W_{0} > 0$ are constants. Moreover, we derive that the local limit curve (the asymptotic behavior) along the flow (1.2) is a geodesic of vanishing curvature when $W \geq 0$, whereas it is a geodesic of nonvanishing curvature when $W$ is a negative constant. In particular, in the first Heisenberg group $\mathbb{M}(0)$, we further construct a length-preserving flow (1.3) via a dilation of the flow (1.2) and show that closed Legendrian curves converge to Euclidean helices with vertical axis. By exploiting the properties of the flow (1.3), we establish a Minkowski-type formula for Legendrian curves in $\mathbb{M}(0)$ and provide a new proof of the fact that the total curvature of $\gamma \subset \mathbb{M}(0)$ with strictly positive curvature equals $2\pi$.
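The closing total-curvature statement admits the standard formulation below, where $k$ denotes the curvature of the closed Legendrian curve and $ds$ the arc-length element:

```latex
\int_{\gamma} k \, ds = 2\pi \quad \text{for closed Legendrian } \gamma \subset \mathbb{M}(0) \text{ with } k > 0 .
```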
Jingshi Cui, Peibiao Zhao
Brendle [6] successfully established the sharp Michael-Simon inequality for mean curvature on Riemannian manifolds with nonnegative sectional curvature ($\mathcal{K} \geq 0$), with a proof relying on the Alexandrov-Bakelman-Pucci method. Nevertheless, this result cannot be extended to hyperbolic space $\mathbb{H}^{n+1}$ ($\mathcal{K} = -1$), as demonstrated by Counterexample 1.7. In the present paper, we propose Conjectures 1.8 and 1.9 concerning the hyperbolic version of the sharp Michael-Simon type inequality for $k$-th mean curvatures. However, the proof method of [6] fails to verify these conjectures. Recently, the authors [12] proved Conjectures 1.8 and 1.9 only for $h$-convex hypersurfaces by means of the Brendle-Guan-Li flow. This paper utilizes other types of curvature flows to prove Conjectures 1.8 and 1.9 for hypersurfaces with weaker convexity conditions. For $k = 1$, we first investigate a new locally constrained mean curvature flow (1.9) in $\mathbb{H}^{n+1}$ and prove its long-time existence and exponential convergence. Then, the sharp Michael-Simon type inequality for mean curvature of starshaped hypersurfaces in $\mathbb{H}^{n+1}$ is confirmed through the flow (1.9). For $k \geq 2$, the sharp Michael-Simon inequality for $k$-th mean curvatures of starshaped, strictly $k$-convex hypersurfaces in $\mathbb{H}^{n+1}$ is proven using the locally constrained inverse curvature flow (1.11) introduced by Scheuer and Xia [31].
J. Cui, P. Zhao
In this paper, we first investigate a new locally constrained mean curvature flow (1.5) and prove that if the initial hypersurface is smooth, compact and starshaped, then the solution of the flow (1.5) exists for all time and converges to a sphere in the smooth topology. Following this flow argument, we not only obtain a new proof of the celebrated sharp Michael-Simon inequality for mean curvature in (n+1)-dimensional Euclidean space, but also derive the necessary and sufficient condition for equality. In the second part of this paper, we study a mean curvature type flow (1.7) of static convex hypersurfaces in (n+1)-dimensional Euclidean space, and prove that the flow (1.7) has a unique smooth solution for all time t>0 and that the static convexity of the hypersurface is preserved along the flow (1.7). Moreover, the solution of the flow (1.7) converges exponentially to a sphere of radius R in the smooth topology as time tends to infinity. By exploiting the properties of this flow, we develop a new sharp Michael-Simon inequality for the k-th mean curvature.
Jingshi Cui, Peibiao Zhao
Huisken and Ilmanen [37] created the theory of weak solutions for the inverse mean curvature flow (IMCF) of hypersurfaces on Riemannian manifolds and successfully proved a Riemannian version of the Penrose inequality. The present paper constructs a sub-Riemannian version of this theory of weak solutions for inverse mean curvature flows of hypersurfaces in the first Heisenberg group $\mathbb{H}^{1}$, and provides a positive answer to an open problem: the Heintze-Karcher inequality in $\mathbb{H}^{1}$. Furthermore, we introduce an $\mathbb{H}$-perimeter-preserving flow (1.8) in the first Heisenberg group $\mathbb{H}^{1}$, derived by applying the Heisenberg dilation to the HIMCF. This rescaled flow is then applied to establish a Minkowski-type formula in $\mathbb{H}^{1}$.