Jiahui Zhu, Zdzisław Brzeźniak, Wei Liu
We present remarkably simple proofs of the Burkholder-Davis-Gundy inequalities for stochastic integrals and of maximal inequalities for stochastic convolutions in Banach spaces driven by Lévy-type processes. Exponential estimates for stochastic convolutions are obtained, and two versions of Itô's formula in Banach spaces are also derived. Based on the maximal inequality obtained, the existence and uniqueness of mild solutions of the stochastic quasi-geostrophic equation with Lévy noise are established.
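For context, the classical scalar Burkholder-Davis-Gundy inequality that these results extend states that for every $0<p<\infty$ there exist constants $c_p, C_p>0$ such that, for every real continuous local martingale $M$ with $M_0=0$ and quadratic variation $[M]$, \begin{align*} c_p\,\mathbb{E}[M]_T^{p/2}\leq \mathbb{E}\sup_{0\leq t\leq T}|M_t|^{p}\leq C_p\,\mathbb{E}[M]_T^{p/2}, \qquad T\geq 0. \end{align*} The Banach space versions replace $|M_t|$ by the norm of the stochastic integral or convolution and control it by an appropriate square-function term.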
Jiahui Zhu, Zdzisław Brzeźniak, Erika Hausenblas
Let $(E, \| \cdot\|)$ be a Banach space such that, for some $q\geq 2$, the function $x\mapsto \|x\|^q$ is of class $C^2$ and its first and second Fréchet derivatives are bounded by constant multiples of the $(q-1)$-th and $(q-2)$-th powers of the norm, respectively, and let $S$ be a $C_0$-semigroup of contraction type on $(E, \| \cdot\|)$. We consider the following stochastic convolution process \begin{align*} u(t)=\int_0^t\int_Z S(t-s)\xi(s,z)\,\tilde{N}(\mathrm{d} s,\mathrm{d} z), \;\;\; t\geq 0, \end{align*} where $\tilde{N}$ is a compensated Poisson random measure on a measurable space $(Z,\mathcal{Z})$ and $\xi:[0,\infty)\times\Omega\times Z\rightarrow E$ is an $\mathbb{F}\otimes \mathcal{Z}$-predictable function. We prove that there exists a càdlàg modification $\tilde{u}$ of the process $u$ which satisfies the following maximal inequality \begin{align*} \mathbb{E} \sup_{0\leq s\leq t} \|\tilde{u}(s)\|^{q^\prime}\leq C\, \mathbb{E} \left(\int_0^t\int_Z \|\xi(s,z) \|^{p}\,N(\mathrm{d} s,\mathrm{d} z)\right)^{\frac{q^\prime}{p}}, \end{align*} for all $q^\prime \geq q$ and $1<p\leq 2$, with $C=C(q,p)$.
Jiahui Zhu, Wei Liu, Jianliang Zhai
In this work we establish a Freidlin-Wentzell type large deviation principle for the stochastic nonlinear Schrödinger equation, with either focusing or defocusing nonlinearity, driven by nonlinear multiplicative Lévy noise in the Marcus canonical form. This task is challenging in the current setting due to the presence of the power-type nonlinear term, the lack of a regularization effect of the Schrödinger operator, and the absence of compactness of embeddings. To overcome these difficulties, we employ a regularization procedure based on Yosida approximations and implement techniques such as time discretization, cut-off arguments, and relative entropy estimates for sequences of probability measures. Our approach circumvents the need for compactness conditions, distinguishing this work from previous studies.
Zdzisław Brzeźniak, Jiahui Zhu
We consider a type of stochastic nonlinear beam equation driven by Lévy noise. Using a suitable Lyapunov function and applying the Khasminskii test, we show non-explosion of the mild solutions. In addition, under additional assumptions we prove the exponential stability of the solutions.
Qing Huang, Jiahui Zhu, Zhenchang Xing, Huan Jin, Changjing Wang, Xiwei Xu
API documentation, technical blogs and programming Q&A sites contain numerous partial code snippets that can be reused in programming tasks, but these snippets are often uncompilable due to unresolved names and syntax errors. To facilitate partial code reuse, we propose the Partial Code Reuse Chain (PCR-Chain), which resolves fully-qualified names (FQNs) and fixes last-mile syntax errors in partial code using a large language model (LLM) such as ChatGPT. Methodologically, PCR-Chain rests on a global-level prompt architecture (which combines three design ideas: hierarchical task breakdown, prompt composition, and a mix of prompt-based AI and non-AI units) and on local-level prompt design. Technically, PCR-Chain employs in-context learning rather than symbolic, costly training methods. Experimental results demonstrate that for a dynamically-typed language (Python), PCR-Chain outperforms the current state-of-the-art (SOTA) method RING by 5% in accuracy. For a statically-typed language (Java), our approach achieves an accuracy of 80.5% in resolving both non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) that can only address last-mile syntax errors. The correct execution of the units, modules, and the full PCR-Chain demonstrates the effectiveness of the prompt design, composition, and architecture, and opens up possibilities for building software engineering tools based on LLMs rather than traditional program analysis methods.
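As a loose illustration of mixing non-AI and AI units in such a chain (a hypothetical sketch, not the authors' implementation; the lookup table stands in for the LLM call, and all names here are invented for the example):

```python
import re

# Hypothetical PCR-Chain-style pipeline: a non-AI unit extracts candidate
# names, and an "AI unit" (stubbed by a lookup table standing in for an LLM)
# resolves them to fully-qualified names (FQNs) and adds the missing imports.
FQN_TABLE = {"DataFrame": "pandas.DataFrame", "Path": "pathlib.Path"}  # stub

def find_unresolved_names(code):
    # Non-AI unit: naive extraction of capitalized identifiers.
    return [n for n in re.findall(r"\b[A-Z]\w+\b", code) if n in FQN_TABLE]

def resolve_fqns(code):
    # "AI" unit: replace each simple name by its FQN and prepend the import.
    for name in find_unresolved_names(code):
        fqn = FQN_TABLE[name]
        module = fqn.rsplit(".", 1)[0]
        code = f"import {module}\n" + code.replace(name, fqn)
    return code

snippet = "df = DataFrame({'a': [1, 2]})"
fixed = resolve_fqns(snippet)
print(fixed)
```

A real chain would replace the table with a prompted LLM call and add a unit that repairs last-mile syntax errors before execution.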
Jan van Neerven, Jiahui Zhu
Let $(e^{tA})_{t \geq 0}$ be a $C_0$-contraction semigroup on a 2-smooth Banach space $E$, let $(W_t)_{t \geq 0}$ be a cylindrical Brownian motion in a Hilbert space $H$, and let $(g_t)_{t \geq 0}$ be a progressively measurable process with values in the space $\gamma(H,E)$ of all $\gamma$-radonifying operators from $H$ to $E$. We prove that for all $0<p<\infty$ there exists a constant $C$, depending only on $p$ and $E$, such that for all $T \geq 0$ we have \begin{align*} \mathbb{E} \sup_{0\le t\le T} \Big\| \int_0^t e^{(t-s)A} g_s \,\mathrm{d}W_s \Big\|^p \leq C\, \mathbb{E} \left(\int_0^T \| g_t \|_{\gamma(H,E)}^2 \,\mathrm{d}t\right)^{\frac{p}{2}}. \end{align*} For $p \geq 2$ the proof is based on the observation that $\psi(x) = \| x \|^p$ is Fréchet differentiable and its derivative satisfies the Lipschitz estimate $\| \psi'(x) - \psi'(y)\| \leq C(\| x \| + \| y \|)^{p-2} \| x-y \|$; the extension to $0<p<2$ proceeds via Lenglart's inequality.
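As a quick numerical sanity check in the simplest scalar case (a one-dimensional toy with $A=-a$ and $p=2$, not the Banach-space setting; all parameter choices below are illustrative), one can simulate the stochastic convolution and compare both sides of the inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
a, T, n, paths, p = 1.0, 1.0, 500, 2000, 2.0
dt = T / n
g = np.ones(n)                      # deterministic toy integrand g_s = 1
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))

# Euler scheme for u(t) = int_0^t e^{-a(t-s)} g_s dW_s,
# using the recursion u_{k+1} = e^{-a dt} u_k + g_k dW_k.
u = np.zeros((paths, n + 1))
for k in range(n):
    u[:, k + 1] = np.exp(-a * dt) * u[:, k] + g[k] * dW[:, k]

lhs = np.mean(np.max(np.abs(u), axis=1) ** p)   # E sup_t |u(t)|^p
rhs = (np.sum(g**2) * dt) ** (p / 2)            # E (int g^2 dt)^{p/2}
print(lhs, rhs)                                  # lhs stays within C * rhs
```

Here the right-hand side equals $1$ by construction, and the Monte Carlo left-hand side remains bounded by a modest constant times it, consistent with the estimate.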
Kaicheng Ni, Heling Su, Jiahui Zhu
We consider the stochastic incompressible magnetohydrodynamic equations driven by additive jump noise on either the whole space $\mathbb{R}^d$, $d=2,3$, or a smooth bounded domain $D$ in $\mathbb{R}^d$. We establish the local existence and uniqueness of a mild solution in the space $L^q(0,T;\mathbb{L}^{p\otimes}_\sigma(D))$ allowing for initial data with less regularity, including the marginal case $u_0\in \mathbb{L}^{d\otimes}_\sigma(D)$. In the two-dimensional case, we also prove the global existence of mild solutions.
Jiahui Zhu, Jianliang Zhai
In this paper, we study the stochastic logarithmic Schrödinger equation with saturated nonlinear multiplicative Lévy noise. Global well-posedness is established for the stochastic logarithmic Schrödinger equation in an appropriate Orlicz space by constructing solutions of a regularized equation that converge strongly to a solution of the original equation.
Jiahui Zhu, Kihyun Yu, Dabeen Lee, Xin Liu, Honghao Wei
Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret $O(\sqrt{K})$ and strong constraint violation $O(\sqrt{K})$ without relying on Slater's condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can further improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
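The primal-dual idea behind such algorithms can be sketched on a one-step toy problem (a generic mirror-descent primal-dual scheme under illustrative assumptions, not the paper's OMDPD algorithm): maximize $r^\top\pi$ over the probability simplex subject to a cost budget $c^\top\pi \le b$.

```python
import numpy as np

# Generic primal-dual mirror descent on a toy constrained problem (an
# illustration of the technique, not the paper's OMDPD): maximize r.pi
# over the simplex subject to c.pi <= b, with an entropy mirror map.
r = np.array([1.0, 0.5, 0.2])      # rewards (assumed known in this toy)
c = np.array([1.0, 0.2, 0.0])      # per-action costs
b = 0.3                            # cost budget

pi = np.ones(3) / 3
lam, eta, K = 0.0, 0.1, 2000
pi_avg = np.zeros(3)
for _ in range(K):
    grad = r - lam * c                          # Lagrangian gradient in pi
    pi = pi * np.exp(eta * grad)                # exponentiated-gradient
    pi /= pi.sum()                              # (mirror-descent) step
    lam = max(0.0, lam + eta * (c @ pi - b))    # dual ascent on violation
    pi_avg += pi / K                            # average iterate

print(pi_avg, c @ pi_avg, r @ pi_avg)
```

The averaged iterate is nearly feasible while collecting most of the reward; the adversarial-constraint analysis in the paper controls how such violations accumulate over $K$ episodes.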
Jiahui Zhu, Zdzisław Brzeźniak, Wei Liu
We study the existence and uniqueness of solutions of the 2D stochastic Navier-Stokes equations with spatially irregular jump noise for initial data in certain Sobolev spaces of negative order. Compared with the Galerkin approximation method, the main advantage of this work is the use of an $\mathbb{L}^p$-setting, which yields the solution under much weaker assumptions on the noise and the initial condition.
Zdzisław Brzeźniak, Wei Liu, Jiahui Zhu
We establish a new version of the stochastic Strichartz estimate for stochastic convolutions driven by jump noise, which we apply to the stochastic nonlinear Schrödinger equation with nonlinear multiplicative jump noise in the Marcus canonical form. With the help of the deterministic Strichartz estimates, we prove the existence and uniqueness of a global solution to the stochastic nonlinear Schrödinger equation in $L^2(\mathbb{R}^n)$ with either focusing or defocusing nonlinearity in the full subcritical range of exponents, as in the deterministic case.
Yicheng Deng, Cheng Sun, Jiahui Zhu, Yongqi Sun
Recovering 3D human pose from 2D joints is still a challenging problem, especially without any 3D annotation, video information, or multi-view information. In this paper, we present an unsupervised GAN-based model consisting of multiple weight-sharing generators to estimate a 3D human pose from a single image without 3D annotations. In our model, we introduce single-view-multi-angle consistency (SVMAC) to significantly improve estimation performance. With 2D joint locations as input, our model estimates a 3D pose and a camera simultaneously. During training, the estimated 3D pose is rotated by random angles, and the estimated camera projects the rotated 3D poses back to 2D. These 2D reprojections are fed into the weight-sharing generators to estimate the corresponding 3D poses and cameras, which are then mixed to impose SVMAC constraints that self-supervise the training process. Experimental results show that our method outperforms state-of-the-art unsupervised methods on Human3.6M and MPI-INF-3DHP. Moreover, qualitative results on MPII and LSP show that our method generalizes well to unseen data.
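The rotate-then-reproject consistency idea can be sketched in a few lines (a simplified stand-in with a weak-perspective camera and a perfect "lift" in place of the generator network; `project`, `rotate_y`, and the joint count are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def project(pose3d):
    """Hypothetical weak-perspective camera: drop the depth coordinate."""
    return pose3d[:, :2]

def rotate_y(pose3d, theta):
    """Rotate a (J, 3) pose about the vertical axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return pose3d @ R.T

pose3d = rng.normal(size=(17, 3))          # toy "estimated" 3D pose, 17 joints

theta = rng.uniform(0, 2 * np.pi)          # random training-time rotation
reproj2d = project(rotate_y(pose3d, theta))  # 2D reprojection of rotated pose

# Consistency loss: the generator's lift of the reprojection should match the
# rotated pose; here a perfect lift stands in for the weight-sharing generator.
lifted = rotate_y(pose3d, theta)
loss = np.mean((project(lifted) - reproj2d) ** 2)
print(loss)
```

In training, the lift is produced by the shared generators and this consistency term is minimized together with the adversarial loss.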
Jian Wang, Jianliang Zhai, Jiahui Zhu
In this paper, we establish the existence and uniqueness of solutions of stochastic nonlinear Schrödinger equations with additive jump noise in $L^2(\mathbb{R}^d)$. Our results cover both focusing and defocusing nonlinearities in the full subcritical range of exponents, as in the deterministic case.
Weina Wu, Jianliang Zhai, Jiahui Zhu
We establish a Freidlin-Wentzell type large deviation principle (LDP) for a class of stochastic partial differential equations with locally monotone coefficients driven by Lévy noise. Our results essentially improve a recent work on this topic (Bernoulli, 2018) by the second named author of this paper and his collaborator: we drop the compact embedding assumptions, and we make the conditions on the noise coefficient more specific and weaker. To obtain our results, we utilize an improved sufficient criterion of Budhiraja, Chen, Dupuis, and Maroulas for functionals of Poisson random measures, and the techniques introduced by the first and second named authors of this paper in \cite{WZSIAM} play important roles. As an application, Freidlin-Wentzell type LDPs are obtained, for the first time, for many SPDEs driven by Lévy noise on unbounded domains of $\mathbb{R}^d$, which generally lack compact embedding properties, such as the stochastic $p$-Laplace equation, stochastic Burgers-type equations, stochastic 2D Navier-Stokes equations, and stochastic equations of non-Newtonian fluids.
Guang Jiang, Jiahui Zhu, Yunsong Li, Pengcheng An, Yunlong Wang
Teacher-student interaction (TSI) is essential for learning efficiency and harmonious teacher-student interpersonal relationships. However, studies on TSI support tools often focus on teacher needs while neglecting student needs and autonomy. To enhance both lecturer competence in delivering interpersonal interaction and student autonomy in TSI, we developed NaMemo2, a novel augmented-reality system that allows students to express their willingness to engage in TSI and displays student information to teachers during lectures. The design and evaluation process follows a new framework, STUDIER, which can facilitate the development of theory-based, ethics-aware TSI support tools in general. The quantitative results of our four-week field study with four classes in a university suggest that NaMemo2 can improve 1) TSI in the classroom from both teacher and student perspectives, 2) student attitudes and willingness toward TSI, and 3) student attitudes toward the deployment of NaMemo2. The qualitative feedback from students and teachers indicated that improved TSI may be responsible for better attention in students and a better classroom atmosphere during lectures.
Christopher Broyles, Sougata Mardanya, Mengke Liu, Junyeong Ahn, Thao Dinh, Gadeer Alqasseri, Jalen Garner, Zackary Rehfuss, Ken Guo, Jiahui Zhu, David Martinez, Du Li, Yiqing Hao, Huibo Cao, Matt Boswell, Weiwei Xie, Jeremy G. Philbrick, Tai Kong, Li Yang, Ashvin Vishwanath, Philip Kim, Su-Yang Xu, Jennifer E. Hoffman, Jonathan D. Denlinger, Sugata Chowdhury, Sheng Ran
Since the initial discovery of two-dimensional van der Waals (vdW) materials, significant effort has been made to combine three properties, magnetism, band-structure topology, and strong electron correlations, to leverage emergent quantum phenomena and expand their potential applications. However, the discovery of a single vdW material that intrinsically hosts all three ingredients has remained an outstanding challenge. Here we report the discovery of a Kondo-interacting topological antiferromagnet in the vdW 5$f$ electron system UOTe. It has a high antiferromagnetic (AFM) transition temperature of 150 K, with a unique AFM configuration that breaks the combined parity and time reversal ($PT$) symmetry in an even number of layers while maintaining zero net magnetic moment. Our angle-resolved photoemission spectroscopy (ARPES) measurements reveal Dirac bands near the Fermi level, which, combined with our theoretical calculations, demonstrate UOTe as an AFM Dirac semimetal. Within the AFM order, we observed the presence of the Kondo interaction, as evidenced by the emergence of a 5$f$ flat band near the Fermi level below 100 K and hybridization between the Kondo band and the Dirac band. Our density functional theory calculations in its bilayer form predict UOTe as a rare example of a fully-compensated AFM Chern insulator.
Yizhou Jin, Jiahui Zhu, Guodong Wang, Shiwei Li, Jinjin Zhang, Xinyue Liu, Qingjie Liu, Yunhong Wang
Incremental anomaly detection aims to sequentially identify defects in industrial product lines but suffers from catastrophic forgetting, primarily due to knowledge overwriting during parameter updates and feature conflicts between tasks. In this work, we propose ONER (ONline Experience Replay), an end-to-end framework that addresses these issues by synergistically integrating two types of experience: (1) decomposed prompts, which dynamically generate image-conditioned prompts from reusable modules to retain prior knowledge and thereby prevent knowledge overwriting, and (2) semantic prototypes, which enforce separability in latent feature spaces at the pixel and image levels to mitigate cross-task feature conflicts. Extensive experiments demonstrate the superiority of ONER, achieving state-of-the-art performance with +4.4% Pixel AUROC and +28.3% Pixel AUPR improvements on the MVTec AD dataset over prior methods. Remarkably, ONER achieves this with only 0.019M parameters and 5 training epochs per task, confirming its efficiency and stability for real-world industrial deployment.
Miao Peng, Ben Liu, Wenjie Xu, Zihao Jiang, Jiahui Zhu, Min Peng
Temporal Knowledge Graph Reasoning (TKGR) is the task of inferring missing facts for incomplete TKGs in complex scenarios (e.g., transductive and inductive settings), which has been gaining increasing attention. Recently, to mitigate dependence on structured connections in TKGs, text-based methods have been developed to utilize rich linguistic information from entity descriptions. However, hampered by the enormous parameter counts and inflexibility of pre-trained language models, existing text-based methods struggle to balance textual knowledge and temporal information without computationally expensive, purpose-built training strategies. To tap the potential of text-based models for TKGR in various complex scenarios, we propose ChapTER, a Contrastive historical modeling framework with prefix-tuning for TEmporal Reasoning. ChapTER feeds history-contextualized text into pseudo-Siamese encoders to strike a textual-temporal balance via contrastive estimation between queries and candidates. By introducing virtual time prefix tokens, it applies a prefix-based tuning method that makes the frozen PLM capable of TKGR tasks under different settings. We evaluate ChapTER on four transductive and three few-shot inductive TKGR benchmarks, and experimental results demonstrate that ChapTER achieves superior performance compared to competitive baselines with only 0.17% tuned parameters. We conduct a thorough analysis to verify the effectiveness, flexibility and efficiency of ChapTER.
Ke Wang, Jiahui Zhu, Minjie Ren, Zeming Liu, Shiwei Li, Zongye Zhang, Chenkai Zhang, Xiaoyu Wu, Qiqi Zhan, Qingjie Liu, Yunhong Wang
The success of Large Language Models (LLMs) is inherently linked to the availability of vast, diverse, and high-quality data for training and evaluation. However, the growth rate of high-quality data is significantly outpaced by the expansion of training datasets, leading to a looming data exhaustion crisis. This underscores the urgent need to enhance data efficiency and explore new data sources. In this context, synthetic data has emerged as a promising solution. Currently, data generation primarily consists of two major approaches: data augmentation and synthesis. This paper comprehensively reviews and summarizes data generation techniques throughout the lifecycle of LLMs, including data preparation, pre-training, fine-tuning, instruction-tuning, preference alignment, and applications. Furthermore, we discuss the current constraints faced by these methods and investigate potential pathways for future development and research. Our aspiration is to equip researchers with a clear understanding of these methodologies, enabling them to swiftly identify appropriate data generation strategies in the construction of LLMs, while providing valuable insights for future exploration.
Team Seedance, Heyi Chen, Siyan Chen, Xin Chen, Yanfei Chen, Ying Chen, Zhuo Chen, Feng Cheng, Tianheng Cheng, Xinqi Cheng, Xuyan Chi, Jian Cong, Jing Cui, Qinpeng Cui, Qide Dong, Junliang Fan, Jing Fang, Zetao Fang, Chengjian Feng, Han Feng, Mingyuan Gao, Yu Gao, Dong Guo, Qiushan Guo, Boyang Hao, Qingkai Hao, Bibo He, Qian He, Tuyen Hoang, Ruoqing Hu, Xi Hu, Weilin Huang, Zhaoyang Huang, Zhongyi Huang, Donglei Ji, Siqi Jiang, Wei Jiang, Yunpu Jiang, Zhuo Jiang, Ashley Kim, Jianan Kong, Zhichao Lai, Shanshan Lao, Yichong Leng, Ai Li, Feiya Li, Gen Li, Huixia Li, JiaShi Li, Liang Li, Ming Li, Shanshan Li, Tao Li, Xian Li, Xiaojie Li, Xiaoyang Li, Xingxing Li, Yameng Li, Yifu Li, Yiying Li, Chao Liang, Han Liang, Jianzhong Liang, Ying Liang, Zhiqiang Liang, Wang Liao, Yalin Liao, Heng Lin, Kengyu Lin, Shanchuan Lin, Xi Lin, Zhijie Lin, Feng Ling, Fangfang Liu, Gaohong Liu, Jiawei Liu, Jie Liu, Jihao Liu, Shouda Liu, Shu Liu, Sichao Liu, Songwei Liu, Xin Liu, Xue Liu, Yibo Liu, Zikun Liu, Zuxi Liu, Junlin Lyu, Lecheng Lyu, Qian Lyu, Han Mu, Xiaonan Nie, Jingzhe Ning, Xitong Pan, Yanghua Peng, Lianke Qin, Xueqiong Qu, Yuxi Ren, Kai Shen, Guang Shi, Lei Shi, Yan Song, Yinglong Song, Fan Sun, Li Sun, Renfei Sun, Yan Sun, Zeyu Sun, Wenjing Tang, Yaxue Tang, Zirui Tao, Feng Wang, Furui Wang, Jinran Wang, Junkai Wang, Ke Wang, Kexin Wang, Qingyi Wang, Rui Wang, Sen Wang, Shuai Wang, Tingru Wang, Weichen Wang, Xin Wang, Yanhui Wang, Yue Wang, Yuping Wang, Yuxuan Wang, Ziyu Wang, Guoqiang Wei, Wanru Wei, Di Wu, Guohong Wu, Hanjie Wu, Jian Wu, Jie Wu, Ruolan Wu, Xinglong Wu, Yonghui Wu, Ruiqi Xia, Liang Xiang, Fei Xiao, XueFeng Xiao, Pan Xie, Shuangyi Xie, Shuang Xu, Jinlan Xue, Shen Yan, Bangbang Yang, Ceyuan Yang, Jiaqi Yang, Runkai Yang, Tao Yang, Yang Yang, Yihang Yang, ZhiXian Yang, Ziyan Yang, Songting Yao, Yifan Yao, Zilyu Ye, Bowen Yu, Jian Yu, Chujie Yuan, Linxiao Yuan, Sichun Zeng, Weihong Zeng, Xuejiao Zeng, Yan Zeng, Chuntao Zhang, Heng Zhang, Jingjie Zhang, Kuo Zhang, Liang Zhang, Liying Zhang, Manlin Zhang, Ting Zhang, Weida Zhang, Xiaohe Zhang, Xinyan Zhang, Yan Zhang, Yuan Zhang, Zixiang Zhang, Fengxuan Zhao, Huating Zhao, Yang Zhao, Hao Zheng, Jianbin Zheng, Xiaozheng Zheng, Yangyang Zheng, Yijie Zheng, Jiexin Zhou, Jiahui Zhu, Kuan Zhu, Shenhan Zhu, Wenjia Zhu, Benhui Zou, Feilong Zuo