Kaile Huang, Yu Huang, Hengfeng Wei
Distributed multi-writer atomic registers are at the heart of a large number of distributed algorithms. While enjoying the benefits of atomicity, researchers further explore fast implementations of atomic registers, which are optimal in terms of data access latency. Although it has been proved that multi-writer atomic register implementations are impossible when both reads and writes are required to be fast, it remains open whether implementations are impossible when only writes or only reads are required to be fast. This work proves the impossibility of fast-write implementations through a series of chain arguments among indistinguishable executions. We also establish the necessary and sufficient condition for fast-read implementations by extending the results for the single-writer case. This work concludes a series of studies on fast implementations of distributed atomic registers.
Chao Zhang, Jaswanth Yella, Yu Huang, Xiaoye Qian, Sergei Petrov, Andrey Rzhetsky, Sthitie Bom
With the rapid development of AI technology in recent years, there have been many studies applying deep learning models in the soft sensing area. However, while the models have become more complex, the data sets remain limited: researchers are fitting million-parameter models with hundreds of data samples, which is insufficient to exercise the effectiveness of their models, so the models often fail to perform when deployed in industrial applications. To address this long-standing problem, we are providing large-scale, high-dimensional time series manufacturing sensor data from Seagate Technology to the public. We demonstrate the challenges and effectiveness of modeling industrial big data with a Soft Sensing Transformer model on these data sets. We use the transformer because it has outperformed state-of-the-art techniques in natural language processing, and has since also performed well when applied directly to computer vision without the introduction of image-specific inductive biases. We observe the similarity between sentence structure and sensor readings, and process the multi-variable sensor readings in a time series in the same manner as sentences in natural language. The high-dimensional time-series data is formatted into the same shape as embedded sentences and fed into the transformer model. The results show that the transformer model outperforms the benchmark models in the soft sensing field based on auto-encoder and long short-term memory (LSTM) models. To the best of our knowledge, we are the first team in academia or industry to benchmark the performance of the original transformer model with large-scale numerical soft sensing data.
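The reshaping of sensor windows into transformer-style input can be sketched as follows; the shapes, the linear projection, and the sinusoidal positional encoding are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical shapes: 100 windows of 20 time steps over 64 sensor channels.
# In the paper's framing, each time step plays the role of a word and each
# window the role of a sentence.
n_windows, seq_len, n_sensors, d_model = 100, 20, 64, 128

readings = np.random.randn(n_windows, seq_len, n_sensors)

# A learned linear projection maps each step's sensor vector to the model
# dimension, mirroring a word-embedding lookup.
W_embed = np.random.randn(n_sensors, d_model) * 0.02
tokens = readings @ W_embed            # (n_windows, seq_len, d_model)

# Standard sinusoidal positional encoding, added exactly as for text.
pos = np.arange(seq_len)[:, None]
i = np.arange(d_model)[None, :]
angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
pe = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
tokens = tokens + pe                   # same shape as embedded sentences

print(tokens.shape)
```

The resulting `(windows, steps, d_model)` tensor has the same layout as a batch of embedded sentences, so it can be fed to an off-the-shelf transformer encoder unchanged.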
Xiaosong Gu, Wei Cao, Yicong Zhu, Xuan Song, Yu Huang, Xiaoxing Ma
Consensus protocols are widely used in building reliable distributed software systems, and their correctness is of vital importance. TLA+ is a lightweight formal specification language that enables precise specification of a system design and exhaustive, automated checking of the design. These features make TLA+ widely used in the specification and model checking of consensus protocols, both in academia and in industry. However, the application of TLA+ is limited by the state explosion problem in model checking. Although compositional model checking is essential to taming state explosion, existing compositional checking techniques do not sufficiently consider the characteristics of TLA+. In this work, we propose the Interaction-Preserving Abstraction (IPA) framework, which leverages the features of TLA+ and enables practical and efficient compositional model checking of consensus protocols specified in TLA+. In the IPA framework, the system specification is partitioned into multiple modules, and each module is divided into an internal part and an interaction part. The basic idea of interaction-preserving abstraction is to omit the internal part of each module, such that another module cannot distinguish whether it is interacting with the original module or the coarsened abstract one. We apply the IPA framework to the compositional checking of the TLA+ specifications of two consensus protocols, Raft and ParallelRaft. Raft is a consensus protocol originally developed in academia and now widely used in industry. ParallelRaft is the replication protocol in PolarFS, the distributed file system for the commercial database Alibaba PolarDB. We demonstrate that the IPA framework is easy to use in realistic scenarios and at the same time significantly reduces the model checking cost.
Yu Xiang, Yu Huang, Haodong Xu, Guangbo Zhang, Wenyong Wang
The identification of pedestrians using radar micro-Doppler signatures has become a hot topic in recent years. In this paper, we propose a multi-characteristic learning (MCL) model with clusters to jointly learn discrepant pedestrian micro-Doppler signatures and fuse the knowledge learned from each cluster into final decisions. The time-Doppler spectrogram (TDS) and signal statistical features extracted from FMCW radar, as two categories of micro-Doppler signatures, are used in MCL to learn the micro-motion information in pedestrians' free walking patterns. The experimental results show that our model achieves a higher accuracy rate and is more stable for pedestrian identification than previous studies, which makes our model more practical.
Yu Huang, Viacheslav Chubakov, Fabio Mantovani, Roberta L. Rudnick, William F. McDonough
The recent geoneutrino experimental results from the KamLAND and Borexino detectors reveal the usefulness of analyzing the Earth's geoneutrino flux, as it provides a constraint on the strength of radiogenic heat power, which in turn provides a test of compositional models of the bulk silicate Earth (BSE). This flux depends on the amount and distribution of heat-producing elements (HPEs: U, Th and K) in the Earth's interior. We have developed a geophysically based, three-dimensional global reference model for the abundances and distributions of HPEs in the BSE. The structure and composition of the outermost portion of the Earth, the crust and underlying lithospheric mantle, are detailed in the reference model, as this portion of the Earth has the greatest influence on the geoneutrino fluxes. The reference model combines three existing geophysical models of the global crust and yields an average crustal thickness of 34.4±4.1 km in the continents and 8.0±2.7 km in the oceans. In situ seismic velocity provided by CRUST 2.0 allows us to estimate the average composition of the deep continental crust by using new and updated compositional databases for amphibolite and granulite facies rocks in combination with laboratory ultrasonic velocity measurements. An updated xenolithic peridotite database is used to represent the average composition of the continental lithospheric mantle. Monte Carlo simulation is used to predict the geoneutrino flux at 16 selected locations and to track the asymmetrical uncertainties of radiogenic heat power due to the log-normal distributions of HPE concentrations in crustal rocks.
Yiling Yang, Yu Huang, Xiaoxing Ma, Jian Lu
Pervasive applications involve more and more autonomous computing and communicating devices, augmented with the abilities to sense and control the logical / physical environment. To enable context-awareness for such applications, we are challenged by the intrinsic asynchrony among the context-collecting devices. To this end, we introduce predicate detection theory and propose the Predicate-Detection-based Context-Awareness (PD-CA) framework, in which: a) logical time is used to explicitly cope with the asynchrony; b) the specification of predicates enables applications to express the contextual properties of their concern; c) online and incremental predicate detection algorithms effectively enable context-awareness at runtime. Under the guidance of the PD-CA framework, we present the design and implementation of the MIPA middleware, which shields applications from the burden of processing asynchronous contexts. We also demonstrate how PD-CA simplifies the development of context-aware applications. Experimental evaluations show the performance of MIPA in supporting context-aware applications despite the asynchrony.
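A minimal sketch of how logical time makes asynchrony explicit: the generic vector-clock ordering test below (not the MIPA implementation; the readings are hypothetical) distinguishes causally ordered context events from concurrent ones.

```python
def happened_before(vc_a, vc_b):
    """True iff event A causally precedes event B under vector clocks."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Neither event causally precedes the other."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

# Two sensor readings from different devices: without logical time their
# real-time order is unknowable; vector clocks expose the concurrency.
reading_x = [2, 0, 1]
reading_y = [1, 3, 1]
print(concurrent(reading_x, reading_y))
```

A predicate detector built on such an ordering can then evaluate contextual predicates over all consistent interleavings of the concurrent events, rather than trusting unreliable wall-clock timestamps.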
Yu Huang, Liang Guo, Wanqian Guo, Zhe Tao, Yang Lv, Zhihao Sun, Dongfang Zhao
In the field of environmental science, it is crucial to have robust evaluation metrics for large language models to ensure their efficacy and accuracy. We propose EnviroExam, a comprehensive evaluation method designed to assess the knowledge of large language models in the field of environmental science. EnviroExam is based on the curricula of top international universities, covering undergraduate, master's, and doctoral courses, and includes 936 questions across 42 core courses. By conducting 0-shot and 5-shot tests on 31 open-source large language models, EnviroExam reveals the performance differences among these models in the domain of environmental science and provides detailed evaluation standards. The results show that 61.3% of the models passed the 5-shot tests, while 48.39% passed the 0-shot tests. By introducing the coefficient of variation as an indicator, we evaluate the performance of mainstream open-source large language models in environmental science from multiple perspectives, providing effective criteria for selecting and fine-tuning language models in this field. Future research will involve constructing more domain-specific test sets using specialized environmental science textbooks to further enhance the accuracy and specificity of the evaluation.
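The coefficient-of-variation indicator can be illustrated with a small sketch; the per-course accuracies below are hypothetical, not EnviroExam results.

```python
import statistics

def coefficient_of_variation(scores):
    """Standard deviation over mean: lower means more consistent accuracy
    across courses, independent of the accuracy scale."""
    return statistics.pstdev(scores) / statistics.mean(scores)

# Two hypothetical models with similar mean accuracy but very different
# consistency across courses.
model_a = [0.62, 0.60, 0.61, 0.59]   # steady across courses
model_b = [0.90, 0.35, 0.75, 0.42]   # strong average, erratic

print(coefficient_of_variation(model_a) < coefficient_of_variation(model_b))
```

Because both models have nearly the same mean, accuracy alone cannot separate them; the coefficient of variation rewards the model that performs consistently across all 42 core courses.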
Qian Cheng, Ruize Tang, Emilie Ma, Finn Hackett, Peiyang He, Yiming Su, Ivan Beschastnikh, Yu Huang, Xiaoxing Ma, Tianyin Xu
Formal models are essential to specifying large, complex computer systems and verifying their correctness, but are notoriously expensive to write and maintain. Recent advances in generative AI show promise in generating certain forms of specifications. However, existing work mostly targets small code snippets, not complete systems. It is unclear whether AI can deal with realistic system artifacts, as this requires abstracting their complex behavioral properties into formal models. We present SysMoBench, a benchmark that evaluates AI's ability to formally model large, complex systems. We focus on concurrent and distributed systems, which are keystones of today's critical computing infrastructures, encompassing operating systems and cloud infrastructure. We use TLA+, the de facto specification language for concurrent and distributed systems, though the benchmark can be extended to other specification languages. We address the primary challenge of evaluating AI-generated models by automating metrics like syntactic and runtime correctness, conformance to system code, and invariant correctness. SysMoBench currently includes eleven diverse system artifacts, including the Raft implementations of Etcd and Redis, the leader election of ZooKeeper, and the spinlock, mutex, and ring buffer of Asterinas OS, with more being added. SysMoBench enables us to understand the capabilities and limitations of today's LLMs and agents, putting tools in this area on a firm footing and opening up promising new research directions.
Siyang Yu, Yu Huang, Zuntao Fu
The occurrence of some extreme events (such as marine heatwaves or exceptional circulations) can cause other extreme events (such as heatwaves, droughts and floods). These concurrent extreme events have a great impact on the environment and human health. However, how to detect and quantify the causes and impacts of these extreme events in a data-driven way remains unsolved. In this study, the dynamical systems method is extended to develop a method for detecting the causality between extreme events. Taking coupled Lorenz-Lorenz systems with extreme event-driven coupling as an example, we demonstrate that the proposed method is able to capture extreme event-driven causality, with even better detection performance between concurrent extreme events. In a comparison among three kinds of measured series, full measurements outperform partial ones in event-to-event causality detection. The successful application of our approach to the Walker circulation phenomenon indicates that our method contributes a novel way to study causal inference in complex systems. The method offers valuable insights into multi-scale, nonlinear dynamics, particularly in uncovering associations among extreme events.
Yu Huang, Yue Chen
Since the DARPA Grand Challenges (rural) in 2004/05 and the Urban Challenge in 2007, autonomous driving has been the most active field of AI applications. Almost at the same time, deep learning made its breakthrough, driven by several pioneers; three of them (also called the fathers of deep learning), Hinton, Bengio and LeCun, won the ACM Turing Award in 2019. This is a survey of autonomous driving technologies with deep learning methods. We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. Due to limited space, we focus the analysis on several key areas: 2D and 3D object detection in perception, depth estimation from cameras, multiple sensor fusion at the data, feature, and task levels respectively, and behavior modeling and prediction of vehicle driving and pedestrian trajectories.
Kaile Huang, Hengfeng Wei, Yu Huang, Haixiang Li, Anqun Pan
Causal consistency is a widely used weak consistency model that allows high availability despite network partitions. There are plenty of research prototypes and industrial deployments of causally consistent distributed systems. However, as far as we know, none of them considers Byzantine faults, except Byz-RCM proposed by Tseng et al. Byz-RCM achieves causal consistency in the client-server model with $3f + 1$ servers where up to $f$ servers may suffer Byzantine faults, but it assumes that clients are non-Byzantine. In this work, we present Byz-GentleRain, the first causal consistency protocol that tolerates up to $f$ Byzantine servers among $3f + 1$ servers in each partition and any number of Byzantine clients. Byz-GentleRain is inspired by the stabilization mechanism of GentleRain for causal consistency. To prevent causal violations due to Byzantine faults, Byz-GentleRain relies on PBFT to reach agreement on a sequence of global stable times and updates among servers, and only updates with timestamps less than or equal to such common global stable times are visible to clients. We prove that Byz-GentleRain achieves Byz-CC, the causal consistency variant in the presence of Byzantine faults. We evaluate Byz-GentleRain on Aliyun. The preliminary results show that Byz-GentleRain is efficient on typical workloads.
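The visibility rule can be sketched as follows; this is a minimal illustration of the stable-time check with hypothetical timestamps, not the full Byz-GentleRain protocol.

```python
# Minimal sketch: an update becomes visible to clients only once its
# timestamp is <= the global stable time (GST) agreed upon via PBFT.
def visible_updates(updates, global_stable_time):
    """updates: list of (timestamp, key, value); GST agreed via PBFT."""
    return [u for u in updates if u[0] <= global_stable_time]

updates = [(3, "x", "a"), (5, "y", "b"), (9, "x", "c")]
print(visible_updates(updates, global_stable_time=5))
```

Because the stable-time prefix is agreed by a Byzantine-fault-tolerant quorum, a lying server cannot make a client observe an update ahead of that prefix, which is what prevents causal violations.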
Yu Huang, Jingchuan Guo, William T Donahoo, Zhengkang Fan, Ying Lu, Wei-Han Chen, Huilin Tang, Lori Bilello, Elizabeth A Shenkman, Jiang Bian
Background: Racial and ethnic minority groups and individuals facing social disadvantages, which often stem from their social determinants of health (SDoH), bear a disproportionate burden of type 2 diabetes (T2D) and its complications. It is therefore crucial to implement effective social risk management strategies at the point of care. Objective: To develop an electronic health record (EHR)-based machine learning (ML) analytic pipeline to identify the unmet social needs associated with hospitalization risk in patients with T2D. Methods: We identified 10,192 T2D patients from the EHR data (2012 to 2022) of the University of Florida Health Integrated Data Repository, including contextual SDoH (e.g., neighborhood deprivation) and individual-level SDoH (e.g., housing stability). We developed an EHR-based ML analytic pipeline, the individualized polysocial risk score (iPsRS), to identify high social risk associated with hospitalization in T2D patients, incorporating explainable AI (XAI) techniques and fairness assessment and optimization. Results: The iPsRS achieved a C statistic of 0.72 in predicting 1-year hospitalization after fairness optimization across racial and ethnic groups. The iPsRS showed excellent utility for capturing individuals at high hospitalization risk: the actual 1-year hospitalization rate in the top 5% of iPsRS scores was about 13 times as high as that in the bottom decile. Conclusion: The iPsRS ML pipeline can fairly and accurately screen for T2D patients whose social risk puts them at increased risk of hospitalization.
Gen Li, Yu Huang, Timofey Efimov, Yuting Wei, Yuejie Chi, Yuxin Chen
Score-based diffusion models, while achieving remarkable empirical performance, often suffer from low sampling speed, due to the extensive function evaluations needed during the sampling phase. Despite a flurry of recent activity towards speeding up diffusion generative modeling in practice, theoretical underpinnings for acceleration techniques remain severely limited. In this paper, we design novel training-free algorithms to accelerate popular deterministic (i.e., DDIM) and stochastic (i.e., DDPM) samplers. Our accelerated deterministic sampler converges at a rate $O(1/T^2)$ with $T$ the number of steps, improving upon the $O(1/T)$ rate of the DDIM sampler; our accelerated stochastic sampler converges at a rate $O(1/T)$, outperforming the $O(1/\sqrt{T})$ rate of the DDPM sampler. The design of our algorithms leverages insights from higher-order approximation, and shares similar intuitions with popular high-order ODE solvers such as DPM-Solver-2. Our theory accommodates $\ell_2$-accurate score estimates, and does not require log-concavity or smoothness of the target distribution.
Tong Yang, Yu Huang, Yingbin Liang, Yuejie Chi
In-context learning (ICL) refers to a remarkable capability of pretrained large language models, which can learn a new task given a few examples during inference. However, the theoretical understanding of ICL is largely under-explored, particularly whether transformers can be trained to generalize to unseen examples in a prompt, which requires the model to acquire contextual knowledge of the prompt for generalization. This paper investigates the training dynamics of transformers under gradient descent through the lens of non-linear regression tasks. Contextual generalization here can be attained via learning the template function of each task in context, where all template functions lie in a linear space spanned by $m$ basis functions. We analyze the training dynamics of one-layer multi-head transformers trained to predict unlabeled inputs in context given partially labeled prompts, where the labels contain Gaussian noise and the number of examples in each prompt is not sufficient to determine the template. Under mild assumptions, we show that the training loss of a one-layer multi-head transformer converges linearly to a global minimum. Moreover, the transformer effectively learns to perform ridge regression over the basis functions. To our knowledge, this study is the first provable demonstration that transformers can learn contextual (i.e., template) information to generalize to both unseen examples and tasks when prompts contain only a small number of query-answer pairs.
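The ridge-regression behavior the trained transformer is shown to emulate can be sketched in closed form; the basis evaluations, noise level, and ridge parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3            # m basis functions, only n labeled prompt examples

# Hypothetical basis evaluations Phi (n x m) and noisy labels y; with
# n < m the template coefficients are underdetermined by the prompt.
Phi = rng.standard_normal((n, m))
coef = rng.standard_normal(m)
y = Phi @ coef + 0.1 * rng.standard_normal(n)

lam = 0.5              # illustrative ridge parameter
# Closed-form ridge solution over the basis functions -- the estimator
# the trained transformer effectively implements in context.
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

# Predict the label of an unlabeled query input from its basis features.
phi_query = rng.standard_normal(m)
print(phi_query @ w)
```

Ridge regularization is what resolves the underdetermined template: among all coefficient vectors consistent with the noisy prompt, it selects the minimum-norm-biased one, matching the noise-robust behavior the analysis attributes to the transformer.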
Lingzhi Ouyang, Xudong Sun, Ruize Tang, Yu Huang, Madhav Jivrajani, Xiaoxing Ma, Tianyin Xu
This paper presents our experience specifying and verifying the correctness of ZooKeeper, a complex and evolving distributed coordination system. We use TLA+ to model fine-grained behaviors of ZooKeeper and use the TLC model checker to verify its correctness properties; we also check conformance between the model and code. The fundamental challenge is to balance the granularity of specifications and the scalability of model checking -- fine-grained specifications lead to state-space explosion, while coarse-grained specifications introduce model-code gaps. To address this challenge, we write specifications with different granularities for composable modules, and compose them into mixed-grained specifications based on specific scenarios. For example, to verify code changes, we compose fine-grained specifications of changed modules and coarse-grained specifications that abstract away details of unchanged code with preserved interactions. We show that writing multi-grained specifications is a viable practice and can cope with model-code gaps without untenable state space, especially for evolving software where changes are typically local and incremental. We detected six severe bugs that violate five types of invariants and verified their code fixes; the fixes have been merged to ZooKeeper. We also improve the protocol design to make it easy to implement correctly.
Yu Huang, Zixin Wen, Yuejie Chi, Yingbin Liang
Self-supervised learning has become a cornerstone in computer vision, primarily divided into reconstruction-based methods like masked autoencoders (MAE) and discriminative methods such as contrastive learning (CL). Recent empirical observations reveal that MAE and CL capture different types of representations: CL tends to focus on global patterns, while MAE adeptly captures both global and subtle local information simultaneously. Despite a flurry of recent empirical investigations shedding light on this difference, theoretical understanding remains limited, especially for the dominant architecture, vision transformers (ViTs). In this paper, to provide rigorous insights, we model the visual data distribution with two types of spatial features: dominant global features and comparatively minuscule local features, and study the impact of imbalance between these features. We analyze the training dynamics of one-layer softmax-based ViTs on both MAE and CL objectives using gradient descent. Our analysis shows that as the degree of feature imbalance varies, ViTs trained with the MAE objective effectively learn both global and local features to achieve near-optimal reconstruction, while the CL-trained ViTs favor predominantly global features, even under mild imbalance. These results provide a theoretical explanation for the distinct behaviors of MAE and CL observed in empirical studies.
Hengfeng Wei, Yu Huang, Jiannong Cao, Jian Lu
A consistency/latency tradeoff arises as soon as a distributed storage system replicates data. For low latency, modern storage systems often settle for weak consistency conditions, which provide little or, even worse, no guarantee of data consistency. In this paper we propose the notion of almost strong consistency as a better balancing option for the consistency/latency tradeoff. It provides both deterministically bounded staleness of data versions for each read and a probabilistic quantification of the rate of "reading stale values", while achieving low latency. In the context of distributed storage systems, we investigate almost strong consistency in terms of 2-atomicity. Our 2AM (2-Atomicity Maintenance) algorithm completes both reads and writes in one communication round-trip, and guarantees that each read obtains a value among the latest 2 versions. To quantify the rate of "reading stale values", we decompose the so-called "old-new inversion" phenomenon into concurrency patterns and read-write patterns, and propose a stochastic queueing model and a "timed balls-into-bins model" to analyze them, respectively. The theoretical analysis not only demonstrates that "old-new inversions" rarely occur, as expected, but also reveals that the read-write pattern dominates in keeping such data inconsistencies rare. These findings are further confirmed by the experimental results, showing that 2-atomicity is "good enough" for distributed storage systems, achieving low latency, bounded staleness, and rare data inconsistencies.
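The one-round-trip quorum-read pattern underlying 2AM can be sketched as follows; this is a simplified illustration with hypothetical replica states, not the actual 2AM algorithm.

```python
import random

# Simplified sketch of a one-round-trip quorum read (the pattern 2AM
# builds on). Each replica holds a (version, value) pair; the reader
# contacts a majority and returns the highest version it sees, skipping
# the write-back phase a classic atomic read would require.
def quorum_read(replicas):
    majority = random.sample(replicas, len(replicas) // 2 + 1)
    return max(majority)            # (version, value), ordered by version

# A write of version 7 has reached three of five replicas.
replicas = [(7, "v7"), (7, "v7"), (6, "v6"), (7, "v7"), (6, "v6")]
version, value = quorum_read(replicas)
print(version, value)
```

Skipping the write-back phase is what saves the second round-trip; the cost is that a concurrent read may still return the previous version, which is exactly the "value among the latest 2 versions" staleness bound 2-atomicity allows.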
Jiliang Li, Yifan Zhang, Yu Huang, Kevin Leach
The recent growth and proliferation of malware have tested practitioners' ability to promptly classify new samples according to malware families. In contrast to labor-intensive reverse engineering efforts, machine learning approaches have demonstrated increased speed and accuracy. However, most existing deep-learning malware family classifiers must be calibrated using a large number of samples that are painstakingly manually analyzed before training. Furthermore, as novel malware samples arise that are beyond the scope of the training set, additional reverse engineering effort must be employed to update the training set. The sheer volume of new samples found in the wild creates substantial pressure on practitioners' ability to reverse engineer enough malware to adequately train modern classifiers. In this paper, we present MalMixer, a malware family classifier using semi-supervised learning that achieves high accuracy with sparse training data. We present a domain-knowledge-aware data augmentation technique for malware feature representations, enhancing the few-shot performance of semi-supervised malware family classification. We show that MalMixer achieves state-of-the-art performance in few-shot malware family classification settings. Our research confirms the feasibility and effectiveness of lightweight, domain-knowledge-aware data augmentation methods for malware features and demonstrates the capabilities of similar semi-supervised classifiers in addressing malware classification issues.
Yifan Zhang, Chen Huang, Yueke Zhang, Huajie Shao, Kevin Leach, Yu Huang
Binary code analysis and comprehension are critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary code lacks semantics and is more difficult for human engineers to understand and analyze. In this paper, we present ContraBin, a contrastive learning technique that integrates source code and comment information along with binaries to create an embedding capable of aiding binary analysis and comprehension tasks. Specifically, we present three components in ContraBin: (1) a primary contrastive learning method for initial pre-training, (2) a simplex interpolation method to integrate source code, comments, and binary code, and (3) an intermediate representation learning algorithm to train a binary code embedding. We further analyze the impact of human-written and synthetic comments on binary code comprehension tasks, revealing a significant performance disparity. While synthetic comments provide substantial benefits, human-written comments are found to introduce noise, even resulting in performance drops compared to using no comments. These findings reshape the narrative around the role of comment types in binary code analysis. We evaluate the effectiveness of ContraBin through four indicative downstream tasks related to binary code: algorithmic functionality classification, function name recovery, code summarization, and reverse engineering. The results show that ContraBin considerably improves performance on all four tasks, measured by accuracy, mean average precision, and BLEU scores as appropriate. ContraBin is the first language representation model to incorporate source code, binary code, and comments into contrastive code representation learning and is intended to contribute to the field of binary code analysis. The dataset used in this study is available for further research.
Yu Huang, Zixin Wen, Aarti Singh, Yuejie Chi, Yuxin Chen
The ability to reason lies at the core of artificial intelligence (AI), and challenging problems usually call for deeper and longer reasoning to tackle. A crucial question about AI reasoning is whether models can extrapolate learned reasoning patterns to solve harder tasks with longer chains-of-thought (CoT). In this work, we present a theoretical analysis of transformers learning on synthetic state-tracking tasks with gradient descent. We mathematically prove how the algebraic structure of state-tracking problems governs the degree of extrapolation of the learned CoT. Specifically, our theory characterizes the length generalization of transformers through the mechanism of attention concentration, linking the retrieval robustness of the attention layer to the state-tracking task structure of long-context reasoning. Moreover, for transformers with limited reasoning length, we prove that a recursive self-training scheme can progressively extend the range of solvable problem lengths. To our knowledge, we provide the first optimization guarantee that constant-depth transformers provably learn $\mathsf{NC}^1$-complete problems with CoT, significantly going beyond prior art confined to $\mathsf{TC}^0$, unless the widely held conjecture $\mathsf{TC}^0 \neq \mathsf{NC}^1$ fails. Finally, we present a broad set of experiments supporting our theoretical results, confirming the length generalization behaviors and the mechanism of attention concentration.