Lei Deng, Fang Liu, Yijin Zhang, Wing Shing Wong
Topology transparency is common in many mobile ad hoc networks (MANETs), such as vehicular ad hoc networks (VANETs), unmanned aerial vehicle (UAV) ad hoc networks, and wireless sensor networks, due to their decentralized and mobile nature. There are many existing works on distributed scheduling scheme design for topology-transparent MANETs, most of which focus on delay-unconstrained settings. However, with the proliferation of real-time applications over wireless communications, supporting delay-constrained traffic in MANETs becomes increasingly important. In such applications, each packet has a given hard deadline: if it is not delivered before its deadline, it expires and is removed from the system. This feature is fundamentally different from the traditional delay-unconstrained setting. In this paper, we investigate, for the first time, distributed scheduling schemes for a topology-transparent MANET supporting delay-constrained traffic. We analyze and compare the probabilistic ALOHA scheme and several deterministic sequence schemes, including conventional time division multiple access (TDMA), the Galois field (GF) sequence scheme proposed in \cite{chlamtac1994making}, and a combination sequence scheme that we propose for a special type of sparse network topology. We use both theoretical analysis and empirical simulations to compare all these schemes and summarize the conditions under which each scheme performs best.
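To make the sequence-scheme idea concrete, here is a minimal Python sketch in the spirit of the GF construction in \cite{chlamtac1994making}: each node is assigned a distinct polynomial of degree at most $k$ over GF($p$) and, in subframe $s$, transmits in slot $f(s) \bmod p$; since two distinct such polynomials agree in at most $k$ points, any two nodes collide in at most $k$ of the $p$ subframes. The field size, node indexing, and parameters below are illustrative assumptions, not the exact design in the cited paper.

```python
# A minimal sketch of a GF(p)-polynomial schedule in the spirit of the
# Chlamtac-Farago construction (parameters and indexing are illustrative
# assumptions, not the exact design in the cited paper).
from itertools import product

p, k = 5, 1  # prime field size and polynomial degree (assumed small)

def schedule(coeffs, p):
    """Slot chosen in each of the p subframes: f(s) mod p."""
    return [sum(c * s**j for j, c in enumerate(coeffs)) % p for s in range(p)]

# Assign each node a distinct degree-<=k polynomial over GF(p).
polys = list(product(range(p), repeat=k + 1))  # supports up to p^(k+1) nodes
sched_a, sched_b = schedule(polys[7], p), schedule(polys[12], p)

# Distinct degree-<=k polynomials agree in at most k points, so the
# two nodes collide in at most k of the p subframes.
collisions = sum(a == b for a, b in zip(sched_a, sched_b))
assert collisions <= k
print(sched_a, sched_b, collisions)
```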
Lei Deng, Wing Shing Wong, Po-Ning Chen, Yunghsiang S. Han, Hanxu Hou
In this paper, we study the delay-constrained input-queued switch, where each packet has a deadline and expires if it is not delivered before its deadline. This new scenario is motivated by the proliferation of real-time applications in multimedia communication systems, tactile Internet, networked control systems, and cyber-physical systems. The delay-constrained input-queued switch is completely different from the well-understood delay-unconstrained one and thus poses new challenges. We focus on three fundamental problems centered on the performance metric of timely throughput: (i) how to characterize the capacity region? (ii) how to design a feasibility/throughput-optimal scheduling policy? and (iii) how to design a network-utility-maximization scheduling policy? We use three different approaches to solve these three fundamental problems. The first approach is based on Markov Decision Process (MDP) theory, which can solve all three problems; however, it suffers from the curse of dimensionality. The second approach breaks the curse of dimensionality by exploiting the combinatorial features of the problem, yielding a new capacity region characterization with only a polynomial number of linear constraints. The third approach is based on the framework of Lyapunov optimization, under which we design a polynomial-time maximum-weight T-disjoint-matching scheduling policy that is proved to be feasibility/throughput-optimal. All three approaches apply to the frame-synchronized traffic pattern; the MDP-based approach can further be extended to more general traffic patterns.
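As a rough illustration of scheduling a frame of $T$ slots with pairwise-disjoint matchings, the sketch below greedily computes one maximum-weight matching per slot and then forbids the chosen input-output pairs in later slots. This is a simplified stand-in for intuition only, not the paper's maximum-weight T-disjoint-matching policy or its optimality argument.

```python
# A simplified greedy stand-in for scheduling an NxN input-queued switch
# over a frame of T slots: pick one maximum-weight matching per slot,
# then penalize the chosen input-output pairs so later slots avoid them,
# making the T matchings pairwise disjoint (possible whenever T <= N).
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_disjoint_matchings(weights, T):
    w = weights.astype(float).copy()  # w[i, j]: weight (e.g., queue length) of pair (i, j)
    matchings = []
    for _ in range(T):
        rows, cols = linear_sum_assignment(w, maximize=True)
        matchings.append(list(zip(rows.tolist(), cols.tolist())))
        w[rows, cols] = -1e9          # forbid already-served input-output pairs
    return matchings

print(greedy_disjoint_matchings(np.random.rand(4, 4), T=3))
```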
Danzhou Wu, Lei Deng, Zilong Liu, Yijin Zhang, Yunghsiang S. Han
In this paper, we investigate the random access problem for a delay-constrained heterogeneous wireless network. As a first attempt to study this new problem, we consider a network with two users who deliver delay-constrained traffic to an access point (AP) via a common unreliable collision wireless channel. We assume that one user (called user 1) adopts ALOHA, and we optimize the random access scheme of the other user (called user 2). The most intriguing part of this problem is that user 2 does not know the information of user 1 but needs to maximize the system timely throughput. Such a paradigm of collaborative spectrum sharing is envisioned by DARPA to better match supply and demand dynamically in the future [1], [2]. We first propose a Markov Decision Process (MDP) formulation to derive a model-based upper bound, which can quantify the performance gap of any designed scheme. We then utilize reinforcement learning (RL) to design an R-learning-based [3]-[5] random access scheme, called TSRA. We finally carry out extensive simulations to show that TSRA achieves close-to-upper-bound performance and outperforms the existing baseline DLMA [6], its counterpart scheme for delay-unconstrained heterogeneous wireless networks. All source code is publicly available at https://github.com/DanzhouWu/TSRA.
Lei Deng, Danzhou Wu, Jing Deng, Po-Ning Chen, Yunghsiang S. Han
Motivated by the proliferation of real-time applications in multimedia communication systems, the tactile Internet, and cyber-physical systems, supporting delay-constrained traffic has become critical for such systems. In delay-constrained traffic, each packet has a hard deadline; if it is not delivered before its deadline, it becomes useless and is removed from the system. In this work, we focus on designing random access schemes for delay-constrained wireless communications. We first investigate three ALOHA-based schemes and prove that the system timely throughput of all three schemes under their respective optimal transmission probabilities converges asymptotically to $1/e$, the same as the well-known throughput limit for delay-unconstrained ALOHA systems. The fundamental reason why ALOHA-based schemes cannot achieve an asymptotic system timely throughput beyond $1/e$ is that all active ALOHA stations access the channel with the same probability in any slot. To go beyond $1/e$, we propose a reinforcement-learning-based scheme for delay-constrained wireless communications, called RLRA-DC, under which different stations collaboratively attain different transmission probabilities by interacting only with the access point. Our numerical results show that the system timely throughput of RLRA-DC can be as high as 0.8 for tens of stations and can still reach 0.6 even for thousands of stations, much larger than $1/e$.
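The $1/e$ limit for symmetric transmission probabilities can be checked numerically. The snippet below evaluates the classical per-slot success probability $N \cdot \frac{1}{N} (1 - \frac{1}{N})^{N-1}$ when each of $N$ always-active stations transmits with the optimal symmetric probability $1/N$; the paper's delay-constrained analysis is more involved, and this only illustrates why identical transmission probabilities cap the throughput at $1/e$.

```python
# Numerical check of the 1/e limit: with N always-active stations each
# transmitting with probability 1/N, the per-slot success probability
# N * (1/N) * (1 - 1/N)**(N - 1) tends to 1/e as N grows.
import math

for n in (2, 10, 100, 1000, 10000):
    success = n * (1 / n) * (1 - 1 / n) ** (n - 1)
    print(f"N={n:>5}: success probability = {success:.4f}")
print(f"1/e = {1 / math.e:.4f}")
```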
Lei Deng, Qiulin Lin
Let $\mathcal{A}$ denote the set of all doubly substochastic $m \times n$ matrices and let $k$ be a positive integer. Let $\mathcal{A}_k$ be the set of all $1/k$-bounded doubly substochastic $m \times n$ matrices, i.e., $\mathcal{A}_k \triangleq \{E \in \mathcal{A}: e_{i,j} \in [0, 1/k], \forall i=1,2,\cdots,m, j = 1,2,\cdots, n\}$. Let $\mathcal{B}_k$ denote the set of all matrices in $\mathcal{A}_k$ whose entries are either $0$ or $1/k$. We prove that $\mathcal{A}_k$ is the convex hull of all matrices in $\mathcal{B}_k$.
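For concreteness, here is a small worked instance (our own illustrative example, not taken from the paper) for $m = n = k = 2$: the $1/2$-bounded doubly substochastic matrix below is a convex combination of two matrices in $\mathcal{B}_2$, whose entries are all $0$ or $1/2$,
\[
\begin{pmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \end{pmatrix}
= \frac{1}{2}\begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}
+ \frac{1}{2}\begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}.
\]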
Lei Deng, Siyuan Huang, Yueqi Duan, Baohua Chen, Jie Zhou
Conventional single-image-based localization methods usually fail to localize a querying image when there are large variations between the querying image and the pre-built scene. To address this, we propose an image-set querying based localization approach. When localization by a single image fails, the system asks the user to capture more auxiliary images. First, a local 3D model is established for the querying image set. Then, the pose of the querying image set is estimated by solving a nonlinear optimization problem, which aims to match the local 3D model against the pre-built scene. Experiments demonstrate the effectiveness and feasibility of the proposed approach.
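As a rough sketch of the nonlinear alignment step described above, the snippet below estimates a similarity transform (scale, rotation, translation) mapping points of a local 3D model onto matched points of a pre-built scene via least squares. The residual, the availability of point correspondences, and all names are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal similarity-transform alignment between a local 3D model and a
# pre-built scene, assuming point correspondences are given.  This is an
# illustrative sketch, not the paper's exact optimization problem.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

local_pts = np.random.rand(50, 3)          # points of the local 3D model
true_R = Rotation.from_euler("xyz", [0.1, -0.2, 0.3])
scene_pts = 2.0 * true_R.apply(local_pts) + np.array([1.0, 0.5, -0.3])

def residual(x):
    s, rotvec, t = x[0], x[1:4], x[4:7]    # scale, rotation vector, translation
    pred = s * Rotation.from_rotvec(rotvec).apply(local_pts) + t
    return (pred - scene_pts).ravel()

sol = least_squares(residual, x0=np.r_[1.0, np.zeros(3), np.zeros(3)])
print("scale:", sol.x[0], "translation:", sol.x[4:7])
```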
Lei Deng, Danzhou Wu, Zilong Liu, Yijin Zhang, Yunghsiang S. Han
In this paper, we investigate, for the first time, the random access problem for a delay-constrained heterogeneous wireless network. We begin with a simple two-device problem where two devices deliver delay-constrained traffic to an access point (AP) via a common unreliable collision channel. Assuming that one device (called Device 1) adopts ALOHA, we optimize the random access scheme of the other device (called Device 2). The most intriguing part of this problem is that Device 2 does not know the information of Device 1 but needs to maximize the system timely throughput. We first propose a Markov Decision Process (MDP) formulation to derive a model-based upper bound, so as to quantify the performance gap of any designed random access scheme. We then utilize reinforcement learning (RL) to design an R-learning-based random access scheme, called tiny state-space R-learning random access (TSRA), which we subsequently extend to tackle the general multi-device problem. We carry out extensive simulations to show that the proposed TSRA simultaneously achieves higher timely throughput, lower computation complexity, and lower power consumption than the existing baseline, deep reinforcement learning multiple access (DLMA). This indicates that our proposed TSRA scheme is a promising means for efficient random access over massive numbers of mobile devices with limited computation and battery capabilities.
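For readers unfamiliar with R-learning, the snippet below shows a generic average-reward R-learning update of the kind TSRA builds on. The state encoding, action set, and learning rates are placeholders; TSRA's tiny state space and exact update rule are specified in the paper.

```python
# A generic R-learning (average-reward RL) agent skeleton.  The actions
# (0: keep silent, 1: transmit) and all hyperparameters are assumptions.
import random
from collections import defaultdict

ALPHA, BETA, EPS = 0.1, 0.01, 0.1
ACTIONS = (0, 1)                 # assumed action set for a random access agent
Q = defaultdict(float)           # relative action values Q[(state, action)]
rho = 0.0                        # running estimate of the average reward

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    global rho
    greedy = Q[(state, action)] == max(Q[(state, a)] for a in ACTIONS)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward - rho + best_next - Q[(state, action)])
    if greedy:  # the average-reward estimate is updated only on greedy actions
        rho += BETA * (reward + best_next - max(Q[(state, a)] for a in ACTIONS) - rho)
```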
Lei Deng, Wenjie Zhang, Yun Rui, Yeo Chai Kiat
Driven by green communications, energy efficiency (EE) has become an important new criterion for designing wireless communication systems. However, high EE often leads to low spectral efficiency (SE), which spurs research on the EE-SE tradeoff. In this paper, we focus on maximizing utility at the physical layer for an uplink multi-user multiple-input multiple-output (MU-MIMO) system, where we not only treat the EE-SE tradeoff in a unified way but also ensure user fairness. We first formulate the utility maximization problem, which turns out to be non-convex. By exploiting the structure of this problem, we find a convexization procedure to convert the original non-convex problem into an equivalent convex problem with the same global optimum. Following the convexization procedure, we present a centralized algorithm to solve the utility maximization problem, but it requires global information about all users. We therefore propose a primal-dual distributed algorithm that needs no global information and incurs only a small amount of overhead. Furthermore, we prove that the distributed algorithm converges to the global optimum. Finally, numerical results show that our approach both captures user diversity in the EE-SE tradeoff and ensures user fairness, and they validate the effectiveness of our primal-dual distributed algorithm.
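The distributed pattern described above can be sketched generically: each user updates its own variable using only a shared dual price, and the price is updated from the aggregate constraint violation. The log utilities and total-resource budget below are simplified placeholders, not the paper's EE-SE utility model.

```python
# A toy primal-dual gradient method: maximize sum_i log(1 + x_i) subject
# to sum_i x_i <= C.  Each user's primal step needs only its own utility
# and the broadcast price lam; the dual step needs only the aggregate.
C, step, n = 10.0, 0.05, 4
x = [1.0] * n      # primal variables, one per user (kept locally)
lam = 0.0          # dual price, broadcast to all users

for _ in range(2000):
    # primal step: gradient of log(1 + x_i) minus the price, projected to x_i >= 0
    x = [max(0.0, xi + step * (1.0 / (1.0 + xi) - lam)) for xi in x]
    # dual step: the price rises when the shared budget is exceeded
    lam = max(0.0, lam + step * (sum(x) - C))

print([round(xi, 3) for xi in x], round(lam, 4))  # converges near x_i = C/n
```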
Lei Deng, Yinghui He, Ying Zhang, Minghua Chen, Zongpeng Li, Jack Y. B. Lee, Ying Jun Zhang, Lingyang Song
Small-cell architecture is widely adopted by cellular network operators to increase network capacity. By reducing the size of cells, operators can pack more (low-power) base stations into an area to better serve the growing demands without causing extra interference. However, this approach suffers from low spectrum temporal efficiency. When a cell becomes smaller and covers fewer users, its total traffic fluctuates significantly due to insufficient traffic aggregation, exhibiting a large "peak-to-mean" ratio. As operators customarily provision spectrum for peak traffic, large temporal traffic fluctuation inevitably leads to low spectrum temporal efficiency. In this paper, we advocate device-to-device (D2D) load balancing as a useful mechanism to address this fundamental drawback of small-cell architecture. The idea is to shift traffic from a congested cell to its adjacent under-utilized cells by leveraging inter-cell D2D communication, so that the traffic can be served without using extra spectrum, effectively improving the spectrum temporal efficiency. We provide theoretical modeling and analysis to characterize the benefit of D2D load balancing in terms of the total spectrum requirement of all individual cells, and we derive the corresponding cost in terms of the incurred D2D traffic overhead. We carry out empirical evaluations based on real-world 4G data traces to gauge the benefit and cost of D2D load balancing under practical settings. The results show that D2D load balancing can reduce the spectrum requirement by 25% compared to the standard scenario without D2D load balancing, at the expense of a negligible 0.7% D2D traffic overhead.
Lei Deng, Wenhan Xu, Jingwei Li, Danny H. K. Tsang
Real-time network traffic forecasting is crucial for network management and early resource allocation. Existing network traffic forecasting approaches operate under the assumption that the network traffic data is fully observed. However, in practical scenarios, the collected data are often incomplete due to various human and natural factors. In this paper, we propose a generative model approach for real-time network traffic forecasting with missing data. First, we model the network traffic forecasting task as a tensor completion problem. Second, we incorporate a pre-trained generative model to capture the low-rank structure commonly assumed in tensor completion. The generative model captures the intrinsic low-rank structure of network traffic data during pre-training and enables the mapping from a compact latent representation to the tensor space. Third, rather than directly optimizing the high-dimensional tensor, we optimize its latent representation, which simplifies the optimization process and enables real-time forecasting. We also establish a theoretical recovery guarantee that quantifies the error bound of the proposed approach. Experiments on real-world datasets demonstrate that our approach achieves accurate network traffic forecasting within 100 ms, with a mean absolute error (MAE) below 0.002, as validated on the Abilene dataset.
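A minimal sketch of the latent-space optimization step is given below: with a pre-trained generator $G$ mapping a compact latent code to the traffic tensor space, the code is fitted to the observed entries only, and the fitted code then yields estimates for the missing ones. The generator architecture, tensor dimensions, and optimizer settings are illustrative assumptions.

```python
# Latent-code optimization for tensor completion with a generative prior.
# G is a stand-in for the pre-trained generator; shapes are placeholders.
import torch

latent_dim, tensor_shape = 16, (12, 12, 24)
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 12 * 12 * 24),
)
X = torch.rand(tensor_shape)           # ground-truth traffic tensor
mask = torch.rand(tensor_shape) < 0.6  # observed entries (40% missing)

z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    recon = G(z).view(tensor_shape)
    loss = ((recon - X)[mask] ** 2).mean()  # fit the observed entries only
    loss.backward()
    opt.step()
full_estimate = G(z).view(tensor_shape)     # completion/forecast for all entries
```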
Ming Li, Lei Deng, Yunghsiang S. Han
Zero packet loss with bounded latency is necessary for many applications, such as industrial control networks, automotive Ethernet, and aircraft communication systems. Traditional networks cannot meet such a strict requirement, which has motivated Time-Sensitive Networking (TSN). TSN is a set of standards proposed by IEEE 802 for providing deterministic connectivity in terms of low packet loss, low packet delay variation, and guaranteed packet transport. However, to our knowledge, few existing TSN solutions can deterministically achieve zero packet loss with bounded latency. This paper fills this gap by proposing a novel input-queued TSN switching architecture, under which we design a TDMA-like scheduling policy (called M-TDMA) and an EDF-like scheduling policy (called M-EDF), each with its own sufficient condition for achieving zero packet loss with bounded latency.
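For intuition, the snippet below shows the earliest-deadline-first selection step that an EDF-like policy such as M-EDF builds on: among queued packets, always serve the one whose deadline is closest. The switch model, queue layout, and the paper's sufficient conditions are not reproduced here.

```python
# Earliest-deadline-first selection with a min-heap keyed by deadline.
import heapq

def edf_pick(queue):
    """queue: heap of (deadline_slot, packet_id); serve the most urgent packet."""
    return heapq.heappop(queue) if queue else None

q = [(9, "p3"), (4, "p1"), (7, "p2")]
heapq.heapify(q)                       # order packets by deadline
print(edf_pick(q))                     # -> (4, 'p1'): earliest deadline first
```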
Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Xin Ma, Yuan Xie
Deep Neural Networks (DNNs) have thrived in recent years, with Batch Normalization (BN) playing an indispensable role. However, it has been observed that BN is costly due to its reduction operations. In this paper, we propose alleviating this problem by sampling only a small fraction of the data for normalization at each iteration. Specifically, we model it as a statistical sampling problem and show that by sampling less-correlated data, we can greatly reduce the amount of data required for statistics estimation in BN, which directly simplifies the reduction operations. Based on this conclusion, we propose two sampling strategies, "Batch Sampling" (randomly select several samples from each batch) and "Feature Sampling" (randomly select a small patch from each feature map of all samples), which take both computational efficiency and sample correlation into consideration. Furthermore, we introduce an extremely simple variant of BN, termed Virtual Dataset Normalization (VDN), that can normalize the activations well with only a few synthetic random samples. All the proposed methods are evaluated on various datasets and networks, achieving an overall training speedup of up to 20% on GPUs without any specialized library support, with negligible loss in accuracy and convergence rate. Finally, we extend our work to the "micro-batch normalization" problem and achieve performance comparable to existing approaches in the case of tiny batch sizes.
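The Batch Sampling strategy can be sketched as follows: estimate the normalization statistics from a few randomly chosen samples of the batch and apply them to all activations. The sampling ratio and tensor layout below are illustrative; the paper's Feature Sampling and VDN variants differ.

```python
# Batch-Sampling-style normalization: statistics come from a random
# subset of the batch, then normalize the full batch with them.
import torch

def sampled_batch_norm(x, sample_ratio=0.25, eps=1e-5):
    n = max(1, int(x.shape[0] * sample_ratio))
    idx = torch.randperm(x.shape[0])[:n]        # randomly sampled rows
    sub = x[idx]
    mean = sub.mean(dim=0)                      # statistics from the sample...
    var = sub.var(dim=0, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)   # ...applied to the full batch

x = torch.randn(64, 128)
print(sampled_batch_norm(x).shape)
```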
Lei Deng, Cheng Tan, Wing Shing Wong
In this paper, we study a wireless networked control system (WNCS) with $N \ge 2$ sub-systems sharing a common wireless channel. Each sub-system consists of a plant and a controller, and control messages must be delivered from the controller to the plant through the shared wireless channel. The wireless channel is unreliable due to interference and fading; as a result, a packet is successfully delivered in a slot only with a certain probability. A network scheduling policy determines how to transmit the control messages generated by the $N$ sub-systems and directly influences their transmission delay. We first consider the case in which all sub-systems have the same sampling period. We characterize the stability condition of such a WNCS under the joint design of the control policy and the network scheduling policy by means of $2^N$ linear inequalities. We further simplify the stability condition to a single linear inequality for two special cases: the perfect-channel case, where the wireless channel delivers a control message with certainty in each slot, and the symmetric-structure case, where all sub-systems have identical system parameters. We then consider the case in which different sub-systems can have different sampling periods, for which we characterize a sufficient condition for stability.
Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding
As a promising solution to boost the performance of distance-related algorithms (e.g., K-means and KNN), FPGA-based acceleration has attracted much attention, but it also comes with numerous challenges. In this work, we propose AccD, a compiler-based framework for accelerating distance-related algorithms on CPU-FPGA platforms. Specifically, AccD provides a domain-specific language to unify distance-related algorithms effectively, and an optimizing compiler to reconcile the benefits of algorithmic optimization on the CPU and hardware acceleration on the FPGA. The output of AccD is a high-performance and power-efficient design that can be easily synthesized and deployed on mainstream CPU-FPGA platforms. Intensive experiments show that AccD designs achieve a 31.42x speedup and 99.63x better energy efficiency on average over standard CPU-based implementations.
Liu Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie
We propose executing deep neural networks (DNNs) with a dynamic and sparse graph (DSG) structure to compress memory and accelerate execution during both training and inference. The great success of DNNs motivates the pursuit of lightweight models for deployment on embedded devices. However, most previous studies optimize for inference while neglecting training or even complicating it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights, as in inference; (ii) dynamic activations invalidate previous sparse-acceleration methods based on one-off optimization of fixed weights; and (iii) batch normalization (BN) is critical for maintaining accuracy, while its activation reorganization damages sparsity. To address these issues, DSG activates only a small number of neurons with high selectivity at each iteration via a dimension-reduction search (DRS) and obtains BN compatibility via a double-mask selection (DMS). Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.
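The dynamic-sparsity idea can be illustrated with a simple top-k activation mask: at each iteration, keep only the highest-magnitude neurons and reuse the mask in the backward pass. This is a simplified stand-in; the paper's dimension-reduction search and double-mask selection are more involved.

```python
# Keep only the top-k activations by magnitude and mask the rest, so both
# forward and backward computation touch few neurons.  A simplified
# stand-in for the paper's DRS/DMS mechanisms.
import torch

def sparse_activate(x, keep_ratio=0.1):
    k = max(1, int(x.numel() * keep_ratio))
    thresh = x.abs().flatten().kthvalue(x.numel() - k + 1).values  # k-th largest
    mask = (x.abs() >= thresh).float()           # select high-magnitude neurons
    return x * mask, mask                        # mask reusable in the backward pass

x = torch.randn(4, 256)
y, mask = sparse_activate(x)
print(mask.mean().item())                        # ~0.1 of neurons kept
```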
Yihan Lin, Wei Ding, Shaohua Qiang, Lei Deng, Guoqi Li
With event-driven algorithms, especially spiking neural networks (SNNs), continuously improving in neuromorphic vision processing, a more challenging event-stream (ES) dataset is urgently needed. However, it is well known that creating an ES dataset is a time-consuming and costly task with neuromorphic cameras such as dynamic vision sensors (DVS). In this work, we propose a fast and effective algorithm termed Omnidirectional Discrete Gradient (ODG) to convert the popular computer vision dataset ILSVRC2012 into its ES version, converting about 1,300,000 frame-based images into ES samples in 1000 categories. The resulting ES dataset, called ES-ImageNet, is dozens of times larger than other neuromorphic classification datasets at present and is generated entirely by software. The ODG algorithm applies image motion to generate local value changes carrying discrete gradient information in different directions, providing a low-cost and high-speed way to convert frame-based images into event streams; a companion Edge-Integral method reconstructs high-quality images from the event streams. Furthermore, we analyze the statistics of ES-ImageNet in multiple ways, and a performance benchmark of the dataset is provided using both well-known deep neural network algorithms and spiking neural network algorithms. We believe this work provides a new large-scale benchmark dataset for SNNs and neuromorphic vision.
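A toy frame-to-event conversion in the spirit described above is sketched below: simulate a small image motion and emit ON/OFF events where the resulting intensity change crosses a threshold. The actual ODG algorithm (omnidirectional discrete gradients plus Edge-Integral reconstruction) is more elaborate; the shift, threshold, and event format here are our assumptions.

```python
# Toy frame-to-event conversion: shift the image to simulate motion and
# emit an event wherever the intensity change exceeds a threshold.
import numpy as np

def frame_to_events(img, dx=1, dy=0, thresh=10):
    moved = np.roll(np.roll(img.astype(int), dy, axis=0), dx, axis=1)
    diff = moved - img.astype(int)             # change caused by the motion
    ys, xs = np.nonzero(np.abs(diff) > thresh)
    polarity = (diff[ys, xs] > 0).astype(int)  # 1: brighter (ON), 0: darker (OFF)
    return list(zip(xs, ys, polarity))         # event tuples (x, y, p)

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
print(len(frame_to_events(img)), "events")
```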
Aoyu Gong, Lei Deng, Fang Liu, Yijin Zhang
This paper considers random access in deadline-constrained broadcasting with frame-synchronized traffic. To enhance the maximum achievable timely delivery ratio (TDR), we define a dynamic control scheme that allows each active node to deterministically set its transmission probability based on its current delivery urgency and its knowledge of the current contention intensity. For an idealized environment where the contention intensity is completely known, we develop an analytical framework based on the theory of Markov Decision Processes (MDPs), which leads to an optimal scheme via backward induction. For a realistic environment where the contention intensity is incompletely known, we develop a framework using a Partially Observable Markov Decision Process (POMDP), which can in theory be solved. We show that, for both environments, there exists a scheme of this type that is optimal over all types of policies. Since obtaining an optimal or near-optimal scheme from the POMDP framework is computationally infeasible, we investigate the behavior of the optimal scheme in two extreme cases of the MDP framework and leverage the intuition gained to propose a heuristic scheme for the realistic environment whose TDR is close to the maximum achievable TDR in the idealized environment. In addition, we propose an approximation of the contention-intensity knowledge to further simplify this heuristic scheme. Numerical results over a wide range of configurations validate our study.
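The backward-induction step used for the idealized MDP environment can be sketched generically, as below: starting from the deadline and walking backward, compute the optimal action for every state at every time. The states, actions, transition probabilities, and rewards are toy placeholders, not the paper's model of delivery urgency and contention intensity.

```python
# Generic finite-horizon backward induction for a small MDP.
import numpy as np

S, A, T = 4, 2, 6                       # states, actions, horizon (toy sizes)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(A, S))   # P[a, s] is a distribution over s'
R = rng.random((A, S))                       # immediate reward r(a, s)

V = np.zeros((T + 1, S))                # terminal value is zero
policy = np.zeros((T, S), dtype=int)
for t in range(T - 1, -1, -1):          # walk backward from the deadline
    Q = R + P @ V[t + 1]                # Q[a, s] = r(a, s) + E[V_{t+1}(s')]
    V[t] = Q.max(axis=0)
    policy[t] = Q.argmax(axis=0)
print(policy)
```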
Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie
Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and improved their practicability. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs because their optimizations are tailored to ANNs. On the other hand, current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while ensuring high SNN accuracy. We first characterize the behaviors of BPTT-based SNN learning. Benefiting from the binary spike-based computation in the forward pass and the weight update, we design lookup-table (LUT) based processing elements in the Forward Engine and Weight Update Engine to make accumulations implicit and to fuse the computations of multiple input points. Second, benefiting from the rich sparsity in the backward pass, we design a dual-sparsity-aware Backward Engine that exploits both input and output sparsity. Finally, we apply a pipeline optimization between the different engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
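The LUT idea can be illustrated in a few lines: because spikes are binary, a dot product reduces to summing the weights at spiking positions, and the partial sums over small groups of inputs can be precomputed in a lookup table indexed by the group's spike bits. The group size and layout below are our assumptions, not H2Learn's actual processing-element design.

```python
# Precompute the partial sum for every possible spike pattern of a small
# group of synapses, then accumulate by table lookup instead of multiply.
import itertools
import numpy as np

weights = np.array([0.5, -1.0, 2.0, 0.25])     # one group of 4 synapses (assumed)
lut = {bits: sum(w for w, b in zip(weights, bits) if b)
       for bits in itertools.product((0, 1), repeat=4)}

spikes = (1, 0, 1, 1)                          # binary spike inputs
assert lut[spikes] == weights[0] + weights[2] + weights[3]
print(lut[spikes])                             # accumulation without multiplies
```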
Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, Dongrui Fan
Graph Convolutional Networks (GCNs) have received significant attention from various research fields due to their excellent performance in learning graph representations. Although GCNs perform well compared with other methods, they still face challenges: training a GCN model on large-scale graphs in the conventional way incurs high computation and storage costs. Motivated by the urgent need for efficiency and scalability in GCN training, sampling methods have been proposed and have proven effective. In this paper, we categorize sampling methods based on their sampling mechanisms and provide a comprehensive survey of sampling methods for efficient GCN training. To highlight the characteristics and differences of sampling methods, we present a detailed comparison within each category and further give an overall comparative analysis of the sampling methods across all categories. Finally, we discuss some challenges and future research directions for sampling methods.
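As a minimal example of the simplest sampling mechanism such surveys categorize (node-wise neighbor sampling in the style of GraphSAGE), the sketch below caps the number of neighbors aggregated per node; the graph, fanout, and batch are placeholders.

```python
# Node-wise neighbor sampling: cap the neighbors aggregated per node at
# each layer, shrinking the computation graph for large-scale training.
import random

graph = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

def sample_neighbors(node, fanout=2):
    nbrs = graph[node]
    return nbrs if len(nbrs) <= fanout else random.sample(nbrs, fanout)

batch = [0, 2]
print({v: sample_neighbors(v) for v in batch})  # sampled 1-hop computation graph
```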
Zhijun Zeng, Zhen Hou, Ting Li, Lei Deng, Jianguo Hou, Xinran Huang, Jun Li, Meirou Sun, Yunhan Wang, Qiyu Wu, Wenhao Zheng, Hua Jiang, Qi Wang
We develop a deep learning approach to predicting a set of ventilator parameters for a mechanically ventilated septic patient using a long short-term memory (LSTM) recurrent neural network (RNN) model. We focus on short-term predictions of a set of ventilator parameters for the septic patient in the emergency intensive care unit (EICU). The short-term predictability of the model provides attending physicians with early warnings to make timely adjustments to the treatment of the patient in the EICU. The patient-specific deep learning model can be trained on any given critically ill patient, making it an intelligent aide for physicians to use in emergent medical situations.
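A minimal PyTorch sketch of the model family described is given below: an LSTM that maps a window of past measurements to the next step's ventilator parameters. The input/output sizes and window length are placeholders; the paper's feature set, preprocessing, and training protocol are not reproduced here.

```python
# An LSTM that predicts the next set of ventilator parameters from a
# window of past measurements.  All dimensions are assumed placeholders.
import torch

class VentilatorLSTM(torch.nn.Module):
    def __init__(self, n_features=8, n_targets=4, hidden=64):
        super().__init__()
        self.lstm = torch.nn.LSTM(n_features, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, n_targets)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last hidden state

model = VentilatorLSTM()
window = torch.randn(2, 24, 8)            # e.g., 24 past time steps
print(model(window).shape)                # -> torch.Size([2, 4])
```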