Mahadev Sunil Kumar, Arnab Raha, Debayan Das, Gopakumar G, Rounak Chatterjee, Amitava Mukherjee
Distributed deep neural networks (DNNs) have become central to modern computer vision, yet their deployment on resource-constrained edge devices remains hindered by substantial parameter counts, computational demands, and the risk of device failure. Here, we present an approach to the efficient deployment of distributed DNNs that jointly respects hardware limitations, preserves task performance, and remains robust to partial system failures. Our method integrates structured model pruning with a multi-objective optimization framework to tailor network capacity to heterogeneous device constraints, while explicitly accounting for device availability and failure probability during deployment. We demonstrate this framework using Multi-View Convolutional Neural Networks (MVCNN), a state-of-the-art architecture for 3D object recognition, by quantifying the contribution of individual views to classification accuracy and allocating pruning budgets accordingly. Experimental results show that the resulting models satisfy user-specified bounds on accuracy and memory footprint, even under multiple simultaneous device failures. Inference time is reduced by up to a factor of 4.7 across diverse simulated device configurations. These findings suggest that performance-aware, view-adaptive, and failure-resilient compression provides a viable pathway for deploying complex vision models in distributed edge environments.
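To make the view-adaptive allocation step concrete, here is a minimal Python sketch of one way to distribute a global pruning budget across views in proportion to their contribution to classification accuracy. This is an illustrative assumption, not the authors' implementation: the function name, the proportional allocation rule, and the example numbers are all hypothetical.

```python
# Minimal sketch (not the paper's implementation): distribute a global
# keep ratio across views so that views contributing more to accuracy
# retain more parameters. The allocation rule is an assumption.
import numpy as np

def allocate_pruning_budgets(view_importance, target_keep_ratio):
    """Return a per-view keep ratio whose average matches the global target."""
    imp = np.asarray(view_importance, dtype=float)
    weights = imp / imp.sum()          # normalized view-importance scores
    n = len(imp)
    # Scale importance so the mean keep ratio equals target_keep_ratio,
    # and clip so no view is pruned away entirely.
    keep = np.clip(weights * n * target_keep_ratio, 0.05, 1.0)
    return keep

# Example: 4 views with hypothetical accuracy contributions; keep 40% overall.
importance = [0.35, 0.30, 0.20, 0.15]
print(allocate_pruning_budgets(importance, target_keep_ratio=0.4))
# -> roughly [0.56, 0.48, 0.32, 0.24]: high-importance views keep more capacity
```

In the paper's framework, a budget of this kind would be one input to the multi-objective optimization, which must additionally satisfy the user-specified accuracy and memory bounds and account for device failure probabilities.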
Mahadev Sunil Kumar, Adarsh Ganesan
Parametrically driven oscillators provide a natural platform for neuromorphic computation, where nonlinear mode coupling and intrinsic dynamics enable both memory and high-dimensional transformation. Here, we investigate a two-mode system exhibiting 2:1 parametric resonance and demonstrate its operation as a reservoir computer across distinct dynamical regimes, including sub-threshold, parametric resonance, and frequency-comb states. By encoding input signals into the drive amplitude and sampling the resulting temporal and spectral responses, we perform one-step-ahead prediction of benchmark chaotic systems, including Mackey-Glass, Rössler, and Lorenz dynamics. We find that optimal computational performance is achieved within the parametric resonance regime, where nonlinear interactions are activated while temporal coherence is preserved. In contrast, although frequency-comb states introduce increased spectral dimensionality, their performance is inconsistent across their existence band and degrades in the chaotic comb regime due to loss of phase coherence. Mapping prediction error over parameter space reveals a direct correspondence between computational capability and the underlying bifurcation structure, with low-error regions aligned with the parametric resonance boundary. We further show that the input modulation, the detuning from the frequency-matching condition, the damping ratio, and the input data rate systematically control the accessible dynamical regimes and thereby the computational performance. These results establish parametric resonance as a robust operating regime for oscillator-based reservoir computing and provide design principles for tuning physical systems toward optimal neuromorphic functionality.
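As a rough illustration of the reservoir-computing pipeline described above, the following Python sketch drives a toy parametrically pumped Duffing oscillator (pump at twice the natural frequency, i.e. a 2:1 drive) with an input-modulated pump amplitude, samples its response, and trains a ridge-regression readout for one-step-ahead prediction of a Mackey-Glass series. The oscillator model, all parameter values, and the readout are assumptions chosen to make a small runnable example; they are not the authors' two-mode system.

```python
# Illustrative toy (not the paper's two-mode system): input modulates the
# parametric pump amplitude; sampled oscillator states feed a linear readout.
import numpy as np

def mackey_glass(n, tau=17, dt=1.0, beta=0.2, gamma=0.1, p=10):
    """Euler integration of the Mackey-Glass delay equation."""
    hist = 1.2 * np.ones(int(tau / dt) + 1)
    out = []
    for _ in range(n):
        x, x_tau = hist[-1], hist[0]
        x_new = x + dt * (beta * x_tau / (1 + x_tau**p) - gamma * x)
        hist = np.append(hist[1:], x_new)
        out.append(x_new)
    return np.array(out)

def reservoir_states(u, steps_per_input=50, dt=0.02,
                     damping=0.1, omega0=1.0, kappa=0.5):
    """Semi-implicit Euler simulation of a parametrically pumped
    Duffing oscillator; input u modulates the pump amplitude."""
    x, v, t = 0.1, 0.0, 0.0
    states = []
    for uk in u:
        pump = 1.0 + 2.0 * uk                      # input-dependent drive
        for _ in range(steps_per_input):
            # Pump at 2*omega0 (2:1 parametric drive), cubic nonlinearity.
            a = (-(omega0**2) * (1 + pump * np.cos(2 * omega0 * t)) * x
                 - damping * v - kappa * x**3)
            v += dt * a
            x += dt * v
            t += dt
        states.append([x, v, x**2])                # sampled temporal features
    return np.array(states)

series = mackey_glass(1200)
series = (series - series.mean()) / series.std()   # normalize the input
X = reservoir_states(series[:-1])
y = series[1:]                                     # one-step-ahead target
n_train = 800
A = np.hstack([X, np.ones((len(X), 1))])           # add a bias column
reg = 1e-4 * np.eye(A.shape[1])                    # ridge regularization
W = np.linalg.solve(A[:n_train].T @ A[:n_train] + reg,
                    A[:n_train].T @ y[:n_train])
pred = A[n_train:] @ W
nrmse = np.sqrt(np.mean((pred - y[n_train:])**2)) / y[n_train:].std()
print(f"one-step-ahead NRMSE on held-out data: {nrmse:.3f}")
```

Sweeping the damping, detuning, and pump depth in such a toy model is one way to reproduce, qualitatively, the paper's observation that prediction error tracks the bifurcation structure, with the best performance near the parametric resonance boundary.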