I. D. Kolesnikov, D. A. Maksimov, V. M. Moskvitin, N. Semenova
This study examines the impact of additive and multiplicative noise on both a single leaky integrate-and-fire (LIF) neuron and a trained spiking neural network (SNN). Noise was introduced at different stages of neural processing, including the input current, membrane potential, and output spike generation. The results show that multiplicative noise applied to the membrane potential has the most detrimental effect on network performance, leading to a significant degradation in accuracy. This is primarily because such noise tends to drive membrane potentials toward large negative values, effectively silencing neuronal activity. To address this issue, input pre-filtering strategies were evaluated, with a sigmoid-based filter performing best by shifting inputs to a strictly positive range. Under these conditions, additive noise in the input current becomes the dominant source of performance degradation, while the other noise configurations reduce accuracy by no more than 1%, even at high noise intensity. Additionally, the study compares the effects of common and uncommon noise across neuron populations in the hidden layer, revealing that SNNs exhibit greater robustness to common noise. Overall, the findings identify the most critical noise mechanisms affecting SNNs and provide practical approaches for improving their robustness.
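The noise-injection setup described above can be sketched with a minimal LIF simulation. This is an illustrative reconstruction, not the authors' code: the function name `lif_step`, the `noise_site` parameter, and all numeric constants (time constant, threshold, noise level) are our assumptions. It shows how multiplicative noise on the membrane potential can flip the potential to negative values, the silencing effect the abstract attributes to this noise type.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_step(v, i_in, noise_std=0.5, noise_site="mult_membrane",
             tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    `noise_site` selects where Gaussian noise enters (hypothetical
    parameterisation, not the paper's implementation)."""
    if noise_site == "add_input":        # additive noise on the input current
        i_in = i_in + noise_std * rng.standard_normal()
    v = v + (-v + i_in) * dt / tau       # leaky integration
    if noise_site == "mult_membrane":    # multiplicative noise on the membrane potential
        v = v * (1.0 + noise_std * rng.standard_normal())
    spiked = v >= v_th
    if spiked:
        v = v_reset
    return v, spiked

# Multiplicative membrane noise occasionally multiplies a positive potential
# by a negative factor, pushing it below zero and suppressing spiking.
v, trace = 0.0, []
for _ in range(1000):
    v, _ = lif_step(v, i_in=1.2)
    trace.append(v)
print("min membrane potential:", min(trace))
```

A sigmoid pre-filter of the kind evaluated in the paper would map the inputs `i_in` into a strictly positive range before this update, which removes the sign-flipping pathway.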
V. M. Moskvitin, N. Semenova
In recent years, a growing number of researchers in the field of neural networks have become interested in hardware implementations in which neurons and the connections between them are realized physically. The physical implementation of an ANN fundamentally changes how noise affects the network: hardware ANNs contain many internal noise sources with different properties. The purpose of this paper is to study the peculiarities of internal noise propagation in recurrent ANNs, using the echo state network (ESN) as an example, to reveal ways of suppressing such noise, and to substantiate the robustness of these networks to certain noise types. We analyse an ESN in the presence of uncorrelated additive and multiplicative white Gaussian noise, considering artificial neurons with linear activation functions of different slope coefficients. Starting from a single noisy neuron, we complicate the problem by considering how the input signal and the memory property affect noise accumulation in the ESN. In addition, we examine the influence of the main types of coupling matrices on noise accumulation, comparing a uniform matrix with diagonal-like matrices characterized by a "blurring" coefficient. We find that the variance and signal-to-noise ratio of the ESN output signal behave similarly to those of a single neuron. Noise accumulates less in an ESN whose reservoir connection matrix is diagonal-like with a large "blurring" coefficient; this especially concerns uncorrelated multiplicative noise.
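The ESN setting described above can be sketched as follows. This is a minimal illustration under our own assumptions (function name `run_noisy_esn`, mean readout, input scaling, noise level, spectral radius 0.9); it only reproduces the structure of the experiment, contrasting a uniform coupling matrix with a purely diagonal one, the limiting case of a diagonal-like matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_noisy_esn(W, u, noise_std=0.05, slope=1.0):
    """Echo state network with linear activation of a given slope and
    uncorrelated additive + multiplicative white Gaussian noise applied
    to every neuron at every step (illustrative sketch)."""
    n = W.shape[0]
    w_in = rng.uniform(-0.5, 0.5, n)      # random input weights
    x = np.zeros(n)
    out = np.empty(len(u))
    for t, u_t in enumerate(u):
        pre = W @ x + w_in * u_t
        mult = 1.0 + noise_std * rng.standard_normal(n)  # multiplicative noise
        add = noise_std * rng.standard_normal(n)         # additive noise
        x = slope * pre * mult + add
        out[t] = x.mean()                 # simple averaging readout
    return out

n, T = 100, 500
u = np.sin(0.1 * np.arange(T))
W_uniform = np.full((n, n), 0.9 / n)   # uniform coupling, spectral radius 0.9
W_diag = 0.9 * np.eye(n)               # diagonal coupling, same spectral radius
out_uniform = run_noisy_esn(W_uniform, u)
out_diag = run_noisy_esn(W_diag, u)
```

Comparing the variance of `out_uniform` and `out_diag` over repeated runs is one way to probe the noise-accumulation difference the paper reports between coupling-matrix types.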
D. A. Maksimov, V. M. Moskvitin, N. Semenova
This paper examines the influence of internal Gaussian noise on the performance of deep feedforward neural networks, focusing on the role of the noise injection stage relative to the activation function. Two scenarios are analyzed: noise introduced before and after the activation function, for both additive and multiplicative noise. Noise introduced before the activation function is analogous to perturbations in the input channel of a neuron, while noise introduced after it corresponds to noise arising either within the neuron itself or in its output channel. The types of noise and the methods of their introduction were inspired by analog neural networks. The results show that the activation function acts as an effective nonlinear noise filter. Networks with noise introduced before the activation function consistently achieve higher accuracy than those with noise applied after it, with additive noise being suppressed most effectively in this case. For noise introduced after the activation function, multiplicative noise is less detrimental than additive noise, and earlier hidden layers contribute more to performance degradation because of cumulative noise amplification governed by the statistical properties of subsequent weight matrices. The study also demonstrates that pooling-based noise reduction is effective whether noise is introduced before or after the activation function, consistently improving network performance.
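The before/after distinction can be illustrated with a single dense layer. This is a sketch under our own assumptions (a sigmoid activation, the function name `noisy_dense`, and the chosen weights and noise level are not from the paper): when pre-activations sit in the sigmoid's saturated region, noise injected before the activation is squashed by the nearly flat nonlinearity, while noise injected after it passes through unattenuated.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_dense(x, W, noise_std=0.1, where="before"):
    """Dense layer with additive Gaussian noise injected either before
    or after a sigmoid activation (illustrative sketch)."""
    z = W @ x
    if where == "before":
        z = z + noise_std * rng.standard_normal(z.shape)
    a = 1.0 / (1.0 + np.exp(-z))  # the activation nonlinearly filters the noise
    if where == "after":
        a = a + noise_std * rng.standard_normal(a.shape)
    return a

# Pre-activation deep in the saturated region: noise before the activation
# is scaled by the tiny local slope of the sigmoid; noise after it is not.
W = np.array([[4.0]])
x = np.ones(1)
std_before = np.std([noisy_dense(x, W, where="before")[0] for _ in range(2000)])
std_after = np.std([noisy_dense(x, W, where="after")[0] for _ in range(2000)])
print("output std, noise before activation:", std_before)
print("output std, noise after activation: ", std_after)
```

The local slope of the sigmoid at `z = 4` is roughly 0.018, so noise injected before the activation emerges with its standard deviation reduced by about that factor, consistent with the paper's observation that the activation function acts as a noise filter.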