Feed-forward chains of recurrent attractor neural networks near saturation
Abstract
We perform a stationary-state replica analysis for a layered network of Ising spin neurons, with recurrent Hebbian interactions within each layer, in combination with strictly feed-forward Hebbian interactions between successive layers. This model interpolates between the fully recurrent and symmetric attractor network studied by Amit et al., and the strictly feed-forward attractor network studied by Domany et al. Due to the absence of detailed balance, it is as yet solvable only in the zero-temperature limit. The built-in competition between two qualitatively different modes of operation, feed-forward (ergodic within layers) versus recurrent (non-ergodic within layers), is found to induce interesting phase transitions.
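The architecture described above can be sketched in code. The following is a minimal simulation, not the paper's analysis: it assumes standard Hebbian couplings for the recurrent (within-layer) and strictly feed-forward (between-layer) interactions, and runs deterministic zero-temperature dynamics. The sizes N, L, p and the retrieval protocol are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
# N neurons per layer, L layers, p stored patterns per layer.
N, L, p = 200, 3, 5

# Random binary patterns xi[l, mu, i] in {-1, +1} for each layer.
xi = rng.choice([-1, 1], size=(L, p, N))

# Hebbian couplings: symmetric recurrent interactions within each layer ...
J_rec = np.einsum('lmi,lmj->lij', xi, xi) / N
for l in range(L):
    np.fill_diagonal(J_rec[l], 0.0)  # no self-interaction

# ... and strictly feed-forward interactions from layer l-1 to layer l
# (no feedback, hence no detailed balance).
J_ff = np.einsum('lmi,lmj->lij', xi[1:], xi[:-1]) / N

def zero_T_sweep(s_prev, s, l, sweeps=10):
    """Deterministic (T=0) sequential dynamics of layer l, driven by
    the recurrent field of layer l and the fixed state s_prev of
    the preceding layer (if any)."""
    h_ff = J_ff[l - 1] @ s_prev if l > 0 else np.zeros(N)
    for _ in range(sweeps):
        for i in range(N):
            h = J_rec[l, i] @ s + h_ff[i]
            s[i] = 1 if h >= 0 else -1
    return s

# Condition the first layer on a noisy copy of pattern 0,
# then let the state propagate down the chain of layers.
s = np.where(rng.random(N) < 0.9, xi[0, 0], -xi[0, 0])
s = zero_T_sweep(None, s, 0)
overlaps = [xi[0, 0] @ s / N]
for l in range(1, L):
    s_next = rng.choice([-1, 1], size=N)  # random initial state
    s_next = zero_T_sweep(s, s_next, l)
    overlaps.append(xi[l, 0] @ s_next / N)
    s = s_next
print(overlaps)  # overlap with pattern 0 in each successive layer
```

At this small loading (p/N well below the saturation regime the paper studies), each layer settles close to its copy of the cued pattern, illustrating the cooperation of the feed-forward drive and the within-layer attractor dynamics.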
Journal: Journal of Physics A