Attractors: Architects of Network Organization?

Publisher: Karger

E-ISSN: 1421-9743

ISSN: 0006-8977

Source: Brain, Behavior and Evolution, Vol. 55, Iss. 5, 2000-08, pp. 256-277



Abstract

An attractor is defined here informally as a state of activity toward which a system settles. The settling or relaxation process dissipates the effects produced by external perturbations. In neural systems the relaxation process occurs temporally in the responses of each neuron and spatially across the network, such that the activity settles into a subset of the available connections. Within limits, the set of neurons toward which the coordinated neural firing settles can differ from one time to another, and a given set of neurons can generate different types of attractor activity, depending on how the input environment activates the network. Findings such as these indicate that though information resides in the details of neuroanatomic structure, the expression of this information lies in the dynamics of attractors. As such, attractors are sources of information that can be used not only in adaptive behavior, but also to affect the neural architecture that generates the attractor. The discussion here focuses on the latter possibility. A conjecture is offered to show that the relaxation dynamic of an attractor may ‘guide’ activity-dependent learning processes in such a way that synaptic strengths, firing thresholds, the physical connections between neurons, and the size of the network are automatically set in an optimal, interrelated fashion. This interrelatedness among network parameters would not be expected from more classical, ‘switchboard’ approaches to neural integration. The ideas are discussed within the context of ‘pulse-propagated networks’, or equivalently ‘spike-activated networks’, in which the specific order of time intervals between action potentials carries information that is important for cooperative activity to emerge among neurons in a network. Though the proposed ideas are forward-looking, being based on preliminary work in biological and artificial networks, they are testable in biological neural networks reconstructed from identified neurons in cell culture and in simulation models of them.
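
To make the notion of settling toward an attractor concrete, the following minimal Python sketch simulates a discrete Hopfield-style network relaxing back to a stored pattern after an external perturbation. This is an illustration of the general attractor concept only, not the authors' model or a spike-activated network; the Hebbian weights, update rule, network size, and number of flipped units are all assumptions chosen for clarity.

    # Illustrative sketch (not from the paper): a Hopfield-style network in
    # which a stored pattern acts as an attractor, and relaxation dissipates
    # the effect of an external perturbation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Store one pattern with a Hebbian outer-product rule (zero diagonal).
    pattern = rng.choice([-1, 1], size=32)
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0.0)

    # Perturb the stored pattern by flipping a few units.
    state = pattern.copy()
    flip = rng.choice(len(state), size=6, replace=False)
    state[flip] *= -1

    # Asynchronous relaxation: each unit adopts the sign of its weighted input.
    # The activity settles back onto the stored pattern (the attractor).
    for sweep in range(10):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
        overlap = (state == pattern).mean()
        print(f"sweep {sweep}: overlap with stored pattern = {overlap:.2f}")
        if overlap == 1.0:
            break

Running the sketch prints the overlap climbing to 1.0 within a sweep or two, which is the relaxation described above: the perturbation is dissipated as activity settles into the attractor.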