Authors: Morgan James, Patterson Elizabeth, Klopf A.
Publisher: Informa Healthcare
ISSN: 0954-898X
Source: Network: Computation in Neural Systems, Vol. 1, Iss. 4, 1990-10, pp. 439-448
Abstract
A network of two self-supervised simulated neurons using the drive-reinforcement rule for synaptic modification can learn to balance a pole without experiencing failure. This adaptive controller also responds quickly and automatically to rapidly changing plant parameters. Other aspects of the controller's performance investigated include the controller's response in a noisy environment, the effect of varying the partitioning of the state space of the plant, the effect of increasing the controller's response time, and the consequences of disabling learning at the beginning of a trial and during the progress of a trial. Earlier work with drive-reinforcement learning supports the claim that the theory's neuronal model can account for observed phenomena of classical conditioning; this work constitutes progress toward demonstrating that useful adaptive controllers can be fabricated from networks of classically conditionable elements.
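The abstract refers to the drive-reinforcement rule for synaptic modification. As a rough illustration only, here is a minimal sketch of a drive-reinforcement-style weight update, in which a weight change is driven by the change in postsynaptic output times a trace of recent positive presynaptic changes, each scaled by the magnitude of the corresponding past weight. The function name, array layout, and coefficient values are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def drive_reinforcement_update(w_hist, dx_hist, dy, c):
    """Sketch of a drive-reinforcement weight update (hypothetical helper).

    w_hist  : array (tau, n) of recent weights, w_hist[j-1] = w(t-j)
    dx_hist : array (tau, n) of recent positive presynaptic changes,
              dx_hist[j-1] = max(Delta x(t-j), 0)
    dy      : scalar change in postsynaptic output, Delta y(t)
    c       : array (tau,) of per-delay learning-rate coefficients c_j

    Returns Delta w(t), with
        Delta w_i(t) = Delta y(t) * sum_j c_j * |w_i(t-j)| * dx_i(t-j)
    """
    # Broadcast c over the synapse axis and sum over the delay axis.
    return dy * np.sum(c[:, None] * np.abs(w_hist) * dx_hist, axis=0)
```

For example, with two delay steps and one synapse, a presynaptic onset one step ago combined with a positive change in output yields a positive weight change, while a zero presynaptic change contributes nothing regardless of the coefficient.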
Related content
Combining Hebbian and reinforcement learning in a minibrain model
By Bosman R.J.C. van Leeuwen W.A. Wemmenhove B.
Neural Networks, Vol. 17, Iss. 1, 2004-01, pp.:
Stochastic dynamics of reinforcement learning
By Bressloff P.
Network: Computation in Neural Systems, Vol. 6, Iss. 2, 1995-05, pp.:
Second-hand supervised learning in Hebbian perceptrons
By Idiart Marco
Network: Computation in Neural Systems, Vol. 8, Iss. 4, 1997-11, pp.: