Authors: Santos, Juan Miguel; Touzet, Claude
Publisher: Taylor & Francis Ltd
ISSN: 1360-0494
Source: Connection Science, Vol. 11, Iss. 3-4, 1999-12, pp. 267-289
Abstract
During the last decade, numerous contributions have been made to the use of reinforcement learning in the field of robot learning. They have focused mainly on the issues of generalization, memorization and exploration, which are mandatory for dealing with real robots. However, in our opinion the most difficult task today is the definition of the reinforcement function (RF). A first attempt in this direction introduced the update parameters algorithm (UPA), a method for tuning an RF so that it is optimal during the exploration phase; the only requirement is that the RF conform to a particular expression. In this article, we propose Dynamic-UPA, an algorithm able to tune the RF parameters during the whole learning phase (exploration and exploitation). It addresses the so-called exploration versus exploitation dilemma through careful computation of the RF parameter values, controlling the ratio between positive and negative reinforcement during learning. Experiments with the mobile robot Khepera on the synthesis of obstacle-avoidance and wall-following behaviors validate our proposals.
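The abstract does not specify the Dynamic-UPA update rule itself, so the Python sketch below is purely illustrative and should not be read as the authors' algorithm. It assumes a hypothetical reinforcement function with a single threshold parameter theta that is nudged after each step so that the observed share of positive reinforcements tracks a chosen target ratio, which is the general idea the abstract describes. All names here (DynamicRFTuner, theta, target_ratio, step) are invented for this illustration.

    # Illustrative sketch only: NOT the authors' Dynamic-UPA algorithm.
    # Assumption: the RF returns +1 when a sensed quantity (e.g. distance
    # to a wall) falls below a threshold `theta`, and -1 otherwise; `theta`
    # is adjusted so that the running ratio of positive reinforcements
    # tracks a chosen target ratio.

    class DynamicRFTuner:
        def __init__(self, theta=0.5, target_ratio=0.5, step=0.01):
            self.theta = theta                # hypothetical RF parameter
            self.target_ratio = target_ratio  # desired share of positive rewards
            self.step = step                  # tuning step size
            self.pos = 0                      # count of positive reinforcements
            self.total = 0                    # count of all reinforcements

        def reinforce(self, reading):
            """Return +1 or -1 and nudge theta toward the target ratio."""
            r = 1 if reading < self.theta else -1
            self.pos += (r == 1)
            self.total += 1
            ratio = self.pos / self.total
            # Too few positive rewards -> loosen the threshold; too many -> tighten.
            if ratio < self.target_ratio:
                self.theta += self.step
            else:
                self.theta -= self.step
            return r

    if __name__ == "__main__":
        import random
        tuner = DynamicRFTuner()
        for _ in range(1000):
            tuner.reinforce(random.random())  # stand-in for a sensor reading
        print(f"theta={tuner.theta:.3f}, pos-ratio={tuner.pos / tuner.total:.3f}")

Run over random stand-in readings, the threshold settles near the value that yields the target positive/negative balance, which is the behavior the abstract attributes to controlling the reinforcement ratio during learning.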