FUZZY SETS, EVOLUTIONARY LEARNING AND ADAPTATION OF BEHAVIORS FOR AUTONOMOUS ROBOTS

Andrea Bonarini
Politecnico di Milano Artificial Intelligence and Robotics Project
Dipartimento di Elettronica e Informazione - Politecnico di Milano
Piazza Leonardo da Vinci, 32 - 20133 Milano - Italy
E-mail: [email protected]
URL: http://www.elet.polimi.it/people/bonarini/
Phone: +39 2 2399 3525
Fax: +39 2 2399 3411

Abstract

In this paper, we discuss some motivations supporting the use of models based on fuzzy sets to implement robot controllers to be evolved by a reinforcement-based learning/adaptation algorithm. When we observe a robot, or an animal, operating in its environment, we tend to describe its behavior in terms we are familiar with. In particular, we consider variables that are related to the sensory and expressive abilities of the agent we are observing. The values of each of these variables can be classified as belonging to a fuzzy set, identified by a label that naturally corresponds to the classification we perceive as useful to achieve the task. Fuzzy sets, like labeled intervals, make it possible to classify the sensory input, thus abstracting the aspects relevant to the application. Fuzzy sets, unlike intervals, provide a quantified classification that can be used to control the robot with the required precision.

We focus on reinforcement learning of control models and architectures based on fuzzy sets. There are many motivations to adopt fuzzy models to represent such mappings from real-valued input to real-valued output. It is well known that fuzzy control systems can be made robust with respect to modeling imprecision and input noise. Imperfect learning and adaptation can affect the quality of the model, and the intrinsic robustness of a fuzzy model plays an important role in smoothing this effect. Moreover, fuzzy models at a high level of abstraction are compact and make it possible to learn how to face complex situations in a relatively short time. A good fuzzy model captures the relationships among relevant aspects of the classified input and output. This means that the model consists of relationships among symbols related to data interpretation. This is a compact way to represent local models (one model for each fuzzy rule) and their interaction. Learning this type of model can be very effective, since the learning algorithm can focus on local, very simple models. Therefore, the search space is usually relatively small, and the interaction between neighboring models is governed by the traditional fuzzy operators.

A last relevant feature of fuzzy models is that the data interpretation can preserve the precision of the acquired data. An interval-based classification gives all the values belonging to an interval the same role in the model (they cannot be distinguished from each other), thus reducing the granularity of the data. This is desirable to reduce the complexity of the model, but it has the undesired effect of producing coarse output. A fuzzy model obtains a similar reduction of the search space, but, since membership degrees are associated with the data classification, it can produce output with the desired granularity, possibly giving every value a different role in the model. We focus on reinforcement learning algorithms (Kaelbling et al., 1996). Most of the algorithms in this category proposed so far operate on data represented by intervals, usually coded as binary sequences.
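To make the contrast between interval-based and fuzzy classification concrete, here is a minimal sketch (not from the paper; the labels, triangular membership functions, breakpoints, and rule outputs are all illustrative assumptions). A crisp interval classifier maps every distance in the same interval to the same label, while the fuzzy classifier attaches a membership degree to each label; a tiny rule base then combines its local models by a membership-weighted average, so the output varies smoothly with the input instead of jumping between a few coarse values.

```python
# Illustrative sketch: crisp interval classification vs. fuzzy classification
# with triangular membership functions, and a tiny rule base that turns the
# classification into a steering command. All labels, breakpoints, and rule
# outputs are made up for illustration; they are not taken from the paper.

def interval_label(distance):
    """Crisp classification: every value in an interval gets the same label."""
    if distance < 0.5:
        return "NEAR"
    elif distance < 1.5:
        return "MEDIUM"
    return "FAR"

def triangular(x, left, peak, right):
    """Membership degree of x in a triangular fuzzy set (left, peak, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_memberships(distance):
    """Fuzzy classification: each label gets a degree in [0, 1]."""
    return {
        "NEAR":   triangular(distance, -0.5, 0.0, 1.0),
        "MEDIUM": triangular(distance,  0.5, 1.0, 2.0),
        "FAR":    triangular(distance,  1.5, 2.5, 10.0),
    }

# A tiny rule base: each fuzzy rule is a local model mapping a label of the
# classified input to a (crisp, for simplicity) steering output in degrees.
RULES = {"NEAR": 40.0, "MEDIUM": 15.0, "FAR": 0.0}

def fuzzy_steering(distance):
    """Weighted average of the local models, weighted by rule activation."""
    mu = fuzzy_memberships(distance)
    num = sum(mu[label] * RULES[label] for label in RULES)
    den = sum(mu[label] for label in RULES)
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    for d in (0.3, 0.6, 0.9, 1.2):
        print(f"d={d:.1f}  interval={interval_label(d):6s}  "
              f"fuzzy output={fuzzy_steering(d):5.1f} deg")
```

Running this, the readings 0.6, 0.9 and 1.2 all fall into the crisp MEDIUM class and would play the same role in an interval-based model, while the fuzzy output grades smoothly between the NEAR and MEDIUM local models, preserving the precision of the acquired data.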
We discuss how it is possible to modify well-known algorithms to learn fuzzy models instead of interval-based ones. In particular, we introduce some general considerations that can be used to modify most of the known reinforcement learning algorithms so that they operate on fuzzy models instead of interval-based ones. In this paper, we present in detail only one of these algorithms, to exemplify the general ideas we have introduced. Other extensions are discussed elsewhere (Bonarini, 1998). Finally, we present some experiments, showing how reinforcement learning can be successfully applied to learn both monolithic and modular fuzzy control systems for autonomous robots, and to adapt them to the environment.

References

1. Bonarini, A. (1998). Reinforcement distribution to fuzzy classifiers: a methodology to extend crisp algorithms. Proceedings of the IEEE World Congress on Computational Intelligence (WCCI) - Evolutionary Computation, IEEE Computer Press, Piscataway, NJ, pp. 51-56.
2. Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: a survey. Journal of Artificial Intelligence Research, 4, 237-285.
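As a deliberately simplified illustration of what operating on fuzzy models instead of interval-based ones can mean for a reinforcement learning algorithm, the sketch below credits each fuzzy rule that contributed to an action with a share of the received reinforcement proportional to its activation degree. The class, names, and update rule are illustrative assumptions; this is not the algorithm described in (Bonarini, 1998).

```python
# Schematic sketch of one general idea behind extending a crisp reinforcement
# learning scheme to fuzzy rules: the reinforcement received after an action
# is shared among the rules that contributed to it, in proportion to their
# activation degrees. Purely illustrative; not the algorithm of (Bonarini, 1998).

from collections import defaultdict

class FuzzyRuleLearner:
    def __init__(self, learning_rate=0.1):
        self.strength = defaultdict(float)  # one strength value per rule id
        self.alpha = learning_rate

    def update(self, activations, reinforcement):
        """activations: {rule_id: degree in [0, 1]} for the rules that fired."""
        total = sum(activations.values())
        if total == 0:
            return
        for rule_id, degree in activations.items():
            # Each rule is credited with a share of the reinforcement
            # proportional to how much it contributed to the action.
            share = degree / total
            self.strength[rule_id] += self.alpha * share * (
                reinforcement - self.strength[rule_id])

if __name__ == "__main__":
    learner = FuzzyRuleLearner()
    # Two hypothetical rules fired with different degrees; the robot was rewarded.
    learner.update({"IF distance NEAR THEN turn LEFT": 0.7,
                    "IF distance MEDIUM THEN go STRAIGHT": 0.3},
                   reinforcement=1.0)
    print(dict(learner.strength))
```

With a crisp, interval-based representation only one rule matches at a time and receives all the credit; with fuzzy rules several local models are active at once, and their membership degrees provide a natural way to apportion the reinforcement among them.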