Learning motor control in robots, such as learning visual reaching or object manipulation in humanoid robots, is becoming a central topic both in "traditional" robotics and in developmental robotics. A major obstacle is that learning can become extremely slow or even impossible without adequate exploration strategies. Active learning techniques, also called intrinsically motivated learning in the developmental robotics literature, can be used to accelerate learning. Yet, many robotic spaces have properties which are not compatible with the standard assumptions of most active learning or intrinsic motivation algorithms. For example, they are typically much too large to be learnt entirely, they can even be open-ended, and they can also contain subspaces which are too complex to be learnt by given machine learning algorithms. Some approaches to active learning/intrinsic motivation have been proposed to address some of these difficulties, such as the explicit maximization of information gain or the explicit maximization of the decrease of prediction errors (as opposed to the maximization of uncertainty or prediction errors, as in many active learning heuristics). Yet, even these approaches quickly become inefficient in realistic sensorimotor spaces. In this talk, I will argue that various kinds of developmental constraints should be considered to address those spaces properly, such as maturational constraints on sensorimotor channels, the use of motor primitives, constraints on the spaces on which active learning is performed, morphological constraints, and, of course, social learning constraints.
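To make the contrast concrete, the following is a minimal toy sketch (not the speaker's actual algorithm; all names and the region-based setup are illustrative assumptions) of exploration driven by the *decrease* of prediction errors, often called learning progress, rather than by raw error. A learner tracking raw error would keep returning to an unlearnable region where error stays high forever; tracking the error *decrease* instead steers it toward regions where it is actually improving:

```python
import random
from collections import deque

class LearningProgressExplorer:
    """Toy sketch: choose the exploration region whose prediction error
    is decreasing fastest (highest learning progress), instead of the
    region with the highest raw prediction error.

    Regions, window size, and the progress estimate (difference between
    older and recent mean error) are illustrative assumptions, not a
    specific published algorithm.
    """

    def __init__(self, n_regions, window=10):
        # Keep a sliding window of recent prediction errors per region.
        self.errors = [deque(maxlen=window) for _ in range(n_regions)]

    def record_error(self, region, error):
        self.errors[region].append(error)

    def learning_progress(self, region):
        hist = list(self.errors[region])
        if len(hist) < 2:
            return float("inf")  # prioritize regions not yet sampled
        half = len(hist) // 2
        older = sum(hist[:half]) / half
        recent = sum(hist[half:]) / (len(hist) - half)
        return older - recent  # positive when error is dropping

    def choose_region(self):
        # Greedy choice; a stochastic (e.g. softmax) choice is also common.
        return max(range(len(self.errors)), key=self.learning_progress)
```

Note how a region with persistently high but flat error (e.g. unlearnable noise) gets zero learning progress and is abandoned, whereas a raw-error or uncertainty heuristic would be drawn to it indefinitely — the failure mode of standard active learning in large, open-ended sensorimotor spaces that the abstract describes.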
