Imitation is not simply recording and replaying movements: the learned skills need to be generalized to new situations. For example, if someone grasps a bottle of orange juice, shakes it and pours its content into a glass, the robot should be able to reproduce the task even if the positions of the bottle and the glass differ from those in the demonstrations. The robot should also be able to shake the bottle even if its body does not have exactly the same shape and configuration of articulations (known as the correspondence, retargeting or mapping problem).
In contrast to robots in standard industrial settings, humanoids and compliant robots can work in unpredictable environments and in close proximity to users.
Two sets of tools are relevant for learning and reproducing skills in such unpredictable environments:
a) Probabilistic machine learning tools, which can extract and exploit the regularities and important characteristics of the task.
b) Dynamical systems, which can cope with perturbations in real time without replanning the whole trajectory.
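To make point b) concrete, here is a minimal sketch (not the controller from the paper) of why a dynamical system handles perturbations without replanning: a critically damped spring-damper is integrated toward the current goal at every time step, so if the goal jumps mid-execution the system simply converges to the new goal. The function name, gains and time step are illustrative assumptions.

```python
import numpy as np

def track(goal_fn, x0=0.0, k=25.0, dt=0.01, steps=400):
    """Critically damped spring-damper pulled toward a (possibly moving) goal.

    goal_fn(t) returns the goal at time step t; no trajectory is precomputed,
    so a change of goal requires no replanning.
    """
    d = 2.0 * np.sqrt(k)          # critical damping for stiffness k
    x, v = x0, 0.0
    for t in range(steps):
        g = goal_fn(t)            # goal may change at any time step
        a = k * (g - x) - d * v   # virtual spring toward the current goal
        v += a * dt               # Euler integration of the acceleration
        x += v * dt
    return x

# Perturbation: the goal jumps from 1.0 to 2.0 halfway through.
# The system converges to the new goal without any replanning.
final = track(lambda t: 1.0 if t < 200 else 2.0)
```

The same idea extends to multiple dimensions and to goals attached to moving objects, which is what makes spring-like controllers robust to real-time perturbations.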
We study how to make these two sets of tools work together. The problem can be illustrated as follows. We assume that the motion of the robot is driven by a set of virtual springs attached to a set of candidate objects or body parts of the robot. The learning problem consists of estimating when and where to activate these springs. This can be learned from demonstrations by exploiting the invariant characteristics of the task (the parts of the movement that are the same across the multiple demonstrations). Consistent characteristics result in stiff springs, and irrelevant ones result in soft springs.
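The mapping from demonstration consistency to spring stiffness can be sketched as follows. This is a simplified illustration, not the estimator used in the paper: it assumes 1-D positions expressed in each candidate frame and sets stiffness inversely proportional to the variance observed across demonstrations, so frames in which the demonstrations agree dominate the reproduced motion. The function name and the gain `k_max` are illustrative assumptions.

```python
import numpy as np

def spring_stiffness(demos, k_max=100.0):
    """Estimate per-frame spring stiffness from demonstration consistency.

    demos: array of shape (n_demos, n_frames) holding, for each
    demonstration, the hand position expressed in each candidate
    object/body frame (1-D for simplicity). Low variance across
    demonstrations -> stiff spring; high variance -> soft spring.
    """
    var = np.var(demos, axis=0)    # variability across demonstrations
    return k_max / (1.0 + var)    # stiff where consistent, soft otherwise

# Toy data: frame 0 is consistent across demonstrations, frame 1 is not.
demos = np.array([[0.50, 0.1],
                  [0.51, 0.9],
                  [0.49, 0.4]])
k = spring_stiffness(demos)
# k[0] > k[1]: the consistent frame gets the stiffer spring.
```

In the clapping example below, this is why the springs attached to the object frame end up too soft to influence the movement: the object's position varies across demonstrations while the hand-to-hand relation stays consistent.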
What does the video show?
By using the same software, we can teach different skills to the robot. Demonstrations were recorded by placing visual markers on an object and on the hands of the user, tracked by the Optitrack system.
When the model is tested with hand clapping movements, the robot extracts that bimanual coordination is required, namely that the position of one hand is regulated with respect to the other hand. The clapping motion is not perturbed by moving the object around the robot, because the robot learned that the object is not relevant for this skill (i.e., the springs attached to the object's frame are too weak to influence the movement).
When the model is tested with reaching movements, the robot learns that it should use one hand or the other depending on the position of the object. If the object is in the center, the robot can try to grasp it with both hands, following the behavior previously demonstrated by the user. The hand that is not used for tracking the object automatically returns to a neutral pose.
Publication: Calinon, S., Li, Z., Alizadeh, T., Tsagarakis, N.G. and Caldwell, D.G. (2012). Statistical dynamical systems for skills acquisition in humanoids. In Proceedings of the Intl Conf. on Humanoid Robots (Humanoids).