Peter Stone, The University of Texas at Austin
As robot technology advances, we are approaching the day when robots will be widely deployed in uncontrolled, unpredictable environments: the proverbial "real world." As this happens, it will be essential for these robots to adapt autonomously to their changing surroundings. This session presents three examples of machine learning on physical robots.
First, for a robot, the ability to get from one place to another is one of the most basic skills. Session chair Peter Stone from The University of Texas at Austin presents a machine learning approach to legged locomotion, with all training done on the physical robots themselves. The resulting learned walk is considerably faster than all previously reported hand-coded walks for the same robot platform and has been incorporated into the UT Austin Villa robot soccer team.
Second, robots will need to interact with people. Brian Scassellati from Yale University presents his research on humanoid robots that learn to use normal social cues to interact with people. His research group uses these robots both as tools to evaluate models of how infants acquire social skills and to assist in the diagnosis and quantification of disorders of social development (such as autism).
Third, robots will need to interact with one another, and with scientists, from remote locations. Ayanna Howard from the Georgia Institute of Technology focuses on space robotics, in which robust operation must be achieved in environments that are unknown, unexpected, and uncertain. To meet this challenge, her work employs human-inspired techniques, such as neural networks, fuzzy logic, and visual sensing, to enhance the autonomous capabilities of intelligent systems.