The aim of this research is to base the “AI” methods that control virtual characters on data of real human behaviour rather than purely a priori models. This will make it possible to capture the individuality and complexity of a real person’s behaviour. It will also enable actors to be more closely involved in the process of creating interactive characters, thus bringing greater artistic insight into what has been, up to now, a very technology-driven process.
Gillies, Marco, 2009. Learning Finite State Machine Controllers from Motion Capture Data. IEEE Transactions on Computational Intelligence and AI in Games, 1 (1), pp. 63-72. ISSN 1943-068X
Gillies, Marco, Pan, Xueni, Slater, Mel and Shawe-Taylor, John, 2008. Responsive Listening Behavior. Computer Animation and Virtual Worlds, 19 (5), pp. 579-589. ISSN 1546-4261
This video shows a new approach to designing video game characters that can respond to our body movements and body language. Rather than trying to program explicit rules for behavior, which would make it hard to capture the subtleties of body language, our software allows people to design movements directly by moving and interacting.
Two people can play the roles of the video game character and the player, showing how the character should respond by acting out the movements themselves. This allows them to design movements in a natural way, by moving, rather than having to think about mathematical rules. The motion of both participants is recorded and synchronized. This data is then used as input to a machine learning algorithm, which learns a controller that automatically drives the video game character so that it responds in the same way as the people who designed it.
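To give a sense of how synchronized recordings can drive a responsive character, here is a minimal sketch of one possible approach: a nearest-neighbour lookup that, given the player's current pose, returns the character pose that was recorded at the same moment as the most similar player pose. This is an illustrative toy, not the actual learning algorithm used in the research; the 2-D feature vectors and the `respond` function are hypothetical stand-ins for real motion capture features.

```python
import numpy as np

# Hypothetical, time-synchronized pose features for the "player" and the
# "character" (one row per frame). In the real system these would be motion
# capture data; here they are toy 2-D feature vectors for illustration.
player_frames = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
character_frames = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9], [1.0, 1.0]])

def respond(player_pose):
    """Return the character pose paired with the nearest recorded player pose."""
    dists = np.linalg.norm(player_frames - np.asarray(player_pose), axis=1)
    return character_frames[np.argmin(dists)]

# A live player pose close to the second recorded frame retrieves the
# character response that was acted out alongside it.
print(respond([0.9, 0.1]))
```

A real controller would of course need temporal smoothing and a richer model than per-frame lookup, but the core idea is the same: the mapping from player movement to character response is learned from paired recordings rather than hand-written rules.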
This style of design is particularly well suited to actors and performers, who have a deep understanding of movement and body language. We conducted an in-depth case study with physical theatre performer Emanuele Nargi, who used our software to design an interactive character based on his interactions with a number of members of the public.
This research was conducted by Andrea Kleinsmith and Marco Gillies.
Thank you to our colleagues at Goldsmiths, University of London, Emanuele Nargi of the MA Performance Making, the Embodied Audio-Visual Interaction research group at Goldsmiths Digital Studio, the Department of Computing and to Anna Furse of the Department of Theatre and Performance.