Keynote Title: Multimodal Interfaces: Capture, Tracking and Recognition
Keynote Lecturer: Dr. Vladimir Devyatkov
Keynote Chair: Dr. Pedro Gómez Vilda
Presented on: 14-02-2013
Abstract: A main goal of multimodal interfaces today is to support natural, efficient, powerful, and flexible human-computer interaction for different types of virtual environments. If the interaction technology is awkward or constraining, the user's experience with the synthetic environment is severely degraded. If the interaction itself draws attention to the technology rather than to the task at hand, it becomes an obstacle to a successful virtual environment interface. The traditional two-dimensional, keyboard- and mouse-oriented graphical user interface (GUI) is not well suited for virtual environments. Instead, using several different modalities and integrating them provides the opportunity to develop a user-friendly interface with a virtual environment. The cross product of communication modalities and sensing devices yields a wide range of multimodal interface techniques. The potential of these techniques to support natural and powerful interfaces is the future of virtual reality construction and design. To support natural communication more fully, an interface must not only track human movement but also interpret that movement in order to recognize semantically meaningful modalities. Tracking a user's modalities also makes it possible to express them through higher-level relations such as the distance, relative direction of movement, or orientation of the objects being tracked.
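The higher-level relations mentioned above can be illustrated with a minimal sketch. Assuming 2-D tracked positions (the function name and the choice of relations are illustrative, not taken from the lecture), one can derive distance, heading, and relative direction from raw tracker coordinates:

```python
import math

def pairwise_relations(p1, p2, p1_prev):
    """Derive higher-level relations between two tracked points (illustrative).

    p1, p2  -- current (x, y) positions of two tracked objects
    p1_prev -- previous position of the first object, used to estimate its motion
    """
    # Euclidean distance between the two tracked objects
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    # Direction of the first object's movement, in degrees (0 = +x axis)
    heading = math.degrees(math.atan2(p1[1] - p1_prev[1], p1[0] - p1_prev[0]))
    # Bearing from the first object toward the second, in degrees
    bearing = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    # Relative direction: angle between the motion and the line to the other object,
    # wrapped to (-180, 180]; near 0 means "moving toward it"
    relative = (bearing - heading + 180.0) % 360.0 - 180.0
    return dist, heading, relative
```

Such relations turn raw coordinate streams into symbolic facts ("the hand approaches the object") that a recognition layer can reason about.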
In this lecture, we shall consider the most popular approaches for capturing, tracking, and recognizing different modalities simultaneously in order to create an intelligent human-computer interface for different goals. Given the large variability of gestures and their important role in creating intuitive interfaces, the approaches considered here focus on gestures, although they may also be applied to other modalities. These approaches are user-independent and do not require large learning samples. Section 1 of the lecture considers gesture modalities as natural and artificial. Before gesture recognition, the parts of the body involved in a gesture have to be captured in the video stream; modern capture and tracking methods are covered in section 2. Once a body part has been captured as a digital image, it can be recognized using mathematical recognition models; section 3 is devoted to the most effective of these models. Multimodal aggregation as a path to intelligent human-computer interaction is presented in section 4. The last section concludes the lecture.
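One classic family of recognition models that is user-independent and works with very small learning samples is template matching with dynamic time warping (DTW). The sketch below is an assumption for illustration, not the lecture's specific models: it classifies a 1-D gesture feature sequence by finding the nearest stored template under the DTW distance.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best alignment cost of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of b
                                 cost[i][j - 1],      # skip a frame of a
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

def classify_gesture(sequence, templates):
    """Nearest-template classification; templates maps label -> reference sequence."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))
```

Because DTW aligns sequences of different lengths and speeds, a single reference recording per gesture class can suffice, which is why such template methods avoid the large training sets that statistical models typically require.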
Presented at the following Conference: BIOSTEC, International Joint Conference on Biomedical Engineering Systems and Technologies
Conference Website: biostec.org