In emergency response, gathering intelligence is still largely a manual process despite advances in mobile computing and multi-touch interaction. Personnel in the field take notes on paper maps, which are then manually correlated with larger paper maps and eventually digested by geographic information systems (GIS) specialists who geo-reference the data by hand. This analog process is personnel intensive, and it can take twelve to twenty-four hours for data to move from the field to the daily operations briefings. As a result, the information digested by personnel going into the field is at least one operational period out of date. At a time when satellite photography and mobile connectivity are becoming ubiquitous in our digital lives, it is alarming to find this as the state of the practice for most disciplines of emergency response.
Our goal is to create a common computing platform that integrates multiple digital sources and provides two-way communication, tracking, and mission status to personnel in the field and to the incident command hierarchy that supports them. Advanced technologies such as geo-referenced digital photography and ground robots are still limited to sending video only to operators at the site, but this is changing quickly. Recent advances in robotics, mobile communication, and multi-touch tabletop displays are bridging these technological gaps, enabling network-centric operation and increasing mission effectiveness.
Our research in human-computer interaction leverages these technologies through a collaborative tabletop multi-touch display, the Microsoft Surface. We have created a single-robot operator control unit and a multi-robot command and control interface, which are used to monitor and interact with all of the robots deployed at a disaster response. Users tap and drag commands to individual or multiple robots through a gesture set designed to maximize ease of learning. A trail of waypoints can designate specific areas of interest, or a specific path can be drawn for the robots to follow. The system is designed as discrete modules that allow integration of a variety of data sets, such as city maps, building blueprints, and other geo-referenced data sources. Users can pan and zoom on any area, and the interface can integrate video feeds from individual robots, allowing the scene to be viewed from each robot's perspective.
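To illustrate the waypoint-trail interaction described above, the following sketch shows one plausible way a drawn stroke could be downsampled into waypoints and dispatched to a set of selected robots. The function and class names (`path_to_waypoints`, `Robot`, `dispatch`) are illustrative assumptions, not the thesis system's actual API.

```python
import math

def path_to_waypoints(stroke, spacing):
    """Downsample a drawn stroke (a list of (x, y) touch samples) into
    waypoints that are at least `spacing` units apart. Illustrative
    heuristic only; the thesis system may sample paths differently."""
    waypoints = [stroke[0]]
    for x, y in stroke[1:]:
        lx, ly = waypoints[-1]
        if math.hypot(x - lx, y - ly) >= spacing:
            waypoints.append((x, y))
    return waypoints

class Robot:
    """Minimal stand-in for a deployed robot's command queue."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def follow(self, waypoints):
        # In a real system this would send the waypoints over the
        # robot's communication link; here we just queue them.
        self.queue.extend(waypoints)

def dispatch(selected_robots, stroke, spacing=1.0):
    """Convert a drawn path to waypoints and send it to every
    currently selected robot."""
    wps = path_to_waypoints(stroke, spacing)
    for robot in selected_robots:
        robot.follow(wps)
    return wps
```

The same dispatch loop works whether one robot or several are selected, which matches the abstract's point that commands can target individual or multiple robots.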
Manual robot control is achieved using the DREAM (Dynamically Resizing Ergonomic and Multi-touch) Controller. The controller is virtually painted beneath the user's hands, changing its size and orientation according to our newly designed algorithm for fast hand detection, finger registration, and handedness registration. Beyond robot control, the DREAM Controller and hand detection algorithms have a wide range of applications in general human-computer interaction, such as keyboard emulation and multi-touch user interface design.
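As a rough intuition for how handedness might be inferred from five touch contacts, the sketch below uses two simple geometric heuristics: the thumb is the contact farthest from the centroid of the other four, and handedness comes from the sign of a cross product across the thumb-to-pinky axis. This is a hypothetical illustration under assumed y-up coordinates; the thesis algorithm is its own, more refined design.

```python
import math

def classify_hand(points):
    """Given five (x, y) fingertip contacts from one palm-down hand,
    guess which contact is the thumb and whether the hand is left or
    right. Coordinates are assumed to have y increasing upward; in
    y-down screen coordinates the cross-product sign flips.
    Illustrative heuristic only, not the thesis algorithm."""
    assert len(points) == 5

    # Thumb heuristic: the contact farthest from the centroid of the
    # other four contacts.
    def dist_to_others(i):
        others = [p for j, p in enumerate(points) if j != i]
        cx = sum(p[0] for p in others) / 4.0
        cy = sum(p[1] for p in others) / 4.0
        return math.hypot(points[i][0] - cx, points[i][1] - cy)

    thumb_i = max(range(5), key=dist_to_others)
    thumb = points[thumb_i]
    rest = [p for j, p in enumerate(points) if j != thumb_i]

    # Pinky heuristic: the remaining contact farthest from the thumb.
    pinky = max(rest, key=lambda p: math.hypot(p[0] - thumb[0],
                                               p[1] - thumb[1]))
    middle = [p for p in rest if p is not pinky]
    mx = sum(p[0] for p in middle) / 3.0
    my = sum(p[1] for p in middle) / 3.0

    # Handedness: if the index/middle/ring centroid lies to the left
    # of the thumb->pinky axis (positive cross product), the palm-down
    # hand is a right hand under the assumed y-up convention.
    cross = ((pinky[0] - thumb[0]) * (my - thumb[1])
             - (pinky[1] - thumb[1]) * (mx - thumb[0]))
    return thumb, ("right" if cross > 0 else "left")
```

Because the heuristics are purely relative, the classification is independent of where on the tabletop the hand lands, which is essential for a controller that paints itself beneath the user's hands at any position.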
Thesis Advisor: Prof. Holly Yanco
Thesis Committee: Dr. Jill Drury, The MITRE Corporation and UMass Lowell
Dr. Terry Fong, NASA Ames