ABSTRACT
An accepted evaluation methodology in HCI is to take a general set of principles and tailor them for use in evaluating a specific application (e.g., see Nielsen, 1993). We operationalized and tailored Scholtz’s (2002) evaluation guidelines as follows to make them more specific to the case of HRI in an urban search and rescue context.

“Is the necessary information present for the human to be able to determine that an intervention is needed?” becomes “Is sufficient status and robot location information available so that the operator knows the robot is operating correctly and avoiding obstacles?” “Necessary information” is very broad. In the case of urban search and rescue robots, operators need information regarding the robot’s health, especially if it is not operating correctly. Another critical piece of information operators need is the robot’s location relative to obstacles, regardless of whether the robot is operating in an autonomous or teleoperated mode. In either case, if the robot is not operating correctly or is about to collide with an obstacle, the operator will need to take corrective action.

“Is the information presented in an appropriate form?” becomes “Is the information coming from the robots presented in a manner that minimizes operator memory load, including the amount of information fusion that needs to be performed in the operators’ heads?” Robotic systems are very complex. If pieces of information that are normally considered in tandem (e.g., video images and laser ranging sensor information) are presented in different parts of the interface, the operator will need to switch attention back and forth, remembering what he or she saw in a previous window in order to fuse the information mentally. Operators can be assisted by information presentation that minimizes memory load and maximizes information fusion.

“Is the interaction language efficient for both the human and the intelligent system?” and “Are interactions handled efficiently and effectively, both from the user and the system perspective?” combine to become “Are the means of interaction provided by the interface efficient and effective for the human and the robot (e.g., are shortcuts provided for the human)?” We consider these two guidelines together because there is little language per se in these interfaces; rather, the more important question is whether the interactions minimize the operator’s workload and produce the intended effects.

We are looking at interaction in a local sense; that is, we focus on interactions between an operator and one or more robots. The competitions currently emphasize this type of interaction but do not provide an environment for studying operator–robot interaction within a larger search and rescue team.

Interactions differ depending on the autonomous capabilities of the robots. From the user perspective, we are interested in finding the most efficient means of communicating with robots at all levels of autonomy. For example, if a robot is capable of autonomous movement between waypoints, how does the operator specify these points? The interaction language must also be efficient from the robot’s point of view. Can the input from the user be quickly and unambiguously parsed? If the operator inputs waypoints by pointing on a map, what is the granularity?
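The parsing and granularity questions above can be made concrete with a minimal sketch. The command name (`GOTO`), its syntax, and the grid size are all hypothetical illustrations, not part of any system described in this article; the sketch simply shows how a typed waypoint command might be validated unambiguously and snapped to a map granularity, with errors surfaced for missing or malformed parameters.

```python
# Hypothetical sketch: parse an assumed "GOTO <x> <y>" waypoint command,
# validate its parameters, and snap coordinates to a fixed map granularity.
# All names and syntax here are invented for illustration.

GRID_CM = 10  # assumed map granularity: waypoints snap to a 10 cm grid


def snap(value_cm: float) -> int:
    """Snap a raw map coordinate (in cm) to the nearest grid cell."""
    return round(value_cm / GRID_CM) * GRID_CM


def parse_goto(command: str) -> tuple[int, int]:
    """Parse 'GOTO <x> <y>'; return a snapped (x, y) or raise ValueError."""
    tokens = command.split()
    if not tokens or tokens[0].upper() != "GOTO":
        raise ValueError(f"unknown command: {command!r}")
    if len(tokens) != 3:
        raise ValueError("GOTO requires exactly two parameters: x y")
    try:
        x, y = float(tokens[1]), float(tokens[2])
    except ValueError:
        raise ValueError(f"non-numeric parameter in: {command!r}")
    return snap(x), snap(y)
```

Under these assumptions, `parse_goto("GOTO 123 87")` yields the snapped waypoint `(120, 90)`, while a missing parameter (e.g., `"GOTO 1"`) raises a `ValueError` that an interface could surface as an error dialogue.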
If the user types robot commands, is the syntax of the commands easily understood? Are error dialogues needed in the case of missing or erroneous parameters?

“Does the interaction architecture scale to multiple platforms and interactions?” becomes “Does the interface support the operator directing the actions of more than one robot simultaneously?” A goal in the robotics community is for a single operator to be able to direct the activities of more than one robot at a time. Multiple robots can allow more area to be covered, can provide different types of sensing and mobility, or can allow the team to continue operating after an individual robot has failed. Obviously, if multiple robots are to be used, the interface needs to enable the operator to switch his or her attention among robots successfully.

“Does the interaction architecture support evolution of platforms?” becomes “Will the interface design allow for adding more sensors and more autonomy?” A robotic system that currently includes a small number of sensors is likely to add more as they become available. In addition, robots will become more autonomous, and the interaction architecture will need to support this evolution. If the interaction architecture has not been designed with these possibilities in mind, it may not support growth.

3.2. Assessment Environment
The robots competed in the Reference Test Arenas for Autonomous Mobile Robots developed by the National Institute of Standards and Technology (NIST; Jacoff, Messina, & Evans, 2000, 2001). The arena consists of three sections that vary in difficulty. The yellow section, the easiest to traverse, is similar to an office environment containing light debris (fallen blinds, an overturned table and chairs). The orange section is more difficult to traverse due to its variable flooring, a second story accessible by stairs or a ramp, and holes in the second-story flooring. The red section, the most difficult, is an unstructured environment containing a simulated pancake building collapse, piles of debris, unstable platforms to simulate a secondary collapse, and other hazardous junk such as rebar and wire cages. Figure 1 shows one possible floor plan for the NIST arena. The walls of the arena are easily modified to create new internal layouts, which prevents operators from having prior knowledge of the arena map.

In the arena, victims are simulated using mannequins. Some of the mannequins are equipped with heating pads to show body warmth, motors to create movement in the fingers and arms, tape recorders to play recordings of people calling for help, or all three. Victims in the yellow arena are easier to locate than victims in the orange and red arenas. In the yellow arena, most victims are located in the open. In the orange arena, victims are usually hidden behind obstacles or placed on the second level. In the red arena, victims are placed in the pancake layers of the simulated collapse. Between rounds, the victim locations are changed to prevent knowledge gained during earlier rounds from making the search easier in later rounds.

Operator stations were placed away from the arena and set up so that the operators would have their backs to the arena.
Therefore, the operators were not able to see the progress of their robots in the arena; they had to assess the robots’ situations using their UIs.

3.3. Method for Studying Team Performance
Teams voluntarily registered for the competition. We asked them to participate in our study but made it clear that study participation was not a requirement for competition participation. The incentive to participate in the study was the chance to have their robot system used by a domain expert in the second part of the study.