
An accepted evaluation methodology in HCI is to take a general set of principles and tailor them for use in evaluating a specific application (e.g., see Nielsen, 1993). We operationalized and tailored Scholtz’s (2002) evaluation guidelines as follows to be more specific to the case of HRI in an urban search and rescue context.

“Is the necessary information present for the human to be able to determine that an intervention is needed?” becomes “Is sufficient status and robot location information available so that the operator knows the robot is operating correctly and avoiding obstacles?” “Necessary information” is very broad. In the case of urban search and rescue robots, operators need information regarding the robot’s health, especially if it is not operating correctly. Another critical piece of information operators need is the robot’s location relative to obstacles, regardless of whether the robot is operating in an autonomous or teleoperated mode. In either case, if the robot is not operating correctly or is about to collide with an obstacle, the operator will need to take corrective action.

“Is the information presented in an appropriate form?” becomes “Is the information coming from the robots presented in a manner that minimizes operator memory load, including the amount of information fusion that needs to be performed in the operators’ heads?” Robotic systems are very complex. If pieces of information that are normally considered in tandem (e.g., video images and laser ranging sensor information) are presented in different parts of the interface, the operator will need to switch his or her attention back and forth, remembering what he or she saw in a previous window to fuse the information mentally. Operators can be assisted by information presentation that minimizes memory load and maximizes information fusion.

“Is the interaction language efficient for both the human and the intelligent system?” and “Are interactions handled efficiently and effectively, both from the user and the system perspective?” become, when combined, “Are the means of interaction provided by the interface efficient and effective for the human and the robot (e.g., are shortcuts provided for the human)?” We consider these two guidelines together because there is little language per se in these interfaces; rather, the more important question is whether the interactions minimize the operator’s workload and result in the intended effects.

We are looking at interaction in a local sense; that is, we are focused on interactions between an operator and one or more robots. The competitions currently emphasize this type of interaction but do not provide an environment for studying operator-robot interaction within a larger search and rescue team.

Interactions differ depending on the autonomous capabilities of the robots. From the user perspective, we are interested in finding the most efficient means of communicating with robots at all levels of autonomy. For example, if a robot is capable of autonomous movement between waypoints, how does the operator specify these points? The interaction language must also be efficient from the robot’s point of view. Can the input from the user be quickly and unambiguously parsed? If the operator inputs waypoints by pointing on a map, what is the granularity? If the user types robot commands, is the syntax of the commands easily understood? Are error dialogues needed in the case of missing or erroneous parameters?
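To make these parsing questions concrete, consider a minimal sketch of a typed waypoint command handler. The `goto <x> <y>` syntax, the function name, and the error messages are our own illustrative assumptions, not the interface of any competing team:

```python
# Illustrative sketch only: parsing a hypothetical typed waypoint
# command such as "goto 12.5 -3.0" with explicit error reporting.

def parse_goto(command: str):
    """Return an (x, y) waypoint, or raise ValueError with a message
    suitable for display in an error dialogue."""
    tokens = command.split()
    if not tokens or tokens[0] != "goto":
        raise ValueError(f"Unknown command: {command!r}")
    if len(tokens) != 3:
        raise ValueError("Usage: goto <x> <y>")
    try:
        return (float(tokens[1]), float(tokens[2]))
    except ValueError:
        raise ValueError("Coordinates must be numbers, e.g. goto 12.5 -3.0")

# A missing parameter produces a message the interface can surface
# to the operator instead of failing silently.
try:
    parse_goto("goto 12.5")
except ValueError as err:
    print(err)  # -> Usage: goto <x> <y>
```

In such a design, malformed input yields an unambiguous diagnosis for the operator rather than an undefined robot behavior.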

“Does the interaction architecture scale to multiple platforms and interactions?” becomes “Does the interface support the operator directing the actions of more than one robot simultaneously?” A goal in the robotics community is for a single operator to be able to direct the activities of more than one robot at a time. Multiple robots can allow more area to be covered, can allow for different types of sensing and mobility, or can allow the team to continue operating after an individual robot has failed. Obviously, if multiple robots are to be used, the interface needs to enable the operator to switch his or her attention among robots successfully.
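As one illustration of what such support might look like (our own sketch, not a design drawn from any team), an interface can keep a one-line status summary visible for every robot while the operator commands a single selected robot:

```python
# Hypothetical sketch: one operator, several robots, with per-robot
# status kept visible so attention can be switched without losing context.
from dataclasses import dataclass

@dataclass
class RobotStatus:
    battery_pct: float = 100.0
    last_message: str = "ok"

class MultiRobotConsole:
    def __init__(self):
        self.robots = {}    # robot id -> RobotStatus
        self.active = None  # id of the robot currently being commanded

    def add_robot(self, robot_id):
        self.robots[robot_id] = RobotStatus()

    def select(self, robot_id):
        """Switch the operator's command focus to one robot."""
        if robot_id not in self.robots:
            raise KeyError(f"No such robot: {robot_id}")
        self.active = robot_id

    def status_board(self):
        """One-line summaries for every robot, shown at all times."""
        return [f"{rid}: battery {s.battery_pct:.0f}%, {s.last_message}"
                for rid, s in self.robots.items()]

console = MultiRobotConsole()
console.add_robot("robot-1")
console.add_robot("robot-2")
console.select("robot-2")
print(console.status_board())
```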

“Does the interaction architecture support evolution of platforms?” becomes “Will the interface design allow for adding more sensors and more autonomy?” A robotic system that currently includes a small number of sensors is likely to add more sensors as they become available. In addition, robots will become more autonomous, and the interaction architecture will need to support this type of interaction. If the interaction architecture has not been designed with these possibilities in mind, it may not support growth.
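A hypothetical sketch of such extensibility at the interface level treats each sensor as a plug-in renderer, so that adding a sensor is a single registration rather than an interface redesign (all sensor names and readings here are assumptions for illustration):

```python
# Illustrative sketch only: a display panel that renders whatever
# sensors have been registered, so new sensors slot in without
# changing the panel itself.
class SensorPanel:
    def __init__(self):
        self._renderers = {}  # sensor name -> function mapping a reading to text

    def register(self, name, render):
        self._renderers[name] = render

    def draw(self, readings):
        """Render every reading for which a renderer is registered."""
        return [self._renderers[name](value)
                for name, value in readings.items()
                if name in self._renderers]

panel = SensorPanel()
panel.register("sonar", lambda r: f"sonar: nearest obstacle {r} m")
# A sensor added later, e.g. a thermal camera, needs only a new renderer:
panel.register("thermal", lambda r: f"thermal: max {r} degrees C")
print(panel.draw({"sonar": 0.8, "thermal": 36.4}))
```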

3.2. Assessment Environment

The robots competed in the Reference Test Arenas for Autonomous Mobile Robots developed by the National Institute of Standards and Technology (NIST; Jacoff, Messina, & Evans, 2000, 2001). The arena consists of three sections that vary in difficulty. The yellow section, the easiest to traverse, is similar to an office environment containing light debris (fallen blinds, overturned table and chairs). The orange section is more difficult to traverse due to the variable floorings, a second story accessible by stairs or a ramp, and holes in the second story flooring. The red section, the most difficult, is an unstructured environment containing a simulated pancake building collapse, piles of debris, unstable platforms to simulate a secondary collapse, and other hazardous junk such as rebar and wire cages. Figure 1 shows one possible floor plan for the NIST arena. The walls of the arena are easily modified to create new internal floor layouts, which prevent operators from having prior knowledge of the arena map.

In the arena, victims are simulated using mannequins. Some of the mannequins are equipped with heating pads to show body warmth, motors to create movement in the fingers and arms, tape recorders to play recordings of people calling for help, or all three. Victims in the yellow arena are easier to locate than victims in the orange and red arenas. In the yellow arena, most victims are located in the open. In the orange arena, victims are usually hidden behind obstacles or on the second level of the arena. In the red arena, victims are in the pancake layers of the simulated collapse. Between rounds, the victim locations are changed to prevent knowledge gained during earlier rounds from providing an easier search in later rounds.

Operator stations were placed away from the arena and set up so that the operators would have their backs to the arena. Therefore, the operators were not able to see the progress of their robots in the arena; they had to assess the robots’ situations using their UIs.

3.3. Method for Studying Team Performance

Teams voluntarily registered for the competition. We asked them to participate in our study but made it clear that study participation was not a requirement for competition participation. The incentive to participate in the study was the chance to have their robot system used by a domain expert in the second part of the study.