
The interaction architectures we studied do not support robot evolution. Robot evolution usually involves additional sensors and more autonomy; more sensors will require more windows if the current interaction architectures are simply extended. Although one robot UI we examined does support several modes of autonomy that could ease operator workload, it currently falls to the operator to determine which mode should be used and to switch the robot as necessary. An examination of the percentage of navigation time spent in each of three autonomy modes for Team B, as well as the number of mode switches made during the run, shows that Team B's operator made 20 mode switches in Run 1 and 19 mode switches in Run 2. The chief changed modes 12 times during his run, with the majority of the switches occurring at the end of his time with the robot. It would be more helpful if the robot could determine the necessary mode from sensor information and suggest it to the operator, rather than relying on the operator to constantly revisit the decision about the optimal mode.
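
As a rough illustration of how such a suggestion mechanism might work, the sketch below maps a few sensor readings to a recommended mode. The sensor fields, thresholds, and mode labels are assumptions made for illustration; they are not taken from the interfaces we studied.

    # Minimal sketch of sensor-driven autonomy-mode suggestion.
    # All fields, thresholds, and mode names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SensorSnapshot:
        min_obstacle_dist_m: float   # closest range reading, in meters
        comms_ok: bool               # link to the operator station is healthy
        localization_conf: float     # 0.0 (lost) to 1.0 (certain)

    def suggest_mode(s: SensorSnapshot) -> str:
        """Return an autonomy mode for the operator to confirm or override."""
        if not s.comms_ok:
            return "autonomous"      # without a link, the robot must act alone
        if s.min_obstacle_dist_m < 0.3:
            return "teleoperation"   # tight quarters call for direct control
        if s.localization_conf > 0.8:
            return "autonomous"
        return "shared"              # blend operator commands with safeguarding

The interface would surface the suggestion for the operator to accept or override, keeping the human in the loop while removing the burden of continuously monitoring the mode decision.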

6.1. Victim Identification

Victims must be identified and located accurately so that the rescue teams can construct plans to reach them. In Run 3, Team B found no victims, yet spent 23% of the time trying to identify victims that the operator thought he saw. In Run 1, Team A spent additional time obtaining a clearer image of a victim for positive identification. Sending rescue teams to extract victims is not without risk, so the operator needs to be reasonably confident of his victim assessment. Video is currently the most utilized means of victim identification, but additional sensors are needed to assess victim state more accurately. Video transmission is difficult even in semicontrolled circumstances such as the competitions; in actual search and rescue situations, the interference in communications will likely be worse. Relying on video alone makes victim identification difficult.
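
If additional sensors become available, their outputs could be combined into a single confidence score that the operator weighs before committing a rescue team. The sketch below shows one simple weighting scheme; the cue names and weights are assumptions for illustration, not part of any system we observed.

    # Minimal sketch of fusing normalized (0..1) detector outputs into one
    # victim-confidence score. The weights are illustrative assumptions.
    def victim_confidence(video_score, thermal_score=None, audio_score=None):
        """Weighted average of whichever cues are currently available."""
        cues = [(0.5, video_score)]
        if thermal_score is not None:
            cues.append((0.3, thermal_score))
        if audio_score is not None:
            cues.append((0.2, audio_score))
        total_weight = sum(w for w, _ in cues)
        return sum(w * s for w, s in cues) / total_weight

A score near 1.0 would justify marking a victim, while a middling score would signal that the operator should gather more evidence before committing rescuers.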

6.2. Time on Tasks

Our analysis showed that failures consume a substantial percentage of time during runs. Two teams lost time to failures, which ranged from communications losses to other issues such as latency. GUIs need low latency if operators are using teleoperation to control the robot; real-time situation awareness, which high latencies hamper, is an issue for all types of control.

A large percentage of time was also spent on logistics. Multiple robots are beneficial, especially when robots of different sizes are deployed (e.g., using smaller robots to probe small voids). However, deployment mechanisms need to be carefully analyzed for maximum efficiency. Team A used multiple robots with a low percentage of time devoted to logistics; however, when the chief (a less experienced operator) deployed a second robot, a slightly elevated percentage of time was needed for logistics.
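
A GUI can at least make latency visible to the operator. One lightweight approach is a watchdog that times the round trip of each video frame or command acknowledgment and warns when it exceeds a threshold. The sketch below is illustrative only, with an assumed 0.5 s threshold and hypothetical hooks into the transport layer.

    # Minimal sketch of a latency watchdog for a teleoperation GUI.
    # The threshold and the hook names are assumptions for illustration.
    import time

    LATENCY_WARN_S = 0.5  # assumed limit for comfortable direct driving

    class LatencyMonitor:
        def __init__(self):
            self.sent_at = {}

        def on_frame_sent(self, frame_id):
            self.sent_at[frame_id] = time.monotonic()

        def on_frame_acked(self, frame_id):
            sent = self.sent_at.pop(frame_id, None)
            if sent is None:
                return None
            latency = time.monotonic() - sent
            if latency > LATENCY_WARN_S:
                print(f"WARNING: {latency:.2f} s round trip; "
                      "consider move-and-wait control")
            return latency

Surfacing the measurement lets the operator switch deliberately to move-and-wait teleoperation rather than discovering the lag by bumping a wall.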

6.3. Navigation in a Difficult Environment

Bumping into walls is penalized in the competitions. In an actual disaster, a robot that bumped a wall could trigger more damage and cause a wall to collapse. The test arena has a number of partitions that simulate walls and windows. Different wall coverings are difficult to detect with various sensors; for example, the results of our study showed the difficulty of relying on vision to detect Plexiglas.

Obstacles in the arena consist of office furnishings and building material debris: chairs, papers, Venetian blinds, pipes, electrical cords, and bricks or cinder blocks. As robot mobility increases, the test arena will incorporate more realistic obstacles. The goal is to avoid these obstacles, but that is not always possible.
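
One way to catch obstacles that defeat vision, such as Plexiglas, is to cross-check a range sensor against what the camera reports as free space: sonar returns an echo from the pane that the camera sees through. The sketch below illustrates that cross-check under assumed inputs; it is not a method used by the teams we observed.

    # Minimal sketch of flagging possible transparent obstacles by comparing
    # a sonar range against vision's estimate of free space. Inputs assumed.
    def transparent_obstacle_suspected(sonar_range_m, vision_free_m,
                                       tolerance_m=0.5):
        """Sonar sees something much closer than vision does: treat the
        mismatch as a possible transparent obstacle and slow the robot."""
        return sonar_range_m + tolerance_m < vision_free_m

When the check fires, the interface could slow the robot and alert the operator, reducing the chance of bumping a wall that the video feed does not show.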