The setting for our study was a 3-day disaster response training exercise in Miami, Florida, on November 28 through 30, 2001. The exercise consisted of 2 days of intensive hands-on training, which included collapse shoring, concrete breaching and breaking, heavy metal cutting and crane operations, technical search operations, and weapons of mass destruction-hazardous materials operations, followed by a 16-hr deployment drill on an actual collapse site. As part of the Technical Search Operations module, which exposes course participants to the latest technical search innovations, all students received 20 min of awareness-level instruction in rescue robotics conducted by researchers from the Center for Robot-Assisted Search and Rescue. The awareness training course was designed to provide the students with a mental model of how the robot worked and to provide an opportunity for hands-on experience teleoperating a robot (although time constraints precluded all students from having the chance to do so).

For the 16-hr high-fidelity response drill, a two-story warehouse in a light industrial park near the airport was partially collapsed, creating a large rubble pile. In addition to the collapsed building, two large rubble piles and an abandoned automobile that was set on fire were used for training operations. Figure 2 shows the layout of the collapse site and debris and rubble piles. The site was not simplified, and significant safety hazards were present. Large chunks of concrete walls, tangled rebar, and loose electrical wiring posed the main hazards to people on the piles.
Weather and visibility conditions are not always conducive to rescue operations, but in this case, the night was clear (almost full moon) and the temperature was normal for the area (70 °F).

In this kind of disaster environment, the visual technical search task consists of four activities, in order of importance: search for signs of victims, report findings to the team or task force leader, note any relevant structural information that might impact the further investigation of the void, and estimate the volume that has been searched and map it relative to the rubble pile. The technical search task is highly focused and generally limited to a short period of time in which the searcher is called onto the pile, carries the technical equipment to the site, sets it up, gets results, and then returns to the forward station. In the Miami drill, robots collected this visual data. In our study, we attempted to capture how the operator used the robot to search for signs of survivors and to note structural information, because these were the activities with direct human-robot interaction.

Participants
The 5 participants in the study were 3 student participants in the disaster response training exercise and 2 instructors. These participants were a subset of the approximately 75 students and approximately 15 instructors involved in the drill, who can be characterized as either current USAR task force members serving as instructors or completing required recertification training hours, or first responders (firefighters and emergency medical technicians) seeking USAR certification to be eligible to serve on a regional task force team. The majority of students had no previous USAR experience in a large-scale disaster, such as the collapse of a large building due to an explosion or earthquake.

Apparatus
The apparatus used in the study consisted of three robotic systems: two Inuktun Microtracs System robots (see Figure 3a) and an Inuktun Micro Variable Geometry Tracked Vehicle robot (see Figure 3b). The user interface offers little information beyond a visual view of the environment from the robot's camera; scale, dimensionality, and color resolution are known constraints. The three robots are small, tracked platforms equipped with a color CCD camera on a tilt unit and two-way audio through a set of microphones and speakers on the robot and operator control unit. The Inuktun Micro Variable Geometry Tracked Vehicle is a polymorphic robot that can change from a flat position to a raised, triangular position. Its design allows the vehicle to change shape while moving to meet terrain and obstacle challenges, and it is capable of lifting the camera to a higher vantage point (about 10.5 in. high when raised to maximum height). All three robots are powered and controlled through a 100-ft tether cord that connects the operator control unit and the robot. The Inuktun robots have limited communication capability; the operator is given basic control capability: traversal, power, camera tilt, focus, illumination, and height change for the polymorphic robot.

Procedure
At the start of the drill, participants were checked in, divided into three teams, assigned roles, and transported to the site. Once at the site, they established scene security, set up the base of operations, and conducted site safety and operational surveys. Field operations commenced at 10:30 p.m., approximately 4 hr after the drill began. During field operations, the robot cache was available for deployment on call. Robots were deployed in three areas of the hot zone, as shown in Figure 2. When a team requested a robot via radio, two or three researchers would move to the requested location and set up the robot for use, explaining the controls to the operator as needed. A student or researcher was designated as tether manager for the operator (i.e., uncoiled and recoiled the tether cord, and sometimes shook or popped the cord to free it from debris).

Our data collection was a modified version of the procedure used by Casper (2002). Two cameras simultaneously recorded the view through the robot's camera (what it sees) and a view of the operator and the operator control unit (what the operator is seeing and doing). When the robot was visible, a third video unit recorded an external view of the robot in use.

The robot was deployed five times (see Figures 2 and 4). Three of the five runs (Runs 1, 2, and 5) were initiated on request by the teams; the other two runs
(Runs 3 and 4) were initiated by instructors to gain hands-on experience with the robots. The first two runs searched the main rubble pile located next to the collapsed building. In the next two runs, areas that had already been searched by the teams were explored. The fifth run used the robot during victim recovery operations on the smaller rubble pile in an attempt to get a visual of or pathway to the victim. In each run, members of the team self-organized to use the robot. In Runs 2, 4, and 5, two members of the team were involved in robot operation. In Runs 1 and 3, a third participant got involved spontaneously by looking over the shoulder of the operator and interacting with the operator. The remainder of the team was either occupied with other tasks or passively observed. The five runs yielded 66 min, 16 sec of videotape for analysis.

4.2. Measures
As we observed the operators throughout the course of the drill, we realized that they did not work alone, and that communications between team members played an important role in operator behaviors and performance. Therefore, we sought to measure communication between the operator and other team members as well as situation awareness. Because robot-assisted search and rescue is a relatively new field, there are no existing domain-relevant methods of analysis (e.g., communication coding schemes). The Federal Aviation Administration's Controller-to-Controller Communication and Coordination Taxonomy (C4T; Peterson, Bailey, & Willems, 2001) uses verbal information to assess team member interaction from communication exchanges in an air traffic control environment. The C4T is applicable to our work in that it captures the "how" and "what" of team communication by coding form, content, and mode of communication. Our goal, however, was twofold: not only to capture the how and what of USAR robot operator teams, but also the "who," and to capture observable indicators of robot operator situation awareness. Therefore, we developed a new coding scheme, the Robot-Assisted Search and Rescue Communication Coding Scheme (RASAR CCS). Although the development of the coding scheme was guided by the structure of the C4T and incorporates relevant portions of it, the RASAR CCS is domain specific. It was developed to examine USAR robot operator interactions with team members and to capture observable indicators of robot operator situation awareness.

The RASAR CCS addresses our goals of capturing team process and situation awareness by coding each statement on four categories: (a) speaker-recipient dyad, (b) form or grammatical structure of the communication, (c) function or intent of the communication, and (d) content or topic of the communication. By examining dyad, form, and content, we can determine which team members are interacting and what they are communicating about. Similarly, exploring elements of content and function allows us to examine indicators of operator situation awareness. The development of the RASAR CCS is described later, and the complete coding scheme is provided in Figure 5.

Dyads
Speaker-recipient dyad codes were developed as a function of speaker-recipient pairs of individuals anticipated in a USAR environment. Nine dyads were constructed to describe conversations between individuals. Five dyad codes classify statements made by the operator to another person (or persons): the tether manager, another team member, the researcher or robot technician, the group, or other. The remaining four classify statements received by the operator from another person: the tether manager, another team member, the researcher or robot technician, or other.

The primary dyads involve the operator and tether manager (the person manipulating the robot's tether during teleoperation), operator and researcher, or operator and another team member. The element operator-other is used when the operator addresses a specific person who does not match one of those roles. The operator-group dyad is used when the operator is addressing those present as a group, or when the operator's statements are not clearly addressed to a specific individual. Verbalizations between individuals that did not include the operator were not coded.

Form
The form category contains the following elements: question, instruction, comment, or answer. (A statement can be a whole sentence or a meaningful
phrase or sentence fragment.) Statements not matching these categories are classified as undetermined.

To establish content and function codes, a subset of operator statements (177 of the 272 total statements) were subjected to a Q-sort content analysis (Sachs, 2000). Two subject matter experts not involved in the study sorted operator statements on content (according to the topic being discussed) and on function (according to the high-level purpose of the statement). Q-sort categories were reviewed and refined by two additional subject matter experts to ensure the elements reflected the domain of content and function.

Content
The Q-sort analysis based on content yielded seven elements representing the content category:

1. Statements related to robot functions, parts, errors, or capabilities (state of the robot).
2. Statements describing characteristics, conditions, or events in the search environment (state of the environment).
3. Statements reflecting associations between current observations and prior observations or knowledge (state of information gathered).
4. Statements surrounding the robot's location, spatial orientation in the environment, or position (robot situatedness).
5. Indicators of direction of movement or route (navigation).
6. Statements reflecting search task plans, procedures, or decisions (search strategy).
7. Statements unrelated to the task (off task).

The first four content elements are necessary for building and maintaining situation awareness in search operations, whereas the elements of navigation and search strategy require situation awareness. Situation awareness is generated through information perceived (Level 1) and comprehended (Level 2) about the robot and environment. Because navigation and search strategy are elements that cannot be executed efficiently without situation awareness, statements reflecting these are indicators of operator situation awareness (Level 3).
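To make the structure of the coding scheme concrete, the four RASAR CCS coding categories and the situation-awareness-indicator distinction can be sketched as a minimal data model. This is an illustrative Python sketch only: the class names, the example dyad/form/function labels, and the helper function are ours for exposition, not part of the published instrument.

```python
from dataclasses import dataclass
from enum import Enum


class Content(Enum):
    # The seven content elements yielded by the Q-sort analysis.
    ROBOT_STATE = "state of the robot"
    ENVIRONMENT_STATE = "state of the environment"
    INFORMATION_STATE = "state of information gathered"
    ROBOT_SITUATEDNESS = "robot situatedness"
    NAVIGATION = "navigation"
    SEARCH_STRATEGY = "search strategy"
    OFF_TASK = "off task"


@dataclass
class CodedStatement:
    """One statement coded on the four RASAR CCS categories."""
    dyad: str       # speaker-recipient pair, e.g. "operator-tether manager"
    form: str       # question, instruction, comment, answer, or undetermined
    function: str   # high-level purpose of the statement (illustrative label)
    content: Content

# Navigation and search strategy cannot be executed efficiently without
# situation awareness, so statements carrying these content codes are
# treated as indicators of operator SA (Level 3).
SA_INDICATOR_CONTENT = {Content.NAVIGATION, Content.SEARCH_STRATEGY}


def indicates_situation_awareness(statement: CodedStatement) -> bool:
    return statement.content in SA_INDICATOR_CONTENT


stmt = CodedStatement(
    dyad="operator-tether manager",
    form="instruction",
    function="direct action",  # hypothetical function label
    content=Content.NAVIGATION,
)
print(indicates_situation_awareness(stmt))  # True
```

A coding pass over a transcript would then reduce to producing one `CodedStatement` per statement and tallying dyads, forms, and SA-indicator content codes.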