• Provide fused sensor information to lower the cognitive load on the user. All three interfaces that presented multiple data types (Systems A, B, and C) required the user to mentally fuse video with other sensor streams.
• Provide UIs that support multiple robots in a single display. We saw in Section 5.3 that the number of commands doubled when two robots were used instead of one, and these commands had to be entered in two separate windows.
• Minimize the use of multiple windows. With additional sensor fusion, more information could be displayed in a single window.
• Provide more spatial information about the robot in the environment. Spatial information could take the form of a map, discussed earlier, or some other method. At the very least, operators must be aware of their robots’ immediate surroundings to avoid bumping into obstacles or victims.
• Provide robot help in deciding which level of autonomy is most useful. Team B’s system had four levels of autonomy available, and the operator needed to select the mode appropriate for the situation at hand. The sensor data on the robot could be processed to assist with this decision. For example, we noticed that Team B’s operator switched to autonomous mode whenever he felt he was in a very tight situation; the robot could automate this switch, or at least suggest it, as illustrated in the sketch after this list.
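To make the last guideline concrete, here is a minimal sketch, assuming hypothetical sensor names, mode labels, and clearance thresholds that are not taken from any team’s actual system, of how range readings could drive a mode suggestion rather than leaving the choice entirely to the operator.

```python
# Minimal sketch (assumed names and thresholds): suggest an autonomy mode
# from range-sensor data, mirroring the observation that the operator
# switched to autonomous mode whenever the robot was in a tight space.

from typing import Optional, Sequence

TIGHT_CLEARANCE_M = 0.30  # assumed: clearances below this count as "tight"
OPEN_CLEARANCE_M = 0.75   # assumed: clearances above this are comfortable for teleoperation


def suggest_autonomy_mode(ranges_m: Sequence[float], current_mode: str) -> Optional[str]:
    """Return a suggested mode ('autonomous' or 'teleoperation'),
    or None if the current mode still seems appropriate."""
    clearance = min(ranges_m)  # closest obstacle reported by the range sensors
    if clearance < TIGHT_CLEARANCE_M and current_mode != "autonomous":
        return "autonomous"     # tight space: let the robot handle fine motion
    if clearance > OPEN_CLEARANCE_M and current_mode != "teleoperation":
        return "teleoperation"  # open space: hand control back to the operator
    return None                 # no change recommended


# Example: the sonar ring reports a 0.2 m clearance while the operator teleoperates.
print(suggest_autonomy_mode([1.2, 0.8, 0.2, 1.5], "teleoperation"))  # -> autonomous
```

The interface could either apply the returned mode automatically or, less intrusively, surface it as a prompt, so the operator remains aware of why a mode change is being made or recommended.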