ABSTRACT

This chapter presents a multimodal system for trajectory programming and online control, combining augmented reality (AR), electromyography (EMG) sensing, gesture control, speech control, and tactile feedback. Gestures are a natural means of human communication, and many researchers have applied them to human-robot interaction. While the uptake of AR-based robot programming systems is growing, most current AR systems lack a force feedback channel. Yet force feedback during task execution is important and informative for achieving task success. Within this work, our goal is to integrate haptics into an AR interaction system as a low-attention feedback mechanism that gives users more control over the human-robot interaction. Our software system allows the user to visualize the robot, create trajectories using gaze and speech, preview trajectories, execute them with gestures, and adjust the trajectory's force profile online through muscle activation, while providing tactile and visual force feedback during execution.