ABSTRACT

Learning in Situated Domains

Introduction

Reinforcement learning (RL) has been successfully applied to a variety of domains and has recently been attempted on situated agents such as mobile robots. While simulation results are encouraging, work on physical robots has been slow to replicate that success. The key challenges of situated domains include: 1) modeling a combination of discrete and continuous state spaces based on multimodal perceptual inputs; 2) modeling real-world events that may be neither caused directly by the agent nor perceived by it, but that subsequently affect its behavior; 3) the limited number of learning trials reasonably available to an agent, and the non-uniform exploration of the learning space mandated by the agent's external environment; 4) dealing with multiple concurrent and sequential goals; 5) modeling a combination of discrete and continuous, immediate and delayed, multimodal feedback that may be available to the agent.
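To make the contrast concrete, the sketch below shows the kind of tabular Q-learning that succeeds in simulation: it assumes a small discrete state space, a known action set, reward caused directly by the agent's own actions, and thousands of cheap trials. Every one of these assumptions is strained by the situated challenges enumerated above. The corridor environment and all parameter values are illustrative assumptions, not taken from the text.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a small, fully observable, discrete MDP.

    This relies on assumptions that situated domains violate:
    a discrete enumerable state space, rewards triggered only by the
    agent's own actions, and many inexpensive learning trials.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection: uniform exploration,
            # something a physical robot's environment rarely permits.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a, rng)
            # One-step temporal-difference update toward r + gamma * max Q(s')
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical 5-state corridor: action 1 moves right, action 0 moves
# left; reward 1.0 is given only on reaching the rightmost state.
def corridor(s, a, rng):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(5, 2, corridor)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(5)]
```

After training, the greedy policy moves right in every non-terminal state. Scaling this scheme to a robot with continuous multimodal sensing, externally caused events, and delayed multimodal feedback is precisely what the challenges listed above make difficult.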