ABSTRACT

An evolutionary view of spatial cognition must start by explaining “simpler” tasks by simple mechanisms and then try to scale these mechanisms to deal with more advanced problems. One such scalable model is the view-graph approach to spatial cognition. View graphs allow for goal-dependent flexibility and planning of spatial behavior, i.e., spatial cognition. The model has been formulated as a computer program that takes real sensor data as input and produces motor commands as output. It has been implemented on autonomous robots and shown to generate biologically plausible spatial behavior. Metric information concerning the nodes of a graph can be obtained from local metric measurements such as the distances between connected nodes and the angles between adjacent steps of a route. A modified multidimensional scaling algorithm can then be used to generate a global metric embedding of the graph.
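The final step can be illustrated with a minimal sketch. The example below is not the modified MDS algorithm referred to above (which also exploits angle measurements); it is a classical-MDS baseline under simplifying assumptions: a small hypothetical view graph with measured distances on its edges, pairwise distances propagated by Floyd–Warshall, and a planar embedding recovered by double centering and eigendecomposition.

```python
import numpy as np

# Hypothetical view graph: 4 views connected in a cycle, with locally
# measured distances on the edges (the "local metric measurements").
n = 4
D = np.full((n, n), np.inf)
np.fill_diagonal(D, 0.0)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
for i, j, d in edges:
    D[i, j] = D[j, i] = d

# Floyd-Warshall: propagate the local edge distances to shortest-path
# distances between all pairs of nodes.
for k in range(n):
    D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])

# Classical MDS: double-center the squared distance matrix and take the
# top-2 eigenvectors as 2-D coordinates, i.e. a global metric embedding.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
X = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
print(X.shape)  # (4, 2): planar coordinates for the four views
```

Because shortest-path distances on a graph only approximate Euclidean distances, the embedding is an approximation; the modified algorithm mentioned in the text additionally constrains it with the measured angles between adjacent route steps.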