ABSTRACT

Software agents are naturally related to human agents, since the former are frequently designed to replicate or mirror the latter with varying degrees of precision. Brian Arthur provided the two earliest guidelines for this mirroring function (Arthur, 1991, 1993). Following the practice of calibration in general economic modeling, he proposed calibrating artificial agents as well, i.e., calibrating the designs of software agents against real data. He suggested two criteria on which such calibration can be based: the first is statistical, whereas the second comes from AI, namely the Turing test.