ABSTRACT

The goal of Artificial Intelligence research (AI) has shifted somewhat from 'making computers smarter' to 'understanding intelligent processes independent of their particular physical realization'. The explanatory content of an AI model is independent of the kind of mechanism used in the interior of the model. This stems from the demand that explanations should uncover the essential and leave out the inessential, a requirement worth keeping in mind when investigating the value of AI models for explanation. A successful AI program is a proof that it is possible to realize some 'intelligent' behavior through effective procedures or 'symbol systems', and that it is possible to realize this behavior in the very way specified by those procedures. Moreover, AI research has certainly revealed many problems which psychological and philosophical research had not previously exposed. AI modelling certainly does provide us with deeper experience in recognizing what makes it possible for a system to produce certain 'intelligent' behavior.