ABSTRACT

This chapter provides a detailed overview of the latest tools and techniques for the performance evaluation of intelligent reasoning systems. Performance evaluation serves two main objectives. First, one can test whether the designed system performs satisfactorily for the class of problems for which it is built; this is called validation. Second, one can determine whether the tools and techniques have been properly used to model "the expert"; this is called verification. Besides validation and verification, a third important issue is maintenance, which is required to update the knowledge base and refine the parameters of an expert system. The chapter covers all of these issues in detail.