ABSTRACT

Introduction

In software testing, one is often interested in judging how well a set of test inputs exercises a piece of code — the main idea being to uncover as many faults as possible with a potent set of tests. Unfortunately, it is almost impossible to say quantitatively how many "potential faults" are uncovered by a test set, not only because of the diversity of the faults themselves, but because the very concept of a "fault" is only vaguely defined (Friedman & Voas, 1995). This has led to the development of test adequacy criteria: criteria believed to distinguish good test sets from bad ones. Once a test adequacy criterion has been selected, the question that arises next is how one should go about creating a test set that is "good" with respect to that criterion. That question is the topic of this abstract.
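To make the notion of an adequacy criterion concrete, the following sketch scores a test set against one of the simplest criteria, statement coverage: the fraction of a function's executable lines that the test set actually executes. The function under test (`triangle_type`) and the helper are illustrative inventions, not from the source; the measurement uses only standard-library tracing.

```python
import sys
from dis import findlinestarts

def triangle_type(a, b, c):
    # Toy function under test: classify a triangle by its side lengths.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def statement_coverage(func, test_inputs):
    """Return the fraction of func's executable lines that the test set
    executes — a crude statement-adequacy score in [0, 1]."""
    covered = set()

    def tracer(frame, event, arg):
        # Record each line executed inside func's own frame.
        if event == "line" and frame.f_code is func.__code__:
            covered.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)

    # All line numbers that appear in func's bytecode.
    all_lines = {ln for _, ln in findlinestarts(func.__code__)
                 if ln is not None}
    return len(covered & all_lines) / len(all_lines)
```

Under this criterion, a test set containing only `(1, 1, 1)` is less adequate than one that also exercises the isosceles and scalene branches, since the latter executes strictly more of the function's statements.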