Once the necessary preliminaries are accomplished, the real testing can begin. A unit test involves an input and an expected output from that input. (A “single” input can be arbitrarily complex, in general, but since unit testing is usually performed on relatively small pieces of code, these inputs tend to be manageable.)
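For instance, a unit test in this style might look like the following minimal Python sketch; the slope function and all names here are illustrative examples, not drawn from this entry:

    import unittest

    def slope(p1, p2):
        # Unit under test: slope of the line through two (x, y) points.
        (x1, y1), (x2, y2) = p1, p2
        return (y2 - y1) / (x2 - x1)

    class TestSlope(unittest.TestCase):
        def test_known_slope(self):
            # One test: a single input (two points) paired with the
            # expected output (a slope of 2.0).
            self.assertEqual(slope((0, 0), (2, 4)), 2.0)

    if __name__ == "__main__":
        unittest.main()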
Both the selection of test inputs and the generation of corresponding expected outputs are challenging tasks. Except for unusually simple units (e.g., a function that takes two binary inputs and returns a binary result), it is generally impractical to test more than a relatively small fraction of the possible inputs; selecting a set of inputs (sometimes called a “test suite”) that will be both efficient and effective at finding faults in the unit is a topic of much research. Two interacting issues influence this research: how to pick individual tests, and groups of tests, that will tend to be effective at revealing faults in the unit, and how to determine when to stop generating tests. Debates rage over both of these issues. Some of the strategies suggested to deal with these problems are random testing (quick to generate, but not targeted),[6] structural coverage (discussed in this entry), functional coverage based on specifications,[7] data flow,[8] and mutation testing.[9]
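To make the contrast concrete, here is a minimal sketch of the first of these strategies, random testing, applied to the illustrative slope function above; the input ranges and number of trials are arbitrary assumptions, and, lacking expected outputs, the sketch can only check that the unit does not crash:

    import random

    def slope(p1, p2):
        # Illustrative unit under test (repeated from the earlier sketch).
        (x1, y1), (x2, y2) = p1, p2
        return (y2 - y1) / (x2 - x1)

    def random_test_slope(trials=1000, seed=42):
        # Random testing: inputs are drawn blindly, so generation is
        # quick, but no particular structure in the unit is targeted.
        rng = random.Random(seed)
        for _ in range(trials):
            p1 = (rng.uniform(-100, 100), rng.uniform(-100, 100))
            p2 = (rng.uniform(-100, 100), rng.uniform(-100, 100))
            if p1[0] == p2[0]:
                continue  # vertical line: slope undefined, skip
            slope(p1, p2)  # without an oracle, only crashes are detected

    random_test_slope()

The missing ingredient in this sketch, an expected output for each random input, is exactly the oracle problem discussed next.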
Once a set of test inputs is selected, it is necessary to determine an expected output for each test input. If the unit produces this expected output when it is executed on the test input, then no fault has been detected, and the unit is assumed to function correctly on that input. However, it is non-trivial, in general, to determine these expected outputs. The problem of determining the “correct” output given a test input is called the “oracle problem.”[10] The number of tests that can be completed in a practical amount of time is generally determined more by the oracle problem than by the time taken to choose test inputs.
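One common partial remedy, where a trusted (if slower) reference implementation happens to exist, is to use it as the oracle. The following sketch is an illustrative example under that assumption, not a technique prescribed by this entry; it checks a hand-written sort against Python’s built-in sorted():

    import random

    def insertion_sort(xs):
        # Hypothetical unit under test: a hand-written ascending sort.
        result = []
        for x in xs:
            i = 0
            while i < len(result) and result[i] < x:
                i += 1
            result.insert(i, x)
        return result

    def test_against_reference_oracle(trials=500, seed=0):
        # Oracle: the built-in sorted() supplies the expected output for
        # every generated input, so checking each test is cheap here.
        # When no such reference exists, producing expected outputs is
        # what limits how many tests can be completed.
        rng = random.Random(seed)
        for _ in range(trials):
            xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
            assert insertion_sort(xs) == sorted(xs)

    test_against_reference_oracle()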