ABSTRACT

As computers are used in increasingly complex design, computational, monitoring, and control situations, company reputations and product liability risks become closely tied to the adequacy of computer programs. In a research laboratory in particular, the means of storing, computing, and recalling data must improve as the technology advances. Researchers cannot afford the time to perform long manual calculations or to take individual readings at hundreds of measurement points around a test model; this work must be done automatically and efficiently, and therefore by computer. Yet there is no practical means of proving computer programs correct, particularly those developed for one-of-a-kind experiments or tests. There is no universal testing procedure that can be invoked to demonstrate program correctness, i.e., one that operates in isolation from program requirements, design structure, and documentation. A variety of industry and government investigations have tried to solve the problem of how to "test quality in" to computer programs. Huge sums of money have been spent on the development of computer software and, unfortunately, schedule and budget overruns have been a way of life for software development; often the final products were useless or, at best, something less than was anticipated. In an effort to overcome these problems, rigorous development and documentation procedures have been devised. Though costly to implement, these procedures do reduce the element of surprise in software products and improve communication between software developers/users and management.