ABSTRACT

Most companies do not even begin measuring quality until after unit test, so all requirements and design defects are excluded, as are defects found by static analysis and unit testing. The result is a defect count that understates the true number of bugs by more than 75%. In fact, some companies do not measure defects until after the software is released. Thus most companies cannot safely use their own historical data for predictive purposes. When benchmark consulting personnel go on-site and interview managers and technical staff, these errors and omissions can be partially corrected. A more fundamental problem is that most enterprises simply do not record data for anything beyond a small subset of the activities actually performed. Unfortunately, the bulk of the software literature and many historical studies report information only at the level of complete projects, rather than at the level of specific activities.
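
As a hedged illustration of the scale of this undercounting (the figures below are hypothetical assumptions, not measurements reported in this paper): if measurement begins only after unit test, and the measured phases surface only a quarter of the defects actually injected across requirements, design, code, and documents, the recorded count misses three quarters of the real defect volume.

```latex
% Hypothetical figures: 100 total defects injected, of which only 25
% are found in the phases that are actually measured (post-unit-test).
\[
  \text{measured fraction} \;=\; \frac{D_{\text{measured}}}{D_{\text{total}}}
  \;=\; \frac{25}{100} \;=\; 0.25,
  \qquad
  \text{understatement} \;=\; 1 - 0.25 \;=\; 75\%.
\]
```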