ABSTRACT

An experienced team will understand that it is typically necessary to manipulate, or normalise, all sourced data before it can be used. The attributes for which sourced cost data typically must be normalised include:

• Attribution of overheads
• Outturn versus constant cost
• Quantity effects
• Fixed and variable
• Recurring versus non-recurring
• Currency
• Imperial to metric
• Questionnaire of method of allocation of costs (QMAC)
• Number of users
• Schedule durations
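
Several of these adjustments are mechanical and lend themselves to simple tooling. The following is a minimal sketch in Python of three of the normalisations above (outturn to constant cost, currency, and imperial to metric); the index values, exchange rate, and function names are illustrative assumptions, not published figures or a prescribed method.

```python
# Hypothetical year-indexed inflation table; real work would use a published index.
INFLATION_INDEX = {2021: 100.0, 2022: 104.1, 2023: 108.9, 2024: 112.3}

# Hypothetical flat exchange rate; real work would use rates dated to the cost data.
GBP_PER_USD = 0.79

LB_PER_KG = 2.20462


def outturn_to_constant(cost: float, cost_year: int, base_year: int) -> float:
    """Deflate an outturn (then-year) cost to constant base-year economics."""
    return cost * INFLATION_INDEX[base_year] / INFLATION_INDEX[cost_year]


def usd_to_gbp(cost_usd: float) -> float:
    """Convert a US dollar cost to sterling at the assumed flat rate."""
    return cost_usd * GBP_PER_USD


def per_lb_to_per_kg(cost_per_lb: float) -> float:
    """Re-express a cost per pound as a cost per kilogram (imperial to metric)."""
    return cost_per_lb * LB_PER_KG


if __name__ == "__main__":
    raw = usd_to_gbp(1_250_000.0)                     # currency normalisation
    constant = outturn_to_constant(raw, 2023, 2021)   # constant-cost normalisation
    print(f"Normalised cost: £{constant:,.0f} (2021 economics)")
```

In practice each adjustment would be applied in a fixed, documented order as part of the organisation's established normalisation process, so that every data set is treated uniformly.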

A mature organisation will have established processes for how data should be normalised, and these will be applied uniformly across the business. Typically, data manipulation should conclude with a book of anomalies. The book of anomalies identifies changes to the data since the last data-gathering exercise; for example, it might highlight cost items that were previously directly attributable to projects but have now become overheads, which would account for a reduction in the costs gathered in this year's exercise compared with the previous year. Other anomalies might alert the user of the data to abnormal trends or to peculiarities in projects, for example a project that has moved from the development phase to the manufacturing phase.

The complete data set needs to be compared with the data set from the previous year and any trends identified. Any outliers from the trends experienced in the past should be investigated rigorously and explained. Once human error has been eliminated, you should be able to document an explanation in the anomalies register.
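
The year-on-year comparison step can be sketched as a simple screening pass that produces candidate entries for the anomalies register. The sketch below assumes each data set is a mapping of cost-item name to normalised cost, and the 20% movement threshold and record layout are illustrative assumptions only; flagged items still require the rigorous human investigation described above.

```python
from typing import Dict, List


def build_anomaly_candidates(
    current: Dict[str, float],
    previous: Dict[str, float],
    threshold: float = 0.20,  # assumed tolerance for year-on-year movement
) -> List[dict]:
    """Flag items that are new, missing, or moved more than the threshold."""
    candidates: List[dict] = []
    for item, cost in current.items():
        prior = previous.get(item)
        if prior is None:
            candidates.append({"item": item, "issue": "new item", "current": cost})
        elif prior and abs(cost - prior) / prior > threshold:
            candidates.append({
                "item": item,
                "issue": f"moved {100 * (cost - prior) / prior:+.0f}% vs last year",
                "current": cost,
                "previous": prior,
            })
    for item, prior in previous.items():
        if item not in current:
            # e.g. a cost previously attributed to a project, now an overhead
            candidates.append({"item": item, "issue": "no longer reported",
                               "previous": prior})
    return candidates
```

Each candidate record would then be investigated and, once human error has been ruled out, written up with its explanation in the anomalies register.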