ABSTRACT

What do assessment centers measure? How well do assessment center ratings predict job performance? How can the reliability and validity of assessment center ratings be optimized? These are the primary questions underlying the vast majority of the research literature on assessment centers. As established in the previous chapters, assessment centers (ACs) emerged and continue to be used primarily as an approach to measuring individual differences relevant to work performance. Thus, like any measurement tool, a fundamental concern for ACs is establishing how well they measure the individual differences they purport to measure and the appropriateness of the inferences drawn from those measures. In essence, this defines the construct validation process (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999; Society for Industrial and Organizational Psychology, 2003). Accordingly, a primary theme throughout the AC literature has been the accumulation of evidence pertaining to the interpretation of AC ratings. The primary goal of the present chapter is to provide a general overview of the methods and analytic approaches that have been applied in this endeavor. Toward this end, we begin with a discussion of the scores that result from the use of the AC method, that is, AC ratings. We then turn to the ways in which these ratings have been analyzed in order to address the underlying questions of interest.