As a technique, distributive evaluation involves seeking multiple perspectives on, and responses to, student compositions. For instance, works in students’ electronic portfolios at Alverno College can be read by multiple audiences: various course instructors, advisors, and other administrators (Diagnostic Digital Portfolio, 2000; “Finding Proof,” 1997; Iannozzi, 1997; Hutchings, 1996). Each of these audiences brings its own expertise and perspective to its readings. If individuals are encouraged to record their responses, and these responses are then associated with the compositions in a database available to the instructor and other evaluators, it becomes possible to build a situated evaluation of a student’s composition. Such an evaluation acknowledges that writing, composing, and communicating are localized social activities by incorporating disparate responses from teachers, student-authors, peers, and outside audiences. Unlike the commonly used 1- to 6-point holistic reading or the more detailed rubric-based multitrait scoring systems, a distributive assessment system does not insist that all readers read alike. Research (Elbow, 1997; Hirsch, 2003) has shown that different readers read differently in unconstrained settings. The objective of a writing assessment system that values validity, that is, an accurate evaluation of how well a student writes, should be to include multiple, and potentially different, responses to a composition.
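The data model implied here, compositions linked to many recorded responses from readers in different roles, can be illustrated with a minimal sketch. This is not a description of Alverno College’s actual system; the class and field names (`Response`, `Composition`, `role`, and so on) are hypothetical, chosen only to show how disparate readings might be stored side by side without being forced into a single score:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Response:
    """One reader's recorded reaction to a composition."""
    reader: str    # who responded
    role: str      # hypothetical role label, e.g. "instructor", "peer", "advisor"
    comments: str  # free-form response; readers need not agree

@dataclass
class Composition:
    """A student work together with its accumulated responses."""
    author: str
    title: str
    responses: List[Response] = field(default_factory=list)

    def record(self, response: Response) -> None:
        # Associate a new response with this composition,
        # as the database in the passage above would.
        self.responses.append(response)

    def by_role(self, role: str) -> List[Response]:
        # Let an evaluator filter readings by audience type.
        return [r for r in self.responses if r.role == role]

essay = Composition(author="A. Student", title="Draft 1")
essay.record(Response("Dr. Lee", "instructor", "Strong thesis; thin evidence in part two."))
essay.record(Response("J. Kim", "peer", "I got lost in section two."))
```

Note that nothing in this structure reconciles the instructor’s reading with the peer’s; the point of a distributive system is precisely to preserve both.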