Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting, and real-time feedback mechanisms. One potential limitation to realizing the benefits of computer-based assessment, in both instructional assessment and large-scale testing, lies in designing questions and tasks with which computers can effectively interface (i.e., for scoring and score-reporting purposes) while still gathering meaningful measurement evidence.
This paper introduces a taxonomy, or categorization, of 28 innovative item types that may be useful in computer-based assessment. Organized along the degree of constraint on the respondent's options for answering or interacting with the assessment item or task, the proposed taxonomy describes a set of iconic item types termed "intermediate constraint" items. These item types have responses that fall somewhere between fully constrained responses (i.e., the conventional multiple-choice question), which can be far too limiting to tap much of the potential of new information technologies, and fully constructed responses (i.e., the traditional essay), which can be a challenge for computers to meaningfully analyze even with today's sophisticated tools. The 28 example types discussed in this paper are based on seven categories of ordering, involving successively decreasing response constraints from fully selected to fully constructed. Each category of constraint includes four iconic examples. The intended purpose of the proposed taxonomy is to provide a practical resource for assessment developers as well as a useful framework for the discussion of innovative assessment formats and their uses in computer-based settings.