
Constructed Response Scoring

Establishing an Evidence-based Validity Argument for Performance Assessment

Recent initiatives have proposed to use performance tasks in ambitious new ways, including monitoring student growth and evaluating teacher effectiveness.

Lai, Emily R., Wei, Hua, Hall, Erika L., Fulkerson, Dennis 09-01-2012
Improving Text Complexity Measurement through the Reading Maturity Metric

The purposes of this paper are to describe how Word Maturity has been incorporated into Pearson’s text complexity measure, to present initial comparisons between this new measure of text complexity and traditional readability measures, and to address measurement issues in the development and use of text complexity measurements.

Landauer, Tom, Way, Walter D. 04-01-2012
Pearson's Text Complexity Measure

Pearson's Knowledge Technologies group has developed a new measure of text complexity that is fundamentally different from current readability measures.

Landauer, Thomas K. 05-02-2011
Pearson's Automated Scoring of Writing, Speaking, and Mathematics

This document describes several examples of current item types that Pearson has designed and fielded successfully with automatic scoring.

Streeter, Lynn, Bernstein, Jared, Foltz, Peter, DeLand, Donald 05-01-2011
Application of Latent Trait Models to Identifying Substantively Interesting Raters

This study demonstrates how existing latent trait modeling procedures can identify groups of raters who may be of substantive interest to those studying the experiential, cognitive, and contextual aspects of ratings.

Wolfe, Edward W., McVay, Aaron 04-01-2011
Through-Course Common Core Assessments in the United States: Can Summative Assessment Be Formative?

In this paper, we present a design for enhancing the formative uses of summative through-course assessments.

Way, Walter D., Larsen McClarty, Katie, Murphy, Dan, Keng, Leslie, Fuhrken, Charles 04-01-2011
Considerations for Performance Scoring When Designing and Developing Next Generation Assessments

This white paper explores the interactions between test design and scoring approach, and the implications for performance scoring quality, cost, and efficiency in next generation assessments.

Jones, Marianne, Vickers, Daisy 03-01-2011
Rater Effects as a Function of Rater Training Context

This study examined the influence of rater training and scoring context on the manifestation of rater effects in a group of trained raters.

Wolfe, Edward W., McVay, Aaron 10-01-2010
Bulletin #17: A Comparison of Distributed and Regional Scoring

Distributed scoring provides access to a wider pool of readers than regional scoring alone, allowing more readers to be recruited and permitting greater selectivity in reader recruitment. This can increase the efficiency of reader training and facilitate shorter turnaround times in performance scoring, which in turn shortens the time between test administration and the reporting of test scores.

Keng, Leslie, Davis, Laurie L., Ragland, Shelley 09-01-2010
Bulletin #16: Pearson’s Automated Scoring

Pearson’s automated scoring technology, the Intelligent Essay Assessor (IEA), delivers fast, accurate, and valid assessment scores.

Knowledge Technologies 07-01-2010
Automated Scoring for the Assessment of Common Core Standards

This paper discusses automated scoring as a means for helping to achieve valid and efficient measurement of abilities that are best measured by constructed-response (CR) items.

Williamson, David M., Bennett, Randy E., Lazer, Stephen, Bernstein, Jared, Foltz, Peter W., Landauer, Thomas K., Rubin, David P., Way, Walter D., Sweeney, Kevin 07-01-2010
Conference Reports: Constructed Response Scoring

An increasing number of large-scale assessments contain constructed response items such as essays for the advantages they offer over traditional multiple-choice measures. Writing assessments in particular often contain a mixture of multiple-choice and essay items. These mixed-format assessments pose many technical challenges for psychometricians. This study directly builds upon the Meyers et al. (2009) study by investigating how ability estimation, essay scoring approach, measurement model, and proportion of points allocated to multiple-choice items and the essay item on mixed-format assessments interact to recover ability and item parameter estimates under different degrees of multidimensionality.

Meyers, Jason L., Turhan, Ahmet, Fitzpatrick, Steven J. 05-01-2010
Thoughts on Linking and Comparing Assessments of Common Core Standards

The purpose of this paper is to discuss the types of comparisons that can and cannot be made among students who take different assessments supposedly developed to measure a single set of standards.

Lazer, Stephen, Mazzeo, John, Way, Walter D., Twing, Jon S., Camara, Wayne, Sweeney, Kevin 05-01-2010
Bulletin #7: Online Scorer Training

Increasingly, technology is being employed to improve the effectiveness and efficiency of delivery, scoring, and reporting of large-scale assessments.

Wolfe, Edward W. 08-01-2009
Effects of Different Training and Scoring Approaches on Human Constructed Response Scoring

This paper summarizes and discusses research studies on the human scoring of constructed response items conducted recently at a large-scale testing company.

Nichols, Paul, Vickers, Daisy, Way, Walter D. 04-01-2008
The Validity Case for Assessing Direct Writing by Computer

Technology continues to provide opportunities for changing how teachers give instruction and how students learn.

Davis, Laurie L., Strain-Seymour, Ellen, Way, Walter D. 01-01-2008