Pearson Psychometricians Contribute Expertise Developing and Implementing Standards-Based Assessments


BLOOMINGTON, Minn. — March 12, 2012 —

Seven top psychometric scientists from Pearson are contributing authors to a new book on standards-based assessment. The book, "Setting Performance Standards: Foundations, Methods, and Innovations," now in its second edition, is the only comprehensive treatment of both the issues and the how-to methods that define this complex field.

Under the editorship of Gregory J. Cizek, Ph.D., professor of education and measurement at the University of North Carolina at Chapel Hill, the new book provides education leaders with an in-depth treatment of modern standard-setting methods; presents the state of the art in contemporary thinking about the science and practice of setting performance standards; and reflects critically on the future of standard-setting theory and practice.

"Sharing our extensive experience in standard-setting in this influential book supports Pearson's overall commitment to investing in the successful use of this 21st century approach to assessment throughout the industry," said Kimberly O'Malley, Ph.D., senior vice president of Research and Development at Pearson, and leader of the company's new R&D center's initiatives. "Our scientists were able to contribute hands-on experience with this contemporary approach to measuring performance which will guide and influence the field of psychometrics as it develops in this direction. Furthermore, by including evidence throughout the standard-setting process, states and organizations will set performance standards that accurately reflect student performance—these methods force transparency in student progress to college and career readiness."

Pearson expert psychometricians authored five chapters in the book:

  • Chapter 15: "From Z to A: Using Validity Evidence to Set Performance Standards"
  • A team of three Pearson scientists, Dr. O'Malley, Leslie Keng, Ph.D., and Julie Miles, Ph.D., who developed the evidence-based standard-setting methodology, brought their considerable expertise in developing and managing high-stakes assessments to this chapter, which describes how states and assessment programs are adopting common standard-setting approaches that place stronger emphasis on empirical and policy evidence throughout the process. The authors highlight the evidence-based standard-setting process Pearson used for the American Diploma Project Algebra II End-of-Course Exam and for the Texas statewide assessment system as examples of a comprehensive approach. Drs. O'Malley, Keng, and Miles are all members of the Psychometric and Research Services team at Pearson. Dr. Miles is currently the lead research scientist on the Virginia Standards of Learning program and, as director of Psychometric and Research Services, oversees the research and psychometric activities of staff members supporting testing programs in Tennessee, Georgia, New York, Washington, D.C., and New Jersey, as well as the American Diploma Project and the Readistep project. As a senior research scientist, Dr. Keng works on the psychometrics team supporting the Texas assessment program.

  • Chapter 14: "The Briefing Book Method"
  • Dr. Miles and Jen Beimers, Ph.D., contributed to this chapter, which explores using validity evidence to support standard setting within a new paradigm that considers competing cut scores from a variety of perspectives. As a research scientist, Dr. Beimers currently leads work on a New York assessment program as well as the American Diploma Project.

  • Chapter 22: "Standard-Setting for Computer-Based Assessments"
  • Pearson scientists Walter (Denny) Way, Ph.D., and Katie Larsen McClarty, Ph.D., address research on mode comparability and its implications for standard setting with computer-based assessments. The chapter reviews the current status of computer-based testing (CBT) applications in large-scale state assessment programs and explores the limitations that have suggested the need for dual-mode testing in these programs. As senior vice president of the Psychometric and Research Services group at Pearson, Dr. Way has more than 20 years of assessment experience in a variety of settings. He is a nationally known expert on CBT and has led testing programs in higher education, licensure and certification, and K-12 assessment. Dr. McClarty is a manager of Psychometric and Research Services at Pearson. She has published extensively on mode comparability, and her current research focuses on the interplay among research, educational measurement, and policy.

  • Chapter 13: "The Item Descriptor (ID) Matching Method"
  • New to Pearson's team of psychometricians, Steve Ferrara, Ph.D., developed the ID Matching standard-setting method presented in this chapter, along with emerging research on standard-setting panelists' cognition and decision making. Dr. Ferrara, vice president and co-director of Pearson's new Center for Performance Assessment, has an extensive background in classroom and large-scale assessment, including test design, development, and validation; test score scaling, equating, and reporting; standard setting; and the role of assessment in standards-based educational reform.

  • Chapter 5: "Performance Level Descriptors: History, Practice and a Proposed Framework"
  • Dr. Ferrara is also a co-author of this chapter, which provides an in-depth discussion of the development, use, and reporting of performance level descriptors in assessments.

More information about "Setting Performance Standards: Foundations, Methods, and Innovations" is available from the publisher.

About Pearson

Pearson, the world's leading learning company, has global reach and market-leading businesses in education, business information and consumer publishing (NYSE: PSO). For more information, visit

For more information:

Adam Gaber, Pearson
(800) 745-8489 /
@apgaber or @nextgenassess (Twitter)