You can rely on our patented solutions for assessing students in a classroom or testing employees at work.
The Versant speech assessment technology is specifically tuned for linguistic assessment, recognizing words, segments, syllables, and phrases in speech. The system assesses speaking ability by comparing responses against Pearson’s linguistic and acoustic models for native and non-native speakers and applying our proprietary scoring method.
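One simplified way to picture this kind of comparison is phoneme-level scoring: align the phoneme sequence a recognizer heard against the expected sequence and penalize insertions, deletions, and substitutions. The sketch below uses plain edit distance; Versant's actual acoustic models and proprietary scoring method are not public, so the `phoneme_accuracy` function and its phoneme symbols are illustrative assumptions only.

```python
def phoneme_accuracy(expected, recognized):
    """Score a pronunciation as 1 minus the normalized edit distance
    between the expected phoneme sequence and the recognized one.
    Illustrative stand-in, not Versant's proprietary scoring."""
    m, n = len(expected), len(recognized)
    # Standard Levenshtein dynamic-programming table.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if expected[i - 1] == recognized[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 1 - dp[m][n] / max(m, n, 1)
```

A perfect match scores 1.0, while each mismatched phoneme lowers the score in proportion to the utterance length.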
Using criteria developed by expert linguists, the Versant testing system can automatically score overall speaking proficiency, as well as sub-skills such as sentence mastery, vocabulary, fluency, and pronunciation. Independent studies and extensive field research have shown that Versant tests are as objective and reliable as, or more so than, many human-rated tests, including one-on-one oral proficiency interviews. Versant tests are also highly consistent (average split-half reliability of 0.97), avoiding the variability typical of multiple evaluators and locations.
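The split-half reliability figure cited above can be illustrated with a small computation: divide a test's items into two halves, correlate each examinee's half scores, and apply the Spearman-Brown correction to estimate full-test reliability. This is the textbook form of the statistic, not Pearson's own analysis code.

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Estimate split-half reliability from per-examinee item scores.

    item_scores: one list of item scores per examinee.
    Splits items into odd/even halves, correlates the half totals,
    then applies the Spearman-Brown correction for full test length.
    """
    odd = [sum(s[0::2]) for s in item_scores]
    even = [sum(s[1::2]) for s in item_scores]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown step-up formula
```

When examinees perform consistently across both halves, the estimate approaches 1.0, which is the sense in which 0.97 indicates very high consistency.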
Essay responses are scored by the Intelligent Essay Assessor (IEA), which is trained on scores assigned by human raters to several hundred representative student essays, all written in response to a particular essay prompt or question for a particular grade level. By using computational modeling, IEA mimics the way in which human readers score. In study after study comparing IEA's performance to that of skilled human graders, the quality of IEA's assessment equals or surpasses that of the humans.
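One common way such computational modeling works is to extract features from each essay and fit them to the human raters' scores by least squares, so that new essays can be scored with the fitted model. The sketch below does this with two deliberately simple surface features; IEA's actual semantic features and model are proprietary, so every name and feature here is an illustrative assumption.

```python
import numpy as np

def essay_features(text):
    """Two toy surface features: word count and type-token ratio.
    IEA's real features are semantic and proprietary."""
    words = text.lower().split()
    return [len(words), len(set(words)) / max(len(words), 1)]

def train_scorer(essays, human_scores):
    """Fit a least-squares model mapping essay features to the scores
    human raters assigned, and return a scoring function."""
    X = np.array([essay_features(e) + [1.0] for e in essays])  # bias column
    y = np.array(human_scores, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda text: float(np.array(essay_features(text) + [1.0]) @ w)
```

A model trained this way reproduces the human raters' scale on essays similar to its training set, which is the sense in which the system "mimics the way human readers score."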
Existing text complexity measures rely on only a few simple, superficial features, such as word length and sentence length. These measures neither identify the individual words that readers find challenging nor consider those words' usefulness to readers. Pearson's Reading Maturity Metric overcomes these limitations by applying a well-established and accurate computational language simulation model, Latent Semantic Analysis (LSA), to mimic the way that word (and paragraph) meanings are learned through reading. In doing so, it applies many of the same underlying technologies as Pearson's automatic reading, writing, speaking, and listening assessment technologies, all of which match human judgments 90-100% as well as human judgments match each other.
In our reading and writing programs, student writing is measured by the state-of-the-art Knowledge Analysis Technologies™ (KAT) engine. The KAT engine is a unique automated assessment technology that evaluates the meaning of text, not just grammatical correctness or spelling.