Policy and Legislation in Education

Educator credential assessments have faced substantial pressure in recent years. National and state-specific educator shortages in particular content areas and geographic regions, coupled with the ongoing academic ramifications of COVID-19, have led to a noticeable shift in educator preparation and credentialing standards. State legislatures are more engaged than ever in shaping how educators are prepared and credentialed, and their involvement has resulted in state policy decisions that have altered educator preparation and credentialing requirements nationally.
Considering these transformative shifts, Evaluation Systems has helped bridge the gap between policy and practice by proactively supporting states as they respond to these ongoing changes. Here at Evaluation Systems, we are embedding innovative approaches in our work to provide customized, enhanced support as states navigate the complexities of policy implementation and reform in educator credentialing and preparation – ensuring our processes reflect an equitable and accessible system. To do so, we continuously monitor legislation and policy developments to meet the evolving needs of the educator workforce and to support a brighter future for aspiring educators and students.
Data and Insights
Evaluation Systems provides state clients with state-of-the-art analytic tools that are also available to the individual educator preparation programs designated by the state.
Psychometrics
Pearson’s Research and Psychometrics team has partnered with clients to deliver a variety of psychometric analyses and related supports, including test design support, standard setting, item analyses, equating, custom studies, and individualized client support. These partners include, but are not limited to, the Massachusetts Department of Elementary and Secondary Education, the Oklahoma Office of Educational Quality and Accountability, edTPA, and the Florida Department of Education. Specifically, our team is well versed in the implementation of:
- Test design support: Ensuring that tests encompass the content required by the client, produce reliable measures of candidate ability, and allow candidates adequate time.
- Standard setting: Defining levels of achievement or proficiency and the cut, or passing, scores corresponding to those levels.
- Item analyses (a.k.a. test question analysis): Determining how well individual test items assess what candidates have learned (see the item analysis sketch after this list).
- Technical reports: Highlighting the qualities of our assessments at the item and test level.
- Equating: Placing scores from two or more parallel test forms onto a common score scale so that performance can be compared directly (see the equating sketch after this list).
- Customized research projects:
  - Item exposure analysis
  - Candidate flagging reports to detect improper conduct
  - Analyses of various test designs, including single assessment vs. subtests vs. testlets
- Methodology: Employing a variety of methods, including Classical Test Theory (CTT) and Item Response Theory (IRT)
- Custom studies:
  - Timing studies to determine speededness
  - Pass rate analyses
  - Differential Item Functioning (DIF) studies: Statistical analysis of assessment data to determine whether items perform in a biased manner against groups of examinees defined by, for example, gender, ethnicity, or first language (see the DIF sketch after this list)
  - Factor analyses: Identifying the underlying dimensions that account for the relationships among the variables
- Client support: Tools that aid clients in analyzing their own candidate data (e.g., Results Analyzer, edReports)
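To make the item analysis work above concrete, here is a minimal sketch of two classical item statistics – difficulty (the p-value, or proportion correct) and corrected point-biserial discrimination – computed on simulated 0/1 response data. The data and code are illustrative only, not a depiction of our production tooling.

```python
# Minimal classical item analysis sketch: item difficulty (p-value) and
# corrected point-biserial discrimination. Responses are simulated 0/1 data
# (rows = candidates, columns = items) for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)
responses = (rng.random((500, 20)) < 0.7).astype(int)  # 500 candidates, 20 items

total_scores = responses.sum(axis=1)

for item in range(responses.shape[1]):
    item_scores = responses[:, item]
    p_value = item_scores.mean()  # difficulty: proportion answering correctly
    # Corrected point-biserial: correlate the item with the total score
    # excluding that item, so the item is not correlated with itself.
    rest_scores = total_scores - item_scores
    r_pbis = np.corrcoef(item_scores, rest_scores)[0, 1]
    print(f"Item {item + 1:2d}: p = {p_value:.2f}, corrected r_pbis = {r_pbis:.2f}")
```

Items with very low p-values or near-zero (or negative) point-biserials are the ones an item analysis would typically flag for content review.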
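The equating bullet can be illustrated with its simplest form: linear equating under a random-groups design, where Form X scores are rescaled so their mean and standard deviation match those of the reference Form Y. The sketch below uses simulated scores; operational equating designs and methods are more involved.

```python
# Linear equating sketch (random-groups design): map Form X scores onto the
# Form Y scale so the two distributions share a mean and standard deviation.
# Scores are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)
form_x = rng.normal(70, 10, size=1000)  # raw scores from Form X takers
form_y = rng.normal(72, 12, size=1000)  # raw scores from Form Y takers

# y(x) = (sigma_y / sigma_x) * (x - mu_x) + mu_y
slope = form_y.std(ddof=1) / form_x.std(ddof=1)

def equate_x_to_y(x):
    """Express a Form X raw score on the Form Y scale."""
    return slope * (x - form_x.mean()) + form_y.mean()

print(f"Form X score 75 -> {equate_x_to_y(75.0):.1f} on the Form Y scale")
```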
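For the DIF studies bullet, one widely used approach is the Mantel-Haenszel procedure: candidates are stratified by total score, a 2x2 table (group by correct/incorrect) is built within each stratum, and a common odds ratio is pooled across the strata. The sketch below simulates a single item that behaves identically for two hypothetical groups; the flagging value in the final comment is the conventional ETS delta-scale rule of thumb, cited here for orientation only.

```python
# Mantel-Haenszel DIF sketch for one item: stratify by total score, pool a
# common odds ratio across strata, and express it on the ETS delta scale.
# Groups, item responses, and scores are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 2000
group = rng.integers(0, 2, size=n)  # 0 = reference group, 1 = focal group
ability = rng.normal(0.0, 1.0, size=n)

# Simulate one studied item that behaves the same for both groups (no DIF).
p_correct = 1.0 / (1.0 + np.exp(-(ability - 0.2)))
item = (rng.random(n) < p_correct).astype(int)
# Crude total score: the studied item plus 19 other ability-driven items.
total = item + rng.binomial(19, 1.0 / (1.0 + np.exp(-ability)))

num = 0.0  # sum over strata of (ref correct * focal incorrect) / stratum size
den = 0.0  # sum over strata of (ref incorrect * focal correct) / stratum size
for k in np.unique(total):
    s = total == k
    a = np.sum((group[s] == 0) & (item[s] == 1))  # reference, correct
    b = np.sum((group[s] == 0) & (item[s] == 0))  # reference, incorrect
    c = np.sum((group[s] == 1) & (item[s] == 1))  # focal, correct
    d = np.sum((group[s] == 1) & (item[s] == 0))  # focal, incorrect
    n_k = a + b + c + d
    if n_k > 0:
        num += a * d / n_k
        den += b * c / n_k

alpha_mh = num / den                 # common odds ratio across strata
mh_d_dif = -2.35 * np.log(alpha_mh)  # ETS delta scale; |value| >= 1.5 suggests large DIF
print(f"MH odds ratio = {alpha_mh:.2f}, MH D-DIF = {mh_d_dif:.2f}")
```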
edReports
edReports, our proprietary score reporting website, provides up-to-date information on candidate performance – it reports which tests each candidate took and when, and whether the candidate passed or failed. It also produces reports of candidate performance at the competency level, showing percentage-correct data by competency area alongside both institution and statewide averages. Institutions use these data points to identify core areas to address in their educator preparation programs to ensure candidate success.
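edReports itself is a website, not code, but the comparison it reports can be pictured with a small pandas sketch; the field names, institutions, and scores below are hypothetical and do not reflect the actual edReports schema.

```python
# Hypothetical sketch of a competency-level summary: one institution's
# percent correct by competency next to the statewide average.
import pandas as pd

results = pd.DataFrame({
    "institution": ["A", "A", "B", "B", "A", "B"],
    "competency":  ["Reading", "Writing", "Reading", "Writing", "Reading", "Writing"],
    "pct_correct": [78, 65, 82, 70, 74, 68],
})

institution_a = (results[results["institution"] == "A"]
                 .groupby("competency")["pct_correct"].mean())
statewide = results.groupby("competency")["pct_correct"].mean()

report = pd.DataFrame({"Institution A": institution_a, "Statewide": statewide})
print(report.round(1))
```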
Analytics and Reporting
Our team of psychometricians and data scientists has partnered with clients to provide insights into security breaches, test-level timing, and fairness and bias.
Security
To detect and prevent security breaches, we perform item exposure analyses and identify potential candidate misconduct using customized flagging reports.
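As one simple illustration of the exposure side of this work, the sketch below flags items whose exposure rates sit far above the pool average; the data and the three-standard-deviation threshold are hypothetical, and operational flagging reports combine many more signals.

```python
# Item exposure flagging sketch: flag items administered far more often than
# the pool average. Exposure rates and the threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(seed=3)
exposure = rng.beta(2, 8, size=200)  # share of administrations including each item

threshold = exposure.mean() + 3 * exposure.std(ddof=1)
flagged = np.flatnonzero(exposure > threshold)
print(f"Items flagged for over-exposure: {flagged.tolist()}")
```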
Test Data
Our test-level timing analyses evaluate testing data to ensure that candidates have enough time to complete a test.
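Two common indicators in such an analysis are how many candidates use nearly all of the allotted time and how many reach every item; the sketch below computes both from simulated timing data, with an entirely hypothetical time limit.

```python
# Speededness sketch: share of candidates using nearly the full time limit and
# share reaching every item. Timing data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(seed=4)
allotted_minutes = 240  # hypothetical time limit
time_used = rng.normal(180, 30, size=1000).clip(60, allotted_minutes)
not_reached = rng.poisson(0.1, size=1000)  # items unanswered when time expired

pct_full_time = np.mean(time_used >= 0.95 * allotted_minutes) * 100
pct_reached_all = np.mean(not_reached == 0) * 100
# High completion rates and few candidates running out the clock suggest the
# test is not speeded.
print(f"{pct_full_time:.1f}% used nearly all of the allotted time")
print(f"{pct_reached_all:.1f}% reached every item")
```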
Fairness and Bias
Our fairness and bias analyses examine pass rates by subgroup at the test level and examine item performance by subgroup with innovative Item Response Theory (IRT)-based Differential Item Functioning (DIF) methodologies. These techniques help ensure that our tests are fair and unbiased, and they can also guide item development and provide feedback to educator preparation programs.
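The test-level piece of these analyses can be pictured with a small sketch that computes pass rates by subgroup from scored records; the subgroup labels, cut score, and scores below are hypothetical, and the IRT-based DIF work described above is considerably more involved.

```python
# Subgroup pass-rate sketch: compare pass rates across two hypothetical groups.
# Records and the cut score are simulated for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=5)
records = pd.DataFrame({
    "subgroup": rng.choice(["Group 1", "Group 2"], size=1000),
    "scaled_score": rng.normal(245, 15, size=1000),
})

passing_score = 240  # hypothetical cut score
records["passed"] = records["scaled_score"] >= passing_score
print(records.groupby("subgroup")["passed"].mean().round(3))
```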