Leigh Harrell-Williams (University of Memphis), Hartono Tjoe (The Pennsylvania State University, Berks), Charlotte Bolch (Midwestern University), Stephanie Casey (Eastern Michigan University)
Abstract
Background. This presentation focuses on the process and results of the work completed by the Statistics Education Synthesis group within the NSF-funded Validity Evidence for Measurement in Mathematics Education (VM²Ed) project. The project goal is to identify and compile mathematics and statistics education assessments/instruments through multiple rounds of literature searches, with the end product being a searchable repository of instruments with summarized validity evidence. The project work was grounded in the framework of the 2014 AERA/APA/NCME Standards for Educational and Psychological Testing. The presentation will summarize the Statistics Education Synthesis group's findings and provide a short overview of the repository website.
Methods. The Statistics Education Synthesis group's process involved three rounds of work. In the first round, the team identified Statistics Education journal articles, conference proceedings papers, and chapters from the Handbooks of Research in Statistics Education, published between 2000 and 2020, in which assessments/instruments were used or mentioned. A list of instruments was generated from these sources. In the second round, the team identified articles, proceedings papers, and dissertations that mentioned or used these instruments. In the final round, validity evidence was identified and classified for each instrument for entry into the repository.
Findings. The team identified 91 statistics education instruments. Approximately 75% of these were classified as single-use/single-user instruments. Approximately 41% had explicit score interpretation statements, and only 30% had explicit use statements. Approximately 46% had explicit claims about validity or reliability evidence. The most common types of evidence identified were evidence based on test content and evidence based on internal structure. Very few instruments had evidence related to response processes or consequences of testing.
Implications for Teaching and for Research. The repository is now live (http://mathedmeasures.org/), with virtual training modules on repository use, instructions for submitting new information for inclusion in the repository, and an overview of instrument validation using the framework of the 2014 AERA/APA/NCME Standards for Educational and Psychological Testing. This work is valuable to the field of Statistics Education, as the repository allows for easier identification of existing instruments and evaluation of their existing validity evidence. Participants will also be encouraged to use the repository to design research studies that fill gaps in validity evidence.