Description: Validity Evidence for Measurement in Mathematics Education (VM2Ed) is a Level 3, Track 1 grant seeking to advance knowledge in Research on STEM Learning and Learning Environments. ECR supports research that builds and expands upon foundations for evaluating STEM learning; however, it is uncertain how best to improve STEM learning without a solid understanding that the results and interpretations from quantitative instruments are grounded in sufficient validity evidence and robust validity arguments. Discussions of validity with regard to instruments’ outcomes are noticeably absent from the literature (Bostic, Krupa, Carney, & Shih, in press; Bostic, Lesseig, Sherman, & Boston, in press; Hill & Shih, 2009; Ziebarth, Fonger, & Kratky, 2014), much less discussions that connect to the Standards for Educational and Psychological Testing (AERA et al., 2014, 1999). Instrument quality strongly influences the quality of data collected and the findings of a research study (AERA et al., 2014; Bostic, 2017; Bostic, Krupa, & Shih, 2019; Gall, Gall, & Borg, 2007). Instruments with a clearly defined purpose and supporting validity evidence are foundational to conducting high-quality, large-scale quantitative work (Bostic, 2018; Newcomer, 2009). A lack of attention to validity may lead to spurious research findings and/or results from studies that are not generalizable or replicable (Bostic, Krupa, et al., in press; Bostic, Lesseig, et al., in press). These problems present a serious threat to research that aims to be highly impactful and broadly accessible.
VM2Ed responds to critical needs in scholarship across mathematics education and aligns with goals for future STEM research by supporting a means to explore “common metrics to address progress” (National Science & Technology Council, 2018, p. 28). Its research aims are to (1) develop a framework for categorizing and describing quantitative instruments for mathematics education contexts; (2) synthesize published materials on instruments used in mathematics education; (3) create a repository of quantitative instruments for mathematics education contexts; and (4) train scholars and practitioners to use the repository (face-to-face and online) via scale-up activities.
Intellectual Merit: There is intellectual merit associated with each research aim. (1) A framework creates a shared, justifiable means to evaluate (i.e., describe and categorize) quantitative measures. In turn, this framework may guide scholars’ future instrument development. (2) The synthesis will allow for comparisons of quantitative instruments in new ways so that scholars will be better able to judge the merits of instruments for a desired purpose. This work will connect modern notions of validity to a synthesis of measures of K-20 student/teacher knowledge, instruments such as observation protocols, and surveys of K-20 students/teachers. (3) A repository that provides information on robust and emergent measures and assessments will support scholars in selecting appropriate quantitative tools for their needs. (4) Training others promotes the longevity of the repository and rigorous scholarship practices related to validation.
Broader Impacts: There is broader impact associated with each research aim. (1) This project generates buy-in across multiple fields (e.g., mathematics education, psychometrics, educational psychology, and related fields) because it fosters collaboration around a shared interest. It is collaborative within mathematics education as a singular field and across fields of researchers that engage in mathematics education scholarship. (2) Syntheses of quantitative instruments give researchers across various fields a means to compare instruments and their validity arguments. (3) Scholars may take up emergent instruments lacking validation arguments and gather validity evidence for their use, thus generating new scholarship. The repository also supports NSF’s goals for fostering robust proposals and generalizable research through the use of instruments with strong validation arguments. Scholars across fields will have a searchable repository of quantitative tools alongside a framework focused on validity evidence. (4) Further, our scale-up will prepare current and future scholars to responsibly utilize instruments in the repository and to rigorously fill gaps in existing validity arguments in current and future assessments used in mathematics education.
This project is supported by the National Science Foundation under Grant No. DRL 1920619 awarded to North Carolina State University. Any opinions, findings, and conclusions or recommendations expressed herein are those of the principal investigators and do not necessarily reflect the views of the National Science Foundation.