ABSTRACT
Despite calls by task forces, statisticians, and critical theorists for changes in how quantitative methods are used and reported, several systemic issues persist in educational research and, more broadly, in the social sciences. These issues are most prevalent in research using null hypothesis testing methods. Three major problems are discussed: favoring p-values over other, more informative statistical results; neglecting null results; and decontextualizing results. Consequences of these three problems include distorted distributions of effects in the published literature, the file drawer problem, favoring trivial but statistically significant effects over meaningful but statistically nonsignificant effects, slowed scientific progress, and unintentional harm to individuals from underrepresented groups and identities. Publicly available education data are used to demonstrate these issues, and examples of research that avoids them are provided. Three immediately feasible solutions are presented: 1) report the entire statistical story for every quantitative finding; 2) interpret all results within existing contexts; and 3) weight quality of research most heavily in publication decisions. Smaller actions from journals and editors (e.g., manuscript submission requirements or checklists, collecting information on reviewer expertise to inform reviewer assignment) and from professional organizations (e.g., incentivizing collaborative teams with collective quantitative, sociocultural and critical theory, and substantive expertise) are also recommended to encourage these solutions.