ABSTRACT

In recent decades, linguistics has taken an empirical turn: experimental methods have become a standard part of the toolkit for researchers in areas such as syntax, semantics, and pragmatics. Because experimental science requires statistical tools, and because experimental data has historically been analysed using frequentist methods, linguists have adopted these standard methods as well. In doing so, however, linguistics also imported all the problems that frequentist methods have engendered, the replication crisis being perhaps the most dramatic of them. Most of these problems arise from the way the null hypothesis significance testing procedure is set up: a straw-man null hypothesis that was never of any interest in the first place is rejected, and binary accept/reject decisions are made based on the p-value, disregarding the uncertainty in the estimates. Conveniently available software encourages researchers to fit canned statistical models with fixed assumptions, even when those assumptions are completely unreasonable. Furthermore, there is no way to cumulatively build on previous findings when analysing data. In this chapter, an alternative approach is discussed that leads to more robust inferences: the problems with standard methods are explained in detail, and the advantages of adopting an uncertainty-quantification-based approach to statistical inference using modern Bayesian tools are spelled out.