ABSTRACT

In signal-detection theory models of intensity discrimination, the variance of the decision variable plays the central role of connecting Weber fractions to loudness-growth and loudness-matching data. This chapter addresses two issues: the dependence of such models' predictions on the explicit form of their decision-variable variance functions, and the extent to which there is a trade-off between the choice of equation for the loudness function and the form of the variance function. In neural counting models, variances proportional to the square root of the decision variable, with and without dead-time corrections, have proven to be good choices. Fechner-like models, such as the proportional just-noticeable-difference (JND) theory, use constant variances. In still other models, loudness itself is the decision variable; there, the predictive power came from combining the first-order Taylor expansion of the neural-count function in powers of the intensity JND with the assumption that the decision variable obeys Poisson statistics.
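A minimal sketch of that Poisson argument, under assumed notation (N(I), \Delta I, and d' are illustrative symbols, not the chapter's): let N(I) be the mean neural count at intensity I. Poisson statistics make the count variance equal to its mean, so the standard deviation is \sqrt{N(I)}, and a first-order Taylor expansion of the count function in powers of the intensity JND \Delta I gives

\[
  % first-order Taylor expansion of the mean count in the JND
  N(I + \Delta I) - N(I) \;\approx\; N'(I)\,\Delta I .
\]

Equating this mean count difference to d' standard deviations of the count, the usual detection criterion, ties the Weber fraction to the count function:

\[
  % detection criterion under the Poisson variance assumption
  N'(I)\,\Delta I \;=\; d'\,\sqrt{N(I)}
  \quad\Longrightarrow\quad
  \frac{\Delta I}{I} \;=\; \frac{d'\,\sqrt{N(I)}}{I\,N'(I)} .
\]

For instance, if the count grows as a power of intensity, N(I) \propto I^{p}, the Weber fraction varies as I^{-p/2}, which shows how the assumed variance function feeds directly into predicted Weber fractions.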