ABSTRACT

When measuring a model’s fit to the sample covariance matrix S, we want to regard any lack of fit as due exclusively to misspecified constraints on certain parameters asserted in the hypothesis. Thus the values of the free parameters, which are not given by the hypothesis but are needed to complete the model, are required to be those values that uniquely minimize the discrepancy between the model’s reproduced covariance matrix Σ̂₀ and S, conditional on the explicitly constrained parameters of the model. Conditioning on the constraints makes the estimates depend on the constraints, so that any remaining discrepancy is attributable to the constraints. In other words, if there were no constraints beyond those needed to minimally just-identify the model, the model would fit the data perfectly; but, of course, without an over-identifying set of constraints there would be no test of a hypothesis. The discrepancy is measured with a discrepancy function, F[Σ̂₀(θ), S], where θ = (θ̂, θ*) is a vector with θ̂ the independent free and yoked parameters and θ* the fixed parameters of the structural equation model.
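As a concrete illustration (not part of the abstract itself), one widely used choice of discrepancy function in structural equation modeling is the maximum-likelihood discrepancy, F = ln|Σ̂₀| − ln|S| + tr(S Σ̂₀⁻¹) − p, which is zero exactly when the model-reproduced matrix equals S. The sketch below assumes this ML form; the function name and test matrices are illustrative, not from the source.

```python
import numpy as np

def ml_discrepancy(sigma_model, s):
    """Maximum-likelihood discrepancy F[Sigma0, S]
    = ln|Sigma0| - ln|S| + tr(S Sigma0^{-1}) - p.
    Equals zero iff Sigma0 == S (both positive definite)."""
    p = s.shape[0]
    sign_m, logdet_m = np.linalg.slogdet(sigma_model)
    sign_s, logdet_s = np.linalg.slogdet(s)
    if sign_m <= 0 or sign_s <= 0:
        raise ValueError("covariance matrices must be positive definite")
    return logdet_m - logdet_s + np.trace(s @ np.linalg.inv(sigma_model)) - p

# Illustrative sample covariance matrix (hypothetical data).
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Perfect fit: the model reproduces S exactly, so F is (numerically) zero.
print(ml_discrepancy(S, S))

# A constrained model forcing the covariance to zero yields F > 0:
# the discrepancy is due to the over-identifying constraint, as in the text.
Sigma0 = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
print(ml_discrepancy(Sigma0, S))
```

This mirrors the logic of the abstract: with only just-identifying constraints the minimized discrepancy is zero, and a positive value arises solely from the over-identifying constraints imposed by the hypothesis.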