ABSTRACT

A statistical model postulates how the probability distribution of observable variables depends on unknown parameters. This makes it possible to estimate these parameters after observing the variables for a sample of units from the population under study. A nice property for a statistical model to possess is identification. While more technical statements are possible, the intuitive content of identification is that different settings of the parameters cannot produce the same joint probability distribution of the observables. More formally, consider the mapping from the parameter space, the set of possible values for the parameters, to the set of possible distributions of the observables. Identification corresponds to injectivity of this mapping: distinct points in the parameter space never map to the same distribution of observables. When identification holds, nice things happen. Principally, as we collect more data, we can better estimate the distribution of the observables. Under the additional assumption that the mapping is smooth as well as injective, this learning about the distribution translates directly into learning about the parameters.
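
As a minimal formalization of this idea (the symbols $\Theta$, $\theta$, and $P_\theta$ are introduced here for illustration and do not appear in the text above), write $\Theta$ for the parameter space and $P_\theta$ for the distribution of the observables at parameter value $\theta$. Identification is then injectivity of the map $\theta \mapsto P_\theta$:
\[
  \theta_1 \neq \theta_2 \;\Longrightarrow\; P_{\theta_1} \neq P_{\theta_2}
  \qquad \text{for all } \theta_1, \theta_2 \in \Theta .
\]
As a small illustrative failure case, suppose the single observable is $Y \sim N(\alpha + \beta, 1)$ with parameter vector $(\alpha, \beta)$: any two parameter settings with the same sum $\alpha + \beta$ induce the same distribution of $Y$, so the model is not identified, even though the sum $\alpha + \beta$ itself is.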