ABSTRACT

In this chapter, we consider maximum likelihood estimation (MLE) and Bayesian inference for general nonlinear state space models (NLSS). MLE is, of course, a fundamental inference tool in classical statistics; we have already discussed the procedure in Section 2.3 for linear Gaussian state space models and in Section 8.1 for Markovian models. In NLSS, the maximization of the likelihood function is computationally involved: the likelihood and its gradient are seldom available in closed form, and Monte Carlo techniques are required to approximate these quantities. A first solution consists of using a derivative-free optimization method for noisy functions; see Section 12.1.1. This approach is generally slow and is limited to the case where the likelihood depends on only a few parameters. Another solution is to use a gradient-based search technique (for example, the steepest descent algorithm or a damped Gauss-Newton method) to compute estimates. This requires the computation of the gradient of the log-likelihood, that is, the score function; for linear Gaussian state space models, the score can be obtained by differentiating the recursions defining the Kalman filter (see Section 2.3.1). For general state space models, the score function must instead be approximated by Monte Carlo integration using Fisher's identity (see Section 12.1.2).
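As a brief illustration of the last point, Fisher's identity expresses the score of the (intractable) observed-data likelihood as a smoothed expectation of the complete-data score, which is what makes a Monte Carlo approximation possible; the notation below (states $X_{0:T}$, observations $Y_{0:T}$, parameter $\theta$) is a generic sketch and may differ from the conventions fixed elsewhere in the book:

```latex
% Fisher's identity for a state space model with joint density
% p_\theta(x_{0:T}, y_{0:T}) and observed-data likelihood p_\theta(y_{0:T}):
\nabla_\theta \log p_\theta(y_{0:T})
  \;=\;
  \mathbb{E}_\theta\!\left[
    \nabla_\theta \log p_\theta(X_{0:T}, y_{0:T})
    \,\middle|\, Y_{0:T} = y_{0:T}
  \right].
```

The conditional expectation on the right is taken under the smoothing distribution of the states given the observations; replacing it by an average over samples (or weighted particles) drawn from an approximation of that distribution yields the Monte Carlo score estimates used by gradient-based search.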