ABSTRACT

This chapter examines the convergence rate problem for numerically computing a fixed density of Markov operators defined by stochastic kernels. Markov operators are widely used in the study of density evolution under dynamical systems. The goal is to obtain an upper bound on the error of certain numerical methods for computing a fixed density of such a Markov operator. An estimate of this kind is of practical importance, since many problems in applied probability, stochastic analysis, and ergodic theory and dynamical systems involve the computation of a fixed density of a Markov operator. The chapter establishes a general error estimate under suitable conditions and uses this result to derive an explicit error bound for numerically computing a fixed density. Some computational issues are discussed at the end.
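
For orientation, the following is one standard formulation of a Markov operator defined by a stochastic kernel and of its fixed density; it reflects a common convention and is not necessarily the exact setting adopted in the chapter.

% A common formulation (assumed here for illustration): X a measure space with measure mu,
% k a stochastic kernel, P the associated Markov operator acting on densities.
\[
  (Pf)(x) = \int_X k(x,y)\, f(y)\, \mu(dy), \qquad
  k(x,y) \ge 0, \qquad \int_X k(x,y)\, \mu(dx) = 1 \ \text{for a.e. } y,
\]
% Under this normalization P maps densities to densities; a fixed density f^* satisfies
\[
  Pf^* = f^*, \qquad f^* \ge 0, \qquad \int_X f^*\, d\mu = 1,
\]
and the error bounds discussed in the chapter concern numerical approximations of such an f^*.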