ABSTRACT

In this chapter we learn how to model a special class of random processes in which the future evolution depends only on the present state, and not on the history leading up to the present. Such processes are called Markov chains, after Andrei Markov (1856–1922), who first modelled and studied them. We study Markov chains in both discrete and continuous time. For discrete-time chains we establish some basic formulae built on the transition matrix, and discuss the classification of states, limiting behaviour (including the balance equations), and finite absorbing chains (including expected absorption times and absorption probabilities). For continuous-time chains we introduce the rate matrix and consider associated formulae, including the Kolmogorov forward and backward equations, expected hitting times, and limiting distributions (again via the balance equations). This chapter contains some elegant and powerful mathematical tools for modelling randomness: tools that have been applied broadly and very successfully across many disciplines.
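As a small foretaste of the kind of computation the chapter develops, the following minimal sketch (not taken from the chapter itself, and assuming NumPy is available) finds the limiting distribution of a made-up two-state discrete-time chain by solving the balance equations πP = π together with the normalisation condition.

```python
# Minimal sketch: stationary/limiting distribution of a two-state
# discrete-time Markov chain via the balance equations pi P = pi,
# sum(pi) = 1. The transition matrix P is an illustrative example only.
import numpy as np

P = np.array([[0.9, 0.1],   # P[i, j] = probability of moving from state i to state j
              [0.4, 0.6]])

n = P.shape[0]
# Balance equations pi (P - I) = 0, stacked with the normalisation row of ones.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # prints approximately [0.8, 0.2]
```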