ABSTRACT

A stochastic process is a collection of random variables {Yt | t ∈ U}, where U is some index set. If U = {0, 1, 2, . . .}, the process is in discrete time; if U = [0, ∞), it is in continuous time. The variable Yt is the state of the process at time t, and the set S of all possible values of Yt is the state space. Here we restrict attention to a finite state space S = {1, 2, . . . , D} for some integer D. The theory of basic discrete-time Markov processes is well established; the material in this section can also be found in Norris (1997), Kulkarni (2011), or any other textbook on stochastic processes.
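
To make the definitions concrete, the following minimal Python sketch simulates a discrete-time process on the finite state space S = {1, 2, . . . , D}, namely a Markov chain driven by a D x D transition matrix. The matrix P, the function name simulate_chain, and the parameter values are illustrative assumptions, not taken from the text.

    import numpy as np

    def simulate_chain(P, y0, n_steps, rng=None):
        """Simulate a discrete-time Markov chain on the state space {1, ..., D}.

        P is an assumed D x D transition matrix with
        P[i-1, j-1] = Pr(Y_{t+1} = j | Y_t = i); y0 is the initial state Y_0,
        and n_steps is the number of transitions to draw.
        """
        rng = np.random.default_rng() if rng is None else rng
        D = P.shape[0]
        states = np.arange(1, D + 1)   # state space S = {1, 2, ..., D}
        path = [y0]
        for _ in range(n_steps):
            current = path[-1]
            # Draw Y_{t+1} from the row of P indexed by the current state.
            path.append(int(rng.choice(states, p=P[current - 1])))
        return path

    # Example with D = 3 states and an illustrative transition matrix.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.0, 0.5, 0.5]])
    print(simulate_chain(P, y0=1, n_steps=10))

The index set here is the discrete set {0, 1, 2, . . .}: each loop iteration advances the time index t by one, and the returned list records the states Y0, Y1, . . . , Yn.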