ABSTRACT

This chapter presents a study of a class of memoryless, discrete-time processes with a discrete state space, called Markov chains. It derives the conditional and unconditional distributions of the state of the process at a fixed time n. For a time-homogeneous chain, the conditional probability that the process moves to state j next, given that it is currently at state i, does not change as time progresses. The chapter then presents an inductive method for finding the distribution of the time at which a given state is first visited by the process. It also offers a rather detailed discussion of the limiting behavior of Markov chains as time tends to infinity: some chains can be shown to spend a stable fraction of time in each state, while others are eventually absorbed into a single state.
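The two central computations mentioned above can be sketched concretely. The following is a minimal illustration, not taken from the chapter: for a time-homogeneous chain with transition matrix P, the unconditional distribution at time n is obtained by repeatedly multiplying the initial distribution by P, and for an ergodic chain this converges to a stationary distribution (the "stable fraction of time" in each state). The 2-state example chain and its numbers are hypothetical.

```python
def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 2-state chain (e.g. state 0 = sunny, state 1 = rainy).
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [1.0, 0.0]          # start in state 0 with probability 1
for _ in range(50):      # iterate pi_{n+1} = pi_n P
    pi = step(pi, P)

# The stationary distribution solves pi = pi * P; for this P it is
# (5/6, 1/6), and the iterates converge to it regardless of the start.
print(pi)
```

Starting instead from `[0.0, 1.0]` yields the same limit, illustrating that for such chains the long-run fraction of time in each state does not depend on the initial state.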