ABSTRACT

This chapter reviews the basic theory of variational approximations and how they can be used to design algorithms for approximate marginal inference and learning in graphical models. In practice, methods such as loopy belief propagation and Gibbs sampling often take far too long to converge, limiting maximum likelihood estimation to relatively small graphical models. Because the theory and algorithms for approximate inference form the backbone of methods for approximate learning, the chapter first treats inference and then its application to learning. The fundamental observation behind variational methods for approximate marginal inference is that computing the log partition function can be formulated as a concave maximization problem. The chapter presents a variety of algorithms for exact and approximate inference and learning for Markov random fields. While the exact methods are well understood theoretically, limited computational power necessitates the use of their approximate counterparts in most practical settings.
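As a brief sketch of the observation about the partition function, written in the exponential-family notation commonly used for Markov random fields (the symbols are assumptions of this sketch and are not defined in the abstract: $\theta$ denotes the canonical parameters, $\mu$ the mean parameters or marginals, $\mathcal{M}$ the marginal polytope, and $H(\mu)$ the entropy of the maximum-entropy distribution with mean parameters $\mu$):
\[
  \log Z(\theta) \;=\; \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle + H(\mu) \bigr\},
\]
which is the maximization of a concave function over a convex set. Typical variational approximations (for example, mean field or the Bethe approximation underlying loopy belief propagation) replace $\mathcal{M}$ and $H$ by tractable surrogates.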