Robust Estimation of Topic Summaries Leveraging Word Frequency and Exclusivity
An ongoing challenge in the analysis of document collections is how to summarize content in terms of a set of inferred themes that can be interpreted substantively as topics. However, the current practice in mixed membership models of text (Blei et al., 2003) of parametrizing themes in terms of their most frequent words limits interpretability by ignoring the differential use of words across topics. Words that are both common and exclusive to a theme are more effective at characterizing its topical content. We consider a setting where professional editors have annotated documents with categories from a collection of topics, organized into a tree, in which leaf nodes correspond to the most specific topics. Each document is annotated with multiple categories at different levels of the tree. We introduce hierarchical Poisson convolution (HPC) as a model to analyze annotated documents in this setting. The model leverages the structure among categories defined by professional editors to infer a clear semantic description for each topic in terms of words that are both
frequent and exclusive. We develop a parallelized Hamiltonian Monte Carlo sampler that allows the inference to scale to millions of documents.