Topic: Approximation of Markov-Random Fields by Recursive Cavity Modeling
By: Jason K. Johnson (MIT), Alan S. Willsky (MIT)
This research aims to develop near-optimal, scalable inference and learning methods for Markov random fields (MRFs), i.e., distributed probability models defined on graphs and hypergraphs. The approach being developed, referred to as recursive cavity modeling (RCM), combines ideas from machine learning (e.g., information projection/maximum entropy, the Akaike information criterion, iterative proportional fitting) with recursive inference methodologies (e.g., variable elimination, divide-and-conquer tree-structured inference). It is strongly influenced both by methods developed earlier in our group (multiscale modeling, thinned nested-dissection methods) and by recent "Markov blanket filtering" methods developed for dynamic Bayes nets.

Our method builds "cavity" and "blanket" models that provide a sufficient summary of a given subfield (or, in the case of blanket models, of the exterior of a subfield) to support near-optimal inference outside (respectively, inside) that subfield. We construct these models recursively, combining standard variable elimination techniques with a model-thinning method we have developed for this purpose. Previously, this work has focused on estimation of Gauss-Markov random fields. Applications of the method are being considered in sensor-network calibration and simultaneous localization and mapping.

We are now also investigating extensions of this method for approximation of finite-state MRFs. Here, we also consider methods for computing "max-marginal" distributions to support estimation of the maximum a posteriori (MAP) configuration of finite-state MRFs. In future work, we intend to consider hybrid MRFs having both discrete and (possibly non-Gaussian) continuous variables.
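The variable-elimination step that underlies cavity-model construction has a closed form in the Gauss-Markov case: marginalizing out a set of variables from an information-form model (J, h) is a Schur complement. The sketch below illustrates this on a toy chain MRF; it is an illustrative example under assumed notation, not the authors' RCM implementation, and the function name `eliminate` is hypothetical.

```python
import numpy as np

def eliminate(J, h, keep, elim):
    # Gaussian variable elimination in information form.
    # Marginalizing x_elim out of the model with information matrix J
    # and potential vector h gives the Schur complement:
    #   J' = J_kk - J_ke J_ee^{-1} J_ek
    #   h' = h_k  - J_ke J_ee^{-1} h_e
    Jkk = J[np.ix_(keep, keep)]
    Jke = J[np.ix_(keep, elim)]
    Jee = J[np.ix_(elim, elim)]
    # Solve J_ee^{-1} [J_ek | h_e] in one factorization.
    rhs = np.column_stack([J[np.ix_(elim, keep)], h[elim]])
    W = np.linalg.solve(Jee, rhs)
    Jp = Jkk - Jke @ W[:, :-1]
    hp = h[keep] - Jke @ W[:, -1]
    return Jp, hp

# Toy chain-structured Gauss-Markov field on 4 nodes.
J = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
h = np.array([1., 0., 0., 1.])

# Eliminate nodes 2 and 3; the result summarizes them for nodes 0 and 1.
Jp, hp = eliminate(J, h, keep=[0, 1], elim=[2, 3])

# Consistency check against direct marginalization in covariance form.
Sigma = np.linalg.inv(J)
assert np.allclose(np.linalg.inv(Jp), Sigma[:2, :2])          # marginal covariance
assert np.allclose(np.linalg.solve(Jp, hp), (Sigma @ h)[:2])  # marginal mean
```

In RCM this exact elimination is interleaved with model thinning, which removes weak fill-in edges from J' (by information projection onto a sparser family) so that the cavity models stay tractable as the recursion proceeds up the dissection tree.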
This page is copyrighted by the Massachusetts Institute of Technology.