Publication | Closed Access
Learning Markov Networks With Arithmetic Circuits
Citations: 48 | References: 17 | Year: 2013 | Venue: Unknown
Markov networks are an effective way to represent complex probability distributions. However, learning their structure and parameters or using them to answer queries is typically intractable. One approach to making learning and inference tractable is to use approximations, such as pseudo-likelihood or approximate inference. An alternate approach is to use a restricted class of models where exact inference is always efficient. Previous work has explored low treewidth models, models with tree-structured features, and latent variable models. In this paper, we introduce ACMN, the first ever method for learning efficient Markov networks with arbitrary conjunctive features. The secret to ACMN's greater flexibility is its use of arithmetic circuits, a linear-time inference representation that can handle many high treewidth models by exploiting local structure. ACMN uses the size of the corresponding arithmetic circuit as a learning bias, allowing it to trade off accuracy and inference complexity. In experiments on 12 standard datasets, the tractable models learned by ACMN are more accurate than both tractable models learned by other algorithms and approximate inference in intractable models.
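To illustrate the idea of an arithmetic circuit as a linear-time inference representation, the following is a minimal sketch, not the paper's actual implementation: a hand-built circuit for a toy Markov network over two binary variables with a single conjunctive feature f = [X1=1 ∧ X2=1] of weight w. The `Node` class and circuit layout are assumptions made for illustration; evaluating the circuit with all indicator leaves set to 1 yields the partition function, and zeroing indicators conditions on evidence.

```python
import math

class Node:
    """Illustrative arithmetic-circuit node: 'sum', 'prod', or 'leaf'."""
    def __init__(self, op, children=None, value=None):
        self.op = op                      # 'sum', 'prod', or 'leaf'
        self.children = children or []
        self.value = value                # leaves hold an indicator or parameter

    def eval(self):
        # Bottom-up evaluation: one pass over the circuit, linear in its size.
        if self.op == 'leaf':
            return self.value
        vals = [c.eval() for c in self.children]
        return sum(vals) if self.op == 'sum' else math.prod(vals)

# Toy network: binary X1, X2 and one conjunctive feature f = [X1=1 and X2=1]
# with weight w, so its circuit parameter is theta = exp(w).
w = 1.0
theta = Node('leaf', value=math.exp(w))
lam = {(v, s): Node('leaf', value=1.0) for v in (1, 2) for s in (0, 1)}

# Unnormalized distribution as a circuit:
# Z = l10*l20 + l10*l21 + l11*l20 + l11*l21*theta
circuit = Node('sum', [
    Node('prod', [lam[(1, 0)], lam[(2, 0)]]),
    Node('prod', [lam[(1, 0)], lam[(2, 1)]]),
    Node('prod', [lam[(1, 1)], lam[(2, 0)]]),
    Node('prod', [lam[(1, 1)], lam[(2, 1)], theta]),
])

Z = circuit.eval()            # all indicators = 1 -> partition function 3 + e^w
lam[(1, 0)].value = 0.0       # condition on evidence X1 = 1
p_x1 = circuit.eval() / Z     # P(X1=1) = (1 + e^w) / (3 + e^w)
```

Both queries cost one linear-time pass over the circuit, which is the property ACMN exploits when it penalizes circuit size during structure learning.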