OPTIMIZATION WITH PARITY CONSTRAINTS: FROM BINARY CODES TO DISCRETE INTEGRATION


Slide outline:

- High-dimensional integration
- Discrete integration
- Hardness
- The algorithm requires only O(n log n) MAP queries to approximate the partition function within a constant factor
- Visual working of the algorithm
- Theorem [ICML-13]: with probability at least 1 − δ (e.g., 99.9%), WISH computes a 16-approximation of the partition function (discrete integral) by solving Θ(n log n) MAP inference queries (optimization).
  Theorem [ICML-13]: the approximation factor can be improved to (1 + ε) by adding extra variables and factors. Example: a factor-2 approximation with 4n variables.
  Remark: faster than enumeration only when the combinatorial optimization is efficient.
- Summary of contributions
- MAP INFERENCE WITH PARITY CONSTRAINTS
- Making WISH more scalable
- Error-correcting codes
- Decoding a binary code
- Decoding via integer programming
- Iterative bound tightening
- SPARSITY OF THE PARITY CONSTRAINTS
- Inducing sparsity
- Improvements from sparsity
- WISH based on universal hashing: randomly generate A ∈ {0,1}^{i×n} and b ∈ {0,1}^i. Then A x + b (mod 2) is uniform over {0,1}^i and pairwise independent. Suppose instead we generate a sparse matrix A, with at most k variables per parity constraint (up to k ones per row of A). Then A x + b (mod 2) is still uniform, but no longer pairwise independent. E.g., for k = 1, A x = b (mod 2) is equivalent to fixing i variables, which introduces heavy correlation (knowing A x = b tells me a lot about A y = b).
- Using sparse parity constraints
- MAP with sparse parity constraints
- Experimental results
- Experimental results (2)
- Conclusions
- Extra slides
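The WISH loop described above can be sketched in code. This is a toy illustration, not the paper's implementation: the MAP oracle is brute-force enumeration (the real algorithm hands each constrained MAP query to a combinatorial optimizer), `T` repetitions with a median stand in for the Θ(log n) repetitions the theorem requires, and the final combination of medians follows the geometric-binning form of the estimator described in the slides.

```python
import random
import statistics

def wish_estimate(weight, n, T=5, seed=0):
    """Toy sketch of the WISH estimator.

    Approximates Z = sum over x in {0,1}^n of weight(x) by taking,
    for each i = 0..n, the median over T trials of the MAP value
    under i random parity constraints A x = b (mod 2).
    The MAP oracle below is brute force, for illustration only.
    """
    rng = random.Random(seed)
    medians = []
    for i in range(n + 1):
        maxima = []
        for _ in range(T):
            # Random parity constraints: A is i x n, b is length i, over GF(2).
            A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(i)]
            b = [rng.randint(0, 1) for _ in range(i)]
            best = 0.0
            for x in range(2 ** n):  # brute-force MAP query
                bits = [(x >> j) & 1 for j in range(n)]
                if all(sum(a * v for a, v in zip(row, bits)) % 2 == bi
                       for row, bi in zip(A, b)):
                    best = max(best, weight(bits))
            maxima.append(best)
        medians.append(statistics.median(maxima))
    # Combine: M_0 plus geometrically weighted medians, since the MAP value
    # under i constraints tracks roughly the 2^i-th largest weight.
    return medians[0] + sum(m * 2 ** (i - 1)
                            for i, m in enumerate(medians[1:], start=1))
```

With a constant weight function the estimate should land near the true count 2^n, up to the constant approximation factor the theorem guarantees.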
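The pairwise-independence claim, and how sparsity breaks it, can be checked exhaustively on a tiny instance. The snippet below (an illustration I wrote for this outline, not from the slides) enumerates every hash function h(x) = A x + b (mod 2) in two families: the dense family with arbitrary rows of A, and a sparse family with at most k = 1 one per row. For a chosen pair x ≠ y, the dense family gives a uniform joint distribution over (h(x), h(y)); the sparse family keeps uniform marginals but a skewed joint distribution.

```python
from itertools import product

def parity_hash(A, b, x):
    # h(x) = A x + b (mod 2), componentwise over GF(2)
    return tuple((sum(a * v for a, v in zip(row, x)) + bi) % 2
                 for row, bi in zip(A, b))

def joint_distribution(i, x, y, rows):
    """Counts of (h(x), h(y)) over all hash functions whose rows come from `rows`."""
    counts = {}
    for A in product(rows, repeat=i):
        for b in product([0, 1], repeat=i):
            key = (parity_hash(A, b, x), parity_hash(A, b, y))
            counts[key] = counts.get(key, 0) + 1
    return counts

n, i = 3, 1
x, y = (0, 0, 0), (0, 0, 1)

# Dense family: every row in {0,1}^n allowed -> pairwise independent,
# so all four outcomes for (h(x), h(y)) occur equally often.
dense_rows = list(product([0, 1], repeat=n))
dense = joint_distribution(i, x, y, dense_rows)

# Sparse family (k = 1): rows with at most one 1 -> marginals stay uniform,
# but the joint distribution is skewed toward h(x) = h(y).
sparse_rows = [r for r in dense_rows if sum(r) <= 1]
sparse = joint_distribution(i, x, y, sparse_rows)
```

This mirrors the k = 1 remark in the outline: a single-variable parity constraint fixes one coordinate, so configurations differing only in the other coordinates almost always hash together.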
