Inference Beyond the Limits


Inference by replication - a schematic description

  • Funding source: The Leverhulme Trust (F/00 250/M)
  • Funding amount: £ 154,017
  • Duration: 9/2010-9/2013

Project Background

Approximate Bayesian inference techniques arguably offer the most principled approach to information extraction, decision making and machine learning, combining a rigorous statistical framework with a feasible and systematic approximation. They facilitate the derivation of approximate expected values from local calculations that scale well, linearly or quadratically, with the system size. Such approaches have proved useful in a broad range of applications, from telecommunication to bioinformatics; they are particularly important in the analysis of complex nonlinear systems, whose large scale prohibits obtaining numerical solutions, whether deterministic or probabilistic.
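As a minimal illustration of how local message passing yields marginals at a cost linear in the system size, the sketch below runs sum-product (belief propagation) on a one-dimensional Ising chain. The toy model and function names are our own choices for this note, not part of the project's algorithm:

```python
import numpy as np

def chain_marginals(h, J, beta=1.0):
    """Exact single-spin marginals of a 1-D Ising chain with
    P(s) ∝ exp(beta * (sum_i h_i s_i + sum_i J_i s_i s_{i+1})),
    computed by sum-product message passing; cost is O(N)."""
    N = len(h)
    states = np.array([-1.0, 1.0])
    # Local evidence psi_i(s_i) and pairwise factors phi_i(s_i, s_{i+1})
    psi = np.exp(beta * np.outer(h, states))                      # shape (N, 2)
    phi = [np.exp(beta * J[i] * np.outer(states, states)) for i in range(N - 1)]

    # Forward messages: m_f[i](s_i) = sum over the chain to the left of i
    m_f = np.ones((N, 2))
    for i in range(1, N):
        m_f[i] = (psi[i - 1] * m_f[i - 1]) @ phi[i - 1]
        m_f[i] /= m_f[i].sum()                                    # normalise
    # Backward messages: m_b[i](s_i) = sum over the chain to the right of i
    m_b = np.ones((N, 2))
    for i in range(N - 2, -1, -1):
        m_b[i] = phi[i] @ (psi[i + 1] * m_b[i + 1])
        m_b[i] /= m_b[i].sum()

    marg = psi * m_f * m_b
    return marg / marg.sum(axis=1, keepdims=True)
```

On a chain (a tree) these messages are exact; the fragmented-solution-space regimes discussed below are precisely where such naive message passing on loopy graphs breaks down.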
The main difficulty with most current belief propagation-based algorithms is that they break down completely in regimes characterised by a fragmented solution space, where multiple solutions give rise to conflicting messages and averages over all possible solutions are required. The statistical physics methodology, which is exact in the large-system limit and does not rely on simulations, offers insight into the source of the problem and hence into possible ways to rectify it.
The current project is centred on a new virtual-replication-based inference algorithm recently introduced by the proposers, in which averages over infinitely many virtually replicated systems are carried out by saddle-point methods, using assumptions about the increasingly complex correlations between solutions. The existing method is limited to problems that can be mapped onto densely connected graphs with real-valued data and noise models; these ingredients are essential to the current method, since the dense connectivity invokes the law of large numbers, while the real-valued variables and noise facilitate a one-to-one mapping between the variable and evidential (factor) nodes. The project aims to gain a better understanding of the method and of how it should be applied to problems of a different nature that are of both practical and theoretical significance, such as lossy compression and decoding in error-correcting codes beyond the existing computationally feasible limits.
The research programme includes theoretical aspects, such as the development of generic algorithms, as well as applied activities tackling specific challenging problems of high practical relevance. The generic activities include the development of efficient inference algorithms for systems with discrete evidential (factor) nodes and noise models, as well as for highly constrained, sparsely connected systems beyond the point where the solution-space fragmentation phenomenon occurs; applying our method to both classes of problems is substantially more difficult than for the densely connected systems with real-valued noise and data variables handled so far, as they lack the main ingredients (dense connectivity, real-valued variables) that facilitate the calculation in the original formulation. The specific applications we will address include a prohibitive inference problem arising in a capacity-achieving source coding (lossy compression) technique, characterised by discrete data variables and noise model, and decoding in Low-Density Parity-Check (LDPC) error-correcting codes beyond the dynamical transition, where the solution space becomes fragmented and which defines the current decoding limits. Both problems pose formidable challenges and have remained unsolved for several decades; they will serve as exemplars to highlight the potential of the new approach. Finally, we will apply increasingly complex variants of our model to one of the most celebrated models of statistical physics, the Sherrington-Kirkpatrick (SK) model, as well as to sparsely connected benchmark problems, in order to evaluate both the improvements provided by our approach and its limitations. All the problems considered are of great interest theoretically and for practical applications of commercial value; moreover, a successful application of our method to these different classes of problems will open up new possibilities for its use in a broad range of inference tasks.
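For readers unfamiliar with parity-check decoding, the following self-contained sketch shows hard-decision bit-flipping decoding on the tiny (7,4) Hamming code. This is only a point of reference: the project concerns belief-propagation decoding of large sparse LDPC codes near their limits, whereas this textbook code and Gallager-style bit flipping are our own simplified stand-ins:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; real LDPC matrices
# are much larger and sparse, but the decoding principle is the same.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(H, word, max_iters=10):
    """Hard-decision bit flipping: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                       # all parity checks satisfied
        votes = syndrome @ H                  # unsatisfied checks per bit
        word[np.argmax(votes)] ^= 1           # flip the worst offender
    return word

codeword = np.zeros(7, dtype=int)             # the all-zero codeword
received = codeword.copy()
received[6] ^= 1                              # a single channel bit flip
decoded = bit_flip_decode(H, received)        # recovers the codeword
```

Beyond a certain noise level (for LDPC codes, the dynamical transition mentioned above) such message-based decoders fail even though the true codeword remains information-theoretically recoverable, which is the gap the project targets.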


The proposal is interdisciplinary, combining methodology from statistical physics and Bayesian statistics to address challenging problems in the areas of telecommunication, computer science and physics. It exploits insight from statistical physics to provide improved inference methods for highly constrained sparsely and densely connected systems. In this project we will:
(a) Extend the new approach to devise principled inference algorithms for problems that can be mapped onto dense graphs with discrete evidential nodes.
(b) Apply the new algorithm to a capacity achieving method for lossy compression.
(c) Extend the new inference algorithm to problems that can be mapped onto sparse graphs.
(d) Apply the new algorithm to a challenging practical problem, for instance, LDPC decoding beyond the existing decoding limits.
(e) Apply the new algorithm to key models in statistical physics such as the SK model to show it can obtain improved solutions by adopting increasingly more complex solution space structures.
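Item (e) refers to the SK model. As a self-contained point of reference, a minimal Metropolis sampler of the SK Hamiltonian is sketched below; this is standard Monte Carlo, not the project's replication algorithm, and the parameter choices (system size, temperature, sweep count) are ours:

```python
import numpy as np

def sk_metropolis(N=64, beta=1.5, sweeps=200, seed=0):
    """Metropolis sampling of the Sherrington-Kirkpatrick model
    H(s) = -sum_{i<j} J_ij s_i s_j with J_ij ~ N(0, 1/N) couplings."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, N)) / np.sqrt(N)
    J = np.triu(J, 1)
    J = J + J.T                               # symmetric, zero diagonal
    s = rng.choice([-1, 1], size=N)
    energy = -0.5 * s @ J @ s
    for _ in range(sweeps):
        for i in rng.permutation(N):
            dE = 2.0 * s[i] * (J[i] @ s)      # energy change if s_i flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
                energy += dE
    return s, energy / N                      # spins and energy per spin
```

At low temperature the sampler settles into one of many low-energy configurations; capturing averages over such a fragmented landscape, rather than a single configuration, is what the replication-based approach is designed for.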


The main achievements of the project lie in devising a general distributed inference method for tackling problems that can be mapped onto densely connected graphs in which the evidential (data) nodes are discrete. The method has been applied to the hard computational problems of Ising perceptron training, lossy compression, inference in instances of the Sherrington-Kirkpatrick model, and the inverse Ising model, all of which are relevant to both application domains and theoretical research. We showed how such methods can be used for these various problems, gaining a better understanding of their properties, their advantages and drawbacks, and the related computational complexity. In a separate study we also investigated the dynamics of non-equilibrium systems held at different temperatures but coupled through cross-system interactions, studying the stationary solutions obtained and their stability.


Journal papers

  1. R. Alamino, J. Neirotti and D. Saad, Replication-based Inference Algorithms for Hard Computational Problems, Phys. Rev. E 88, 013313 (2013)
  2. R.C. Alamino, J. Neirotti and D. Saad, Compression Through Replication, Phys. Rev. E 89,  (2014)
  3. R.C. Alamino, A. Chattopadhyay and D. Saad, Interacting Non-equilibrium Systems with Two Temperatures, Phys. Rev. E 87, 052123 (2013)