The Bayesian graphical representation of a factor graph is equivalent to both a (sparse) matrix representation and an optimization representation, as shown in Figure 1.
Optimization Representation
The goal of a maximum likelihood estimator (MLE) is to find the values of the hidden variables that maximize the probability of the observed measurements; MLE converts the factor graph into an optimization problem. Let \(\mathbf{X}\) be the set of all states from \(0\) to \(N\). The maximum-probability state estimate maximizes the product of the factors, which, for Gaussian factors, is equivalent to minimizing a sum of weighted squared residuals:

\[
\hat{\mathbf{X}} = \operatorname*{argmax}_{\mathbf{X}} \prod_i \phi_i(\mathbf{X}) = \operatorname*{argmin}_{\mathbf{X}} \sum_i \left\| h_i(\mathbf{X}) - \mathbf{z}_i \right\|_{\Sigma_i}^2,
\]

where \(\phi_i\) is the \(i\)-th factor, \(h_i\) is its measurement model, \(\mathbf{z}_i\) is its measurement, and \(\Sigma_i\) is its covariance.
This is a non-linear least-squares problem and can be solved with standard optimization methods such as Gauss-Newton or Levenberg-Marquardt. While converting a factor graph to its equivalent optimization problem explains at a high level how to find the maximum likelihood variables via weighted least squares, the true computational savings are realized by analyzing the sparse matrix representation.
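As a concrete illustration, the weighted least-squares formulation above can be solved numerically with an off-the-shelf optimizer. The sketch below uses a hypothetical 1-D pose chain (states \(x_0,\dots,x_3\), one prior factor, and three odometry factors); the measurement values and variable names are invented for the example, and unit covariances are assumed.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 1-D pose chain: a prior factor on x_0 and
# odometry ("dynamics") factors between consecutive states.
prior = 0.0                       # measured value of x_0
odom = np.array([1.0, 1.1, 0.9])  # measured increments x_{i+1} - x_i

def residuals(X):
    """Stack one residual per factor (unit covariances assumed)."""
    r = [X[0] - prior]                  # prior factor
    r += list((X[1:] - X[:-1]) - odom)  # odometry factors
    return np.array(r)

X0 = np.zeros(4)                  # initial guess for the hidden variables
sol = least_squares(residuals, X0)
print(sol.x)  # ≈ [0.0, 1.0, 2.1, 3.0]
```

With as many residuals as unknowns, the optimizer drives every residual to zero; with redundant measurements it would instead return the weighted least-squares compromise.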
Sparse Matrix Representation
The Bayesian graph can be converted into an adjacency matrix \(\mathbf{L}\), as shown in Figure 2, in which the squares mark the non-zero elements. Every column of \(\mathbf{L}\) corresponds to a hidden variable, and every row corresponds to a factor; the graph structure enforces sparsity, since each factor touches only a few variables. The \(d_i\)s are the "special" factors that correspond to the dynamics of the system.
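To make the sparsity pattern concrete, the sketch below builds the factor/variable incidence matrix for the same hypothetical 4-state chain (one prior factor plus three dynamics factors \(d_i\)); the dimensions and entries are assumptions for illustration, not taken from Figure 2.

```python
import numpy as np
from scipy.sparse import lil_matrix

# Rows are factors, columns are hidden variables.
n_states, n_factors = 4, 4
L = lil_matrix((n_factors, n_states))

L[0, 0] = 1.0            # the prior factor touches only x_0
for i in range(3):       # each dynamics factor d_i touches x_i and x_{i+1}
    L[1 + i, i] = -1.0
    L[1 + i, i + 1] = 1.0

print(L.toarray())
# Each row has at most two non-zeros, so the matrix stays sparse
# no matter how many states are added to the chain.
```

This row-wise locality is exactly what sparse factorization methods exploit to keep the cost of solving the least-squares problem far below that of a dense solve.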