JSBA - a Java interface for generic Sparse Bundle Adjustment

SBA is a bundle adjustment library based on the Levenberg-Marquardt algorithm, developed by Manolis Lourakis. More information can be found on the SBA page. Detailed descriptions of the theory behind SBA can be found in the corresponding ACM TOMS paper or in the 2004 ICS/FORTH Technical Report #340, entitled The Design and Implementation of a Generic Sparse Bundle Adjustment Software Package Based on the Levenberg-Marquardt Algorithm. An overview presentation of SBA and bundle adjustment can be found in these slides.

Sparse BA is also discussed in Appendix 6 of Hartley & Zisserman's Multiple View Geometry in Computer Vision Second Edition.

Theory (from M. Lourakis slides)

1. Levenberg-Marquardt algorithm (LM)

Let \(f: \mathcal{R}^{m}\rightarrow\mathcal{R}^{n}\). Given an initial estimate \(p_{0}\in\mathcal{R}^{m}\) and a measurement vector \(x\in{}\mathcal{R}^{n}\), LM seeks the \(p^{+}\) that minimizes the squared error \(\epsilon^{T}\epsilon\), where \[\epsilon = x - f(p)\]

This minimization problem is a nonlinear least squares problem since \(\epsilon^{T}\epsilon\ =\ ||x\ -\ f(p)||^{2}\), where \(||x||\) denotes the L2 norm of \(x\).

The minimizer can be found by the Gauss-Newton method, which iteratively linearizes \(f\) at \(p\) and determines incremental update steps \(\delta_{p}\) by solving the normal equations \[\mathbf{J}^{T}\mathbf{J}\delta_{p} = \mathbf{J}^{T}\epsilon\]

where \(\mathbf{J}\) is the Jacobian of \(f\) at \(p\) and \(\mathbf{J}^{T}\mathbf{J}\) is the approximate Hessian of \(||\epsilon||^{2}\).

To ensure convergence, LM uses damping: it adaptively alters the diagonal elements of \(\mathbf{J}^{T}\mathbf{J}\) and solves the augmented normal equations \((\mathbf{J}^{T}\mathbf{J} + \mu{}\mathbf{I})\delta_{p} = \mathbf{J}^{T}\epsilon,\ \mu > 0\). Large values of \(\mu\) make the step behave like gradient descent, small values like Gauss-Newton; \(\mu\) is decreased after successful steps and increased after failed ones.
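As a concrete illustration of the damped iteration, here is a minimal, self-contained LM sketch in Java for a toy two-parameter model \(f(x; a, b) = a e^{bx}\). The model, the data, and the update schedule for \(\mu\) are illustrative assumptions, not part of SBA, which exploits the sparse block structure described below instead of dense normal equations.

```java
// Minimal Levenberg-Marquardt sketch for a toy 2-parameter model
// f(x; a, b) = a * exp(b * x).  Illustrative only.
public class LMDemo {
    static double model(double a, double b, double x) { return a * Math.exp(b * x); }

    // Sum of squared residuals ||x - f(p)||^2 for the current parameters.
    static double sse(double[] xs, double[] ys, double a, double b) {
        double s = 0;
        for (int k = 0; k < xs.length; k++) {
            double r = ys[k] - model(a, b, xs[k]);
            s += r * r;
        }
        return s;
    }

    public static double[] fit(double[] xs, double[] ys, double a, double b) {
        double mu = 1e-3;  // damping: large mu ~ gradient descent, small mu ~ Gauss-Newton
        for (int iter = 0; iter < 200; iter++) {
            // Accumulate J^T J (2x2) and J^T eps (2x1) over all measurements.
            double h00 = 0, h01 = 0, h11 = 0, g0 = 0, g1 = 0;
            for (int k = 0; k < xs.length; k++) {
                double e  = Math.exp(b * xs[k]);
                double j0 = e;              // df/da
                double j1 = a * xs[k] * e;  // df/db
                double r  = ys[k] - a * e;  // residual eps = x - f(p)
                h00 += j0 * j0; h01 += j0 * j1; h11 += j1 * j1;
                g0  += j0 * r;  g1  += j1 * r;
            }
            // Solve the augmented normal equations (J^T J + mu I) delta = J^T eps
            // (2x2 system, closed form).
            double d00 = h00 + mu, d11 = h11 + mu;
            double det = d00 * d11 - h01 * h01;
            double da  = ( d11 * g0 - h01 * g1) / det;
            double db  = (-h01 * g0 + d00 * g1) / det;
            if (sse(xs, ys, a + da, b + db) < sse(xs, ys, a, b)) {
                a += da; b += db; mu *= 0.5;  // step accepted: relax damping
            } else {
                mu *= 10;                     // step rejected: increase damping
            }
            if (Math.abs(da) + Math.abs(db) < 1e-12) break;
        }
        return new double[]{a, b};
    }
}
```

Called on noise-free samples of \(2e^{0.5x}\) from the starting point \((a, b) = (1, 0)\), the iteration recovers parameters close to \((2, 0.5)\).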

2. Bundle adjustment (BA)

Consider a set of \(n\) 3D points seen in \(m\) views. Let \(x_{ij}\) be the projection of the \(i^{th}\) 3D point on the \(j^{th}\) image, \(a_{j}\) the vector of parameters for camera \(j\), and \(b_{i}\) the vector of parameters for point \(i\). Bundle adjustment minimizes the reprojection error over all point and camera parameters; more formally:\[\min_{a_{j}, b_{i}}\displaystyle\sum_{i=1}^{n}\displaystyle\sum_{j=1}^{m}v_{ij}d(Q(a_{j}, b_{i}), x_{ij})^{2}\] where \(v_{ij}\) is an indicator variable equal to 1 if point \(i\) is visible in image \(j\) and 0 otherwise, \(Q(a_{j}, b_{i})\) is the predicted projection of point \(i\) on image \(j\), and \(d(\cdot,\cdot)\) denotes the Euclidean distance.
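The cost above is straightforward to evaluate once \(Q\), \(d\), and the visibility flags are fixed. The following Java sketch assumes a deliberately simplified "camera" \(Q(a_{j}, b_{i}) = b_{i} - a_{j}\) (a 2D translation, not a real projective model) purely to show the double sum and the role of \(v_{ij}\):

```java
// Toy evaluation of the bundle adjustment cost
// sum_i sum_j v_ij * d(Q(a_j, b_i), x_ij)^2
// with a simplified 2D-translation "camera": Q(a_j, b_i) = b_i - a_j.
public class BACost {
    // cams[j] = a_j (2D), pts[i] = b_i (2D), obs[i][j] = x_ij (2D),
    // vis[i][j] = v_ij (point i visible in image j).
    public static double cost(double[][] cams, double[][] pts,
                              double[][][] obs, boolean[][] vis) {
        double sum = 0;
        for (int i = 0; i < pts.length; i++) {
            for (int j = 0; j < cams.length; j++) {
                if (!vis[i][j]) continue;           // v_ij = 0: term dropped
                double qx = pts[i][0] - cams[j][0]; // Q(a_j, b_i)
                double qy = pts[i][1] - cams[j][1];
                double dx = qx - obs[i][j][0];      // d(Q(a_j, b_i), x_ij)
                double dy = qy - obs[i][j][1];
                sum += dx * dx + dy * dy;           // squared distance
            }
        }
        return sum;
    }
}
```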

This minimization problem is large: if \(\kappa\) and \(\lambda\) are respectively the dimensions of the vectors \(a_{j}\) and \(b_{i}\), the total number of parameters involved in bundle adjustment is \(m\kappa + n\lambda\). For example, \(m = 100\) cameras with \(\kappa = 7\) and \(n = 10000\) points with \(\lambda = 3\) already yield \(30700\) parameters.

3. Bundle adjustment as nonlinear least squares problem

Let \(P = (a_{1}^{T},\ldots,a_{m}^{T},b_{1}^{T},\ldots,b_{n}^{T})^{T}\) be a parameter vector defined by partitioning parameters and let \(X = (x_{11}^{T}, \ldots, x_{1m}^{T}, x_{21}^{T}, \ldots, x_{2m}^{T}, \ldots, x_{n1}^{T}, \ldots, x_{nm}^{T})^{T}\) be a measurement vector.

For each parameter vector, let \(\hat{X} = (\hat{x}_{11}^{T}, \ldots, \hat{x}_{1m}^{T}, \hat{x}_{21}^{T}, \ldots, \hat{x}_{2m}^{T}, \ldots, \hat{x}_{n1}^{T}, \ldots, \hat{x}_{nm}^{T})^{T}\) be an estimated measurement with \(\hat{x}_{ij} \equiv Q(a_{j}, b_{i})\).

Finally, let \(\epsilon = (\epsilon_{11}^{T}, \ldots, \epsilon_{1m}^{T}, \epsilon_{21}^{T}, \ldots, \epsilon_{2m}^{T}, \ldots, \epsilon_{n1}^{T}, \ldots, \epsilon_{nm}^{T})^{T}\) be the corresponding error vector with \(\epsilon_{ij} \equiv x_{ij} - \hat{x}_{ij}\).

With the above definitions, Bundle adjustment computation is the minimization over \(P\) of: \[\displaystyle\sum_{i=1}^{n}\displaystyle\sum_{j=1}^{m}\|\epsilon_{ij}\|^{2} = \|X - \hat{X}\|^{2}\]

This problem is a nonlinear least squares problem.

4. Jacobian block structure

The Jacobian \(\mathbf{J} = \frac{\partial{}\hat{X}}{\partial{}P}\) has a block structure \(\begin{bmatrix}\mathbf{A}|\mathbf{B}\end{bmatrix}\) with \(\mathbf{A} = \begin{bmatrix}\frac{\partial{}\hat{X}}{\partial{}a}\end{bmatrix}\) and \(\mathbf{B} = \begin{bmatrix}\frac{\partial{}\hat{X}}{\partial{}b}\end{bmatrix}\).

If we denote by \((\delta_{a}^{T}, \delta_{b}^{T})^{T}\) the partitioning of the LM update vector \(\delta\), the normal equations become:
\[ \left[ \begin{array}{c|c} \mathbf{A}^{T}\mathbf{A} & \mathbf{A}^{T}\mathbf{B} \\ \hline \mathbf{B}^{T}\mathbf{A} & \mathbf{B}^{T}\mathbf{B} \end{array} \right] \begin{pmatrix} \delta_{a} \\ \delta_{b} \end{pmatrix} = \begin{pmatrix} \mathbf{A}^{T}\epsilon \\ \mathbf{B}^{T}\epsilon \end{pmatrix} \]

The left-hand side matrix above is sparse because \(\mathbf{A}\) and \(\mathbf{B}\) are sparse: \(\hat{x}_{ij}\) depends only on the parameters of camera \(j\) and point \(i\), so \(\forall{} k \neq j, \frac{\partial\hat{x}_{ij}}{\partial{}a_{k}} = 0\) and \(\forall{} k \neq i, \frac{\partial\hat{x}_{ij}}{\partial{}b_{k}} = 0\). This Jacobian block structure is called the primary structure of bundle adjustment.

5. Hessian block computation

Let \(\mathbf{A}_{ij} = \frac{\partial{}\hat{x}_{ij}}{\partial{}a_{j}}\) and \(\mathbf{B}_{ij} = \frac{\partial{}\hat{x}_{ij}}{\partial{}b_{i}}\), and define \(\mathbf{U}_{j} = \sum_{i}\mathbf{A}_{ij}^{T}\mathbf{A}_{ij}\), \(\mathbf{V}_{i} = \sum_{j}\mathbf{B}_{ij}^{T}\mathbf{B}_{ij}\) and \(\mathbf{W}_{ij} = \mathbf{A}_{ij}^{T}\mathbf{B}_{ij}\). The approximate Hessian \(\mathbf{J}^{T}\mathbf{J}\) can then be written: \[ \mathbf{J}^{T}\mathbf{J} = \begin{bmatrix} \mathbf{U}_{1} & & & \mathbf{W}_{11} & \cdots & \mathbf{W}_{n1} \\ & \ddots & & \vdots & & \vdots \\ & & \mathbf{U}_{m} & \mathbf{W}_{1m} & \cdots & \mathbf{W}_{nm} \\ \mathbf{W}_{11}^{T} & \cdots & \mathbf{W}_{1m}^{T} & \mathbf{V}_{1} & & \\ \vdots & & \vdots & & \ddots & \\ \mathbf{W}_{n1}^{T} & \cdots & \mathbf{W}_{nm}^{T} & & & \mathbf{V}_{n} \end{bmatrix} = \begin{bmatrix}\mathbf{U} & \mathbf{W} \\ \mathbf{W}^{T} & \mathbf{V} \end{bmatrix} \]


Note that \(\mathbf{U}\) and \(\mathbf{V}\) are block diagonal, while \(\mathbf{W}\) is, in general, sparse with arbitrary structure.
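Under the definitions above, assembling \(\mathbf{U}\), \(\mathbf{V}\) and \(\mathbf{W}\) amounts to accumulating small block products over the visible observations. The following Java sketch uses dense double[][] blocks and a null-for-invisible convention purely for illustration; SBA itself stores these blocks in compressed sparse form.

```java
// Sketch of assembling the block Hessian from the Jacobian blocks
// A_ij = d x_ij / d a_j and B_ij = d x_ij / d b_i (dense toy representation).
public class HessianBlocks {
    // Returns A^T * B for dense blocks A (k x r) and B (k x c).
    static double[][] atb(double[][] A, double[][] B) {
        int r = A[0].length, c = B[0].length, k = A.length;
        double[][] out = new double[r][c];
        for (int p = 0; p < k; p++)
            for (int i = 0; i < r; i++)
                for (int j = 0; j < c; j++)
                    out[i][j] += A[p][i] * B[p][j];
        return out;
    }

    static void addInPlace(double[][] dst, double[][] src) {
        for (int i = 0; i < dst.length; i++)
            for (int j = 0; j < dst[0].length; j++)
                dst[i][j] += src[i][j];
    }

    // A[i][j], B[i][j]: Jacobian blocks for point i in camera j (null if unseen).
    // Fills U[j] = sum_i A_ij^T A_ij, V[i] = sum_j B_ij^T B_ij, W[i][j] = A_ij^T B_ij.
    // U and V must be passed in as zero-initialized blocks of the right sizes.
    public static void assemble(double[][][][] A, double[][][][] B,
                                double[][][] U, double[][][] V, double[][][][] W) {
        int n = A.length, m = A[0].length;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                if (A[i][j] == null) continue;  // v_ij = 0: no contribution
                addInPlace(U[j], atb(A[i][j], A[i][j]));
                addInPlace(V[i], atb(B[i][j], B[i][j]));
                W[i][j] = atb(A[i][j], B[i][j]);
            }
    }
}
```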

6. Solving the augmented normal equations

The augmented normal equations \((\mathbf{J}^{T}\mathbf{J} + \mu\mathbf{I})\delta_{p} = \mathbf{J}^{T}\epsilon\) take the form: \[ \begin{pmatrix} \mathbf{U}^{*} & \mathbf{W} \\ \mathbf{W}^{T} & \mathbf{V}^{*} \end{pmatrix} \begin{pmatrix} \delta_{a} \\ \delta_{b} \end{pmatrix} = \begin{pmatrix} \epsilon_{a} \\ \epsilon_{b} \end{pmatrix} \] where \(\mathbf{U}^{*}\) and \(\mathbf{V}^{*}\) denote \(\mathbf{U}\) and \(\mathbf{V}\) with their diagonals augmented by \(\mu\), and \(\epsilon_{a} = \mathbf{A}^{T}\epsilon\), \(\epsilon_{b} = \mathbf{B}^{T}\epsilon\).

Performing block Gaussian elimination on the left-hand side matrix, \(\delta_{a}\) is determined first by solving (e.g. via Cholesky factorization) the system built on the Schur complement of \(\mathbf{V}^{*}\): \[ (\mathbf{U}^{*} - \mathbf{W}\mathbf{V}^{*^{-1}}\mathbf{W}^{T})\delta_{a} = \epsilon_{a} - \mathbf{W}\mathbf{V}^{*^{-1}}\epsilon_{b} \] with: \[ \mathbf{V}^{*^{-1}} = \begin{pmatrix} \mathbf{V}_{1}^{*^{-1}} & 0 & \cdots \\ 0 & \mathbf{V}_{2}^{*^{-1}} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \] Since \(\mathbf{V}^{*}\) is block diagonal, its inverse is cheap to compute. Solving for \(\delta_{a}\) first pays off because \(m \ll n\) (there are far more points than images).

\(\delta_{b}\) can be obtained by back substitution into: \[ \mathbf{V}^{*}\delta_{b} = \epsilon_{b} - \mathbf{W}^{T}\delta_{a} \]
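The elimination and back substitution above can be illustrated on the smallest possible instance: one scalar camera parameter and one scalar point parameter. This is purely illustrative (real systems involve block matrices and Cholesky factorizations), but it follows the same two steps:

```java
// Scalar illustration of the Schur-complement elimination:
// solve [u w; w v] [da; db] = [ea; eb] by eliminating db first.
public class SchurDemo {
    public static double[] solve(double u, double w, double v, double ea, double eb) {
        double vinv = 1.0 / v;               // V*^{-1} (trivial in the scalar case)
        double s = u - w * vinv * w;         // reduced camera "matrix" S
        double da = (ea - w * vinv * eb) / s;  // solve S * da = ea - W V*^{-1} eb
        double db = (eb - w * da) * vinv;      // back substitution: V* db = eb - W^T da
        return new double[]{da, db};
    }
}
```

For instance, the system \(4\delta_{a} + \delta_{b} = 5\), \(\delta_{a} + 2\delta_{b} = 3\) yields \(\delta_{a} = \delta_{b} = 1\).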

7. Reduced camera matrix

The left-hand side matrix \(\mathbf{S} = \mathbf{U}^{*} - \mathbf{W}\mathbf{V}^{*^{-1}}\mathbf{W}^{T}\) is referred to as the reduced camera matrix. Since not all scene points appear in all cameras, \(\mathbf{S}\) is itself sparse. This sparsity pattern is known as the secondary structure.

The secondary structure depends on the observed point tracks and is hard to predict. It is not crucial for performance up to a few hundred cameras.

For very large datasets, two classes of applications can be distinguished: visual mapping, where each 3D point is observed in only a few nearby images, and object-centered reconstruction, where many images observe the same points.

The Sparse Bundle Adjustment (SBA) algorithm is well suited to visual mapping and should not be used for centered objects observed in a large number of images.

Using JSBA

1. Integration with Maven projects

JSBA can be used as a Maven project dependency; the project's pom.xml file has to declare it as a dependency.
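For reference, a Maven dependency declaration has the following shape. The coordinates below (groupId, artifactId, version) are placeholders, not JSBA's actual published coordinates, which should be taken from the project's own distribution information:

```xml
<dependency>
  <!-- Placeholder coordinates: substitute JSBA's actual groupId,
       artifactId and version as published by the project. -->
  <groupId>org.example</groupId>
  <artifactId>jsba</artifactId>
  <version>1.0.0</version>
</dependency>
```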



2. Standalone download

If your project does not rely on Maven, you can simply add the JSBA JAR to your classpath; there are no additional dependencies to handle. The latest version of JSBA can be obtained from the following links:

JSBA 1.0.0 (the JAR archive including native libraries) - javadoc - sources (including native JNI / C sources)

3. Native libraries