.. _chapter-solving_faqs:

.. default-domain:: cpp

.. cpp:namespace:: ceres

=======
Solving
=======

#. How do I evaluate the Jacobian for a solved problem?

   Using :func:`Problem::Evaluate`.

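   A minimal sketch, assuming ``problem`` is the :class:`Problem` you
   just solved and that you want the Jacobian (in compressed row
   sparse form) at the final parameter values:

   .. code-block:: c++

      double cost = 0.0;
      std::vector<double> residuals;
      std::vector<double> gradient;
      ceres::CRSMatrix jacobian;
      // Evaluate() uses the current state of the parameter blocks,
      // i.e., the values they were left with when Solve() returned.
      problem.Evaluate(ceres::Problem::EvaluateOptions(),
                       &cost, &residuals, &gradient, &jacobian);

   The output arguments are optional; pass ``nullptr`` for the ones
   you do not need.
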
#. How do I choose the right linear solver?

   When using the ``TRUST_REGION`` minimizer, the choice of linear
   solver is an important decision. It affects solution quality and
   runtime. Here is a simple way to reason about it (a configuration
   sketch follows at the end of this list).

   1. For small (a few hundred parameters) or dense problems use
      ``DENSE_QR``.

   2. For general sparse problems (i.e., the Jacobian matrix has a
      substantial number of zeros) use
      ``SPARSE_NORMAL_CHOLESKY``. This requires that you have
      ``SuiteSparse`` or ``CXSparse`` installed.

   3. For bundle adjustment problems with up to a hundred or so
      cameras, use ``DENSE_SCHUR``.

   4. For larger bundle adjustment problems with sparse Schur
      complement/reduced camera matrices use ``SPARSE_SCHUR``. This
      requires that you build Ceres with support for ``SuiteSparse``,
      ``CXSparse``, or Eigen's sparse linear algebra libraries.

      If you do not have access to these libraries for whatever
      reason, ``ITERATIVE_SCHUR`` with ``SCHUR_JACOBI`` is an
      excellent alternative.

   5. For large bundle adjustment problems (a few thousand cameras or
      more) use the ``ITERATIVE_SCHUR`` solver. There are a number of
      preconditioner choices here. ``SCHUR_JACOBI`` offers an
      excellent balance of speed and accuracy. This is also the
      recommended option if you are solving medium sized problems for
      which ``DENSE_SCHUR`` is too slow but ``SuiteSparse`` is not
      available.

      .. NOTE::

        If you are solving small to medium sized problems, consider
        setting ``Solver::Options::use_explicit_schur_complement`` to
        ``true``; it can result in a substantial performance boost.

      If you are not satisfied with ``SCHUR_JACOBI``'s performance,
      try ``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL``, in that
      order. They require that you have ``SuiteSparse``
      installed. Both of these preconditioners use a clustering
      algorithm; use ``SINGLE_LINKAGE`` before ``CANONICAL_VIEWS``.

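   Putting this together, the choice is made through
   :class:`Solver::Options`. A minimal sketch, assuming a medium sized
   bundle adjustment problem for which ``DENSE_SCHUR`` is too slow and
   a ``problem`` that has already been populated with residual blocks:

   .. code-block:: c++

      ceres::Solver::Options options;
      options.linear_solver_type = ceres::ITERATIVE_SCHUR;
      options.preconditioner_type = ceres::SCHUR_JACOBI;
      // Worthwhile for small to medium sized problems, as noted above.
      options.use_explicit_schur_complement = true;
      // If you switch to CLUSTER_JACOBI or CLUSTER_TRIDIAGONAL, the
      // clustering algorithm is selected via
      // options.visibility_clustering_type (SINGLE_LINKAGE or
      // CANONICAL_VIEWS).

      ceres::Solver::Summary summary;
      ceres::Solve(options, &problem, &summary);
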
#. Use :func:`Solver::Summary::FullReport` to diagnose performance problems.

   When diagnosing Ceres performance issues, whether runtime or
   convergence, the first place to start is the output of
   ``Solver::Summary::FullReport``. Here is an example:

   .. code-block:: bash

      ./bin/bundle_adjuster --input ../data/problem-16-22106-pre.txt

      iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
         0  4.185660e+06    0.00e+00    2.16e+07   0.00e+00   0.00e+00  1.00e+04       0    7.50e-02    3.58e-01
         1  1.980525e+05    3.99e+06    5.34e+06   2.40e+03   9.60e-01  3.00e+04       1    1.84e-01    5.42e-01
         2  5.086543e+04    1.47e+05    2.11e+06   1.01e+03   8.22e-01  4.09e+04       1    1.53e-01    6.95e-01
         3  1.859667e+04    3.23e+04    2.87e+05   2.64e+02   9.85e-01  1.23e+05       1    1.71e-01    8.66e-01
         4  1.803857e+04    5.58e+02    2.69e+04   8.66e+01   9.93e-01  3.69e+05       1    1.61e-01    1.03e+00
         5  1.803391e+04    4.66e+00    3.11e+02   1.02e+01   1.00e+00  1.11e+06       1    1.49e-01    1.18e+00

      Ceres Solver v1.12.0 Solve Report
      ----------------------------------
                                           Original                  Reduced
      Parameter blocks                        22122                    22122
      Parameters                              66462                    66462
      Residual blocks                         83718                    83718
      Residual                               167436                   167436

      Minimizer                        TRUST_REGION

      Sparse linear algebra library    SUITE_SPARSE
      Trust region strategy     LEVENBERG_MARQUARDT

                                              Given                     Used
      Linear solver                    SPARSE_SCHUR             SPARSE_SCHUR
      Threads                                     1                        1
      Linear solver threads                       1                        1
      Linear solver ordering              AUTOMATIC                22106, 16

      Cost:
      Initial                          4.185660e+06
      Final                            1.803391e+04
      Change                           4.167626e+06

      Minimizer iterations                        5
      Successful steps                            5
      Unsuccessful steps                          0

      Time (in seconds):
      Preprocessor                            0.283

        Residual evaluation                   0.061
        Jacobian evaluation                   0.361
        Linear solver                         0.382
      Minimizer                               0.895

      Postprocessor                           0.002
      Total                                   1.220

      Termination:                   NO_CONVERGENCE (Maximum number of iterations reached.)

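   In your own program the same report is available from the
   :class:`Solver::Summary` filled in by :func:`Solve`. A minimal
   sketch, assuming ``options`` and ``problem`` have already been set
   up:

   .. code-block:: c++

      ceres::Solver::Summary summary;
      ceres::Solve(options, &problem, &summary);
      // FullReport() returns the report shown above as a std::string.
      std::cout << summary.FullReport() << "\n";
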
   Let us focus on run-time performance. The relevant lines to look at
   are:

   .. code-block:: bash

      Time (in seconds):
      Preprocessor                            0.283

        Residual evaluation                   0.061
        Jacobian evaluation                   0.361
        Linear solver                         0.382
      Minimizer                               0.895

      Postprocessor                           0.002
      Total                                   1.220

   This tells us that of the total 1.2 seconds, about 0.4 seconds was
   spent in the linear solver, with most of the rest going to
   preprocessing and Jacobian evaluation.

   The preprocessing seems particularly expensive. Looking back at the
   report, we observe

   .. code-block:: bash

      Linear solver ordering              AUTOMATIC                22106, 16

   This indicates that we are using automatic ordering for the
   ``SPARSE_SCHUR`` solver. This can be expensive at times. A
   straightforward way to deal with this is to supply the ordering
   manually. For ``bundle_adjuster`` this can be done by passing the
   flag ``-ordering=user`` (a code sketch for doing the same
   programmatically follows at the end of this section). Doing so and
   looking at the timing block of the full report gives us:

   .. code-block:: bash

      Time (in seconds):
      Preprocessor                            0.051

        Residual evaluation                   0.053
        Jacobian evaluation                   0.344
        Linear solver                         0.372
      Minimizer                               0.854

      Postprocessor                           0.002
      Total                                   0.935

   The preprocessor time has gone down by more than a factor of 5.5!
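
   For reference, supplying the ordering programmatically is done
   through ``Solver::Options::linear_solver_ordering``. A minimal
   sketch, where ``point_blocks`` and ``camera_blocks`` are
   hypothetical containers holding the parameter block pointers of
   the points and cameras respectively:

   .. code-block:: c++

      auto* ordering = new ceres::ParameterBlockOrdering;
      // Group 0 is eliminated first: the points.
      for (double* point : point_blocks) {
        ordering->AddElementToGroup(point, 0);
      }
      // Group 1 is eliminated next: the cameras.
      for (double* camera : camera_blocks) {
        ordering->AddElementToGroup(camera, 1);
      }
      options.linear_solver_ordering.reset(ordering);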