.. highlight:: c++

.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-nnls_tutorial:

========================
Non-linear Least Squares
========================

Introduction
============

Ceres can solve bounds constrained robustified non-linear least
squares problems of the form

.. math::
   :label: ceresproblem

   \min_{\mathbf{x}} &\quad \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right) \\
   \text{s.t.} &\quad l_j \le x_j \le u_j

Problems of this form come up in a broad range of areas across
science and engineering - from `fitting curves`_ in statistics, to
constructing `3D models from photographs`_ in computer vision.

.. _fitting curves: http://en.wikipedia.org/wiki/Nonlinear_regression
.. _3D models from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment

In this chapter we will learn how to solve :eq:`ceresproblem` using
Ceres Solver. Full working code for all the examples described in this
chapter and more can be found in the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory.

The expression
:math:`\rho_i\left(\left\|f_i\left(x_{i_1},...,x_{i_k}\right)\right\|^2\right)`
is known as a ``ResidualBlock``, where :math:`f_i(\cdot)` is a
:class:`CostFunction` that depends on the parameter blocks
:math:`\left[x_{i_1},... , x_{i_k}\right]`. In most optimization
problems small groups of scalars occur together. For example, the
three components of a translation vector and the four components of
the quaternion that define the pose of a camera. We refer to such a
group of small scalars as a ``ParameterBlock``. Of course a
``ParameterBlock`` can just be a single parameter. :math:`l_j` and
:math:`u_j` are bounds on the parameter block :math:`x_j`.

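In Ceres, such bounds are set on individual coordinates of a parameter
block via :class:`Problem`. A minimal sketch (the array ``x`` and the
bound values here are purely illustrative):

.. code-block:: c++

   double x[3] = {1.0, 2.0, 3.0};
   ceres::Problem problem;
   // ... AddResidualBlock() calls that depend on x go here ...
   problem.SetParameterLowerBound(x, 1, 0.0);  // l_1 = 0
   problem.SetParameterUpperBound(x, 1, 5.0);  // u_1 = 5
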
:math:`\rho_i` is a :class:`LossFunction`. A :class:`LossFunction` is
a scalar function that is used to reduce the influence of outliers on
the solution of non-linear least squares problems.

As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
the more familiar `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.

.. math::
   :label: ceresproblemnonrobust

   \frac{1}{2}\sum_{i} \left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2.

.. _section-hello-world:

Hello World!
============

To get started, consider the problem of finding the minimum of the
function

.. math:: \frac{1}{2}(10 - x)^2.

This is a trivial problem, whose minimum is located at :math:`x = 10`,
but it is a good place to start to illustrate the basics of solving a
problem with Ceres [#f1]_.

The first step is to write a functor that will evaluate the function
:math:`f(x) = 10 - x`:

.. code-block:: c++

   struct CostFunctor {
     template <typename T>
     bool operator()(const T* const x, T* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };

The important thing to note here is that ``operator()`` is a templated
method, which assumes that all its inputs and outputs are of some type
``T``. The use of templating here allows Ceres to call
``CostFunctor::operator<T>()``, with ``T=double`` when just the value
of the residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we will discuss the
various ways of supplying derivatives to Ceres in more detail.

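For instance, nothing prevents us from evaluating the functor with
plain ``double`` values directly -- a small illustration, not part of
the example itself:

.. code-block:: c++

   CostFunctor functor;
   const double x = 5.0;
   double residual = 0.0;
   functor(&x, &residual);  // residual is now 10.0 - 5.0 = 5.0
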
Once we have a way of computing the residual function, it is now time
to construct a non-linear least squares problem using it and have
Ceres solve it.

.. code-block:: c++

   int main(int argc, char** argv) {
     google::InitGoogleLogging(argv[0]);

     // The variable to solve for with its initial value.
     double initial_x = 5.0;
     double x = initial_x;

     // Build the problem.
     Problem problem;

     // Set up the only cost function (also known as residual). This uses
     // auto-differentiation to obtain the derivative (jacobian).
     CostFunction* cost_function =
         new AutoDiffCostFunction<CostFunctor, 1, 1>();
     problem.AddResidualBlock(cost_function, nullptr, &x);

     // Run the solver!
     Solver::Options options;
     options.linear_solver_type = ceres::DENSE_QR;
     options.minimizer_progress_to_stdout = true;
     Solver::Summary summary;
     Solve(options, &problem, &summary);

     std::cout << summary.BriefReport() << "\n";
     std::cout << "x : " << initial_x
               << " -> " << x << "\n";
     return 0;
   }

:class:`AutoDiffCostFunction` takes a ``CostFunctor`` as input,
automatically differentiates it and gives it a :class:`CostFunction`
interface. The template arguments ``<CostFunctor, 1, 1>`` are the
number of residuals followed by the sizes of each parameter block.

Compiling and running `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
gives us

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  4.512500e+01    0.00e+00    9.50e+00   0.00e+00   0.00e+00  1.00e+04        0    5.33e-04    3.46e-03
      1  4.511598e-07    4.51e+01    9.50e-04   9.50e+00   1.00e+00  3.00e+04        1    5.00e-04    4.05e-03
      2  5.012552e-16    4.51e-07    3.17e-08   9.50e-04   1.00e+00  9.00e+04        1    1.60e-05    4.09e-03
   Ceres Solver Report: Iterations: 2, Initial cost: 4.512500e+01, Final cost: 5.012552e-16, Termination: CONVERGENCE
   x : 0.5 -> 10

Starting from :math:`x = 5`, the solver in two iterations goes to 10
[#f2]_. The careful reader will note that this is a linear problem and
one linear solve should be enough to get the optimal value. The
default configuration of the solver is aimed at non-linear problems,
and for reasons of simplicity we did not change it in this example. It
is indeed possible to obtain the solution to this problem using Ceres
in one iteration. Also note that the solver did get very close to the
optimal function value of 0 in the very first iteration. We will
discuss these issues in greater detail when we talk about convergence
and parameter settings for Ceres.

.. rubric:: Footnotes

.. [#f1] `examples/helloworld.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
.. [#f2] Actually the solver ran for three iterations, and it was only
   by looking at the value returned by the linear solver in the third
   iteration that it observed that the update to the parameter block
   was too small and declared convergence. Ceres only prints out the
   display at the end of an iteration, and terminates as soon as it
   detects convergence, which is why you only see two iterations here
   and not three.

.. _section-derivatives:


Derivatives
===========

Ceres Solver, like most optimization packages, depends on being able
to evaluate the value and the derivatives of each term in the
objective function at arbitrary parameter values. Doing so correctly
and efficiently is essential to getting good results. Ceres Solver
provides a number of ways of doing so. You have already seen one of
them in action -- automatic differentiation in `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_.

We now consider the other two possibilities: analytic and numeric
derivatives.


Numeric Derivatives
-------------------

In some cases, it is not possible to define a templated cost functor,
for example when the evaluation of the residual involves a call to a
library function that you do not have control over. In such a
situation, numerical differentiation can be used. The user defines a
functor which computes the residual value and constructs a
:class:`NumericDiffCostFunction` using it. e.g., for :math:`f(x) = 10 - x`
the corresponding functor would be

.. code-block:: c++

   struct NumericDiffCostFunctor {
     bool operator()(const double* const x, double* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };

This functor is then added to the :class:`Problem` as:

.. code-block:: c++

   CostFunction* cost_function =
       new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>();
   problem.AddResidualBlock(cost_function, nullptr, &x);

Notice the parallel from when we were using automatic differentiation

.. code-block:: c++

   CostFunction* cost_function =
       new AutoDiffCostFunction<CostFunctor, 1, 1>();
   problem.AddResidualBlock(cost_function, nullptr, &x);

The construction looks almost identical to the one used for automatic
differentiation, except for an extra template parameter that indicates
the kind of finite differencing scheme to be used for computing the
numerical derivatives [#f3]_. For more details see the documentation
for :class:`NumericDiffCostFunction`.

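For reference, the ``ceres::CENTRAL`` scheme approximates each partial
derivative with the standard central difference

.. math:: \frac{\partial f}{\partial x} \approx \frac{f(x + h) - f(x - h)}{2h}

for a suitably chosen step size :math:`h`; ``ceres::FORWARD`` and
``ceres::RIDDERS`` are the other available schemes.
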
**Generally speaking we recommend automatic differentiation instead of
numeric differentiation. The use of C++ templates makes automatic
differentiation efficient, whereas numeric differentiation is
expensive, prone to numeric errors, and leads to slower convergence.**


Analytic Derivatives
--------------------

In some cases, using automatic differentiation is not possible. For
example, it may be the case that it is more efficient to compute the
derivatives in closed form instead of relying on the chain rule used
by the automatic differentiation code.

In such cases, it is possible to supply your own residual and Jacobian
computation code. To do this, define a subclass of
:class:`CostFunction` or :class:`SizedCostFunction` if you know the
sizes of the parameters and residuals at compile time. Here for
example is ``QuadraticCostFunction`` that implements :math:`f(x) = 10 -
x`.

.. code-block:: c++

   class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
    public:
     virtual ~QuadraticCostFunction() {}
     virtual bool Evaluate(double const* const* parameters,
                           double* residuals,
                           double** jacobians) const {
       const double x = parameters[0][0];
       residuals[0] = 10 - x;

       // Compute the Jacobian if asked for.
       if (jacobians != nullptr && jacobians[0] != nullptr) {
         jacobians[0][0] = -1;
       }
       return true;
     }
   };


``QuadraticCostFunction::Evaluate`` is provided with an input array of
``parameters``, an output array ``residuals`` for residuals and an
output array ``jacobians`` for Jacobians. The ``jacobians`` array is
optional; ``Evaluate`` is expected to check whether it is non-null,
and if so, fill it with the values of the derivative of the residual
function. In this case since the residual function is linear, the
Jacobian is constant [#f4]_.

As can be seen from the above code fragments, implementing
:class:`CostFunction` objects is a bit tedious. We recommend that
unless you have a good reason to manage the Jacobian computation
yourself, you use :class:`AutoDiffCostFunction` or
:class:`NumericDiffCostFunction` to construct your residual blocks.

More About Derivatives
----------------------

Computing derivatives is by far the most complicated part of using
Ceres, and depending on the circumstance the user may need more
sophisticated ways of computing derivatives. This section just
scratches the surface of how derivatives can be supplied to
Ceres. Once you are comfortable with using
:class:`NumericDiffCostFunction` and :class:`AutoDiffCostFunction` we
recommend taking a look at :class:`DynamicAutoDiffCostFunction`,
:class:`CostFunctionToFunctor`, :class:`NumericDiffFunctor` and
:class:`ConditionedCostFunction` for more advanced ways of
constructing and computing cost functions.

.. rubric:: Footnotes

.. [#f3] `examples/helloworld_numeric_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_numeric_diff.cc>`_.
.. [#f4] `examples/helloworld_analytic_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_analytic_diff.cc>`_.


.. _section-powell:

Powell's Function
=================

Consider now a slightly more complicated example -- the minimization
of Powell's function [#f5]_. Let :math:`x = \left[x_1, x_2, x_3, x_4 \right]`
and

.. math::

   \begin{align}
     f_1(x) &= x_1 + 10x_2 \\
     f_2(x) &= \sqrt{5} (x_3 - x_4)\\
     f_3(x) &= (x_2 - 2x_3)^2\\
     f_4(x) &= \sqrt{10} (x_1 - x_4)^2\\
     F(x) &= \left[f_1(x),\ f_2(x),\ f_3(x),\ f_4(x) \right]
   \end{align}


:math:`F(x)` is a function of four parameters, has four residuals
and we wish to find :math:`x` such that :math:`\frac{1}{2}\|F(x)\|^2`
is minimized.

Again, the first step is to define functors that evaluate the terms
in the objective function. Here is the code for evaluating
:math:`f_4(x_1, x_4)`:

.. code-block:: c++

   struct F4 {
     template <typename T>
     bool operator()(const T* const x1, const T* const x4, T* residual) const {
       residual[0] = sqrt(10.0) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
       return true;
     }
   };

Similarly, we can define classes ``F1``, ``F2`` and ``F3`` to evaluate
:math:`f_1(x_1, x_2)`, :math:`f_2(x_3, x_4)` and :math:`f_3(x_2, x_3)`
respectively; a sketch of ``F1`` is shown below.
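
``F1`` follows directly from the definition of :math:`f_1` above -- a
minimal sketch (the remaining functors, which live in
``examples/powell.cc``, are analogous):

.. code-block:: c++

   struct F1 {
     template <typename T>
     bool operator()(const T* const x1, const T* const x2, T* residual) const {
       // f1 = x1 + 10 * x2;
       residual[0] = x1[0] + 10.0 * x2[0];
       return true;
     }
   };

Using these, the problem can be constructed as follows: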

.. code-block:: c++

   double x1 = 3.0; double x2 = -1.0; double x3 = 0.0; double x4 = 1.0;

   Problem problem;

   // Add residual terms to the problem using the autodiff
   // wrapper to get the derivatives automatically.
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F1, 1, 1, 1>(), nullptr, &x1, &x2);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F2, 1, 1, 1>(), nullptr, &x3, &x4);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F3, 1, 1, 1>(), nullptr, &x2, &x3);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F4, 1, 1, 1>(), nullptr, &x1, &x4);


Note that each ``ResidualBlock`` only depends on the two parameters
that the corresponding residual object depends on and not on all four
parameters. Compiling and running `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_
gives us:

.. code-block:: bash

   Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.075000e+02    0.00e+00    1.55e+02   0.00e+00   0.00e+00  1.00e+04        0    2.91e-05    3.40e-04
      1  5.036190e+00    1.02e+02    2.00e+01   0.00e+00   9.53e-01  3.00e+04        1    4.98e-05    3.99e-04
      2  3.148168e-01    4.72e+00    2.50e+00   6.23e-01   9.37e-01  9.00e+04        1    2.15e-06    4.06e-04
      3  1.967760e-02    2.95e-01    3.13e-01   3.08e-01   9.37e-01  2.70e+05        1    9.54e-07    4.10e-04
      4  1.229900e-03    1.84e-02    3.91e-02   1.54e-01   9.37e-01  8.10e+05        1    1.91e-06    4.14e-04
      5  7.687123e-05    1.15e-03    4.89e-03   7.69e-02   9.37e-01  2.43e+06        1    1.91e-06    4.18e-04
      6  4.804625e-06    7.21e-05    6.11e-04   3.85e-02   9.37e-01  7.29e+06        1    1.19e-06    4.21e-04
      7  3.003028e-07    4.50e-06    7.64e-05   1.92e-02   9.37e-01  2.19e+07        1    1.91e-06    4.25e-04
      8  1.877006e-08    2.82e-07    9.54e-06   9.62e-03   9.37e-01  6.56e+07        1    9.54e-07    4.28e-04
      9  1.173223e-09    1.76e-08    1.19e-06   4.81e-03   9.37e-01  1.97e+08        1    9.54e-07    4.32e-04
     10  7.333425e-11    1.10e-09    1.49e-07   2.40e-03   9.37e-01  5.90e+08        1    9.54e-07    4.35e-04
     11  4.584044e-12    6.88e-11    1.86e-08   1.20e-03   9.37e-01  1.77e+09        1    9.54e-07    4.38e-04
     12  2.865573e-13    4.30e-12    2.33e-09   6.02e-04   9.37e-01  5.31e+09        1    2.15e-06    4.42e-04
     13  1.791438e-14    2.69e-13    2.91e-10   3.01e-04   9.37e-01  1.59e+10        1    1.91e-06    4.45e-04
     14  1.120029e-15    1.68e-14    3.64e-11   1.51e-04   9.37e-01  4.78e+10        1    2.15e-06    4.48e-04

   Solver Summary (v 2.2.0-eigen-(3.4.0)-lapack-suitesparse-(7.1.0)-metis-(5.1.0)-acceleratesparse-eigensparse)

                                        Original                  Reduced
   Parameter blocks                            4                        4
   Parameters                                  4                        4
   Residual blocks                             4                        4
   Residuals                                   4                        4

   Minimizer                        TRUST_REGION

   Dense linear algebra library            EIGEN
   Trust region strategy     LEVENBERG_MARQUARDT
                                           Given                     Used
   Linear solver                        DENSE_QR                 DENSE_QR
   Threads                                     1                        1
   Linear solver ordering              AUTOMATIC                        4

   Cost:
   Initial                          1.075000e+02
   Final                            1.120029e-15
   Change                           1.075000e+02

   Minimizer iterations                       15
   Successful steps                           15
   Unsuccessful steps                          0

   Time (in seconds):
   Preprocessor                         0.000311

     Residual only evaluation           0.000002 (14)
     Jacobian & residual evaluation     0.000023 (15)
     Linear solver                      0.000043 (14)
   Minimizer                            0.000163

   Postprocessor                        0.000012
   Total                                0.000486

   Termination:                      CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)

   Final x1 = 0.000146222, x2 = -1.46222e-05, x3 = 2.40957e-05, x4 = 2.40957e-05

It is easy to see that the optimal solution to this problem is at
:math:`x_1=0, x_2=0, x_3=0, x_4=0` with an objective function value of
:math:`0`. In 14 iterations, Ceres finds a solution with an objective
function value of :math:`\approx 1.12\times 10^{-15}`.

.. rubric:: Footnotes

.. [#f5] `examples/powell.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_.


.. _section-fitting:

Curve Fitting
=============

The examples we have seen until now are simple optimization problems
with no data. The original purpose of least squares and non-linear
least squares analysis was fitting curves to data. It is only
appropriate that we now consider an example of such a problem
[#f6]_. It contains data generated by sampling the curve :math:`y =
e^{0.3x + 0.1}` and adding Gaussian noise with standard deviation
:math:`\sigma = 0.2`. Let us fit this data to the curve

.. math:: y = e^{mx + c}.

We begin by defining a templated object to evaluate the
residual. There will be a residual for each observation.

.. code-block:: c++

   struct ExponentialResidual {
     ExponentialResidual(double x, double y)
         : x_(x), y_(y) {}

     template <typename T>
     bool operator()(const T* const m, const T* const c, T* residual) const {
       residual[0] = y_ - exp(m[0] * x_ + c[0]);
       return true;
     }

    private:
     // Observations for a sample.
     const double x_;
     const double y_;
   };

Assuming the observations are in a :math:`2n` sized array called
``data`` the problem construction is a simple matter of creating a
:class:`CostFunction` for every observation.


.. code-block:: c++

   double m = 0.0;
   double c = 0.0;

   Problem problem;
   for (int i = 0; i < kNumObservations; ++i) {
     CostFunction* cost_function =
         new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
             data[2 * i], data[2 * i + 1]);
     problem.AddResidualBlock(cost_function, nullptr, &m, &c);
   }

Compiling and running `examples/curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
gives us:

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.211734e+02    0.00e+00    3.61e+02   0.00e+00   0.00e+00  1.00e+04        0    5.34e-04    2.56e-03
      1  1.211734e+02   -2.21e+03    0.00e+00   7.52e-01  -1.87e+01  5.00e+03        1    4.29e-05    3.25e-03
      2  1.211734e+02   -2.21e+03    0.00e+00   7.51e-01  -1.86e+01  1.25e+03        1    1.10e-05    3.28e-03
      3  1.211734e+02   -2.19e+03    0.00e+00   7.48e-01  -1.85e+01  1.56e+02        1    1.41e-05    3.31e-03
      4  1.211734e+02   -2.02e+03    0.00e+00   7.22e-01  -1.70e+01  9.77e+00        1    1.00e-05    3.34e-03
      5  1.211734e+02   -7.34e+02    0.00e+00   5.78e-01  -6.32e+00  3.05e-01        1    1.00e-05    3.36e-03
      6  3.306595e+01    8.81e+01    4.10e+02   3.18e-01   1.37e+00  9.16e-01        1    2.79e-05    3.41e-03
      7  6.426770e+00    2.66e+01    1.81e+02   1.29e-01   1.10e+00  2.75e+00        1    2.10e-05    3.45e-03
      8  3.344546e+00    3.08e+00    5.51e+01   3.05e-02   1.03e+00  8.24e+00        1    2.10e-05    3.48e-03
      9  1.987485e+00    1.36e+00    2.33e+01   8.87e-02   9.94e-01  2.47e+01        1    2.10e-05    3.52e-03
     10  1.211585e+00    7.76e-01    8.22e+00   1.05e-01   9.89e-01  7.42e+01        1    2.10e-05    3.56e-03
     11  1.063265e+00    1.48e-01    1.44e+00   6.06e-02   9.97e-01  2.22e+02        1    2.60e-05    3.61e-03
     12  1.056795e+00    6.47e-03    1.18e-01   1.47e-02   1.00e+00  6.67e+02        1    2.10e-05    3.64e-03
     13  1.056751e+00    4.39e-05    3.79e-03   1.28e-03   1.00e+00  2.00e+03        1    2.10e-05    3.68e-03
   Ceres Solver Report: Iterations: 13, Initial cost: 1.211734e+02, Final cost: 1.056751e+00, Termination: CONVERGENCE
   Initial m: 0 c: 0
   Final   m: 0.291861 c: 0.131439

Starting from parameter values :math:`m = 0, c = 0` with an initial
objective function value of :math:`121.173` Ceres finds a solution
:math:`m = 0.291861, c = 0.131439` with an objective function value of
:math:`1.05675`. These values are a bit different from the
parameters of the original model :math:`m = 0.3, c = 0.1`, but this is
expected. When reconstructing a curve from noisy data, we expect to
see such deviations. Indeed, if you were to evaluate the objective
function for :math:`m = 0.3, c = 0.1`, the fit is worse with an
objective function value of :math:`1.082425`. The figure below
illustrates the fit.

.. figure:: least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Least squares curve fitting.

.. rubric:: Footnotes

.. [#f6] `examples/curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_


Robust Curve Fitting
====================

Now suppose the data we are given has some outliers, i.e., we have
some points that do not obey the noise model. If we were to use the
code above to fit such data, we would get a fit that looks as
below. Notice how the fitted curve deviates from the ground truth.

.. figure:: non_robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

To deal with outliers, a standard technique is to use a
:class:`LossFunction`. Loss functions reduce the influence of
residual blocks with high residuals, usually the ones corresponding to
outliers. To associate a loss function with a residual block, we change

.. code-block:: c++

   problem.AddResidualBlock(cost_function, nullptr, &m, &c);

to

.. code-block:: c++

   problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &m, &c);

:class:`CauchyLoss` is one of the loss functions that ships with Ceres
Solver. The argument :math:`0.5` specifies the scale of the loss
function. As a result, we get the fit below [#f7]_. Notice how the
fitted curve moves back closer to the ground truth curve.

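Concretely, for scale :math:`a`, :class:`CauchyLoss` replaces each
squared norm :math:`s = \left\|f_i\right\|^2` in :eq:`ceresproblem`
with

.. math:: \rho(s) = a^2 \log\left(1 + \frac{s}{a^2}\right),

which grows only logarithmically in the residual, so the influence of
large residuals on the objective is bounded.
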
.. figure:: robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Using :class:`LossFunction` to reduce the effect of outliers on a
   least squares fit.


.. rubric:: Footnotes

.. [#f7] `examples/robust_curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robust_curve_fitting.cc>`_


Bundle Adjustment
=================

One of the main reasons for writing Ceres was our need to solve large
scale bundle adjustment problems [HartleyZisserman]_, [Triggs]_.

Given a set of measured image feature locations and correspondences,
the goal of bundle adjustment is to find 3D point positions and camera
parameters that minimize the reprojection error. This optimization
problem is usually formulated as a non-linear least squares problem,
where the error is the squared :math:`L_2` norm of the difference between
the observed feature location and the projection of the corresponding
3D point on the image plane of the camera. Ceres has extensive support
for solving bundle adjustment problems.

Let us solve a problem from the `BAL
<http://grail.cs.washington.edu/projects/bal/>`_ dataset [#f8]_.

The first step as usual is to define a templated functor that computes
the reprojection error/residual. The structure of the functor is
similar to the ``ExponentialResidual``, in that there is an
instance of this object responsible for each image observation.

Each residual in a BAL problem depends on a three dimensional point
and a nine parameter camera. The nine parameters defining the camera
are: three for rotation as a Rodrigues' axis-angle vector, three
for translation, one for focal length and two for radial distortion.
The details of this camera model can be found on the `Bundler homepage
<http://phototour.cs.washington.edu/bundler/>`_ and the `BAL homepage
<http://grail.cs.washington.edu/projects/bal/>`_.

.. code-block:: c++

   struct SnavelyReprojectionError {
     SnavelyReprojectionError(double observed_x, double observed_y)
         : observed_x(observed_x), observed_y(observed_y) {}

     template <typename T>
     bool operator()(const T* const camera,
                     const T* const point,
                     T* residuals) const {
       // camera[0,1,2] are the angle-axis rotation.
       T p[3];
       ceres::AngleAxisRotatePoint(camera, point, p);
       // camera[3,4,5] are the translation.
       p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];

       // Compute the center of distortion. The sign change comes from
       // the camera model that Noah Snavely's Bundler assumes, whereby
       // the camera coordinate system has a negative z axis.
       T xp = - p[0] / p[2];
       T yp = - p[1] / p[2];

       // Apply second and fourth order radial distortion.
       const T& l1 = camera[7];
       const T& l2 = camera[8];
       T r2 = xp*xp + yp*yp;
       T distortion = 1.0 + r2 * (l1 + l2 * r2);

       // Compute final projected point position.
       const T& focal = camera[6];
       T predicted_x = focal * distortion * xp;
       T predicted_y = focal * distortion * yp;

       // The error is the difference between the predicted and observed position.
       residuals[0] = predicted_x - T(observed_x);
       residuals[1] = predicted_y - T(observed_y);
       return true;
     }

     // Factory to hide the construction of the CostFunction object from
     // the client code.
     static ceres::CostFunction* Create(const double observed_x,
                                        const double observed_y) {
       return new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
           observed_x, observed_y);
     }

     double observed_x;
     double observed_y;
   };


Note that unlike the examples before, this is a non-trivial function
and computing its analytic Jacobian is a bit of a pain. Automatic
differentiation makes life much simpler. The function
:func:`AngleAxisRotatePoint` and other functions for manipulating
rotations can be found in ``include/ceres/rotation.h``.

Given this functor, the bundle adjustment problem can be constructed
as follows:

.. code-block:: c++

   ceres::Problem problem;
   for (int i = 0; i < bal_problem.num_observations(); ++i) {
     ceres::CostFunction* cost_function =
         SnavelyReprojectionError::Create(
             bal_problem.observations()[2 * i + 0],
             bal_problem.observations()[2 * i + 1]);
     problem.AddResidualBlock(cost_function,
                              nullptr /* squared loss */,
                              bal_problem.mutable_camera_for_observation(i),
                              bal_problem.mutable_point_for_observation(i));
   }


Notice that the problem construction for bundle adjustment is very
similar to the curve fitting example -- one term is added to the
objective function per observation.

Since this is a large sparse problem (well large for ``DENSE_QR``
anyways), one way to solve this problem is to set
:member:`Solver::Options::linear_solver_type` to
``SPARSE_NORMAL_CHOLESKY`` and call :func:`Solve`. And while this is
a reasonable thing to do, bundle adjustment problems have a special
sparsity structure that can be exploited to solve them much more
efficiently. Ceres provides three specialized solvers (collectively
known as Schur-based solvers) for this task. The example code uses the
simplest of them ``DENSE_SCHUR``.

.. code-block:: c++

   ceres::Solver::Options options;
   options.linear_solver_type = ceres::DENSE_SCHUR;
   options.minimizer_progress_to_stdout = true;
   ceres::Solver::Summary summary;
   ceres::Solve(options, &problem, &summary);
   std::cout << summary.FullReport() << "\n";

For a more sophisticated bundle adjustment example which demonstrates
the use of Ceres' more advanced features including its various linear
solvers, robust loss functions and manifolds see
`examples/bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_


.. rubric:: Footnotes

.. [#f8] `examples/simple_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/simple_bundle_adjuster.cc>`_

Other Examples
==============

Besides the examples in this chapter, the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory contains a number of other examples:

#. `bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_
   shows how to use the various features of Ceres to solve bundle
   adjustment problems.

#. `circle_fit.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/circle_fit.cc>`_
   shows how to fit data to a circle.

#. `ellipse_approximation.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/ellipse_approximation.cc>`_
   fits points randomly distributed on an ellipse with an approximate
   line segment contour. This is done by jointly optimizing the
   control points of the line segment contour along with the preimage
   positions for the data points. The purpose of this example is to
   show an example use case for ``Solver::Options::dynamic_sparsity``,
   and how it can benefit problems which are numerically dense but
   dynamically sparse.

#. `denoising.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/denoising.cc>`_
   implements image denoising using the `Fields of Experts
   <http://www.gris.informatik.tu-darmstadt.de/~sroth/research/foe/index.html>`_
   model.

#. `nist.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/nist.cc>`_
   implements and attempts to solve the `NIST
   <http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml>`_
   non-linear regression problems.

#. `more_garbow_hillstrom.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/more_garbow_hillstrom.cc>`_
   A subset of the test problems from the paper

   Testing Unconstrained Optimization Software
   Jorge J. More, Burton S. Garbow and Kenneth E. Hillstrom
   ACM Transactions on Mathematical Software, 7(1), pp. 17-41, 1981

   which were augmented with bounds and used for testing bounds
   constrained optimization algorithms by

   A Trust Region Approach to Linearly Constrained Optimization
   David M. Gay
   Numerical Analysis (Griffiths, D.F., ed.), pp. 72-105
   Lecture Notes in Mathematics 1066, Springer Verlag, 1984.


#. `libmv_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_bundle_adjuster.cc>`_
   is the bundle adjustment algorithm used by `Blender <https://www.blender.org>`_/libmv.

#. `libmv_homography.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_homography.cc>`_
   This file demonstrates solving for a homography between two sets of
   points and using a custom exit criterion by having a callback check
   for image-space error.

#. `robot_pose_mle.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robot_pose_mle.cc>`_
   This example demonstrates how to use the ``DynamicAutoDiffCostFunction``
   variant of CostFunction. The ``DynamicAutoDiffCostFunction`` is meant to
   be used in cases where the number of parameter blocks or the sizes are not
   known at compile time.

   This example simulates a robot traversing down a 1-dimension hallway with
   noisy odometry readings and noisy range readings of the end of the hallway.
   By fusing the noisy odometry and sensor readings this example demonstrates
   how to compute the maximum likelihood estimate (MLE) of the robot's pose at
   each timestep.

#. `slam/pose_graph_2d/pose_graph_2d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_2d/pose_graph_2d.cc>`_
   The Simultaneous Localization and Mapping (SLAM) problem consists of building
   a map of an unknown environment while simultaneously localizing against this
   map. The main difficulty of this problem stems from not having any additional
   external aiding information such as GPS. SLAM has been considered one of the
   fundamental challenges of robotics. There are many resources on SLAM
   [#f9]_. A pose graph optimization problem is one example of a SLAM
   problem. The following explains how to formulate the pose graph based SLAM
   problem in 2-Dimensions with relative pose constraints.

   Consider a robot moving in a 2-Dimensional plane. The robot has access to a
   set of sensors such as wheel odometry or a laser range scanner. From these
   raw measurements, we want to estimate the trajectory of the robot as well as
   build a map of the environment. In order to reduce the computational
   complexity of the problem, the pose graph approach abstracts the raw
   measurements away. Specifically, it creates a graph of nodes which represent
   the pose of the robot, and edges which represent the relative transformation
   (delta position and orientation) between the two nodes. The edges are virtual
   measurements derived from the raw sensor measurements, e.g. by integrating
   the raw wheel odometry or aligning the laser range scans acquired from the
   robot. A visualization of the resulting graph is shown below.

   .. figure:: slam2d.png
      :figwidth: 500px
      :height: 400px
      :align: center

      Visual representation of a graph SLAM problem.

   The figure depicts the pose of the robot as the triangles, the measurements
   are indicated by the connecting lines, and the loop closure measurements are
   shown as dotted lines. Loop closures are measurements between non-sequential
   robot states and they reduce the accumulation of error over time. The
   following will describe the mathematical formulation of the pose graph
   problem.

   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, \psi]^T` where
   :math:`p` is a 2D vector that represents the position in the plane and
   :math:`\psi` is the orientation in radians. The measurement of the relative
   transform between the robot state at two timestamps :math:`a` and :math:`b`
   is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{\psi}_{ab}]`. The residual
   implemented in the Ceres cost function which computes the error between the
   measurement and the predicted measurement is:

   .. math:: r_{ab} =
      \left[
        \begin{array}{c}
          R_a^T\left(p_b - p_a\right) - \hat{p}_{ab} \\
          \mathrm{Normalize}\left(\psi_b - \psi_a - \hat{\psi}_{ab}\right)
        \end{array}
      \right]

   where the function :math:`\mathrm{Normalize}()` normalizes the angle in the range
   :math:`[-\pi,\pi)`, and :math:`R_a` is the rotation matrix given by

   .. math:: R_a =
      \left[
        \begin{array}{cc}
          \cos \psi_a & -\sin \psi_a \\
          \sin \psi_a & \cos \psi_a \\
        \end{array}
      \right]

   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.

   Lastly, we use a manifold to normalize the orientation in the range
   :math:`[-\pi,\pi)`. Specifically, we define the
   :member:`AngleManifold::Plus()` function to be
   :math:`\mathrm{Normalize}(\psi + \Delta)` and the
   :member:`AngleManifold::Minus()` function to be
   :math:`\mathrm{Normalize}(y - x)`.

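   The angle normalization used in the residual and the manifold can be
   implemented with a small templated helper, so that it works both with
   plain doubles and with Ceres Jets during automatic differentiation. A
   sketch along the lines of the helper in the example sources:

   .. code-block:: c++

      // Normalizes an angle in radians to the interval [-pi, pi).
      template <typename T>
      inline T NormalizeAngle(const T& angle_radians) {
        // ceres::floor is usable with both double and Jet types.
        T two_pi(2.0 * M_PI);
        return angle_radians -
               two_pi * ceres::floor((angle_radians + T(M_PI)) / two_pi);
      }
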
   This package includes an executable :member:`pose_graph_2d` that will read a
   problem definition file. This executable can work with any 2D problem
   definition that uses the g2o format. It would be relatively straightforward
   to implement a new reader for a different format such as TORO or
   others. :member:`pose_graph_2d` will print the Ceres solver full summary and
   then output to disk the original and optimized poses (``poses_original.txt``
   and ``poses_optimized.txt``, respectively) of the robot in the following
   format:

   .. code-block:: bash

      pose_id x y yaw_radians
      pose_id x y yaw_radians
      pose_id x y yaw_radians

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.

   The executable :member:`pose_graph_2d` expects the first argument to be
   the path to the problem definition. To run the executable,

   .. code-block:: bash

      /path/to/bin/pose_graph_2d /path/to/dataset/dataset.g2o

   A Python script is provided to visualize the resulting output files.

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_2d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt

   As an example, a standard synthetic benchmark dataset [#f10]_ created by
   Edwin Olson which has 3500 nodes in a grid world with a total of 5598 edges
   was solved. Visualizing the results with the provided script produces:

   .. figure:: manhattan_olson_3500_result.png
      :figwidth: 600px
      :height: 600px
      :align: center

   with the original poses in green and the optimized poses in blue. As shown,
   the optimized poses more closely match the underlying grid world. Note, the
   left side of the graph has a small yaw drift due to a lack of relative
   constraints to provide enough information to reconstruct the trajectory.

   .. rubric:: Footnotes

   .. [#f9] Giorgio Grisetti, Rainer Kummerle, Cyrill Stachniss, Wolfram
      Burgard. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation
      Systems Magazine, 52(3):199-222, 2010.

   .. [#f10] E. Olson, J. Leonard, and S. Teller, “Fast iterative optimization of
      pose graphs with poor initial estimates,” in Robotics and Automation
      (ICRA), IEEE International Conference on, 2006, pp. 2262-2269.

#. `slam/pose_graph_3d/pose_graph_3d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_3d/pose_graph_3d.cc>`_
   The following explains how to formulate the pose graph based SLAM problem in
   3-Dimensions with relative pose constraints. The example also illustrates how
   to use Eigen's geometry module with Ceres's automatic differentiation
   functionality.

   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, q^T]^T` where
   :math:`p` is a 3D vector that represents the position and :math:`q` is the
   orientation represented as an Eigen quaternion. The measurement of the
   relative transform between the robot state at two timestamps :math:`a` and
   :math:`b` is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{q}_{ab}^T]^T`.
   The residual implemented in the Ceres cost function which computes the error
   between the measurement and the predicted measurement is:

   .. math:: r_{ab} =
      \left[
        \begin{array}{c}
          R(q_a)^{T} (p_b - p_a) - \hat{p}_{ab} \\
          2.0 \mathrm{vec}\left((q_a^{-1} q_b) \hat{q}_{ab}^{-1}\right)
        \end{array}
      \right]

   where the function :math:`\mathrm{vec}()` returns the vector part of the
   quaternion, i.e. :math:`[q_x, q_y, q_z]`, and :math:`R(q)` is the rotation
   matrix for the quaternion.

   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.

   Given that we are using a quaternion to represent the orientation,
   we need to use a manifold (:class:`EigenQuaternionManifold`) to
   only apply updates orthogonal to the 4-vector defining the
   quaternion. Eigen's quaternion uses a different internal memory
   layout for the elements of the quaternion than what is commonly
   used. Specifically, Eigen stores the elements in memory as
   :math:`[x, y, z, w]` where the real part is last, whereas it is
   typically stored first. Note, when creating an Eigen quaternion
   through the constructor the elements are accepted in :math:`w`,
   :math:`x`, :math:`y`, :math:`z` order. Since Ceres operates on
   parameter blocks which are raw double pointers this difference is
   important and requires a different parameterization, as the sketch
   below illustrates.

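   In practice, a raw Ceres parameter block can be viewed as an Eigen
   quaternion without copying by using ``Eigen::Map``, which interprets
   the four doubles in Eigen's :math:`[x, y, z, w]` storage order. A
   small sketch of the pattern (the names here are illustrative):

   .. code-block:: c++

      // Sketch: inside a templated cost functor, view a raw parameter
      // block as an Eigen quaternion. No copy is made; the map assumes
      // Eigen's [x, y, z, w] memory layout.
      template <typename T>
      void UseQuaternion(const T* q_ptr) {
        Eigen::Map<const Eigen::Quaternion<T>> q(q_ptr);
        // ... use q.conjugate(), quaternion products, q * vector, etc. ...
      }
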
   This package includes an executable :member:`pose_graph_3d` that will read a
   problem definition file. This executable can work with any 3D problem
   definition that uses the g2o format with quaternions used for the orientation
   representation. It would be relatively straightforward to implement a new
   reader for a different format such as TORO or others. :member:`pose_graph_3d`
   will print the Ceres solver full summary and then output to disk the original
   and optimized poses (``poses_original.txt`` and ``poses_optimized.txt``,
   respectively) of the robot in the following format:

   .. code-block:: bash

      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      ...

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.

   The executable :member:`pose_graph_3d` expects the first argument to be the
   path to the problem definition. The executable can be run via

   .. code-block:: bash

      /path/to/bin/pose_graph_3d /path/to/dataset/dataset.g2o

   A script is provided to visualize the resulting output files. There is also
   an option to enable equal axes using ``--axes_equal``

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_3d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt

   As an example, a standard synthetic benchmark dataset [#f9]_ where the robot
   travels on the surface of a sphere, with 2500 nodes and a total of 4949
   edges, was solved. Visualizing the results with the provided script
   produces:

   .. figure:: pose_graph_3d_ex.png
      :figwidth: 600px
      :height: 300px
      :align: center