Squashed 'third_party/ceres/' changes from e51e9b46f..399cda773
399cda773 Update build documentation to reflect detection of Eigen via config mode
bb127272f Fix typos.
a0ec5c32a Update version history for 2.0.0RC2
3f6d27367 Unify symbol visibility configuration for all compilers
29c2912ee Unbreak the bazel build some more
bf47e1a36 Fix the Bazel build.
600e8c529 fix minor typos
bdcdcc78a update docs for changed cmake usage
3f69e5b36 Corrections from William Rucklidge
8bfdb02fb Rewrite uses of VLOG_IF and LOG_IF.
d1b35ffc1 Corrections from William Rucklidge
f34e80e91 Add dividers between licenses.
65c397dae Fix formatting
f63b1fea9 Add the MIT license text corresponding to the libmv derived files.
542613c13 minor formatting fix for trust_region_minimizer.cc
6d9e9843d Remove inclusion of ceres/eigen.h
eafeca5dc Fix a logging bug in TrustRegionMinimizer.
1fd0be916 Fix default initialisation of IterationCallback::cost
137bbe845 add info about clang-format to contributing docs
d3f66d77f fix formatting generated files (best effort)
a9c7361c8 minor formatting fix (wrongly updated in earlier commit)
7b8f675bf fix formatting for (non-generated) internal source files
921368ce3 Fix a number of typos in covariance.h
7b6b2491c fix formatting for examples
82275d8a4 some fixes for Linux and macOS install docs
9d762d74f fix formatting for public header files
c76478c48 gitignore *.pyc
4e69a475c Fix potential for mismatched release/debug TBB libraries
8e1d8e32a A number of small changes.
368a738e5 AutoDiffCostFunction: optional ownership
8cbd721c1 Add erf and erfc to jet.h, including tests in jet_test.cc
31366cff2 Benchmarks for dynamic autodiff.
29fb08aea Use CMAKE_PREFIX_PATH to pass Homebrew install location
242c703b5 Minor fixes to the documentation
79bbf9510 Add changelog for 2.0.0
41d05f13d Fix lint errors in evaluation_callback_test.cc
4b67903c1 Remove unused variables from problem_test.cc
10449fc36 Add Apache license to the LICENSE file for FixedArray
8c3ecec6d Fix some minor errors in IterationCallback docs
7d3ffcb42 Remove forced CONFIG from find_package(Eigen3)
a029fc0f9 Use latest FindTBB.cmake from VTK project
aa1abbc57 Replace use of GFLAGS_LIBRARIES with export gflags target
db2af1be8 Add Problem::EvaluateResidualBlockAssumingParametersUnchanged
ab4ed32cd Replace NULL with nullptr in the documentation.
ee280e27a Allow SubsetParameterization to accept an empty vector of constant parameters.
4b8c731d8 Fix a bug in DynamicAutoDiffCostFunction
5cb5b35a9 Fixed incorrect argument name in RotationMatrixToQuaternion()
e39d9ed1d Add a missing term and remove a superfluous word
27cab77b6 Reformulate some sentences
8ac6655ce Fix documentation formatting issues
7ef83e075 Update minimum required C++ version for Ceres to C++14
1d75e7568 Improve documentation for LocalParameterization
763398ca4 Update the section on Preconditioners
a614f788a Call EvaluationCallback before evaluating the fixed cost.
70308f7bb Simplify documentation generation.
e886d7e65 Reduce the number of minimizer iterations in evaluation_callback_test.cc
9483e6f2f Simplify DynamicCompressedRowJacobianWriter::Write
323cc55bb Update the version in package.xml to 2.0.0.
303b078b5 Fix few typos and alter a NULL to nullptr.
cca93fed6 Bypass Ceres' FindGlog.cmake in CeresConfig.cmake if possible
77fc1d0fc Use build_depend for private dependencies in Catkin package.xml
a09682f00 Fix MSVC version check to support use of clang-cl front-end
b70687fcc Add namespace qualified Ceres::ceres CMake target
99efa54bd Replace type aliases deprecated/removed in C++17/C++20 from FixedArray
adb973e4a NULL -> nullptr
27b717951 Respect FIND_QUIETLY flag in cmake config file
646959ef1 Do not export class template LineParameterization
1f128d070 Change the type of parameter index/offset to match their getter/setter
072c8f070 Initialize integer variables with integer instead of double
8c36bcc81 Use inline & -inlinehint-threshold in auto-diff benchmarks
57cf20aa5 static const -> static constexpr where we can.
40b27482a Add std::numeric_limit specialization for Jets
e751d6e4f Remove AutodiffCodegen
e9eb76f8e Remove AutodiffCodegen CMake integration
9435e08a7 More clang-tidy and wjr@ comment fixes
d93fac4b7 Remove AutodiffCodegen Tests
2281c6ed2 Fixes for comments from William Rucklidge
d797a87a4 Use Ridders' method in GradientChecker.
41675682d Fix a MSVC type deduction bug in ComputeHouseholderVector
947ec0c1f Remove AutodiffCodegen autodiff benchmarks
27183d661 Allow LocalParameterizations to have zero local size.
7ac7d79dc Remove HelloWorldCodegen example
8c8738bf8 Add photometric and relative-pose residuals to autodiff benchmarks
9f7fb66d6 Add a constant cost function to the autodiff benchmarks
ab0d373e4 Fix a comment in autodiff.h
27bb99714 Change SVD algorithm in covariance computation.
84fdac38e Add const to GetCovarianceMatrix*
6bde61d6b Add line local parameterization.
2c1c0932e Update documentation in autodiff.h
8904fa488 Inline Jet initialization in Autodiff
18a464d4e Remove an errant CR from local_parameterization.cc
5c85f2179 Use ArraySelector in Autodiff
80477ff07 Add class ArraySelector
e7a30359e Pass kNumResiduals to Autodiff
f339d71dd Refactor the automatic differentiation benchmarks.
d37b4cb15 Fix some include headers in codegen/test_utils.cc/h
550766e6d Add Autodiff Brdf Benchmark
8da9876e7 Add more autodiff benchmarks
6da364713 Fix Tukey loss function
cf4185c4e Add Codegen BA Benchmark
75dd30fae Simplify GenerateCodeForFunctor
9049688c6 Default Initialize ExpressionRef to Zero
bf1aff2f0 Fix 3+ nested Jet constructor
92d6541c7 Move Codegen files into codegen/ directory
8e962f37d Add Autodiff Codegen Tests
13c7a22ce Codegen Optimizer API
90799e29e Fix install and unnecessary string copy
032d5844c AutoDiff Code Generation - CMake Integration
d82de91b8 Add ExpressionGraph::Erase(ExpressionId)
c8e35e19f Add namespaces to generated functions and constants
75e575cae Fix use of incomplete type in defaulted Problem methods
8def19616 Remove ExpressionRef Move Constructor
f26f95410 Fix windows MSVC build.
fdf9cfd32 Add functions to find the matching ELSE, ENDIF expressions
678c05b28 Fix invert PSD matrix.
a384a7e96 Remove not used using declaration
a60136b7a Add COMMENT ExpressionType
f212c9295 Let Problem::SetParameterization be called more than once.
a3696835b use CMake function to create CeresConfigVersion
67fcff918 Make Problem movable.
19728e72d Add documentation for Problem::IsParameterBlockConstant
ba6e5fb4a Make the custom uninstall target optional
8547cbd55 Make EventLogger more efficient.
edb8322bd Update the minimum required version of Eigen to 3.3.
aa6ef417f Specify Eigen3_DIR in iOS and Android Travis CI builds
4655f2549 Use find_package() instead of find_dependency() in CeresConfig.cmake
a548766d1 Use glfags target
33dd469a5 Use Eigen3::Eigen target
47e784bb4 NULL-jacobians are handled correctly in generated autodiff code
edd54b83e Update Jet.h and rotation.h to use the new IF/ELSE macros
848c1f90c Update return type in code generator and add tests for logical functions
5010421bb Add the expression return type as a member to Expression
f4dc670ee Improve testing of the codegen system
572ec4a5a Rework Expression creation and insertion
c7337154e Disable the code generation module by default
7fa0f3db4 Explicitly state PUBLIC/PRIVATE when linking
4362a2169 Run clang-format on the public headers. Also update copyright year.
c56702aac Fix installation of codegen headers
0d03e74dc Fix the include in the autodiff codegen example
d16026440 Autodiff Codegen Part 4: Public API
d1703db45 Moved AutoDiffCodeGen macros to a separate (public) header
5ce6c063d Fix ExpressionRef copy constructor and add a move constructor
a90b5a12c Pass ExpressionRef by const reference instead of by value
ea057678c Remove MakeFunctionCall() and add test for Ternary
1084c5460 Quote all configure-expanded paths
3d756b07c Test Expressions with 'insert' instead of a macro
486d81812 Add ExpressionGraph::InsertExpression
3831a1dd3 Expression and ExpressionGraph comparison
9bb1dcb84 Remove definition of ExpressionRef::ExpressionRef(double&);
5be2e4883 Autodiff Codegen Part 3: CodeGenerator
6cd633043 Remove unused ExpressionTypes
7d0d69a4d Fix ExpressionRef
6ba8c57d2 Fix expression_test IsArithmetic
2b494cfb3 Update Travis CI to Bionic & Xcode 11.2
a3dde6877 Require Xcode >= 11.2 on macOS 10.15 (Catalina)
6fd4f072d Autodiff Codegen Part 2: Conditionals
52d6477a4 Detect and disable -fstack-check on macOS 10.15 with Xcode 11
46ca461b7 Fix `gradient_check_relative_precision` docs typo
4247d420f Autodiff Codegen Part 1: Expressions
ba62397d8 Run clang-format on jet.h
667062dcc Introduce BlockSparseMatrixData
17becf461 Remove a CHECK failure from covariance_impl.cc
d7f428e5c Add a missing cast in rotation.h
ea4d66e7e clang-tidy fixes.
be15b842a Integrate the SchurEliminatorForOneFBlock for the case <2,3,6>
087b28f1b Remove use of SetUsage as it creates compilation problems.
573046d7f Protect declarations of lapack functions under CERES_NO_LAPACK
71d638ef3 Add a specialized schur eliminator.
2ffddaccf Use override & final instead of just using virtual.
e4577dd6d Use override instead of virtual for subclasses.
3e5db5bc2 Fixing documentation typo.
82d325b73 Avoid memory allocations in Accelerate Sparse[Refactor/Solve]().
f66b51382 Fix some clang-tidy warnings.
0428e2dd0 Fix missing #include of <memory>
487c1aa51 Expose SubsetPreconditioner in the API
bf709ecac Move EvaluationCallback from Solver::Options to Problem::Options.
059bcb7f8 Drop ROS dependency on catkin
c4dbc927d Default to any other sparse libraries over Accelerate
db1f5b57a Allow some methods in Problem to use const double*.
a60c14525 Explicitly delete the copy constructor and copy assignment operator
084042c25 Lint changes from William Rucklidge
93d869020 Use selfAdjoingView<Upper> in InvertPSDMatrix.
a0cd0854a Speed up InvertPSDMatrix
7b53262b7 Allow Solver::Options::max_num_line_search_step_size_iterations = 0.
3e2cdca54 Make LineSearchMinizer work correctly with negative valued functions.
3ff12a878 Fix a clang-tidy warning in problem_test.cc
57441fe90 Fix two bugs.
1b852c57e Add Problem::EvaluateResidualBlock.
54ba6c27b Fix missing declaration warnings in Ceres code
fac46d50e Modernize ProductParameterization.
53dc6213f Add some missing string-to-enum-to-string convertors.
c0aa9a263 Add checks in rotation.h for inplace operations.
0f57fa82d Update Bazel WORKSPACE for newest Bazel
f8e5fba7b TripletSparseMatrix: guard against self-assignment
939253c20 Fix Eigen alignment issues.
bf67daf79 Add the missing <array> header to fixed_array.h
25e1cdbb6 Switch to FixedArray implementation from abseil.
d467a627b IdentityTransformation -> IdentityParameterization
eaec6a9d0 Fix more typos in CostFunctionToFunctor documentation.
99b5aa4aa Fix typos in CostFunctionToFunctor documentation.
ee7e2cb3c Set Homebrew paths via HINTS not CMAKE_PREFIX_PATH
4f8a01853 Revert "Fix custom Eigen on macos (EIGEN_INCLUDE_DIR_HINTS)"
e6c5c7226 Fix custom Eigen on macos (EIGEN_INCLUDE_DIR_HINTS)
5a56d522e Add the 3,3,3 template specialization.
df5c23116 Reorder initializer list to make -Wreorder happy
0fcfdb0b4 Fix the build breakage caused by the last commit.
9b9e9f0dc Reduce machoness of macro definition in cost_functor_to_function_test.cc
21d40daa0 Remove UTF-8 chars
9350e57a4 Enable optional use of sanitizers
0456edffb Update Travis CI Linux distro to 16.04 (Xenial)
bef0dfe35 Fix a typo in cubic_interpolation.h
056ba9bb1 Add AutoDiffFirstOrderFunction
6e527392d Update googletest/googlemock to db9b85e2.
1b2940749 Clarify documentation of BiCubicInterpolator::Evaluate for out-of-bounds values
Change-Id: Id61dd832e8fbe286deb0799aa1399d4017031dae
git-subtree-dir: third_party/ceres
git-subtree-split: 399cda773035d99eaf1f4a129a666b3c4df9d1b1
diff --git a/docs/source/automatic_derivatives.rst b/docs/source/automatic_derivatives.rst
index 0c48c80..e15e911 100644
--- a/docs/source/automatic_derivatives.rst
+++ b/docs/source/automatic_derivatives.rst
@@ -266,7 +266,6 @@
Indeed, this is essentially how :class:`AutoDiffCostFunction` works.
-
Pitfalls
========
diff --git a/docs/source/bibliography.rst b/docs/source/bibliography.rst
index 5352c65..c13c676 100644
--- a/docs/source/bibliography.rst
+++ b/docs/source/bibliography.rst
@@ -17,12 +17,12 @@
.. [ByrdNocedal] R. H. Byrd, J. Nocedal, R. B. Schnabel,
**Representations of Quasi-Newton Matrices and their use in Limited
- Memory Methods**, *Mathematical Programming* 63(4):129–-156, 1994.
+ Memory Methods**, *Mathematical Programming* 63(4):129-156, 1994.
.. [ByrdSchnabel] R.H. Byrd, R.B. Schnabel, and G.A. Shultz, **Approximate
solution of the trust region problem by minimization over
two dimensional subspaces**, *Mathematical programming*,
- 40(1):247–263, 1988.
+ 40(1):247-263, 1988.
.. [Chen] Y. Chen, T. A. Davis, W. W. Hager, and
S. Rajamanickam, **Algorithm 887: CHOLMOD, Supernodal Sparse
@@ -31,14 +31,27 @@
.. [Conn] A.R. Conn, N.I.M. Gould, and P.L. Toint, **Trust region
methods**, *Society for Industrial Mathematics*, 2000.
+.. [Dellaert] F. Dellaert, J. Carlson, V. Ila, K. Ni and C. E. Thorpe,
+ **Subgraph-preconditioned conjugate gradients for large scale SLAM**,
+ *International Conference on Intelligent Robots and Systems*, 2010.
+
.. [GolubPereyra] G.H. Golub and V. Pereyra, **The differentiation of
pseudo-inverses and nonlinear least squares problems whose
variables separate**, *SIAM Journal on numerical analysis*,
- 10(2):413–432, 1973.
+ 10(2):413-432, 1973.
+
+.. [GouldScott] N. Gould and J. Scott, **The State-of-the-Art of
+ Preconditioners for Sparse Linear Least-Squares Problems**,
+ *ACM Trans. Math. Softw.*, 43(4), 2017.
.. [HartleyZisserman] R.I. Hartley & A. Zisserman, **Multiview
Geometry in Computer Vision**, Cambridge University Press, 2004.
+.. [Hertzberg] C. Hertzberg, R. Wagner, U. Frese and L. Schroder,
+ **Integrating Generic Sensor Fusion Algorithms with Sound State
+ Representations through Encapsulation of Manifolds**, *Information
+ Fusion*, 14(1):57-77, 2013.
+
.. [KanataniMorris] K. Kanatani and D. D. Morris, **Gauges and gauge
transformations for uncertainty description of geometric structure
with indeterminacy**, *IEEE Transactions on Information Theory*
@@ -53,27 +66,27 @@
IEEE Conference on Computer Vision and Pattern Recognition*, 2012.
.. [Kanzow] C. Kanzow, N. Yamashita and M. Fukushima,
- **Levenberg–Marquardt methods with strong local convergence
+ **Levenberg-Marquardt methods with strong local convergence
properties for solving nonlinear equations with convex
constraints**, *Journal of Computational and Applied Mathematics*,
- 177(2):375–397, 2005.
+ 177(2):375-397, 2005.
.. [Levenberg] K. Levenberg, **A method for the solution of certain
nonlinear problems in least squares**, *Quart. Appl. Math*,
- 2(2):164–168, 1944.
+ 2(2):164-168, 1944.
.. [LiSaad] Na Li and Y. Saad, **MIQR: A multilevel incomplete qr
preconditioner for large sparse least squares problems**, *SIAM
- Journal on Matrix Analysis and Applications*, 28(2):524–550, 2007.
+ Journal on Matrix Analysis and Applications*, 28(2):524-550, 2007.
.. [Madsen] K. Madsen, H.B. Nielsen, and O. Tingleff, **Methods for
nonlinear least squares problems**, 2004.
.. [Mandel] J. Mandel, **On block diagonal and Schur complement
- preconditioning**, *Numer. Math.*, 58(1):79–93, 1990.
+ preconditioning**, *Numer. Math.*, 58(1):79-93, 1990.
.. [Marquardt] D.W. Marquardt, **An algorithm for least squares
- estimation of nonlinear parameters**, *J. SIAM*, 11(2):431–441,
+ estimation of nonlinear parameters**, *J. SIAM*, 11(2):431-441,
1963.
.. [Mathew] T.P.A. Mathew, **Domain decomposition methods for the
@@ -82,7 +95,7 @@
.. [NashSofer] S.G. Nash and A. Sofer, **Assessing a search direction
within a truncated newton method**, *Operations Research Letters*,
- 9(4):219–221, 1990.
+ 9(4):219-221, 1990.
.. [Nocedal] J. Nocedal, **Updating Quasi-Newton Matrices with Limited
Storage**, *Mathematics of Computation*, 35(151): 773--782, 1980.
@@ -102,12 +115,15 @@
F'(x) F"(x)**, Advances in Engineering Software 4(2), 75-76, 1978.
.. [RuheWedin] A. Ruhe and P.Å. Wedin, **Algorithms for separable
- nonlinear least squares problems**, Siam Review, 22(3):318–337,
+ nonlinear least squares problems**, Siam Review, 22(3):318-337,
1980.
.. [Saad] Y. Saad, **Iterative methods for sparse linear
systems**, SIAM, 2003.
+.. [Simon] I. Simon, N. Snavely and S. M. Seitz, **Scene Summarization
+ for Online Image Collections**, *International Conference on Computer Vision*, 2007.
+
.. [Stigler] S. M. Stigler, **Gauss and the invention of least
squares**, *The Annals of Statistics*, 9(3):465-474, 1981.
@@ -124,9 +140,9 @@
.. [Wiberg] T. Wiberg, **Computation of principal components when data
are missing**, In Proc. *Second Symp. Computational Statistics*,
- pages 229–236, 1976.
+ pages 229-236, 1976.
.. [WrightHolt] S. J. Wright and J. N. Holt, **An Inexact
Levenberg Marquardt Method for Large Sparse Nonlinear Least
Squares**, *Journal of the Australian Mathematical Society Series
- B*, 26(4):387–403, 1985.
+ B*, 26(4):387-403, 1985.
diff --git a/docs/source/conf.py b/docs/source/conf.py
index c266746..c83468f 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -41,16 +41,16 @@
# General information about the project.
project = u'Ceres Solver'
-copyright = u'2018 Google Inc'
+copyright = u'2020 Google Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
-version = '1.14'
+version = '2.0'
# The full version, including alpha/beta/rc tags.
-release = '1.14.0'
+release = '2.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -240,3 +240,15 @@
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
+
+# Custom configuration for MathJax.
+#
+# By default MathJax does not use TeX fonts, which is a tragedy. Also
+# scaling the fonts down a bit makes them fit better with font sizing
+# in the "Read The Docs" theme.
+mathjax_config = {
+    'HTML-CSS' : {
+        'availableFonts' : ["TeX"],
+        'scale' : 90
+    }
+}
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
index 3ef8629..a128e30 100644
--- a/docs/source/contributing.rst
+++ b/docs/source/contributing.rst
@@ -56,7 +56,7 @@
On Mac and Linux, the ``CMake`` build will download and enable
the Gerrit pre-commit hook automatically. This pre-submit hook
- creates `Change-Id: ...` lines in your commits.
+ creates ``Change-Id: ...`` lines in your commits.
If this does not work OR you are on Windows, execute the
following in the root directory of the local ``git`` repository:
@@ -86,12 +86,26 @@
a recent `Git for Windows <https://git-scm.com/download/win>`_ install to
enable automatic lookup in the ``%USERPROFILE%\.gitcookies``.
+6. Install ``clang-format``.
+
+ * Mac ``brew install clang-format``.
+ * Linux ``sudo apt-get install clang-format``.
+ * Windows. You can get clang-format with `clang or stand-alone via
+ npm <https://superuser.com/a/1505297/1141693>`_.
+
+ You can ensure all sources files are correctly formatted before
+ committing by manually running ``clang-format -i FILENAME``, by
+ running the script ``./scripts/format_all.sh``, or by configuring
+ your editor to format upon saving.
+
Submitting a change
===================
1. Make your changes against master or whatever branch you
- like. Commit your changes as one patch. When you commit, the Gerrit
- hook will add a `Change-Id:` line as the last line of the commit.
+ like. Ensure that the changes are formatted according to
+ ``clang-format``. Commit your changes as one patch. When you
+ commit, the Gerrit hook will add a ``Change-Id:`` line as the last
+ line of the commit.
Make sure that your commit message is formatted in the `50/72 style
<http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html>`_.
diff --git a/docs/source/features.rst b/docs/source/features.rst
index e71bd39..724d6dc 100644
--- a/docs/source/features.rst
+++ b/docs/source/features.rst
@@ -44,7 +44,7 @@
solvers - dense QR and dense Cholesky factorization (using
`Eigen`_ or `LAPACK`_) for dense problems, sparse Cholesky
factorization (`SuiteSparse`_, `CXSparse`_ or `Eigen`_) for large
- sparse problems custom Schur complement based dense, sparse, and
+ sparse problems, custom Schur complement based dense, sparse, and
iterative linear solvers for `bundle adjustment`_ problems.
- **Line Search Solvers** - When the problem size is so large that
@@ -54,8 +54,9 @@
of Non-linear Conjugate Gradients, BFGS and LBFGS.
* **Speed** - Ceres Solver has been extensively optimized, with C++
- templating, hand written linear algebra routines and OpenMP or C++11 threads
- based multithreading of the Jacobian evaluation and the linear solvers.
+ templating, hand written linear algebra routines and OpenMP or
+ modern C++ threads based multithreading of the Jacobian evaluation
+ and the linear solvers.
* **Solution Quality** Ceres is the `best performing`_ solver on the NIST
problem set used by Mondragon and Borchers for benchmarking
@@ -63,7 +64,7 @@
* **Covariance estimation** - Evaluate the sensitivity/uncertainty of
the solution by evaluating all or part of the covariance
- matrix. Ceres is one of the few solvers that allows you to to do
+ matrix. Ceres is one of the few solvers that allows you to do
this analysis at scale.
* **Community** Since its release as open source software, Ceres
diff --git a/docs/source/gradient_solver.rst b/docs/source/gradient_solver.rst
index 1356e74..dde9d7e 100644
--- a/docs/source/gradient_solver.rst
+++ b/docs/source/gradient_solver.rst
@@ -33,10 +33,10 @@
.. function:: bool FirstOrderFunction::Evaluate(const double* const parameters, double* cost, double* gradient) const
Evaluate the cost/value of the function. If ``gradient`` is not
- ``NULL`` then evaluate the gradient too. If evaluation is
+ ``nullptr`` then evaluate the gradient too. If evaluation is
successful return ``true``, else return ``false``.
- ``cost`` guaranteed to be never ``NULL``, ``gradient`` can be ``NULL``.
+ ``cost`` is guaranteed to never be ``nullptr``; ``gradient`` may be ``nullptr``.
.. function:: int FirstOrderFunction::NumParameters() const
diff --git a/docs/source/gradient_tutorial.rst b/docs/source/gradient_tutorial.rst
index 0bbdee4..3fef6b6 100644
--- a/docs/source/gradient_tutorial.rst
+++ b/docs/source/gradient_tutorial.rst
@@ -40,7 +40,7 @@
const double y = parameters[1];
cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
- if (gradient != NULL) {
+ if (gradient != nullptr) {
gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
gradient[1] = 200.0 * (y - x * x);
}
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index b3dfb50..7f49783 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -9,7 +9,7 @@
.. _section-source:
You can start with the `latest stable release
-<http://ceres-solver.org/ceres-solver-1.14.0.tar.gz>`_ . Or if you want
+<http://ceres-solver.org/ceres-solver-2.0.0.tar.gz>`_ . Or if you want
the latest version, you can clone the git repository
.. code-block:: bash
@@ -23,16 +23,15 @@
.. NOTE ::
- All versions of Ceres > 1.14 require a **fully C++11-compliant**
- compiler. In versions <= 1.14, C++11 was an optional requirement
- controlled by the ``CXX11 [Default: OFF]`` build option.
+ Starting with v2.0 Ceres requires a **fully C++14-compliant**
+ compiler. In versions <= 1.14, C++11 was an optional requirement.
Ceres relies on a number of open source libraries, some of which are
optional. For details on customizing the build process, see
:ref:`section-customizing` .
- `Eigen <http://eigen.tuxfamily.org/index.php?title=Main_Page>`_
- 3.2.2 or later **strongly** recommended, 3.1.0 or later **required**.
+ 3.3 or later **required**.
.. NOTE ::
@@ -40,8 +39,7 @@
library. Please see the documentation for ``EIGENSPARSE`` for
more details.
-- `CMake <http://www.cmake.org>`_ 3.5 or later.
- **Required on all platforms except for legacy Android.**
+- `CMake <http://www.cmake.org>`_ 3.5 or later **required**.
- `glog <https://github.com/google/glog>`_ 0.3.1 or
later. **Recommended**
@@ -77,13 +75,21 @@
<https://code.google.com/p/google-glog/issues/detail?id=194>`_.
- `gflags <https://github.com/gflags/gflags>`_. Needed to build
- examples and tests.
+ examples and tests, and is usually a dependency of glog.
- `SuiteSparse
<http://faculty.cse.tamu.edu/davis/suitesparse.html>`_. Needed for
solving large sparse linear systems. **Optional; strongly recommended
for large scale bundle adjustment**
+ .. NOTE ::
+
+ If SuiteSparseQR is found, Ceres attempts to find the Intel
+ Threading Building Blocks (TBB) library. If found, Ceres assumes
+ SuiteSparseQR was compiled with TBB support and will link to the
+ found TBB version. You can customize the searched TBB location
+ with the ``TBB_ROOT`` variable.
+
- `CXSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_.
Similar to ``SuiteSparse`` but simpler and slower. CXSparse has
no dependencies on ``LAPACK`` and ``BLAS``. This makes for a simpler
@@ -98,7 +104,7 @@
``SuiteSparse``, and optionally used by Ceres directly for some
operations.
- On ``UNIX`` OSes other than Mac OS X we recommend `ATLAS
+ On ``UNIX`` OSes other than macOS we recommend `ATLAS
<http://math-atlas.sourceforge.net/>`_, which includes ``BLAS`` and
``LAPACK`` routines. It is also possible to use `OpenBLAS
<https://github.com/xianyi/OpenBLAS>`_ . However, one needs to be
@@ -106,7 +112,7 @@
<https://github.com/xianyi/OpenBLAS/wiki/faq#wiki-multi-threaded>`_
inside ``OpenBLAS`` as it conflicts with use of threads in Ceres.
- Mac OS X ships with an optimized ``LAPACK`` and ``BLAS``
+ macOS ships with an optimized ``LAPACK`` and ``BLAS``
implementation as part of the ``Accelerate`` framework. The Ceres
build system will automatically detect and use it.
@@ -124,18 +130,11 @@
We will use `Ubuntu <http://www.ubuntu.com>`_ as our example linux
distribution.
-.. NOTE::
+ .. NOTE ::
- Up to at least Ubuntu 14.04, the SuiteSparse package in the official
- package repository (built from SuiteSparse v3.4.0) **cannot** be used
- to build Ceres as a *shared* library. Thus if you want to build
- Ceres as a shared library using SuiteSparse, you must perform a
- source install of SuiteSparse or use an external PPA (see `bug report
- here
- <https://bugs.launchpad.net/ubuntu/+source/suitesparse/+bug/1333214>`_).
- It is recommended that you use the current version of SuiteSparse
- (4.2.1 at the time of writing).
-
+ These instructions are for Ubuntu 18.04 and newer. On Ubuntu 16.04
+ you need to manually get a more recent version of Eigen, such as
+ 3.3.7.
Start by installing all the dependencies.
@@ -144,30 +143,22 @@
# CMake
sudo apt-get install cmake
# google-glog + gflags
- sudo apt-get install libgoogle-glog-dev
+ sudo apt-get install libgoogle-glog-dev libgflags-dev
# BLAS & LAPACK
sudo apt-get install libatlas-base-dev
# Eigen3
sudo apt-get install libeigen3-dev
# SuiteSparse and CXSparse (optional)
- # - If you want to build Ceres as a *static* library (the default)
- # you can use the SuiteSparse package in the main Ubuntu package
- # repository:
- sudo apt-get install libsuitesparse-dev
- # - However, if you want to build Ceres as a *shared* library, you must
- # add the following PPA:
- sudo add-apt-repository ppa:bzindovic/suitesparse-bugfix-1319687
- sudo apt-get update
sudo apt-get install libsuitesparse-dev
We are now ready to build, test, and install Ceres.
.. code-block:: bash
- tar zxf ceres-solver-1.14.0.tar.gz
+ tar zxf ceres-solver-2.0.0.tar.gz
mkdir ceres-bin
cd ceres-bin
- cmake ../ceres-solver-1.14.0
+ cmake ../ceres-solver-2.0.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
@@ -181,7 +172,7 @@
.. code-block:: bash
- bin/simple_bundle_adjuster ../ceres-solver-1.14.0/data/problem-16-22106-pre.txt
+ bin/simple_bundle_adjuster ../ceres-solver-2.0.0/data/problem-16-22106-pre.txt
This runs Ceres for a maximum of 10 iterations using the
``DENSE_SCHUR`` linear solver. The output should look something like
@@ -198,7 +189,7 @@
5 1.803399e+04 5.33e+01 1.48e+04 1.23e+01 9.99e-01 8.33e+05 1 1.45e-01 1.08e+00
6 1.803390e+04 9.02e-02 6.35e+01 8.00e-01 1.00e+00 2.50e+06 1 1.50e-01 1.23e+00
- Ceres Solver v1.14.0 Solve Report
+ Ceres Solver v2.0.0 Solve Report
----------------------------------
Original Reduced
Parameter blocks 22122 22122
@@ -239,30 +230,16 @@
Termination: CONVERGENCE (Function tolerance reached. |cost_change|/cost: 1.769766e-09 <= 1.000000e-06)
-.. section-osx:
+.. section-macos:
-Mac OS X
-========
-.. NOTE::
+macOS
+=====
- Ceres will not compile using Xcode 4.5.x (Clang version 4.1) due to a
- bug in that version of Clang. If you are running Xcode 4.5.x, please
- update to Xcode >= 4.6.x before attempting to build Ceres.
+On macOS, you can either use `Homebrew
+<https://brew.sh/>`_ (recommended) or `MacPorts
+<https://www.macports.org/>`_ to install Ceres Solver.
-
-On OS X, you can either use `MacPorts <https://www.macports.org/>`_ or
-`Homebrew <http://mxcl.github.com/homebrew/>`_ to install Ceres Solver.
-
-If using `MacPorts <https://www.macports.org/>`_, then
-
-.. code-block:: bash
-
- sudo port install ceres-solver
-
-will install the latest version.
-
-If using `Homebrew <http://mxcl.github.com/homebrew/>`_ and assuming
-that you have the ``homebrew/science`` [#f1]_ tap enabled, then
+If using `Homebrew <https://brew.sh/>`_, then
.. code-block:: bash
@@ -277,9 +254,17 @@
will install the latest version in the git repo.
+If using `MacPorts <https://www.macports.org/>`_, then
+
+.. code-block:: bash
+
+ sudo port install ceres-solver
+
+will install the latest version.
+
You can also install each of the dependencies by hand using `Homebrew
-<http://mxcl.github.com/homebrew/>`_. There is no need to install
-``BLAS`` or ``LAPACK`` separately as OS X ships with optimized
+<https://brew.sh/>`_. There is no need to install
+``BLAS`` or ``LAPACK`` separately as macOS ships with optimized
``BLAS`` and ``LAPACK`` routines as part of the `vecLib
<https://developer.apple.com/library/mac/#documentation/Performance/Conceptual/vecLib/Reference/reference.html>`_
framework.
@@ -289,7 +274,7 @@
# CMake
brew install cmake
# google-glog and gflags
- brew install glog
+ brew install glog gflags
# Eigen3
brew install eigen
# SuiteSparse and CXSparse
@@ -299,10 +284,10 @@
.. code-block:: bash
- tar zxf ceres-solver-1.14.0.tar.gz
+ tar zxf ceres-solver-2.0.0.tar.gz
mkdir ceres-bin
cd ceres-bin
- cmake ../ceres-solver-1.14.0
+ cmake ../ceres-solver-2.0.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
@@ -310,13 +295,13 @@
# documentation for the EXPORT_BUILD_DIR option for more information.
make install
-Building with OpenMP on OS X
-----------------------------
+Building with OpenMP on macOS
+-----------------------------
-Up to at least Xcode 8, OpenMP support was disabled in Apple's version of
+Up to at least Xcode 12, OpenMP support was disabled in Apple's version of
Clang. However, you can install the latest version of the LLVM toolchain
from Homebrew which does support OpenMP, and thus build Ceres with OpenMP
-support on OS X. To do this, you must install llvm via Homebrew:
+support on macOS. To do this, you must install llvm via Homebrew:
.. code-block:: bash
@@ -330,7 +315,7 @@
.. code-block:: bash
- tar zxf ceres-solver-1.14.0.tar.gz
+ tar zxf ceres-solver-2.0.0.tar.gz
mkdir ceres-bin
cd ceres-bin
# Configure the local shell only (not persistent) to use the Homebrew LLVM
@@ -340,9 +325,8 @@
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
export PATH="/usr/local/opt/llvm/bin:$PATH"
- # Force CMake to use the Homebrew version of Clang. OpenMP will be
- # automatically enabled if it is detected that the compiler supports it.
- cmake -DCMAKE_C_COMPILER=/usr/local/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/usr/local/opt/llvm/bin/clang++ ../ceres-solver-1.14.0
+ # Force CMake to use the Homebrew version of Clang and enable OpenMP.
+ cmake -DCMAKE_C_COMPILER=/usr/local/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/usr/local/opt/llvm/bin/clang++ -DCERES_THREADING_MODEL=OPENMP ../ceres-solver-2.0.0
make -j3
make test
# Optionally install Ceres. It can also be exported using CMake which
@@ -353,19 +337,6 @@
Like the Linux build, you should now be able to run
``bin/simple_bundle_adjuster``.
-
-.. rubric:: Footnotes
-
-.. [#f1] Ceres and many of its dependencies are in `homebrew/science
- <https://github.com/Homebrew/homebrew-science>`_ tap. So, if you
- don't have this tap enabled, then you will need to enable it as
- follows before executing any of the commands in this section.
-
- .. code-block:: bash
-
- brew tap homebrew/science
-
-
.. _section-windows:
Windows
@@ -378,9 +349,9 @@
<https://github.com/tbennun/ceres-windows>`_ for Ceres Solver by Tal
Ben-Nun.
-On Windows, we support building with Visual Studio 2013 Release 4 or newer. Note
+On Windows, we support building with Visual Studio 2015 Update 2 or newer. Note
that the Windows port is less featureful and less tested than the
-Linux or Mac OS X versions due to the lack of an officially supported
+Linux or macOS versions due to the lack of an officially supported
way of building SuiteSparse and CXSparse. There are however a number
of unofficial ways of building these libraries. Building on Windows
is also a bit more involved since there is no automated way to install
@@ -409,8 +380,9 @@
#. Get dependencies; unpack them as subdirectories in ``ceres/``
(``ceres/eigen``, ``ceres/glog``, etc)
- #. ``Eigen`` 3.1 (needed on Windows; 3.0.x will not work). There is
- no need to build anything; just unpack the source tarball.
+   #. ``Eigen`` 3.3. Configure and optionally install Eigen. It should be
+      exported into the CMake package registry by default as part of the
+      configure stage, so installation should not be necessary.
#. ``google-glog`` Open up the Visual Studio solution and build it.
#. ``gflags`` Open up the Visual Studio solution and build it.
@@ -431,7 +403,7 @@
#. Unpack the Ceres tarball into ``ceres``. For the tarball, you
should get a directory inside ``ceres`` similar to
- ``ceres-solver-1.3.0``. Alternately, checkout Ceres via ``git`` to
+ ``ceres-solver-2.0.0``. Alternately, checkout Ceres via ``git`` to
get ``ceres-solver.git`` inside ``ceres``.
#. Install ``CMake``,
@@ -445,11 +417,10 @@
#. Try running ``Configure``. It won't work. It'll show a bunch of options.
You'll need to set:
- #. ``EIGEN_INCLUDE_DIR_HINTS``
+ #. ``Eigen3_DIR`` (Set to directory containing ``Eigen3Config.cmake``)
#. ``GLOG_INCLUDE_DIR_HINTS``
#. ``GLOG_LIBRARY_DIR_HINTS``
- #. ``GFLAGS_INCLUDE_DIR_HINTS``
- #. ``GFLAGS_LIBRARY_DIR_HINTS``
+ #. (Optional) ``gflags_DIR`` (Set to directory containing ``gflags-config.cmake``)
#. (Optional) ``SUITESPARSE_INCLUDE_DIR_HINTS``
#. (Optional) ``SUITESPARSE_LIBRARY_DIR_HINTS``
#. (Optional) ``CXSPARSE_INCLUDE_DIR_HINTS``
@@ -507,10 +478,10 @@
cmake \
-DCMAKE_TOOLCHAIN_FILE=\
$NDK_DIR/build/cmake/android.toolchain.cmake \
- -DEIGEN_INCLUDE_DIR=/path/to/eigen/header \
- -DANDROID_ABI=armeabi-v7a \
+ -DEigen3_DIR=/path/to/Eigen3Config.cmake \
+ -DANDROID_ABI=arm64-v8a \
-DANDROID_STL=c++_shared \
- -DANDROID_NATIVE_API_LEVEL=android-24 \
+ -DANDROID_NATIVE_API_LEVEL=android-29 \
-DBUILD_SHARED_LIBS=ON \
-DMINIGLOG=ON \
<PATH_TO_CERES_SOURCE>
@@ -538,6 +509,7 @@
the sample by running for example:
.. code-block:: bash
+
adb shell
cd /data/local/tmp
LD_LIBRARY_PATH=/data/local/tmp ./helloworld
@@ -563,7 +535,7 @@
cmake \
-DCMAKE_TOOLCHAIN_FILE=../ceres-solver/cmake/iOS.cmake \
- -DEIGEN_INCLUDE_DIR=/path/to/eigen/header \
+ -DEigen3_DIR=/path/to/Eigen3Config.cmake \
-DIOS_PLATFORM=<PLATFORM> \
<PATH_TO_CERES_SOURCE>
@@ -693,15 +665,6 @@
#. ``EIGENSPARSE [Default: ON]``: By default, Ceres will not use
Eigen's sparse Cholesky factorization.
- .. NOTE::
-
- For good performance, use Eigen version 3.2.2 or later.
-
- .. NOTE::
-
- Unlike the rest of Eigen (>= 3.1.1 MPL2, < 3.1.1 LGPL), Eigen's sparse
- Cholesky factorization is (still) licensed under the LGPL.
-
#. ``GFLAGS [Default: ON]``: Turn this ``OFF`` to build Ceres without
``gflags``. This will also prevent some of the example code from
building.
@@ -717,7 +680,7 @@
gains in the ``SPARSE_SCHUR`` solver, you can disable some of the
template specializations by turning this ``OFF``.
-#. ``CERES_THREADING_MODEL [Default: CXX11_THREADS > OPENMP > NO_THREADS]``:
+#. ``CERES_THREADING_MODEL [Default: CXX_THREADS > OPENMP > NO_THREADS]``:
Multi-threading backend Ceres should be compiled with. This will
automatically be set to only accept the available subset of threading
options in the CMake GUI.
@@ -730,10 +693,10 @@
solely for installation, and so must be installed in order for
clients to use it. Turn this ``ON`` to export Ceres' build
directory location into the `user's local CMake package registry
- <http://www.cmake.org/cmake/help/v3.2/manual/cmake-packages.7.html#user-package-registry>`_
+ <http://www.cmake.org/cmake/help/v3.5/manual/cmake-packages.7.html#user-package-registry>`_
where it will be detected **without requiring installation** in a
client project using CMake when `find_package(Ceres)
- <http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
+ <http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_
is invoked.
#. ``BUILD_DOCUMENTATION [Default: OFF]``: Use this to enable building
@@ -769,8 +732,16 @@
----------------------------------------------
Ceres uses the ``CMake`` `find_package
-<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
-function to find all of its dependencies using
+<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_
+function to find all of its dependencies. Dependencies that reliably
+provide config files on all supported platforms are expected to be
+found in "Config" mode of ``find_package`` (``Eigen``, ``gflags``).
+This means you can use the standard ``CMake`` facilities to customize
+where these dependencies are found, such as ``CMAKE_PREFIX_PATH``,
+the ``<DEPENDENCY_NAME>_DIR`` variables, or since ``CMake`` 3.12 the
+``<DEPENDENCY_NAME>_ROOT`` variables.
+
+Other dependencies are found using
``Find<DEPENDENCY_NAME>.cmake`` scripts which are either included in
Ceres (for most dependencies) or are shipped as standard with
``CMake`` (for ``LAPACK`` & ``BLAS``). These scripts will search all
@@ -826,7 +797,7 @@
======================
In order to use Ceres in client code with CMake using `find_package()
-<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
+<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_
then either:
#. Ceres must have been installed with ``make install``. If the
@@ -858,13 +829,13 @@
# helloworld
add_executable(helloworld helloworld.cc)
- target_link_libraries(helloworld ${CERES_LIBRARIES})
+ target_link_libraries(helloworld Ceres::ceres)
Irrespective of whether Ceres was installed or exported, if multiple
versions are detected, set: ``Ceres_DIR`` to control which is used.
If Ceres was installed ``Ceres_DIR`` should be the path to the
directory containing the installed ``CeresConfig.cmake`` file
-(e.g. ``/usr/local/share/Ceres``). If Ceres was exported, then
+(e.g. ``/usr/local/lib/cmake/Ceres``). If Ceres was exported, then
``Ceres_DIR`` should be the path to the exported Ceres build
directory.
@@ -874,6 +845,7 @@
as the exported Ceres CMake target already contains the definitions
of its public include directories which will be automatically
included by CMake when compiling a target that links against Ceres.
+ In fact, since v2.0 ``CERES_INCLUDE_DIRS`` is not even set.
Specify Ceres components
-------------------------------------
@@ -912,11 +884,9 @@
#. ``Multithreading``: Ceres built with *a* multithreading library.
This is equivalent to (``CERES_THREAD != NO_THREADS``).
-#. ``C++11``: Ceres built with C++11.
-
To specify one/multiple Ceres components use the ``COMPONENTS`` argument to
`find_package()
-<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_ like so:
+<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_ like so:
.. code-block:: cmake
@@ -934,7 +904,7 @@
Additionally, when CMake has found Ceres it can optionally check the package
version, if it has been specified in the `find_package()
-<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
+<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_
call. For example:
.. code-block:: cmake
@@ -992,9 +962,9 @@
All libraries and executables built using CMake are represented as
*targets* created using `add_library()
- <http://www.cmake.org/cmake/help/v3.2/command/add_library.html>`_
+ <http://www.cmake.org/cmake/help/v3.5/command/add_library.html>`_
and `add_executable()
- <http://www.cmake.org/cmake/help/v3.2/command/add_executable.html>`_.
+ <http://www.cmake.org/cmake/help/v3.5/command/add_executable.html>`_.
Targets encapsulate the rules and dependencies (which can be other
targets) required to build or link against an object. This allows
CMake to implicitly manage dependency chains. Thus it is
@@ -1007,10 +977,10 @@
directory is exported into the local CMake package registry (see
:ref:`section-install-vs-export`), in addition to the public headers
and compiled libraries, a set of CMake-specific project configuration
-files are also installed to: ``<INSTALL_ROOT>/share/Ceres`` (if Ceres
+files are also installed to: ``<INSTALL_ROOT>/lib/cmake/Ceres`` (if Ceres
is installed), or created in the build directory (if Ceres' build
directory is exported). When `find_package
-<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_ is
+<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_ is
invoked, CMake checks various standard install locations (including
``/usr/local`` on Linux & UNIX systems), and the local CMake package
registry for CMake configuration files for the project to be found
@@ -1022,9 +992,9 @@
Which is written by the developers of the project, and is
configured with the selected options and installed locations when
- the project is built and defines the CMake variables:
- ``<PROJECT_NAME>_INCLUDE_DIRS`` & ``<PROJECT_NAME>_LIBRARIES``
- which are used by the caller to import the project.
+ the project is built and imports the project targets and/or defines
+ the legacy CMake variables: ``<PROJECT_NAME>_INCLUDE_DIRS`` &
+ ``<PROJECT_NAME>_LIBRARIES`` which are used by the caller.
The ``<PROJECT_NAME>Config.cmake`` typically includes a second file
installed to the same location:
@@ -1039,39 +1009,33 @@
project using ``add_library()``. However, imported targets refer to
objects that have already been built by a different CMake project.
Principally, an imported target contains the location of the compiled
-object and all of its public dependencies required to link against it.
-Any locally declared target can depend on an imported target, and
-CMake will manage the dependency chain, just as if the imported target
-had been declared locally by the current project.
+object and all of its public dependencies required to link against it
+as well as all required include directories. Any locally declared target
+can depend on an imported target, and CMake will manage the dependency
+chain, just as if the imported target had been declared locally by the
+current project.
Crucially, just like any locally declared CMake target, an imported target is
identified by its **name** when adding it as a dependency to another target.
-Thus, if in a project using Ceres you had the following in your CMakeLists.txt:
+Since v2.0, Ceres has used the target namespace feature of CMake to prefix
+its export targets: ``Ceres::ceres``. However, historically the Ceres target
+did not have a namespace, and was just called ``ceres``.
-.. code-block:: cmake
-
- find_package(Ceres REQUIRED)
- message("CERES_LIBRARIES = ${CERES_LIBRARIES}")
-
-You would see the output: ``CERES_LIBRARIES = ceres``. **However**,
-here ``ceres`` is an **imported target** created when
-``CeresTargets.cmake`` was read as part of ``find_package(Ceres
-REQUIRED)``. It does **not** refer (directly) to the compiled Ceres
-library: ``libceres.a/so/dylib/lib``. This distinction is important,
-as depending on the options selected when it was built, Ceres can have
-public link dependencies which are encapsulated in the imported target
-and automatically added to the link step when Ceres is added as a
-dependency of another target by CMake. In this case, linking only
-against ``libceres.a/so/dylib/lib`` without these other public
-dependencies would result in a linker error.
+Whilst an alias target called ``ceres`` is still provided in v2.0 for backwards
+compatibility, it comes with a potential pitfall: if you fail to call
+``find_package(Ceres)`` and Ceres is installed in a default search path for
+your compiler, then instead of matching the imported Ceres target, the build
+will match the installed libceres.so/dylib/a library directly. If this happens
+you will get either compiler errors for missing include directories or linker
+errors due to missing references to Ceres' public dependencies.
Note that this description applies both to projects that are
**installed** using CMake, and to those whose **build directory is
exported** using `export()
-<http://www.cmake.org/cmake/help/v3.2/command/export.html>`_ (instead
+<http://www.cmake.org/cmake/help/v3.5/command/export.html>`_ (instead
of `install()
-<http://www.cmake.org/cmake/help/v3.2/command/install.html>`_). Ceres
+<http://www.cmake.org/cmake/help/v3.5/command/install.html>`_). Ceres
supports both installation and export of its build directory if the
``EXPORT_BUILD_DIR`` option is enabled, see
:ref:`section-customizing`.
@@ -1087,8 +1051,8 @@
project's build directory is **exported**, instead of copying the
compiled libraries and headers, CMake creates an entry for the project
in the `user's local CMake package registry
-<http://www.cmake.org/cmake/help/v3.2/manual/cmake-packages.7.html#user-package-registry>`_,
-``<USER_HOME>/.cmake/packages`` on Linux & OS X, which contains the
+<http://www.cmake.org/cmake/help/v3.5/manual/cmake-packages.7.html#user-package-registry>`_,
+``<USER_HOME>/.cmake/packages`` on Linux & macOS, which contains the
path to the project's build directory which will be checked by CMake
during a call to ``find_package()``. The effect of which is that any
client code uses the compiled libraries and headers in the build
@@ -1128,26 +1092,6 @@
.. code-block:: cmake
- # Importing Ceres in FooConfig.cmake using CMake 2.8.x style.
- #
- # When configure_file() is used to generate FooConfig.cmake from
- # FooConfig.cmake.in, @Ceres_DIR@ will be replaced with the current
- # value of Ceres_DIR being used by Foo. This should be passed as a hint
- # when invoking find_package(Ceres) to ensure that the same install of
- # Ceres is used as was used to build Foo.
- set(CERES_DIR_HINTS @Ceres_DIR@)
-
- # Forward the QUIET / REQUIRED options.
- if (Foo_FIND_QUIETLY)
- find_package(Ceres QUIET HINTS ${CERES_DIR_HINTS})
- elseif (Foo_FIND_REQUIRED)
- find_package(Ceres REQUIRED HINTS ${CERES_DIR_HINTS})
- else ()
- find_package(Ceres HINTS ${CERES_DIR_HINTS})
- endif()
-
-.. code-block:: cmake
-
# Importing Ceres in FooConfig.cmake using CMake 3.x style.
#
# In CMake v3.x, the find_dependency() macro exists to forward the REQUIRED
@@ -1158,3 +1102,33 @@
# CMake's search list before this call.
include(CMakeFindDependencyMacro)
find_dependency(Ceres)
+
+.. _section-migration:
+
+Migration
+=========
+
+The following includes some hints for migrating from previous versions.
+
+Version 2.0
+-----------
+
+- When using Ceres with CMake, the target name in v2.0 is
+  ``Ceres::ceres`` following modern naming conventions. The legacy
+ target ``ceres`` exists for backwards compatibility, but is
+ deprecated. ``CERES_INCLUDE_DIRS`` is not set any more, as the
+ exported Ceres CMake target already contains the definitions of its
+ public include directories which will be automatically included by
+ CMake when compiling a target that links against Ceres.
+- When building Ceres, some dependencies (Eigen, gflags) are not found
+ using custom ``Find<DEPENDENCY_NAME>.cmake`` modules any
+ more. Hence, instead of the custom variables (``<DEPENDENCY_NAME (CAPS)>_INCLUDE_DIR_HINTS``,
+ ``<DEPENDENCY_NAME (CAPS)>_INCLUDE_DIR``, ...) you should use standard
+ CMake facilities to customize where these dependencies are found, such as
+ ``CMAKE_PREFIX_PATH``, the ``<DEPENDENCY_NAME>_DIR`` variables, or
+ since CMake 3.12 the ``<DEPENDENCY_NAME>_ROOT`` variables.
+- While TBB is no longer used directly by Ceres, Ceres might still
+  link against it if SuiteSparseQR was found. The variable (environment
+ or CMake) to customize this is ``TBB_ROOT`` (used to be ``TBBROOT``).
+ For example, use ``cmake -DTBB_ROOT=/opt/intel/tbb ...`` if you want to
+ link against TBB installed from Intel's binary packages on Linux.
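The CMake-facing changes described in the migration notes above can be summarized in a minimal client ``CMakeLists.txt``. This is a sketch; the project and source file names (``helloworld``, ``helloworld.cc``) are placeholders:

```cmake
cmake_minimum_required(VERSION 3.5)
project(helloworld)

# Finds either an installed Ceres or an exported Ceres build directory;
# use CMAKE_PREFIX_PATH or Ceres_DIR to point at a specific one.
find_package(Ceres REQUIRED)

add_executable(helloworld helloworld.cc)

# Since v2.0 the namespaced imported target carries the include
# directories and public link dependencies; CERES_INCLUDE_DIRS is no
# longer set and must not be used.
target_link_libraries(helloworld Ceres::ceres)
```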
diff --git a/docs/source/interfacing_with_autodiff.rst b/docs/source/interfacing_with_autodiff.rst
index b79ed45..02f58b2 100644
--- a/docs/source/interfacing_with_autodiff.rst
+++ b/docs/source/interfacing_with_autodiff.rst
@@ -181,7 +181,7 @@
double* residuals,
double** jacobians) const {
if (!jacobians) {
- ComputeDistortionValueAndJacobian(parameters[0][0], residuals, NULL);
+ ComputeDistortionValueAndJacobian(parameters[0][0], residuals, nullptr);
} else {
ComputeDistortionValueAndJacobian(parameters[0][0], residuals, jacobians[0]);
}
diff --git a/docs/source/nnls_covariance.rst b/docs/source/nnls_covariance.rst
index 9c6cea8..66afd44 100644
--- a/docs/source/nnls_covariance.rst
+++ b/docs/source/nnls_covariance.rst
@@ -25,7 +25,7 @@
observations :math:`y` is the solution to the non-linear least squares
problem:
-.. math:: x^* = \arg \min_x \|f(x)\|^2
+.. math:: x^* = \arg \min_x \|f(x) - y\|^2
And the covariance of :math:`x^*` is given by
@@ -169,18 +169,18 @@
:member:`Covariance::Options::sparse_linear_algebra_library_type`
to ``SUITE_SPARSE``.
- Neither ``SPARSE_QR`` cannot compute the covariance if the
+ ``SPARSE_QR`` cannot compute the covariance if the
Jacobian is rank deficient.
2. ``DENSE_SVD`` uses ``Eigen``'s ``JacobiSVD`` to perform the
computations. It computes the singular value decomposition
- .. math:: U S V^\top = J
+ .. math:: U D V^\top = J
and then uses it to compute the pseudo inverse of J'J as
- .. math:: (J'J)^{\dagger} = V S^{\dagger} V^\top
+   .. math:: (J'J)^{\dagger} = V (D^2)^{\dagger} V^\top
It is an accurate but slow method and should only be used for
small to moderate sized problems. It can handle full-rank as
@@ -207,7 +207,7 @@
(J'J)^{-1} = \begin{bmatrix}
2.0471e+14& -2.0471e+14 \\
- -2.0471e+14 2.0471e+14
+ -2.0471e+14& 2.0471e+14
\end{bmatrix}
diff --git a/docs/source/nnls_modeling.rst b/docs/source/nnls_modeling.rst
index 860b689..c0c3227 100644
--- a/docs/source/nnls_modeling.rst
+++ b/docs/source/nnls_modeling.rst
@@ -70,7 +70,7 @@
:class:`CostFunction` is responsible for computing the vector
:math:`f\left(x_{1},...,x_{k}\right)` and the Jacobian matrices
-.. math:: J_i = \frac{\partial}{\partial x_i} f(x_1, ..., x_k) \quad \forall i \in \{1, \ldots, k\}
+.. math:: J_i = D_i f(x_1, ..., x_k) \quad \forall i \in \{1, \ldots, k\}
.. class:: CostFunction
@@ -108,29 +108,29 @@
that contains the :math:`i^{\text{th}}` parameter block that the
``CostFunction`` depends on.
- ``parameters`` is never ``NULL``.
+ ``parameters`` is never ``nullptr``.
``residuals`` is an array of size ``num_residuals_``.
- ``residuals`` is never ``NULL``.
+ ``residuals`` is never ``nullptr``.
``jacobians`` is an array of arrays of size
``CostFunction::parameter_block_sizes_.size()``.
- If ``jacobians`` is ``NULL``, the user is only expected to compute
+ If ``jacobians`` is ``nullptr``, the user is only expected to compute
the residuals.
``jacobians[i]`` is a row-major array of size ``num_residuals x
parameter_block_sizes_[i]``.
- If ``jacobians[i]`` is **not** ``NULL``, the user is required to
+ If ``jacobians[i]`` is **not** ``nullptr``, the user is required to
compute the Jacobian of the residual vector with respect to
``parameters[i]`` and store it in this array, i.e.
``jacobians[i][r * parameter_block_sizes_[i] + c]`` =
:math:`\frac{\displaystyle \partial \text{residual}[r]}{\displaystyle \partial \text{parameters}[i][c]}`
- If ``jacobians[i]`` is ``NULL``, then this computation can be
+ If ``jacobians[i]`` is ``nullptr``, then this computation can be
skipped. This is the case when the corresponding parameter block is
marked constant.
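As an illustration of these conventions, a minimal ``Evaluate`` for the one-residual, one-parameter-block cost :math:`f(x) = 10 - x` might look like the following. This is a self-contained sketch that mimics the interface described above; it does not derive from the real ``ceres::CostFunction``:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of an Evaluate() obeying the conventions above, for the
// one-residual, one-parameter-block cost f(x) = 10 - x.
struct QuadraticCost {
  bool Evaluate(double const* const* parameters,
                double* residuals,
                double** jacobians) const {
    const double x = parameters[0][0];  // parameters is never nullptr.
    residuals[0] = 10.0 - x;            // residuals is never nullptr.
    // jacobians (or jacobians[i]) may be nullptr, e.g. when the
    // parameter block is constant; then only residuals are computed.
    if (jacobians != nullptr && jacobians[0] != nullptr) {
      jacobians[0][0] = -1.0;  // d(10 - x)/dx, row-major 1x1 block.
    }
    return true;
  }
};
```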
@@ -152,9 +152,7 @@
.. code-block:: c++
- template<int kNumResiduals,
- int N0 = 0, int N1 = 0, int N2 = 0, int N3 = 0, int N4 = 0,
- int N5 = 0, int N6 = 0, int N7 = 0, int N8 = 0, int N9 = 0>
+ template<int kNumResiduals, int... Ns>
class SizedCostFunction : public CostFunction {
public:
virtual bool Evaluate(double const* const* parameters,
@@ -177,23 +175,16 @@
template <typename CostFunctor,
int kNumResiduals, // Number of residuals, or ceres::DYNAMIC.
- int N0, // Number of parameters in block 0.
- int N1 = 0, // Number of parameters in block 1.
- int N2 = 0, // Number of parameters in block 2.
- int N3 = 0, // Number of parameters in block 3.
- int N4 = 0, // Number of parameters in block 4.
- int N5 = 0, // Number of parameters in block 5.
- int N6 = 0, // Number of parameters in block 6.
- int N7 = 0, // Number of parameters in block 7.
- int N8 = 0, // Number of parameters in block 8.
- int N9 = 0> // Number of parameters in block 9.
+ int... Ns> // Size of each parameter block
class AutoDiffCostFunction : public
- SizedCostFunction<kNumResiduals, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
+      SizedCostFunction<kNumResiduals, Ns...> {
public:
- explicit AutoDiffCostFunction(CostFunctor* functor);
+    explicit AutoDiffCostFunction(CostFunctor* functor,
+                                  Ownership ownership = TAKE_OWNERSHIP);
// Ignore the template parameter kNumResiduals and use
// num_residuals instead.
- AutoDiffCostFunction(CostFunctor* functor, int num_residuals);
+ AutoDiffCostFunction(CostFunctor* functor,
+ int num_residuals,
+                         Ownership ownership = TAKE_OWNERSHIP);
};
To get an auto differentiated cost function, you must define a
@@ -268,6 +259,21 @@
computing a 1-dimensional output from two arguments, both
2-dimensional.
+ By default :class:`AutoDiffCostFunction` will take ownership of the cost
+   functor pointer passed to it, i.e. it will call ``delete`` on the cost
+   functor when the :class:`AutoDiffCostFunction` itself is deleted. However,
+   this may be undesirable in certain cases; therefore it is also possible to specify
+ :class:`DO_NOT_TAKE_OWNERSHIP` as a second argument in the constructor,
+ while passing a pointer to a cost functor which does not need to be deleted
+ by the AutoDiffCostFunction. For example:
+
+ .. code-block:: c++
+
+     MyScalarCostFunctor functor(1.0);
+ CostFunction* cost_function
+ = new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(
+ &functor, DO_NOT_TAKE_OWNERSHIP);
+
:class:`AutoDiffCostFunction` also supports cost functions with a
runtime-determined number of residuals. For example:
@@ -284,10 +290,6 @@
Dimension of x ------------------------------------+ |
Dimension of y ---------------------------------------+
- The framework can currently accommodate cost functions of up to 10
- independent variables, and there is no limit on the dimensionality
- of each of them.
-
**WARNING 1** A common beginner's error when first using
:class:`AutoDiffCostFunction` is to get the sizing wrong. In particular,
there is a tendency to set the template parameters to (dimension of
@@ -303,10 +305,9 @@
.. class:: DynamicAutoDiffCostFunction
:class:`AutoDiffCostFunction` requires that the number of parameter
- blocks and their sizes be known at compile time. It also has an
- upper limit of 10 parameter blocks. In a number of applications,
- this is not enough e.g., Bezier curve fitting, Neural Network
- training etc.
+ blocks and their sizes be known at compile time. In a number of
+ applications, this is not enough e.g., Bezier curve fitting, Neural
+ Network training etc.
.. code-block:: c++
@@ -376,18 +377,9 @@
template <typename CostFunctor,
NumericDiffMethodType method = CENTRAL,
int kNumResiduals, // Number of residuals, or ceres::DYNAMIC.
- int N0, // Number of parameters in block 0.
- int N1 = 0, // Number of parameters in block 1.
- int N2 = 0, // Number of parameters in block 2.
- int N3 = 0, // Number of parameters in block 3.
- int N4 = 0, // Number of parameters in block 4.
- int N5 = 0, // Number of parameters in block 5.
- int N6 = 0, // Number of parameters in block 6.
- int N7 = 0, // Number of parameters in block 7.
- int N8 = 0, // Number of parameters in block 8.
- int N9 = 0> // Number of parameters in block 9.
+ int... Ns> // Size of each parameter block.
class NumericDiffCostFunction : public
- SizedCostFunction<kNumResiduals, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
+      SizedCostFunction<kNumResiduals, Ns...> {
};
To get a numerically differentiated :class:`CostFunction`, you must
@@ -484,10 +476,6 @@
Dimension of y ---------------------------------------------------+
- The framework can currently accommodate cost functions of up to 10
- independent variables, and there is no limit on the dimensionality
- of each of them.
-
There are three available numeric differentiation schemes in ceres-solver:
The ``FORWARD`` difference method, which approximates :math:`f'(x)`
@@ -595,8 +583,7 @@
Like :class:`AutoDiffCostFunction` :class:`NumericDiffCostFunction`
requires that the number of parameter blocks and their sizes be
- known at compile time. It also has an upper limit of 10 parameter
- blocks. In a number of applications, this is not enough.
+ known at compile time. In a number of applications, this is not enough.
.. code-block:: c++
@@ -716,7 +703,7 @@
.. code-block:: c++
- struct IntrinsicProjection
+ struct IntrinsicProjection {
IntrinsicProjection(const double* observation) {
observation_[0] = observation[0];
observation_[1] = observation[1];
@@ -724,14 +711,14 @@
bool operator()(const double* calibration,
const double* point,
- double* residuals) {
+ double* residuals) const {
double projection[2];
ThirdPartyProjectionFunction(calibration, point, projection);
residuals[0] = observation_[0] - projection[0];
residuals[1] = observation_[1] - projection[1];
return true;
}
- double observation_[2];
+ double observation_[2];
};
@@ -746,10 +733,9 @@
struct CameraProjection {
CameraProjection(double* observation)
- intrinsic_projection_(
- new NumericDiffCostFunction<IntrinsicProjection, CENTRAL, 2, 5, 3>(
- new IntrinsicProjection(observation)) {
- }
+ : intrinsic_projection_(
+ new NumericDiffCostFunction<IntrinsicProjection, CENTRAL, 2, 5, 3>(
+ new IntrinsicProjection(observation))) {}
template <typename T>
bool operator()(const T* rotation,
@@ -759,13 +745,14 @@
T* residuals) const {
T transformed_point[3];
RotateAndTranslatePoint(rotation, translation, point, transformed_point);
- return intrinsic_projection_(intrinsics, transformed_point, residual);
+ return intrinsic_projection_(intrinsics, transformed_point, residuals);
}
private:
- CostFunctionToFunctor<2,5,3> intrinsic_projection_;
+ CostFunctionToFunctor<2, 5, 3> intrinsic_projection_;
};
+
:class:`DynamicCostFunctionToFunctor`
=====================================
@@ -908,7 +895,7 @@
std::vector<LocalParameterization*> local_parameterizations;
local_parameterizations.push_back(my_parameterization);
- local_parameterizations.push_back(NULL);
+ local_parameterizations.push_back(nullptr);
std::vector parameter1;
std::vector parameter2;
@@ -1103,8 +1090,8 @@
Given a loss function :math:`\rho(s)` and a scalar :math:`a`, :class:`ScaledLoss`
implements the function :math:`a \rho(s)`.
- Since we treat a ``NULL`` Loss function as the Identity loss
- function, :math:`rho` = ``NULL``: is a valid input and will result
+ Since we treat a ``nullptr`` Loss function as the Identity loss
+  function, :math:`\rho` = ``nullptr`` is a valid input and will result
in the input being scaled by :math:`a`. This provides a simple way
of implementing a scaled ResidualBlock.
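The behaviour described above can be sketched with a stand-in for the loss-function interface. This is a simplified model for illustration, not the actual Ceres implementation:

```cpp
#include <cassert>

// Simplified model of the LossFunction interface: Evaluate() fills
// out[0..2] with rho(s), rho'(s), rho''(s).
struct LossFunction {
  virtual ~LossFunction() {}
  virtual void Evaluate(double s, double out[3]) const = 0;
};

// Sketch of the ScaledLoss idea: a * rho(s). A nullptr inner loss is
// treated as the identity loss rho(s) = s, so the output is a * s.
struct ScaledLossSketch : LossFunction {
  ScaledLossSketch(const LossFunction* rho, double a) : rho_(rho), a_(a) {}
  void Evaluate(double s, double out[3]) const override {
    if (rho_ == nullptr) {
      out[0] = a_ * s;  // a * rho(s) with rho(s) = s.
      out[1] = a_;      // first derivative.
      out[2] = 0.0;     // second derivative.
    } else {
      rho_->Evaluate(s, out);
      out[0] *= a_;
      out[1] *= a_;
      out[2] *= a_;
    }
  }
  const LossFunction* rho_;
  double a_;
};
```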
@@ -1146,8 +1133,7 @@
Theory
------
-Let us consider a problem with a single problem and a single parameter
-block.
+Let us consider a problem with a single parameter block.
.. math::
@@ -1165,8 +1151,8 @@
been ignored. Note that :math:`H(x)` is indefinite if
:math:`\rho''f(x)^\top f(x) + \frac{1}{2}\rho' < 0`. If this is not
the case, then it is possible to re-weight the residual and the Jacobian
-matrix such that the corresponding linear least squares problem for
-the robustified Gauss-Newton step.
+matrix such that the robustified Gauss-Newton step corresponds to an
+ordinary linear least squares problem.
Let :math:`\alpha` be a root of
@@ -1187,7 +1173,7 @@
we limit :math:`\alpha \le 1- \epsilon` for some small
:math:`\epsilon`. For more details see [Triggs]_.
-With this simple rescaling, one can use any Jacobian based non-linear
+With this simple rescaling, one can apply any Jacobian based non-linear
least squares algorithm to robustified non-linear least squares
problems.
@@ -1197,6 +1183,113 @@
.. class:: LocalParameterization
+ In many optimization problems, especially sensor fusion problems,
+ one has to model quantities that live in spaces known as `Manifolds
+   <https://en.wikipedia.org/wiki/Manifold>`_, for example the
+ rotation/orientation of a sensor that is represented by a
+ `Quaternion
+ <https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation>`_.
+
+   Manifolds are spaces that locally look like Euclidean spaces. More
+ precisely, at each point on the manifold there is a linear space
+ that is tangent to the manifold. It has dimension equal to the
+ intrinsic dimension of the manifold itself, which is less than or
+   equal to the dimension of the ambient space in which the manifold is embedded.
+
+ For example, the tangent space to a point on a sphere in three
+ dimensions is the two dimensional plane that is tangent to the
+ sphere at that point. There are two reasons tangent spaces are
+ interesting:
+
+ 1. They are Euclidean spaces, so the usual vector space operations
+ apply there, which makes numerical operations easy.
+
+   2. Movements in the tangent space translate into movements along the
+ manifold. Movements perpendicular to the tangent space do not
+ translate into movements on the manifold.
+
+ Returning to our sphere example, moving in the 2 dimensional
+ plane tangent to the sphere and projecting back onto the sphere
+ will move you away from the point you started from but moving
+      along the normal at the same point and then projecting back onto
+ the sphere brings you back to the point.
+
+ Besides the mathematical niceness, modeling manifold valued
+ quantities correctly and paying attention to their geometry has
+ practical benefits too:
+
+   1. It naturally constrains the quantity to the manifold throughout
+      the optimization, freeing the user from hacks like *quaternion
+ normalization*.
+
+ 2. It reduces the dimension of the optimization problem to its
+      *natural* size. For example, a quantity restricted to a line is a
+ one dimensional object regardless of the dimension of the ambient
+ space in which this line lives.
+
+   Working in the tangent space not only reduces the computational
+   complexity of the optimization algorithm, but also improves its
+   numerical behaviour.
+
+ A basic operation one can perform on a manifold is the
+ :math:`\boxplus` operation that computes the result of moving along
+ delta in the tangent space at x, and then projecting back onto the
+ manifold that x belongs to. Also known as a *Retraction*,
+ :math:`\boxplus` is a generalization of vector addition in Euclidean
+   spaces. Formally, :math:`\boxplus` is a smooth map from the product
+   of a manifold :math:`\mathcal{M}` and its tangent space
+   :math:`T_\mathcal{M}` to the manifold :math:`\mathcal{M}` that
+ obeys the identity
+
+ .. math:: \boxplus(x, 0) = x,\quad \forall x.
+
+ That is, it ensures that the tangent space is *centered* at :math:`x`
+ and the zero vector is the identity element. For more see
+ [Hertzberg]_ and section A.6.9 of [HartleyZisserman]_.
+
+ Let us consider two examples:
+
+ The Euclidean space :math:`R^n` is the simplest example of a
+ manifold. It has dimension :math:`n` (and so does its tangent space)
+ and :math:`\boxplus` is the familiar vector sum operation.
+
+ .. math:: \boxplus(x, \Delta) = x + \Delta
+
+ A more interesting case is :math:`SO(3)`, the special orthogonal
+ group in three dimensions - the space of 3x3 rotation
+ matrices. :math:`SO(3)` is a three dimensional manifold embedded in
+ :math:`R^9` or :math:`R^{3\times 3}`.
+
+ :math:`\boxplus` on :math:`SO(3)` is defined using the *Exponential*
+ map, from the tangent space (:math:`R^3`) to the manifold. The
+ Exponential map :math:`\operatorname{Exp}` is defined as:
+
+ .. math::
+
+ \operatorname{Exp}([p,q,r]) = \left [ \begin{matrix}
+ \cos \theta + cp^2 & -sr + cpq & sq + cpr \\
+ sr + cpq & \cos \theta + cq^2& -sp + cqr \\
+ -sq + cpr & sp + cqr & \cos \theta + cr^2
+ \end{matrix} \right ]
+
+ where,
+
+ .. math::
+ \theta = \sqrt{p^2 + q^2 + r^2}, s = \frac{\sin \theta}{\theta},
+ c = \frac{1 - \cos \theta}{\theta^2}.
+
+ Then,
+
+ .. math::
+
+ \boxplus(x, \Delta) = x \operatorname{Exp}(\Delta)
+
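As a concrete illustration, the Exponential map formula above can be evaluated directly. This is just a sketch, not the Ceres implementation; ``ExpSO3`` is a hypothetical helper name, and the small-angle limits :math:`s \to 1`, :math:`c \to 1/2` guard against division by zero near :math:`\theta = 0`:

```cpp
#include <cmath>

// Illustrative sketch (not part of Ceres): evaluate the Exponential
// map Exp([p, q, r]) from the formula above into a row-major 3x3
// rotation matrix R.
void ExpSO3(double p, double q, double r, double R[9]) {
  const double theta2 = p * p + q * q + r * r;
  const double theta = std::sqrt(theta2);
  double s, c;
  if (theta > 1e-8) {
    s = std::sin(theta) / theta;
    c = (1.0 - std::cos(theta)) / theta2;
  } else {
    s = 1.0;  // limit of sin(theta) / theta as theta -> 0
    c = 0.5;  // limit of (1 - cos(theta)) / theta^2 as theta -> 0
  }
  const double ct = std::cos(theta);
  R[0] = ct + c * p * p;     R[1] = -s * r + c * p * q; R[2] = s * q + c * p * r;
  R[3] = s * r + c * p * q;  R[4] = ct + c * q * q;     R[5] = -s * p + c * q * r;
  R[6] = -s * q + c * p * r; R[7] = s * p + c * q * r;  R[8] = ct + c * r * r;
}
```

For example, :math:`\operatorname{Exp}([0, 0, \pi/2])` yields a rotation by :math:`\pi/2` about the :math:`z` axis.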
+ The ``LocalParameterization`` interface allows the user to define
+ and associate with parameter blocks the manifold that they belong
+ to. It does so by defining the ``Plus`` (:math:`\boxplus`) operation
+ and its derivative with respect to :math:`\Delta` at :math:`\Delta =
+ 0`.
+
.. code-block:: c++
class LocalParameterization {
@@ -1214,43 +1307,6 @@
virtual int LocalSize() const = 0;
};
- Sometimes the parameters :math:`x` can overparameterize a
- problem. In that case it is desirable to choose a parameterization
- to remove the null directions of the cost. More generally, if
- :math:`x` lies on a manifold of a smaller dimension than the
- ambient space that it is embedded in, then it is numerically and
- computationally more effective to optimize it using a
- parameterization that lives in the tangent space of that manifold
- at each point.
-
- For example, a sphere in three dimensions is a two dimensional
- manifold, embedded in a three dimensional space. At each point on
- the sphere, the plane tangent to it defines a two dimensional
- tangent space. For a cost function defined on this sphere, given a
- point :math:`x`, moving in the direction normal to the sphere at
- that point is not useful. Thus a better way to parameterize a point
- on a sphere is to optimize over two dimensional vector
- :math:`\Delta x` in the tangent space at the point on the sphere
- point and then "move" to the point :math:`x + \Delta x`, where the
- move operation involves projecting back onto the sphere. Doing so
- removes a redundant dimension from the optimization, making it
- numerically more robust and efficient.
-
- More generally we can define a function
-
- .. math:: x' = \boxplus(x, \Delta x),
-
- where :math:`x'` has the same size as :math:`x`, and :math:`\Delta
- x` is of size less than or equal to :math:`x`. The function
- :math:`\boxplus`, generalizes the definition of vector
- addition. Thus it satisfies the identity
-
- .. math:: \boxplus(x, 0) = x,\quad \forall x.
-
- Instances of :class:`LocalParameterization` implement the
- :math:`\boxplus` operation and its derivative with respect to
- :math:`\Delta x` at :math:`\Delta x = 0`.
-
.. function:: int LocalParameterization::GlobalSize()
@@ -1259,134 +1315,158 @@
.. function:: int LocalParameterization::LocalSize()
- The size of the tangent space
- that :math:`\Delta x` lives in.
+ The size of the tangent space that :math:`\Delta` lives in.
.. function:: bool LocalParameterization::Plus(const double* x, const double* delta, double* x_plus_delta) const
- :func:`LocalParameterization::Plus` implements :math:`\boxplus(x,\Delta x)`.
+ :func:`LocalParameterization::Plus` implements :math:`\boxplus(x,\Delta)`.
.. function:: bool LocalParameterization::ComputeJacobian(const double* x, double* jacobian) const
Computes the Jacobian matrix
- .. math:: J = \left . \frac{\partial }{\partial \Delta x} \boxplus(x,\Delta x)\right|_{\Delta x = 0}
+ .. math:: J = D_2 \boxplus(x, 0)
in row major form.
.. function:: bool MultiplyByJacobian(const double* x, const int num_rows, const double* global_matrix, double* local_matrix) const
- local_matrix = global_matrix * jacobian
+ ``local_matrix = global_matrix * jacobian``
- global_matrix is a num_rows x GlobalSize row major matrix.
- local_matrix is a num_rows x LocalSize row major matrix.
- jacobian is the matrix returned by :func:`LocalParameterization::ComputeJacobian` at :math:`x`.
+ ``global_matrix`` is a ``num_rows x GlobalSize`` row major matrix.
+ ``local_matrix`` is a ``num_rows x LocalSize`` row major matrix.
+ ``jacobian`` is the matrix returned by :func:`LocalParameterization::ComputeJacobian` at :math:`x`.
- This is only used by GradientProblem. For most normal uses, it is
- okay to use the default implementation.
+ This is only used by :class:`GradientProblem`. For most normal
+ uses, it is okay to use the default implementation.
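In effect, the default implementation computes a plain row-major matrix product. A self-contained sketch, with the hypothetical helper name ``MultiplyMatrices`` and dimensions as described above:

```cpp
// Illustrative sketch of the default behaviour: the row-major product
// local_matrix = global_matrix * jacobian, where global_matrix is
// num_rows x global_size and jacobian is global_size x local_size.
void MultiplyMatrices(const double* global_matrix, int num_rows,
                      int global_size, int local_size,
                      const double* jacobian, double* local_matrix) {
  for (int r = 0; r < num_rows; ++r) {
    for (int c = 0; c < local_size; ++c) {
      double sum = 0.0;
      for (int k = 0; k < global_size; ++k) {
        sum += global_matrix[r * global_size + k] *
               jacobian[k * local_size + c];
      }
      local_matrix[r * local_size + c] = sum;
    }
  }
}
```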
-Instances
----------
+Ceres Solver ships with a number of commonly used instances of
+:class:`LocalParameterization`. Another great place to find high
+quality implementations of :math:`\boxplus` operations on a variety of
+manifolds is the `Sophus <https://github.com/strasdat/Sophus>`_
+library developed by Hauke Strasdat and his collaborators.
-.. class:: IdentityParameterization
+:class:`IdentityParameterization`
+---------------------------------
- A trivial version of :math:`\boxplus` is when :math:`\Delta x` is
- of the same size as :math:`x` and
+A trivial version of :math:`\boxplus` is when :math:`\Delta` is of the
+same size as :math:`x` and
- .. math:: \boxplus(x, \Delta x) = x + \Delta x
+.. math:: \boxplus(x, \Delta) = x + \Delta
-.. class:: SubsetParameterization
+This is the same as :math:`x` living in a Euclidean manifold.
- A more interesting case if :math:`x` is a two dimensional vector,
- and the user wishes to hold the first coordinate constant. Then,
- :math:`\Delta x` is a scalar and :math:`\boxplus` is defined as
+:class:`QuaternionParameterization`
+-----------------------------------
- .. math::
+Another example that occurs commonly in Structure from Motion problems
+is when camera rotations are parameterized using a quaternion. This is
+a 3-dimensional manifold that lives in 4-dimensional space.
- \boxplus(x, \Delta x) = x + \left[ \begin{array}{c} 0 \\ 1
- \end{array} \right] \Delta x
+.. math:: \boxplus(x, \Delta) = \left[ \cos(|\Delta|), \frac{\sin\left(|\Delta|\right)}{|\Delta|} \Delta \right] * x
- :class:`SubsetParameterization` generalizes this construction to
- hold any part of a parameter block constant.
+The multiplication :math:`*` between the two 4-vectors on the right
+hand side is the standard quaternion product.
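The formula above can be sketched as follows. This is an illustration, not the Ceres implementation; quaternions are assumed stored in :math:`w, x, y, z` order:

```cpp
#include <cmath>

// Illustrative sketch: Plus(x, delta) for a quaternion x stored as
// [w, x, y, z] and a 3-vector delta, per the formula above.
void QuaternionPlus(const double x[4], const double delta[3],
                    double x_plus_delta[4]) {
  const double norm2 =
      delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
  double q[4];  // the quaternion [cos|delta|, sin|delta|/|delta| * delta]
  if (norm2 > 0.0) {
    const double norm = std::sqrt(norm2);
    const double sin_by_norm = std::sin(norm) / norm;
    q[0] = std::cos(norm);
    q[1] = sin_by_norm * delta[0];
    q[2] = sin_by_norm * delta[1];
    q[3] = sin_by_norm * delta[2];
  } else {
    q[0] = 1.0;
    q[1] = q[2] = q[3] = 0.0;  // identity rotation
  }
  // Standard quaternion product q * x.
  x_plus_delta[0] = q[0] * x[0] - q[1] * x[1] - q[2] * x[2] - q[3] * x[3];
  x_plus_delta[1] = q[0] * x[1] + q[1] * x[0] + q[2] * x[3] - q[3] * x[2];
  x_plus_delta[2] = q[0] * x[2] - q[1] * x[3] + q[2] * x[0] + q[3] * x[1];
  x_plus_delta[3] = q[0] * x[3] + q[1] * x[2] - q[2] * x[1] + q[3] * x[0];
}
```

Note that :math:`\boxplus(x, 0) = x`, as required of a retraction.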
-.. class:: QuaternionParameterization
+:class:`EigenQuaternionParameterization`
+----------------------------------------
- Another example that occurs commonly in Structure from Motion
- problems is when camera rotations are parameterized using a
- quaternion. There, it is useful only to make updates orthogonal to
- that 4-vector defining the quaternion. One way to do this is to let
- :math:`\Delta x` be a 3 dimensional vector and define
- :math:`\boxplus` to be
+`Eigen <http://eigen.tuxfamily.org/index.php?title=Main_Page>`_ uses a
+different internal memory layout for the elements of the quaternion
+than what is commonly used. Specifically, Eigen stores the elements in
+memory as :math:`(x, y, z, w)`, i.e., the *real* part (:math:`w`) is
+stored as the last element. Note, when creating an Eigen quaternion
+through the constructor the elements are accepted in :math:`w, x, y,
+z` order.
- .. math:: \boxplus(x, \Delta x) = \left[ \cos(|\Delta x|), \frac{\sin\left(|\Delta x|\right)}{|\Delta x|} \Delta x \right] * x
- :label: quaternion
+Since Ceres operates on parameter blocks which are raw ``double``
+pointers, this difference is important and requires a different
+parameterization. :class:`EigenQuaternionParameterization` uses the
+same ``Plus`` operation as :class:`QuaternionParameterization` but
+takes into account Eigen's internal memory element ordering.
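The difference between the two storage conventions can be illustrated with a small conversion helper (``CeresToEigenQuaternionOrder`` is a hypothetical name, shown only to make the layouts concrete):

```cpp
// Illustrative sketch: the same quaternion in the two storage
// conventions. Ceres convention: [w, x, y, z]; Eigen memory
// layout: [x, y, z, w].
void CeresToEigenQuaternionOrder(const double q_ceres[4],
                                 double q_eigen[4]) {
  q_eigen[0] = q_ceres[1];  // x
  q_eigen[1] = q_ceres[2];  // y
  q_eigen[2] = q_ceres[3];  // z
  q_eigen[3] = q_ceres[0];  // w
}
```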
- The multiplication between the two 4-vectors on the right hand side
- is the standard quaternion
- product. :class:`QuaternionParameterization` is an implementation
- of :eq:`quaternion`.
+:class:`SubsetParameterization`
+-------------------------------
-.. class:: EigenQuaternionParameterization
+Suppose :math:`x` is a two dimensional vector, and the user wishes to
+hold the first coordinate constant. Then, :math:`\Delta` is a scalar
+and :math:`\boxplus` is defined as
- Eigen uses a different internal memory layout for the elements of the
- quaternion than what is commonly used. Specifically, Eigen stores the
- elements in memory as [x, y, z, w] where the real part is last
- whereas it is typically stored first. Note, when creating an Eigen
- quaternion through the constructor the elements are accepted in w, x,
- y, z order. Since Ceres operates on parameter blocks which are raw
- double pointers this difference is important and requires a different
- parameterization. :class:`EigenQuaternionParameterization` uses the
- same update as :class:`QuaternionParameterization` but takes into
- account Eigen's internal memory element ordering.
+.. math:: \boxplus(x, \Delta) = x + \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \Delta
-.. class:: HomogeneousVectorParameterization
+:class:`SubsetParameterization` generalizes this construction to hold
+any part of a parameter block constant by specifying the set of
+coordinates that are held constant.
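The two dimensional example above can be written out directly (an illustrative sketch, not the Ceres implementation):

```cpp
// Illustrative sketch: boxplus for a 2-vector whose first coordinate
// is held constant; delta is a scalar acting on the free coordinate.
void SubsetPlus(const double x[2], double delta, double x_plus_delta[2]) {
  x_plus_delta[0] = x[0];          // constant coordinate
  x_plus_delta[1] = x[1] + delta;  // free coordinate
}
```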
- In computer vision, homogeneous vectors are commonly used to
- represent entities in projective geometry such as points in
- projective space. One example where it is useful to use this
- over-parameterization is in representing points whose triangulation
- is ill-conditioned. Here it is advantageous to use homogeneous
- vectors, instead of an Euclidean vector, because it can represent
- points at infinity.
+.. NOTE::
+ It is legal to hold all coordinates of a parameter block constant
+ using a :class:`SubsetParameterization`. It is the same as calling
+ :func:`Problem::SetParameterBlockConstant` on that parameter block.
- When using homogeneous vectors it is useful to only make updates
- orthogonal to that :math:`n`-vector defining the homogeneous
- vector [HartleyZisserman]_. One way to do this is to let :math:`\Delta x`
- be a :math:`n-1` dimensional vector and define :math:`\boxplus` to be
+:class:`HomogeneousVectorParameterization`
+------------------------------------------
- .. math:: \boxplus(x, \Delta x) = \left[ \frac{\sin\left(0.5 |\Delta x|\right)}{|\Delta x|} \Delta x, \cos(0.5 |\Delta x|) \right] * x
+In computer vision, homogeneous vectors are commonly used to represent
+objects in projective geometry such as points in projective space. One
+example where it is useful to use this over-parameterization is in
+representing points whose triangulation is ill-conditioned. Here it is
+advantageous to use homogeneous vectors, instead of Euclidean
+vectors, because they can represent points at and near infinity.
- The multiplication between the two vectors on the right hand side
- is defined as an operator which applies the update orthogonal to
- :math:`x` to remain on the sphere. Note, it is assumed that
- last element of :math:`x` is the scalar component of the homogeneous
- vector.
+:class:`HomogeneousVectorParameterization` defines a
+:class:`LocalParameterization` for an :math:`n-1` dimensional
+manifold that is embedded in :math:`n` dimensional space where the
+scale of the vector does not matter, i.e., elements of the
+projective space :math:`\mathbb{P}^{n-1}`. It assumes that the last
+coordinate of the :math:`n`-vector is the *scalar* component of the
+homogeneous vector, i.e., *finite* points in this representation are
+those for which the *scalar* component is non-zero.
+Further, ``HomogeneousVectorParameterization::Plus`` preserves the
+scale of :math:`x`.
-.. class:: ProductParameterization
+:class:`LineParameterization`
+-----------------------------
- Consider an optimization problem over the space of rigid
- transformations :math:`SE(3)`, which is the Cartesian product of
- :math:`SO(3)` and :math:`\mathbb{R}^3`. Suppose you are using
- Quaternions to represent the rotation, Ceres ships with a local
- parameterization for that and :math:`\mathbb{R}^3` requires no, or
- :class:`IdentityParameterization` parameterization. So how do we
- construct a local parameterization for a parameter block a rigid
- transformation?
+This class provides a parameterization for lines, where the line is
+defined using an origin point and a direction vector. The parameter
+vector size therefore needs to be two times the ambient space
+dimension, where the first half is interpreted as the origin point
+and the second half as the direction. This local parameterization is
+a special case of the `Affine Grassmannian manifold
+<https://en.wikipedia.org/wiki/Affine_Grassmannian_(manifold)>`_
+for the case :math:`\operatorname{Graff}_1(R^n)`.
- In cases, where a parameter block is the Cartesian product of a
- number of manifolds and you have the local parameterization of the
- individual manifolds available, :class:`ProductParameterization`
- can be used to construct a local parameterization of the cartesian
- product. For the case of the rigid transformation, where say you
- have a parameter block of size 7, where the first four entries
- represent the rotation as a quaternion, a local parameterization
- can be constructed as
+Note that this is a parameterization for a line, rather than a point
+constrained to lie on a line. It is useful when one wants to optimize
+over the space of lines. For example, given :math:`n` distinct points
+in 3D (measurements), we want to find the line that minimizes the sum
+of squared distances to all the points.
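For the line fitting example, the per-point residual could be based on the perpendicular distance to the line. The sketch below assumes the origin/direction storage described above and a unit-length direction vector; ``SquaredDistanceToLine`` is a hypothetical helper, not a Ceres API:

```cpp
// Illustrative sketch: squared perpendicular distance from a point to
// a line given by an origin and a unit direction vector.
double SquaredDistanceToLine(const double origin[3],
                             const double direction[3],
                             const double point[3]) {
  double d[3];
  double along = 0.0;
  for (int i = 0; i < 3; ++i) d[i] = point[i] - origin[i];
  for (int i = 0; i < 3; ++i) along += d[i] * direction[i];
  double dist2 = 0.0;
  for (int i = 0; i < 3; ++i) {
    // Subtract the component along the line; what remains is the
    // perpendicular offset from the line.
    const double perp = d[i] - along * direction[i];
    dist2 += perp * perp;
  }
  return dist2;
}
```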
- .. code-block:: c++
+:class:`ProductParameterization`
+--------------------------------
- ProductParameterization se3_param(new QuaternionParameterization(),
- new IdentityTransformation(3));
+Consider an optimization problem over the space of rigid
+transformations :math:`SE(3)`, which is the Cartesian product of
+:math:`SO(3)` and :math:`\mathbb{R}^3`. Suppose you are using
+Quaternions to represent the rotation (Ceres ships with a local
+parameterization for that), and :math:`\mathbb{R}^3` requires no
+parameterization, or the :class:`IdentityParameterization`. So how
+do we construct a local parameterization for a parameter block that
+is a rigid transformation?
+
+In cases where a parameter block is the Cartesian product of a number
+of manifolds and you have the local parameterizations of the
+individual manifolds available, :class:`ProductParameterization` can
+be used to construct a local parameterization of the Cartesian
+product. For the case of the rigid transformation, where you have a
+parameter block of size 7 whose first four entries represent the
+rotation as a quaternion, a local parameterization can be constructed
+as
+
+.. code-block:: c++
+
+ ProductParameterization se3_param(new QuaternionParameterization(),
+ new IdentityParameterization(3));
:class:`AutoDiffLocalParameterization`
@@ -1464,7 +1544,7 @@
:class:`Problem` holds the robustified bounds constrained
non-linear least squares problem :eq:`ceresproblem_modeling`. To
create a least squares problem, use the
- :func:`Problem::AddResidualBlock` and
+ :func:`Problem::AddResidualBlock` and
:func:`Problem::AddParameterBlock` methods.
For example a problem containing 3 parameter blocks of sizes 3, 4
@@ -1489,7 +1569,7 @@
the parameter blocks it expects. The function checks that these
match the sizes of the parameter blocks listed in
``parameter_blocks``. The program aborts if a mismatch is
- detected. ``loss_function`` can be ``NULL``, in which case the cost
+ detected. ``loss_function`` can be ``nullptr``, in which case the cost
of the term is just the squared norm of the residuals.
The user has the option of explicitly adding the parameter blocks
@@ -1536,19 +1616,133 @@
delete on each ``cost_function`` or ``loss_function`` pointer only
once, regardless of how many residual blocks refer to them.
+.. class:: Problem::Options
+
+ Options struct that is used to control :class:`Problem`.
+
+.. member:: Ownership Problem::Options::cost_function_ownership
+
+ Default: ``TAKE_OWNERSHIP``
+
+ This option controls whether the Problem object owns the cost
+ functions.
+
+ If set to TAKE_OWNERSHIP, then the problem object will delete the
+ cost functions on destruction. The destructor is careful to delete
+ the pointers only once, since sharing cost functions is allowed.
+
+.. member:: Ownership Problem::Options::loss_function_ownership
+
+ Default: ``TAKE_OWNERSHIP``
+
+ This option controls whether the Problem object owns the loss
+ functions.
+
+ If set to TAKE_OWNERSHIP, then the problem object will delete the
+ loss functions on destruction. The destructor is careful to delete
+ the pointers only once, since sharing loss functions is allowed.
+
+.. member:: Ownership Problem::Options::local_parameterization_ownership
+
+ Default: ``TAKE_OWNERSHIP``
+
+ This option controls whether the Problem object owns the local
+ parameterizations.
+
+ If set to TAKE_OWNERSHIP, then the problem object will delete the
+ local parameterizations on destruction. The destructor is careful
+ to delete the pointers only once, since sharing local
+ parameterizations is allowed.
+
+.. member:: bool Problem::Options::enable_fast_removal
+
+ Default: ``false``
+
+ If true, trades memory for faster
+ :func:`Problem::RemoveResidualBlock` and
+ :func:`Problem::RemoveParameterBlock` operations.
+
+ By default, :func:`Problem::RemoveParameterBlock` and
+ :func:`Problem::RemoveResidualBlock` take time proportional to
+ the size of the entire problem. If you only ever remove
+ parameters or residuals from the problem occasionally, this might
+ be acceptable. However, if you have memory to spare, enable this
+ option to make :func:`Problem::RemoveParameterBlock` take time
+ proportional to the number of residual blocks that depend on it,
+ and :func:`Problem::RemoveResidualBlock` take (on average)
+ constant time.
+
+ The increase in memory usage is twofold: an additional hash set
+ per parameter block containing all the residuals that depend on
+ the parameter block; and a hash set in the problem containing all
+ residuals.
+
+.. member:: bool Problem::Options::disable_all_safety_checks
+
+ Default: `false`
+
+ By default, Ceres performs a variety of safety checks when
+ constructing the problem. There is a small but measurable
+ performance penalty to these checks, typically around 5% of
+ construction time. If you are sure your problem construction is
+ correct, and 5% of the problem construction time is truly an
+ overhead you want to avoid, then you can set
+ ``disable_all_safety_checks`` to true.
+
+ **WARNING** Do not set this to true unless you are absolutely
+ sure of what you are doing.
+
+.. member:: Context* Problem::Options::context
+
+ Default: `nullptr`
+
+ A Ceres global context to use for solving this problem. This may
+ help to reduce computation time as Ceres can reuse objects that
+ are expensive to create. The context object can be `nullptr`, in
+ which case Ceres may create one.
+
+ Ceres does NOT take ownership of the pointer.
+
+.. member:: EvaluationCallback* Problem::Options::evaluation_callback
+
+ Default: `nullptr`
+
+ Using this callback interface, Ceres will notify you when it is
+ about to evaluate the residuals or Jacobians.
+
+ If an ``evaluation_callback`` is present, Ceres will update the
+ user's parameter blocks to the values that will be used when
+ calling :func:`CostFunction::Evaluate` before calling
+ :func:`EvaluationCallback::PrepareForEvaluation`. One can then use
+ this callback to share (or cache) computation between cost
+ functions by doing the shared computation in
+ :func:`EvaluationCallback::PrepareForEvaluation` before Ceres
+ calls :func:`CostFunction::Evaluate`.
+
+ Problem does NOT take ownership of the callback.
+
+ .. NOTE::
+
+ Evaluation callbacks are incompatible with inner iterations. So
+ calling Solve with
+ :member:`Solver::Options::use_inner_iterations` set to `true`
+ on a :class:`Problem` with a non-null evaluation callback is an
+ error.
+
.. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, const vector<double*> parameter_blocks)
-.. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, double *x0, double *x1, ...)
+
+.. function:: template <typename... Ts> ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, double* x0, Ts... xs)
Add a residual block to the overall cost function. The cost
function carries with it information about the sizes of the
parameter blocks it expects. The function checks that these match
the sizes of the parameter blocks listed in parameter_blocks. The
program aborts if a mismatch is detected. loss_function can be
- NULL, in which case the cost of the term is just the squared norm
- of the residuals.
+ `nullptr`, in which case the cost of the term is just the squared
+ norm of the residuals.
The parameter blocks may be passed together as a
- ``vector<double*>``, or as up to ten separate ``double*`` pointers.
+ ``vector<double*>``, or as separate ``double*`` pointers.
The user has the option of explicitly adding the parameter blocks
using AddParameterBlock. This causes additional correctness
@@ -1583,10 +1777,10 @@
Problem problem;
- problem.AddResidualBlock(new MyUnaryCostFunction(...), NULL, x1);
- problem.AddResidualBlock(new MyBinaryCostFunction(...), NULL, x2, x1);
- problem.AddResidualBlock(new MyUnaryCostFunction(...), NULL, v1);
- problem.AddResidualBlock(new MyBinaryCostFunction(...), NULL, v2);
+ problem.AddResidualBlock(new MyUnaryCostFunction(...), nullptr, x1);
+ problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, x2, x1);
+ problem.AddResidualBlock(new MyUnaryCostFunction(...), nullptr, v1);
+ problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, v2);
.. function:: void Problem::AddParameterBlock(double* values, int size, LocalParameterization* local_parameterization)
@@ -1618,7 +1812,7 @@
jacobian, do not use remove! This may change in a future release.
Hold the indicated parameter block constant during optimization.
-.. function:: void Problem::RemoveParameterBlock(double* values)
+.. function:: void Problem::RemoveParameterBlock(const double* values)
Remove a parameter block from the problem. The parameterization of
the parameter block, if it exists, will persist until the deletion
@@ -1634,7 +1828,7 @@
from the solver uninterpretable. If you depend on the evaluated
jacobian, do not use remove! This may change in a future release.
-.. function:: void Problem::SetParameterBlockConstant(double* values)
+.. function:: void Problem::SetParameterBlockConstant(const double* values)
Hold the indicated parameter block constant during optimization.
@@ -1642,20 +1836,28 @@
Allow the indicated parameter to vary during optimization.
+.. function:: bool Problem::IsParameterBlockConstant(const double* values) const
+
+ Returns ``true`` if a parameter block is set constant, and
+ ``false`` otherwise. A parameter block may be set constant in two
+ ways: either by calling :func:`Problem::SetParameterBlockConstant`
+ or by associating with it a ``LocalParameterization`` with a zero
+ dimensional tangent space.
+
.. function:: void Problem::SetParameterization(double* values, LocalParameterization* local_parameterization)
Set the local parameterization for one of the parameter blocks.
The local_parameterization is owned by the Problem by default. It
is acceptable to set the same parameterization for multiple
parameters; the destructor is careful to delete local
- parameterizations only once. The local parameterization can only be
- set once per parameter, and cannot be changed once set.
+ parameterizations only once. Calling `SetParameterization` with
+ `nullptr` will clear any previously set parameterization.
-.. function:: LocalParameterization* Problem::GetParameterization(double* values) const
+.. function:: LocalParameterization* Problem::GetParameterization(const double* values) const
Get the local parameterization object associated with this
parameter block. If there is no parameterization object associated
- then `NULL` is returned
+ then `nullptr` is returned
.. function:: void Problem::SetParameterLowerBound(double* values, int index, double lower_bound)
@@ -1671,13 +1873,13 @@
``std::numeric_limits<double>::max()``, which is treated by the
solver as the same as :math:`\infty`.
-.. function:: double Problem::GetParameterLowerBound(double* values, int index)
+.. function:: double Problem::GetParameterLowerBound(const double* values, int index)
Get the lower bound for the parameter with position `index`. If the
parameter is not bounded by the user, then its lower bound is
``-std::numeric_limits<double>::max()``.
-.. function:: double Problem::GetParameterUpperBound(double* values, int index)
+.. function:: double Problem::GetParameterUpperBound(const double* values, int index)
Get the upper bound for the parameter with position `index`. If the
parameter is not bounded by the user, then its upper bound is
@@ -1745,18 +1947,77 @@
blocks for a parameter block will incur a scan of the entire
:class:`Problem` object.
-.. function:: const CostFunction* GetCostFunctionForResidualBlock(const ResidualBlockId residual_block) const
+.. function:: const CostFunction* Problem::GetCostFunctionForResidualBlock(const ResidualBlockId residual_block) const
Get the :class:`CostFunction` for the given residual block.
-.. function:: const LossFunction* GetLossFunctionForResidualBlock(const ResidualBlockId residual_block) const
+.. function:: const LossFunction* Problem::GetLossFunctionForResidualBlock(const ResidualBlockId residual_block) const
Get the :class:`LossFunction` for the given residual block.
+.. function:: bool Problem::EvaluateResidualBlock(ResidualBlockId residual_block_id, bool apply_loss_function, double* cost, double* residuals, double** jacobians) const
+
+ Evaluates the residual block, storing the scalar cost in ``cost``,
+ the residual components in ``residuals``, and the Jacobians between
+ the parameters and residuals in ``jacobians[i]``, in row-major
+ order.
+
+ If ``residuals`` is ``nullptr``, the residuals are not computed.
+
+ If ``jacobians`` is ``nullptr``, no Jacobians are computed. If
+ ``jacobians[i]`` is ``nullptr``, then the Jacobian for that
+ parameter block is not computed.
+
+ It is not okay to request the Jacobian w.r.t. a parameter block
+ that is constant.
+
+ The return value indicates success or failure. Even if the
+ function returns ``false``, the caller should expect the output
+ memory locations to have been modified.
+
+ The returned cost and jacobians have had robustification and local
+ parameterizations applied already; for example, the jacobian for a
+ 4-dimensional quaternion parameter using the
+ :class:`QuaternionParameterization` is ``num_residuals x 3``
+ instead of ``num_residuals x 4``.
+
+ ``apply_loss_function``, as the name implies, allows the user to
+ switch the application of the loss function on and off.
+
+ .. NOTE:: If an :class:`EvaluationCallback` is associated with the
+ problem, then its
+ :func:`EvaluationCallback::PrepareForEvaluation` method will be
+ called every time this method is called with `new_point =
+ true`. This conservatively assumes that the user may have
+ changed the parameter values since the previous call to evaluate
+ / solve. For improved efficiency, and only if you know that the
+ parameter values have not changed between calls, see
+ :func:`Problem::EvaluateResidualBlockAssumingParametersUnchanged`.
+
+
+.. function:: bool Problem::EvaluateResidualBlockAssumingParametersUnchanged(ResidualBlockId residual_block_id, bool apply_loss_function, double* cost, double* residuals, double** jacobians) const
+
+ Same as :func:`Problem::EvaluateResidualBlock` except that if an
+ :class:`EvaluationCallback` is associated with the problem, then
+ its :func:`EvaluationCallback::PrepareForEvaluation` method will
+ be called every time this method is called with ``new_point = false``.
+
+ This means, if an :class:`EvaluationCallback` is associated with
+ the problem then it is the user's responsibility to call
+ :func:`EvaluationCallback::PrepareForEvaluation` before calling
+ this method if necessary, i.e., if the parameter values have been
+ changed since the last call to evaluate / solve.
+
+ This is because, as the name implies, we assume that the parameter
+ blocks did not change since the last time
+ :func:`EvaluationCallback::PrepareForEvaluation` was called (via
+ :func:`Solve`, :func:`Problem::Evaluate` or
+ :func:`Problem::EvaluateResidualBlock`).
+
+
.. function:: bool Problem::Evaluate(const Problem::EvaluateOptions& options, double* cost, vector<double>* residuals, vector<double>* gradient, CRSMatrix* jacobian)
Evaluate a :class:`Problem`. Any of the output pointers can be
- `NULL`. Which residual blocks and parameter blocks are used is
+ `nullptr`. Which residual blocks and parameter blocks are used is
controlled by the :class:`Problem::EvaluateOptions` struct below.
.. NOTE::
@@ -1770,10 +2031,10 @@
Problem problem;
double x = 1;
- problem.Add(new MyCostFunction, NULL, &x);
+ problem.Add(new MyCostFunction, nullptr, &x);
double cost = 0.0;
- problem.Evaluate(Problem::EvaluateOptions(), &cost, NULL, NULL, NULL);
+ problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
The cost is evaluated at `x = 1`. If you wish to evaluate the
problem at `x = 2`, then
@@ -1781,7 +2042,7 @@
.. code-block:: c++
x = 2;
- problem.Evaluate(Problem::EvaluateOptions(), &cost, NULL, NULL, NULL);
+ problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
is the way to do so.
@@ -1799,6 +2060,12 @@
:class:`IterationCallback` at the end of an iteration during a
solve.
+ .. NOTE::
+
+ If an :class:`EvaluationCallback` is associated with the problem,
+ then its :func:`EvaluationCallback::PrepareForEvaluation` method
+ will be called every time this method is called with ``new_point =
+ true``.
+
.. class:: Problem::EvaluateOptions
Options struct that is used to control :func:`Problem::Evaluate`.
@@ -1840,6 +2107,72 @@
Number of threads to use. (Requires OpenMP).
+
+:class:`EvaluationCallback`
+===========================
+
+.. class:: EvaluationCallback
+
+ Interface for receiving callbacks before Ceres evaluates residuals or
+ Jacobians:
+
+ .. code-block:: c++
+
+ class EvaluationCallback {
+ public:
+ virtual ~EvaluationCallback() {}
+ virtual void PrepareForEvaluation(bool evaluate_jacobians,
+                                   bool new_evaluation_point) = 0;
+ };
+
+.. function:: void EvaluationCallback::PrepareForEvaluation(bool evaluate_jacobians, bool new_evaluation_point)
+
+ Ceres will call :func:`EvaluationCallback::PrepareForEvaluation`
+ once before each time it computes the residuals and/or the
+ Jacobians.
+
+ User parameters (the ``double*`` values provided by the user) are fixed
+ until the next call to
+ :func:`EvaluationCallback::PrepareForEvaluation`. If
+ ``new_evaluation_point == true``, then this is a new point that is
+ different from the last evaluated point. Otherwise, it is the same
+ point that was evaluated previously (either Jacobian or residual)
+ and the user can use cached results from previous evaluations. If
+ ``evaluate_jacobians`` is true, then Ceres will request Jacobians
+ in the upcoming cost evaluation.
+
+ Using this callback interface, Ceres can notify you when it is
+ about to evaluate the residuals or Jacobians. With the callback,
+ you can share computation between residual blocks by doing the
+ shared computation in
+ :func:`EvaluationCallback::PrepareForEvaluation` before Ceres calls
+ :func:`CostFunction::Evaluate` on all the residuals. It also
+ enables caching results between a pure residual evaluation and a
+ residual & Jacobian evaluation, via the ``new_evaluation_point``
+ argument.
+
+ One use case for this callback is if the cost function computation is
+ moved to the GPU. In that case, the prepare call does the actual
+ cost function evaluation, and subsequent calls from Ceres to the
+ actual cost functions merely copy the results from the GPU onto the
+ corresponding blocks for Ceres to plug into the solver.
+
+ **Note**: Ceres provides no mechanism to share data other than the
+ notification from the callback. Users must provide access to
+ pre-computed shared data to their cost functions behind the scenes;
+ this all happens without Ceres knowing. One approach is to put a
+ pointer to the shared data in each cost function (recommended) or
+ to use a global shared variable (discouraged; bug-prone). As far
+ as Ceres is concerned, it is evaluating cost functions like any
+ other; it just so happens that behind the scenes the cost functions
+ reuse pre-computed data to execute faster.
+
+ See ``evaluation_callback_test.cc`` for code that explicitly
+ verifies the preconditions between
+ :func:`EvaluationCallback::PrepareForEvaluation` and
+ :func:`CostFunction::Evaluate`.
+
+
``rotation.h``
==============
@@ -2010,7 +2343,7 @@
.. code::
- const double data[] = {1.0, 2.0, 5.0, 6.0};
+ const double x[] = {1.0, 2.0, 5.0, 6.0};
Grid1D<double, 1> array(x, 0, 4);
CubicInterpolator interpolator(array);
double f, dfdx;
diff --git a/docs/source/nnls_solving.rst b/docs/source/nnls_solving.rst
index 713d54d..285df3a 100644
--- a/docs/source/nnls_solving.rst
+++ b/docs/source/nnls_solving.rst
@@ -58,8 +58,8 @@
algorithms can be divided into two major categories [NocedalWright]_.
1. **Trust Region** The trust region approach approximates the
- objective function using using a model function (often a quadratic)
- over a subset of the search space known as the trust region. If the
+ objective function using a model function (often a quadratic) over
+ a subset of the search space known as the trust region. If the
model function succeeds in minimizing the true objective function
the trust region is expanded; conversely, otherwise it is
contracted and the model optimization problem is solved again.
@@ -166,10 +166,10 @@
will assume that the matrix :math:`\frac{1}{\sqrt{\mu}} D` has been concatenated
at the bottom of the matrix :math:`J` and similarly a vector of zeros
has been added to the bottom of the vector :math:`f` and the rest of
-our discussion will be in terms of :math:`J` and :math:`f`, i.e, the
+our discussion will be in terms of :math:`J` and :math:`F`, i.e., the
linear least squares problem.
-.. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + f(x)\|^2 .
+.. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + F(x)\|^2 .
:label: simple
For all but the smallest problems the solution of :eq:`simple` in
@@ -648,11 +648,11 @@
access to :math:`S` via its product with a vector, one way to
evaluate :math:`Sx` is to observe that
- .. math:: x_1 &= E^\top x
- .. math:: x_2 &= C^{-1} x_1
- .. math:: x_3 &= Ex_2\\
- .. math:: x_4 &= Bx\\
- .. math:: Sx &= x_4 - x_3
+ .. math:: x_1 &= E^\top x\\
+ x_2 &= C^{-1} x_1\\
+ x_3 &= Ex_2\\
+ x_4 &= Bx\\
+ Sx &= x_4 - x_3
:label: schurtrick1
Thus, we can run PCG on :math:`S` with the same computational
@@ -693,7 +693,7 @@
.. _section-preconditioner:
Preconditioner
---------------
+==============
The convergence rate of Conjugate Gradients for
solving :eq:`normal` depends on the distribution of eigenvalues
@@ -726,34 +726,96 @@
based preconditioners have much better convergence behavior than the
Jacobi preconditioner, but are also much more expensive.
+For a survey of the state of the art in preconditioning linear least
+squares problems with general sparsity structure see [GouldScott]_.
+
+Ceres Solver comes with a number of preconditioners suited for
+problems with general sparsity as well as the special sparsity
+structure encountered in bundle adjustment problems.
+
+``JACOBI``
+----------
+
The simplest of all preconditioners is the diagonal or Jacobi
preconditioner, i.e., :math:`M=\operatorname{diag}(A)`, which for
block structured matrices like :math:`H` can be generalized to the
-block Jacobi preconditioner. Ceres implements the block Jacobi
-preconditioner and refers to it as ``JACOBI``. When used with
-:ref:`section-cgnr` it refers to the block diagonal of :math:`H` and
-when used with :ref:`section-iterative_schur` it refers to the block
-diagonal of :math:`B` [Mandel]_.
+block Jacobi preconditioner. The ``JACOBI`` preconditioner in Ceres
+when used with :ref:`section-cgnr` refers to the block diagonal of
+:math:`H` and when used with :ref:`section-iterative_schur` refers to
+the block diagonal of :math:`B` [Mandel]_. For detailed data on the
+performance of ``JACOBI`` on bundle adjustment problems see
+[Agarwal]_.
+
+
+``SCHUR_JACOBI``
+----------------
Another obvious choice for :ref:`section-iterative_schur` is the block
diagonal of the Schur complement matrix :math:`S`, i.e., the block
-Jacobi preconditioner for :math:`S`. Ceres implements it and refers to
-is as the ``SCHUR_JACOBI`` preconditioner.
+Jacobi preconditioner for :math:`S`. In Ceres we refer to it as the
+``SCHUR_JACOBI`` preconditioner. For detailed data on the performance
+of ``SCHUR_JACOBI`` on bundle adjustment problems see [Agarwal]_.
+
+
+``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL``
+----------------------------------------------
For bundle adjustment problems arising in reconstruction from
community photo collections, more effective preconditioners can be
constructed by analyzing and exploiting the camera-point visibility
-structure of the scene [KushalAgarwal]_. Ceres implements the two
-visibility based preconditioners described by Kushal & Agarwal as
-``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL``. These are fairly new
-preconditioners and Ceres' implementation of them is in its early
-stages and is not as mature as the other preconditioners described
-above.
+structure of the scene.
+
+The key idea is to cluster the cameras based on the visibility
+structure of the scene. The similarity between a pair of cameras
+:math:`i` and :math:`j` is given by:
+
+ .. math:: S_{ij} = \frac{|V_i \cap V_j|}{|V_i| |V_j|}
+
+Here :math:`V_i` is the set of scene points visible in camera
+:math:`i`. This idea was first exploited by [KushalAgarwal]_ to create
+the ``CLUSTER_JACOBI`` and the ``CLUSTER_TRIDIAGONAL`` preconditioners
+which Ceres implements.
+
+The performance of these two preconditioners depends on the speed and
+clustering quality of the clustering algorithm used when building the
+preconditioner. In the original paper, [KushalAgarwal]_ used the
+Canonical Views algorithm [Simon]_, which, while producing high quality
+clusterings, can be quite expensive for large graphs. So, Ceres
+supports two visibility clustering algorithms - ``CANONICAL_VIEWS``
+and ``SINGLE_LINKAGE``. The former is, as the name implies, the
+Canonical Views algorithm of [Simon]_. The latter is the classic `Single
+Linkage Clustering
+<https://en.wikipedia.org/wiki/Single-linkage_clustering>`_
+algorithm. The choice of clustering algorithm is controlled by
+:member:`Solver::Options::visibility_clustering_type`.
+
+``SUBSET``
+----------
+
+This is a preconditioner for problems with general sparsity. Given a
+subset of residual blocks of a problem, it uses the corresponding
+subset of the rows of the Jacobian to construct a preconditioner
+[Dellaert]_.
+
+Suppose the Jacobian :math:`J` has been horizontally partitioned as
+
+ .. math:: J = \begin{bmatrix} P \\ Q \end{bmatrix}
+
+where :math:`Q` is the set of rows corresponding to the residual
+blocks in
+:member:`Solver::Options::residual_blocks_for_subset_preconditioner`. The
+preconditioner is the matrix :math:`(Q^\top Q)^{-1}`.
+
+The efficacy of the preconditioner depends on how well the matrix
+:math:`Q^\top Q` approximates :math:`J^\top J`, or how well the chosen
+residual blocks approximate the full problem.
+
.. _section-ordering:
Ordering
---------
+========
The order in which variables are eliminated in a linear solver can
have a significant impact on the efficiency and accuracy of the
@@ -992,6 +1054,11 @@
search, if a step size satisfying the search conditions cannot be
found within this number of trials, the line search will stop.
+ The minimum allowed value is 0 for trust region minimizer and 1
+ otherwise. If 0 is specified for the trust region minimizer, then
+ line search will not be used when solving constrained optimization
+ problems.
+
As this is an 'artificial' constraint (one imposed by the user, not
the underlying math), if ``WOLFE`` line search is being used, *and*
points satisfying the Armijo sufficient (function) decrease
@@ -1125,7 +1192,7 @@
.. member:: double Solver::Options::min_lm_diagonal
- Default: ``1e6``
+ Default: ``1e-6``
The ``LEVENBERG_MARQUARDT`` strategy, uses a diagonal matrix to
regularize the trust region step. This is the lower bound on
@@ -1229,6 +1296,29 @@
recommend that you try ``CANONICAL_VIEWS`` first and if it is too
expensive try ``SINGLE_LINKAGE``.
+.. member:: std::unordered_set<ResidualBlockId> residual_blocks_for_subset_preconditioner
+
+ ``SUBSET`` preconditioner is a preconditioner for problems with
+ general sparsity. Given a subset of residual blocks of a problem,
+ it uses the corresponding subset of the rows of the Jacobian to
+ construct a preconditioner.
+
+ Suppose the Jacobian :math:`J` has been horizontally partitioned as
+
+ .. math:: J = \begin{bmatrix} P \\ Q \end{bmatrix}
+
+ where :math:`Q` is the set of rows corresponding to the residual
+ blocks in
+ :member:`Solver::Options::residual_blocks_for_subset_preconditioner`. The
+ preconditioner is the matrix :math:`(Q^\top Q)^{-1}`.
+
+ The efficacy of the preconditioner depends on how well the matrix
+ :math:`Q^\top Q` approximates :math:`J^\top J`, or how well the chosen
+ residual blocks approximate the full problem.
+
+ If ``Solver::Options::preconditioner_type == SUBSET``, then
+ ``residual_blocks_for_subset_preconditioner`` must be non-empty.
+
.. member:: DenseLinearAlgebraLibrary Solver::Options::dense_linear_algebra_library_type
Default:``EIGEN``
@@ -1408,6 +1498,10 @@
on each Newton/Trust region step using a coordinate descent
algorithm. For more details, see :ref:`section-inner-iterations`.
+ **Note** Inner iterations cannot be used with :class:`Problem`
+ objects that have an :class:`EvaluationCallback` associated with
+ them.
+
.. member:: double Solver::Options::inner_iteration_tolerance
Default: ``1e-3``
@@ -1559,7 +1653,7 @@
.. member:: double Solver::Options::gradient_check_relative_precision
- Default: ``1e08``
+ Default: ``1e-8``
Precision to check for in the gradient checker. If the relative
difference between an element in a Jacobian exceeds this number,
@@ -1598,63 +1692,40 @@
which break this finite difference heuristic, but they do not come
up often in practice.
-.. member:: vector<IterationCallback> Solver::Options::callbacks
-
- Callbacks that are executed at the end of each iteration of the
- :class:`Minimizer`. They are executed in the order that they are
- specified in this vector. By default, parameter blocks are updated
- only at the end of the optimization, i.e., when the
- :class:`Minimizer` terminates. This behavior is controlled by
- :member:`Solver::Options::update_state_every_iteration`. If the user
- wishes to have access to the updated parameter blocks when his/her
- callbacks are executed, then set
- :member:`Solver::Options::update_state_every_iteration` to true.
-
- The solver does NOT take ownership of these pointers.
-
.. member:: bool Solver::Options::update_state_every_iteration
Default: ``false``
- If true, the user's parameter blocks are updated at the end of
- every Minimizer iteration, otherwise they are updated when the
- Minimizer terminates. This is useful if, for example, the user
- wishes to visualize the state of the optimization every iteration
- (in combination with an IterationCallback).
+ If ``update_state_every_iteration`` is ``true``, then Ceres Solver
+ will guarantee that at the end of every iteration and before any
+ user :class:`IterationCallback` is called, the parameter blocks are
+ updated to the current best solution found by the solver. Thus the
+ IterationCallback can inspect the values of the parameter blocks
+ for purposes of computation, visualization or termination.
- **Note**: If :member:`Solver::Options::evaluation_callback` is set,
- then the behaviour of this flag is slightly different in each case:
+ If ``update_state_every_iteration`` is ``false`` then there is no
+ such guarantee, and user provided :class:`IterationCallback` s
+ should not inspect the parameter blocks or rely on their values.
- 1. If :member:`Solver::Options::update_state_every_iteration` is
- false, then the user's state is changed at every residual and/or
- jacobian evaluation. Any user provided IterationCallbacks should
- **not** inspect and depend on the user visible state while the
- solver is running, since they it have undefined contents.
+.. member:: vector<IterationCallback> Solver::Options::callbacks
- 2. If :member:`Solver::Options::update_state_every_iteration` is
- false, then the user's state is changed at every residual and/or
- jacobian evaluation, BUT the solver will ensure that before the
- user provided `IterationCallbacks` are called, the user visible
- state will be updated to the current best point found by the
- solver.
+ Callbacks that are executed at the end of each iteration of the
+ :class:`Minimizer`. They are executed in the order that they are
+ specified in this vector.
-.. member:: bool Solver::Options::evaluation_callback
+ By default, parameter blocks are updated only at the end of the
+ optimization, i.e., when the :class:`Minimizer` terminates. This
+ means that by default, if an :class:`IterationCallback` inspects
+ the parameter blocks, they will not see them changing in the course
+ of the optimization.
- Default: ``NULL``
+ To tell Ceres to update the parameter blocks at the end of each
+ iteration and before calling the user's callback, set
+ :member:`Solver::Options::update_state_every_iteration` to
+ ``true``.
- If non-``NULL``, gets notified when Ceres is about to evaluate the
- residuals and/or Jacobians. This enables sharing computation between
- residuals, which in some cases is important for efficient cost
- evaluation. See :class:`EvaluationCallback` for details.
-
- **Note**: Evaluation callbacks are incompatible with inner
- iterations.
-
- **Warning**: This interacts with
- :member:`Solver::Options::update_state_every_iteration`. See the
- documentation for that option for more details.
-
- The solver does `not` take ownership of the pointer.
+ The solver does NOT take ownership of these pointers.
:class:`ParameterBlockOrdering`
===============================
@@ -1715,62 +1786,6 @@
Number of groups with one or more elements.
-:class:`EvaluationCallback`
-===========================
-
-.. class:: EvaluationCallback
-
- Interface for receiving callbacks before Ceres evaluates residuals or
- Jacobians:
-
- .. code-block:: c++
-
- class EvaluationCallback {
- public:
- virtual ~EvaluationCallback() {}
- virtual void PrepareForEvaluation()(bool evaluate_jacobians
- bool new_evaluation_point) = 0;
- };
-
- ``PrepareForEvaluation()`` is called before Ceres requests residuals
- or jacobians for a given setting of the parameters. User parameters
- (the double* values provided to the cost functions) are fixed until
- the next call to ``PrepareForEvaluation()``. If
- ``new_evaluation_point == true``, then this is a new point that is
- different from the last evaluated point. Otherwise, it is the same
- point that was evaluated previously (either jacobian or residual) and
- the user can use cached results from previous evaluations. If
- ``evaluate_jacobians`` is true, then Ceres will request jacobians in
- the upcoming cost evaluation.
-
- Using this callback interface, Ceres can notify you when it is about
- to evaluate the residuals or jacobians. With the callback, you can
- share computation between residual blocks by doing the shared
- computation in PrepareForEvaluation() before Ceres calls
- CostFunction::Evaluate() on all the residuals. It also enables
- caching results between a pure residual evaluation and a residual &
- jacobian evaluation, via the new_evaluation_point argument.
-
- One use case for this callback is if the cost function compute is
- moved to the GPU. In that case, the prepare call does the actual cost
- function evaluation, and subsequent calls from Ceres to the actual
- cost functions merely copy the results from the GPU onto the
- corresponding blocks for Ceres to plug into the solver.
-
- **Note**: Ceres provides no mechanism to share data other than the
- notification from the callback. Users must provide access to
- pre-computed shared data to their cost functions behind the scenes;
- this all happens without Ceres knowing. One approach is to put a
- pointer to the shared data in each cost function (recommended) or to
- use a global shared variable (discouraged; bug-prone). As far as
- Ceres is concerned, it is evaluating cost functions like any other;
- it just so happens that behind the scenes the cost functions reuse
- pre-computed data to execute faster.
-
- See ``evaluation_callback_test.cc`` for code that explicitly verifies
- the preconditions between ``PrepareForEvaluation()`` and
- ``CostFunction::Evaluate()``.
-
:class:`IterationCallback`
==========================
@@ -1779,7 +1794,7 @@
:class:`IterationSummary` describes the state of the minimizer at
the end of each iteration.
-.. member:: int32 IterationSummary::iteration
+.. member:: int IterationSummary::iteration
Current iteration number.
@@ -2211,7 +2226,7 @@
Number of threads actually used by the solver for Jacobian and
residual evaluation. This number is not equal to
:member:`Solver::Summary::num_threads_given` if none of `OpenMP`
- or `CXX11_THREADS` is available.
+ or `CXX_THREADS` is available.
.. member:: LinearSolverType Solver::Summary::linear_solver_type_given
diff --git a/docs/source/nnls_tutorial.rst b/docs/source/nnls_tutorial.rst
index 3c39086..6c89032 100644
--- a/docs/source/nnls_tutorial.rst
+++ b/docs/source/nnls_tutorial.rst
@@ -111,7 +111,7 @@
// auto-differentiation to obtain the derivative (jacobian).
CostFunction* cost_function =
new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
- problem.AddResidualBlock(cost_function, NULL, &x);
+ problem.AddResidualBlock(cost_function, nullptr, &x);
// Run the solver!
Solver::Options options;
@@ -212,7 +212,7 @@
CostFunction* cost_function =
new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
new NumericDiffCostFunctor);
- problem.AddResidualBlock(cost_function, NULL, &x);
+ problem.AddResidualBlock(cost_function, nullptr, &x);
Notice the parallel from when we were using automatic differentiation
@@ -220,7 +220,7 @@
CostFunction* cost_function =
new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
- problem.AddResidualBlock(cost_function, NULL, &x);
+ problem.AddResidualBlock(cost_function, nullptr, &x);
The construction looks almost identical to the one used for automatic
differentiation, except for an extra template parameter that indicates
@@ -261,7 +261,7 @@
residuals[0] = 10 - x;
// Compute the Jacobian if asked for.
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = -1;
}
return true;
@@ -358,13 +358,13 @@
// Add residual terms to the problem using the autodiff
// wrapper to get the derivatives automatically.
problem.AddResidualBlock(
- new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
+ new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
+ new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3)
+ new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);
+ new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);
Note that each ``ResidualBlock`` only depends on the two parameters
@@ -496,7 +496,7 @@
CostFunction* cost_function =
new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
new ExponentialResidual(data[2 * i], data[2 * i + 1]));
- problem.AddResidualBlock(cost_function, NULL, &m, &c);
+ problem.AddResidualBlock(cost_function, nullptr, &m, &c);
}
Compiling and running `examples/curve_fitting.cc
@@ -568,7 +568,7 @@
.. code-block:: c++
- problem.AddResidualBlock(cost_function, NULL , &m, &c);
+ problem.AddResidualBlock(cost_function, nullptr , &m, &c);
to
@@ -697,7 +697,7 @@
bal_problem.observations()[2 * i + 0],
bal_problem.observations()[2 * i + 1]);
problem.AddResidualBlock(cost_function,
- NULL /* squared loss */,
+ nullptr /* squared loss */,
bal_problem.mutable_camera_for_observation(i),
bal_problem.mutable_point_for_observation(i));
}
@@ -941,11 +941,11 @@
.. [#f9] Giorgio Grisetti, Rainer Kummerle, Cyrill Stachniss, Wolfram
Burgard. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation
- Systems Magazine, 52(3):199–222, 2010.
+ Systems Magazine, 52(3):199-222, 2010.
.. [#f10] E. Olson, J. Leonard, and S. Teller, “Fast iterative optimization of
pose graphs with poor initial estimates,” in Robotics and Automation
- (ICRA), IEEE International Conference on, 2006, pp. 2262–2269.
+ (ICRA), IEEE International Conference on, 2006, pp. 2262-2269.
#. `slam/pose_graph_3d/pose_graph_3d.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_3d/pose_graph_3d.cc>`_
diff --git a/docs/source/version_history.rst b/docs/source/version_history.rst
index 0bef4ad..72ae832 100644
--- a/docs/source/version_history.rst
+++ b/docs/source/version_history.rst
@@ -4,6 +4,163 @@
Version History
===============
+2.0.0
+=====
+
+New Features
+------------
+#. Ceres Solver now requires a C++14 compatible compiler, Eigen
+ version >= 3.3 & CMake version >= 3.5, Xcode version >= 11.2 (Sameer
+ Agarwal, Alex Stewart & Keir Mierle)
+#. C++ threading based multi-threading support. (Mike Vitus)
+#. :func:`Problem::AddResidualBlock`, :class:`SizedFunction`,
+ :class:`AutoDiffCostFunction`, :class:`NumericDiffCostFunction`
+ support an arbitrary number of parameter blocks using variadic
+ templates (Johannes Beck)
+#. On Apple platforms, support for Apple's Accelerate framework as a
+ sparse linear algebra library. (Alex Stewart)
+#. Significantly faster AutoDiff (Darius Rueckert)
+#. Mixed precision solves when using
+ ``SPARSE_NORMAL_CHOLESKY``. (Sameer Agarwal)
+#. ``LocalParameterization`` objects can have a zero sized tangent
+ size, which effectively makes the parameter block constant. In
+ particular, this allows for a ``SubsetParameterization`` that holds
+ all the coordinates of a parameter block constant. (Sameer Agarwal
+ & Emil Ernerfeldt)
+#. Visibility based preconditioning now works with ``Eigen`` and
+ ``CXSparse``. (Sameer Agarwal)
+#. Added :func:`Problem::EvaluateResidualBlock` and
+ :func:`Problem::EvaluateResidualBlockAssumingParametersUnchanged`. (Sameer
+ Agarwal)
+#. ``GradientChecker`` now uses ``RIDDERS`` method for more accurate
+ numerical derivatives. (Sameer Agarwal)
+#. Covariance computation uses a faster SVD algorithm (Johannes Beck)
+#. A new local parameterization for lines (Johannes Beck)
+#. A new (``SUBSET``) preconditioner for problems with general
+ sparsity. (Sameer Agarwal)
+#. Faster Schur elimination using faster custom BLAS routines for
+ small matrices. (yangfan)
+#. Automatic differentiation for ``FirstOrderFunction`` in the form of
+ :class:`AutoDiffFirstOrderFunction`. (Sameer Agarwal)
+#. ``TinySolverAutoDiffFunction`` now supports a dynamic number of residuals
+ just like ``AutoDiffCostFunction``. (Johannes Graeter)
+
+Backward Incompatible API Changes
+---------------------------------
+
+#. ``EvaluationCallback`` has been moved from ``Solver::Options`` to
+ ``Problem::Options`` for a more correct API.
+#. Removed ``Android.mk`` based build.
+#. ``Solver::Options::num_linear_solver_threads`` is no more.
+
+Bug Fixes & Minor Changes
+-------------------------
+#. Use CMAKE_PREFIX_PATH to pass Homebrew install location (Alex Stewart)
+#. Add automatic differentiation support for ``Erf`` and ``Erfc``. (Morten Hennemose)
+#. Add a move constructor to ``AutoDiffCostFunction``, ``NumericDiffCostFunction``, ``DynamicAutoDiffCostFunction`` and ``DynamicNumericDiffCostFunction``. (Julian Kent & Sameer Agarwal)
+#. Fix potential for mismatched release/debug TBB libraries (Alex Stewart)
+#. Trust region minimizer now reports the gradient of the current state, rather than zero when it encounters an unsuccessful step (Sameer Agarwal & Alex Stewart)
+#. Unify symbol visibility configuration for all compilers (Taylor Braun-Jones)
+#. Fix the Bazel build so that it points to GitLab instead of the old BitBucket repo for Eigen (Sameer Agarwal)
+#. Reformat source to be clang-format clean and add a script to format the repo using clang-format. (Nikolaus Demmel)
+#. Various documentation improvements (Sameer Agarwal, Carl Dehlin,
+ Bayes Nie, Chris Choi, Frank, Kuang Fangjun, Dmitriy Korchemkin,
+ huangqinjin, Patrik Huber, Nikolaus Demmel, Lorenzo Lamia)
+#. Huge number of build system simplification & cleanups (Alex
+ Stewart, NeroBurner, Alastair Harrison, Linus Mårtensson, Nikolaus Demmel)
+#. Intel TBB based threading removed (Mike Vitus)
+#. Allow :class:`SubsetParameterization` to accept an empty vector of
+ constant parameters. (Sameer Agarwal & Frédéric Devernay)
+#. Fix a bug in DynamicAutoDiffCostFunction when all parameters are
+ constant (Ky Waegel & Sameer Agarwal)
+#. Fixed incorrect argument name in ``RotationMatrixToQuaternion``
+ (Alex Stewart & Frank Dellaert)
+#. Do not export class template LineParameterization (huangqinjin)
+#. Change the type of parameter index/offset to match their getter/setter (huangqinjin)
+#. Initialize integer variables with integer instead of double (huangqinjin)
+#. Add std::numeric_limits specialization for Jets (Sameer Agarwal)
+#. Fix an MSVC type deduction bug in ComputeHouseholderVector (Sameer Agarwal)
+#. Allow LocalParameterizations to have zero local size. (Sameer Agarwal)
+#. Add photometric and relative-pose residuals to autodiff benchmarks (Nikolaus Demmel)
+#. Add a constant cost function to the autodiff benchmarks (Darius Rueckert)
+#. Add const to GetCovarianceMatrix*. (Johannes Beck)
+#. Fix Tukey loss function (Enrique Fernandez)
+#. Fix 3+ nested Jet constructor (Julian Kent)
+#. Fix windows MSVC build. (Johannes Beck)
+#. Fix invert PSD matrix. (Johannes Beck)
+#. Remove not used using declaration (Johannes Beck)
+#. Let Problem::SetParameterization be called more than once. (Sameer Agarwal)
+#. Make Problem movable. (Sameer Agarwal)
+#. Make EventLogger more efficient. (Sameer Agarwal)
+#. Remove a CHECK failure from covariance_impl.cc (Sameer Agarwal)
+#. Add a missing cast in rotation.h (Sameer Agarwal)
+#. Add a specialized SchurEliminator and integrate it for the case <2,3,6> (Sameer Agarwal)
+#. Remove use of SetUsage as it creates compilation problems. (Sameer Agarwal)
+#. Protect declarations of lapack functions under CERES_NO_LAPACK (Sameer Agarwal)
+#. Drop ROS dependency on catkin (Scott K Logan)
+#. Explicitly delete the copy constructor and copy assignment operator (huangqinjin)
+#. Use selfAdjointView<Upper> in InvertPSDMatrix. (Sameer Agarwal)
+#. Speed up InvertPSDMatrix (Sameer Agarwal)
+#. Allow Solver::Options::max_num_line_search_step_size_iterations = 0. (Sameer Agarwal)
+#. Make LineSearchMinimizer work correctly with negative valued functions. (Sameer Agarwal)
+#. Fix missing declaration warnings in Ceres code (Sergey Sharybin)
+#. Modernize ProductParameterization. (Johannes Beck)
+#. Add some missing string-to-enum-to-string converters. (Sameer Agarwal)
+#. Add checks in rotation.h for inplace operations. (Johannes Beck)
+#. Update Bazel WORKSPACE for newest Bazel (Keir Mierle)
+#. TripletSparseMatrix: guard against self-assignment (ngoclinhng)
+#. Fix Eigen alignment issues. (Johannes Beck)
+#. Add the missing <array> header to fixed_array.h (Sameer Agarwal)
+#. Switch to FixedArray implementation from abseil. (Johannes Beck)
+#. IdentityTransformation -> IdentityParameterization (Sameer Agarwal)
+#. Reorder initializer list to make -Wreorder happy (Sam Hasinoff)
+#. Reduce machoness of macro definition in cost_functor_to_function_test.cc (Sameer Agarwal)
+#. Enable optional use of sanitizers (Alex Stewart)
+#. Fix a typo in cubic_interpolation.h (Sameer Agarwal)
+#. Update googletest/googlemock to db9b85e2. (Sameer Agarwal)
+#. Fix Jacobian evaluation for constant parameter (Johannes Beck)
+#. AutoDiffCostFunction: use static_assert to check if the correct overload of the constructor is used. (Christopher Wecht)
+#. Avoid additional memory allocation in gradient checker (Justin Carpentier)
+#. Swap the order of definition of IsValidParameterDimensionSequence. (Sameer Agarwal)
+#. Add ParameterBlock::IsSetConstantByUser() (Sameer Agarwal)
+#. Add parameter dims for variadic sized cost function (Johannes Beck)
+#. Remove trailing zero parameter block sizes (Johannes Beck)
+#. Adding integer sequence and algorithms (Johannes Beck)
+#. Improve readability of LocalParameterization code. (Sameer Agarwal)
+#. Simplifying Init in manual constructor (Johannes Beck)
+#. Fix typo in NIST url. (Alessandro Gentilini)
+#. Add a .clang-format file. (Sameer Agarwal)
+#. Make ConditionedCostFunction compatible with repeated CostFunction. (Sameer Agarwal)
+#. Remove conversions from a double to a Jet. (Kuang Fangjun)
+#. close the file on return. (Kuang Fangjun)
+#. Fix an error in the demo code for ceres::Jet. (Kuang Fangjun)
+#. Recheck the residual after a new call. (Kuang Fangjun)
+#. avoid recomputation. (Kuang Fangjun)
+#. Fix calculation of Solver::Summary::num_threads_used. (Alex Stewart)
+#. Convert calls to CHECK_NOTNULL to CHECK. (Sameer Agarwal)
+#. Add a missing <cstdint> to block_structure.h (Sameer Agarwal)
+#. Fix an uninitialized memory error in EvaluationCallbackTest (Sameer Agarwal)
+#. Respect bounds when using Solver::Options::check_gradients (Sameer Agarwal)
+#. Relax the limitation that SchurEliminator::Eliminate requires a rhs. (Sameer Agarwal)
+#. Fix three out of bounds errors in CompressedRowSparseMatrix. (Sameer Agarwal)
+#. Add Travis CI support. (Alex Stewart)
+#. Refactor Ceres threading option configuration. (Alex Stewart)
+#. Handle NULL permutation from SuiteSparseQR (Pau Gargallo)
+#. Remove chunk shuffle in multithreaded SchurEliminator (Norbert Wenzel)
+#. Add /bigobj to nist on MSVC. (Alex Stewart)
+#. Fix 'xxx.cc has no symbols' warnings. (Alex Stewart)
+#. Add a typedef to expose the scalar type used in a Jet. (Sameer Agarwal)
+#. Fix a use after free bug in the tests. (Sameer Agarwal)
+#. Simplify integration tests. (Sameer Agarwal)
+#. Converts std::unique_lock to std::lock_guard. (Mike Vitus)
+#. Bring the Bazel build in sync with the CMake build. (Sameer Agarwal)
+#. Adds a ParallelFor wrapper for no threads and OpenMP. (Mike Vitus)
+#. Improve the test coverage in small_blas_test (Sameer Agarwal)
+#. Handle possible overflow in TrustRegionStepEvaluator. (Sameer Agarwal)
+#. Fix lower-bound on result of minimising step-size polynomial. (Alex Stewart)
+#. Adds missing functional include in thread_pool.h (Mike Vitus)
+
+
1.14.0
======
@@ -966,7 +1123,7 @@
NULL, /* No cost */
&initial_residuals,
NULL, /* No gradient */
- NULL /* No jacobian */ );
+ NULL /* No jacobian */);
Solver::Options options;
Solver::Summary summary;
@@ -977,7 +1134,7 @@
NULL, /* No cost */
&final_residuals,
NULL, /* No gradient */
- NULL /* No jacobian */ );
+ NULL /* No jacobian */);
New Features